Over the past year, OPPO has been showing the world a new face.
In its early days, OPPO was famous for “down-to-earth” products, focused on how concrete features served user needs: “charge for five minutes, talk for two hours”, periscope telephoto cameras, and even concept explorations such as rollable screens were strikingly pragmatic.
But over the past year, OPPO has released a string of more “abstract” technologies, from the self-developed Mariana chip a year ago to Pantanal this summer. And at the just-concluded OPPO INNO DAY, OPPO unveiled yet another new concept: the Andes Smart Cloud.
As a phone manufacturer, OPPO has offered cloud services for a long time. They have never been an industry focal point, and every vendor’s cloud offers much the same features. After all, there is little real competition: whichever phone a user buys, that vendor’s cloud is the one they use.
But this time OPPO seems to be making a big move: the Mariana chip, the Pantanal smart cross-device system, and the Andes Smart Cloud are being packaged together as its “three core technologies”. The weight OPPO attaches to them is self-evident.
It seems OPPO is laying out a very different kind of “cloud system”. It is worth analyzing where this came from, and what it might mean.
01
The rise of cloud services
To understand a brand-new cloud service, we first have to review the history of the “personal cloud”.
Personal cloud services entered the mainstream with the rise of Dropbox, the defining success of the first generation of “cloud drives”. Within just seven months of its 2008 launch, Dropbox attracted one million users, faster than Facebook’s early growth; a year later, that number had reached ten million.
But even as Dropbox took off, smartphones were spreading rapidly, and phone manufacturers entered the game. Apple launched iCloud in 2011, and Google followed with Google Drive the next year.
The entry of smartphone makers and OS vendors gave cloud storage a killer application: storing the data generated on users’ phones, from photos and contacts to calendars, mail, and app data. This expanded the reach of cloud services to hundreds of millions, even billions, of users.
But for a long time, cloud services were essentially just “cloud storage”. Even today, the cloud feature users rely on most is still backing up and restoring phone data.
Against this backdrop, users began to ask: is the cloud really reliable? Is it truly necessary and irreplaceable? Some hardcore users have turned to local “cloud” storage, that is, NAS devices, to manage their own data. These data-security fundamentalists believe data is only 100% safe on their own hard drives.
Then, in 2015, the birth of Google Photos fundamentally changed the nature of “cloud services”.
From the start, the “cloud services” Google offered consumers leaned toward “applications” rather than “storage”: mailboxes, calendars, cloud documents. Yet at their core these were still storage products, and users could always export all the data to local copies.
Google Photos is different: it is a true application built entirely on cloud computing power. Google uses cloud compute and AI to dig deep into the data and present it in better ways. Google Photos can recognize the people in a user’s photos, finding every picture of the same person from a single face, and it can retrieve photos by keywords such as “New Year”. These are capabilities traditional cloud services did not possess.
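Google Photos’ internal pipeline is proprietary, but the pattern it represents, shipping a photo to a data center and letting server-side models analyze it, can be sketched with Google’s public Cloud Vision API. A minimal sketch, assuming the google-cloud-vision client library and configured credentials; this illustrates the cloud-compute route, not Google Photos’ actual code:

```python
# Illustrative sketch of cloud-side photo analysis in the style of
# Google Photos, using the public Cloud Vision API
# (pip install google-cloud-vision). The real Photos pipeline is
# proprietary and far more sophisticated.
from google.cloud import vision

def analyze_photo(path: str) -> None:
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())

    # Face detection: the heavy model runs in Google's data center,
    # not on the device that took the photo.
    faces = client.face_detection(image=image).face_annotations
    print(f"{len(faces)} face(s) found")

    # Label detection is what makes keyword search ("New Year",
    # "beach", ...) possible over an entire photo library.
    for label in client.label_detection(image=image).label_annotations:
        print(label.description, round(label.score, 2))

analyze_photo("new_year_party.jpg")  # placeholder file name
```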
Since then, personal cloud services have quietly undergone a qualitative change, because if a user keeps photos on the device, or on a plain cloud drive like Dropbox, none of these AI functions are available. So when I saw OPPO announce a new “smart cloud”, my first reaction was that it would accelerate this shift toward applications and services.
Indeed, beyond traditional storage and machine learning, the six capabilities planned for the Andes Smart Cloud include real-time cloud rendering, intelligent dialogue, and hardware simulation. Cloud rendering tackles ultra-low-latency rendering between device and cloud when on-device compute falls short; intelligent dialogue enables human-computer interaction across many scenarios, understanding user intent and proactively recommending services; hardware simulation covers chip simulation and phone simulation, the latter meaning virtualized phones that let developers build and test remotely.
These plans are ambitious and worth looking forward to.
Now let’s turn back to history. After Google released Google Photos, “photo search” became a standard feature on most phones. Behind this lies another story, and another key technical route.
02
The rise and limits of “on-device computing”
Faced with Google’s new cloud photo service, Apple was the first to respond.
In 2016, with iOS 10, Apple introduced a photo-search feature for the first time, apparently targeting Google Photos.
Unlike Google Photos, Apple’s Photos app does not analyze pictures with cloud compute; it uses the phone’s own NPU (neural processing unit) for on-device AI learning and recognition.
Apple runs everything locally partly to emphasize its privacy protection, and partly to improve reliability: the features keep working even when the network is poor.
This opened the era of the smartphone NPU and the arms race that followed. To this day, every phone maker highlights NPU performance as a key module when launching new products. On-device machine learning has spread to ever more scenarios: speech recognition, image recognition and processing, photo enhancement, even in-game “frame interpolation” to smooth out gameplay.
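To make the contrast with the cloud route concrete, here is a minimal sketch of the on-device pattern using TensorFlow Lite, one common way to run models on phone NPUs and DSPs. Apple uses its own stack (Core ML), so this stands in for the general technique rather than any particular vendor’s; the model file is a placeholder:

```python
# Minimal sketch of on-device inference, the pattern behind phone-side AI.
# "model.tflite" is a placeholder; real phones accelerate this through
# vendor delegates that target the NPU/DSP/GPU.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_info = interpreter.get_input_details()[0]
output_info = interpreter.get_output_details()[0]

# A dummy array standing in for a camera frame. No network round-trip,
# no data leaving the device -- the privacy argument for this route.
frame = np.random.rand(*input_info["shape"]).astype(np.float32)
interpreter.set_tensor(input_info["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(output_info["index"])
print("top class:", int(np.argmax(scores)))
```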
But the NPU is not perfect. First, its performance is limited: however powerful a phone’s NPU, it cannot match the model-processing capabilities of Google’s cloud AI.
Second, it consumes considerable system resources. Many iPhone users have noticed that in the first few days after restoring data to a new phone, the battery drains very quickly. Behind the scenes, the phone is continuously running its on-device models to index photo libraries and rebuild directories, which often also causes abnormal heating.
Finally, the models run on-device, but a user’s cloud data spans multiple devices. A manufacturer like Apple must therefore equip every piece of hardware with comparable NPU compute; otherwise, features that work on the phone will be impossible on tablets, computers, and other devices whose chips fall short.
A recent example is telling: Apple’s new karaoke feature for Apple Music uses the local NPU to analyze a song and strip out the vocal track. Plenty of older devices cannot use it at all because their NPUs are too weak. Yet this is exactly the kind of task cloud compute should handle: it involves no privacy concerns, and each song only needs to be processed once before being pushed to every phone. Commentators see this as Apple flexing its silicon, but it genuinely hurts the experience of users on older devices.
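Apple’s separation model is proprietary, but the “process once in the cloud, push to every device” pattern argued for here can be sketched with an open-source source-separation tool such as Spleeter; the file paths are placeholders:

```python
# Sketch of once-in-the-cloud vocal separation using the open-source
# Spleeter library (pip install spleeter). Apple's own model is
# proprietary; this only illustrates the technique.
from spleeter.separator import Separator

# The 2-stem model splits a track into vocals and accompaniment.
separator = Separator("spleeter:2stems")

# Run once, server-side; the accompaniment track can then be served
# to any client, no matter how weak its NPU.
separator.separate_to_file("song.mp3", "output/")
# -> output/song/vocals.wav and output/song/accompaniment.wav
```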
By this logic, it is not hard to see why OPPO started with imaging when it developed its own Mariana chip. Photos are among the most privacy-sensitive data on a phone, and imaging has extremely demanding real-time requirements: when users press the shutter, they expect to see the processed, optimized image immediately. Google Photos is comparatively weak here; in many cases, the cloud-AI-enhanced version of an uploaded photo does not appear until the next day or even later.
In any case, the outcome is that the two giants, Google and Apple, chose two different routes. Each has its own advantages in how data is processed and presented, and it is hard to say which one is “right”.
With these two routes understood, the “device-cloud collaboration” OPPO proposes becomes much easier to grasp.
03
The future of “device-cloud collaboration”
The goal of “device-cloud collaboration” is obvious: to absorb the advantages of cloud and on-device computing at the same time.
When a task needs a fast response or handles sensitive data, lean on the device, whose local compute is lower-latency, more real-time, and works without being “online”. When analyzing or training on large volumes of non-sensitive data, lean on the cloud, which spares device resources and offers far more compute, running in the background to improve the service.
Ideally, device-cloud collaboration brings one more key advantage: a unified experience across different devices. And this need not happen over the internet; it can also happen on the local area network.
For example, when a user wakes the voice assistant from an earbud, the earbud clearly lacks the compute to recognize speech itself; it can hand the recognition task to a nearby device such as the phone. The same approach can greatly improve the experience on other compute-constrained devices like TVs and watches.
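OPPO has not published how Pantanal or the Andes Smart Cloud schedules such tasks. As a rough illustration of the routing logic described above, here is a hypothetical dispatcher that sends each task to the device, a LAN peer, or the cloud based on sensitivity, latency needs, and compute cost; every name and threshold is invented for the sketch:

```python
# Hypothetical sketch of device-cloud task routing. OPPO has not
# published Pantanal/Andes internals; names and thresholds here are
# invented purely to illustrate the decision logic described above.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    privacy_sensitive: bool   # e.g. raw photos, voice audio
    needs_realtime: bool      # e.g. shutter-press image processing
    compute_cost: float       # rough compute required (invented units)

LOCAL_NPU_BUDGET = 4.0   # what this device can handle (invented)
PEER_NPU_BUDGET = 26.0   # a nearby phone reachable over the LAN

def route(task: Task) -> str:
    # Sensitive or real-time work stays off the internet.
    if task.privacy_sensitive or task.needs_realtime:
        if task.compute_cost <= LOCAL_NPU_BUDGET:
            return "device"
        # Weak endpoints (earbuds, watches) borrow a LAN peer's NPU.
        if task.compute_cost <= PEER_NPU_BUDGET:
            return "lan-peer"
    # Heavy, non-sensitive, latency-tolerant work goes to the cloud.
    return "cloud"

print(route(Task("shutter enhance", True, True, 3.0)))      # device
print(route(Task("earbud wake-word ASR", True, True, 9.0)))  # lan-peer
print(route(Task("album re-index", False, False, 500.0)))    # cloud
```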
The advantages are obvious, but executing this is far from simple. Apple and Google, two top-tier giants, have each managed to do only one side well. How can OPPO pursue self-developed chips, a cross-device system, and a smart cloud all at once?
The answer again lies in OPPO’s own history. Looking back at the development of the Chinese smartphone market, it is clear that Chinese users have relatively low awareness and acceptance of cloud applications; at the same time, OPPO cannot acquire Apple’s chip-design capability overnight.
Seen this way, device-cloud collaboration looks like a prudent choice, made after OPPO examined itself, observed the market, and analyzed its users.
Today, the self-developed Mariana chip is already the key AI chip in OPPO phones; its core differentiator and advantage is the NPU, that is, AI compute, which lays a solid foundation for on-device performance and broader applications. Pantanal is shared “middleware” sitting on top of different devices’ operating systems, letting data and services flow between them, making the experience truly human-centered rather than device-centered. Finally, the Andes Smart Cloud takes the compute-heavy, data-heavy tasks up to the cloud, becoming a “smart brain” shared by all of a user’s devices.
For OPPO, choosing this goal is essentially choosing a long road: developing and coordinating three core technologies at once is a formidable challenge.
When I asked OPPO about this, I was told that founder Chen Mingyong has strategically prepared both determination and patience. “Don’t hope for miracles in chip R&D,” he has said plainly.
In the end, the so-called “hard but right thing” starts with finding the right thing. Then, even if the stone must be rolled uphill, the difficulty is at least meaningful.
Source: https://www.geekpark.net/news/312705