Pixel 2 has Google’s First In-House Chipset Called Pixel Visual Core, Aims to Bring HDR+ to More Apps

All of the talk in recent years about Google possibly designing its own in-house chipsets for phones has come true, in a way, with the Pixel 2 and Pixel 2 XL. This morning, Google announced that both phones contain Google’s first custom-designed System on Chip (SoC), called Pixel Visual Core, though this one was made to help with the outstanding camera in each rather than to replace the work of Qualcomm’s Snapdragon line. In the coming weeks and months, Google will enable the Pixel Visual Core to help bring the Pixel camera’s HDR+ magic to third-party camera apps.

Seriously, you are reading that correctly – the Pixel 2 and Pixel 2 XL have a secret Image Processing Unit (IPU) that none of us knew about until now, one that hasn’t even been turned on yet. That’s kind of cool, right?

The goal here is, again, to bring the Pixel 2’s HDR+ smarts to apps outside of the Google Camera app. The SoC will do so by taking advantage of its eight Google-designed custom cores, which can deliver “over 3 trillion operations per second on a mobile power budget.” By using the Pixel Visual Core, Google says that HDR+ can run 5x faster and at less than 1/10th the energy it would use running through the application processor (like the Snapdragon 835 in these phones). There are many more nerd details than that, but they’re over my head, so I’ll leave the dirty details to those willing to dive into the press release below.
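
One piece of napkin math I can manage, though: eight cores with 512 ALUs apiece works out to 4,096 ALUs in total, so Google’s “over 3 trillion operations per second” claim implies each ALU handles roughly 730 million operations per second. Here’s that arithmetic in a few lines of Python; note that the implied clock speed at the end is my own guess, since Google hasn’t published one.

# Napkin math on the Pixel Visual Core's quoted throughput.
cores = 8
alus_per_core = 512                      # per Google's announcement
quoted_ops_per_sec = 3e12                # "over 3 trillion operations per second"
total_alus = cores * alus_per_core       # 4,096 ALUs
per_alu = quoted_ops_per_sec / total_alus
print(f"{total_alus} ALUs at ~{per_alu / 1e6:.0f}M ops/sec each")
# ~732M ops/sec per ALU; if each ALU retires one op per cycle, that
# would suggest a clock near 730MHz (an assumption, not a published spec).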

Here are a couple of examples of the HDR+ capabilities that are coming to third-party camera apps, thanks to Pixel Visual Core:

(Sample HDR+ comparison photos)

Of course, since this is Google’s first chip, they won’t just stop with opening up HDR+ to others. They already have the “next set of applications” lined up that they want this Pixel Visual Core to power.

The Pixel Visual Core will be enabled first as a developer option in the “coming weeks” through an Android Oreo 8.1 (MR1) developer preview. Further down the road, likely when Android 8.1 goes stable, Google will enable it for all third-party apps using the Android Camera API.

Press Release


Pixel Visual Core: Image processing and machine learning on Pixel 2

The camera on the new Pixel 2 is packed full of great hardware, software and machine learning (ML) so all you need to do is point and shoot to take amazing photos and videos. One of the technologies that helps you take great photos is HDR+, which makes it possible to get excellent photos of scenes with a large range of brightness levels, from dimly lit landscapes to a very sunny sky.
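
Google’s published HDR+ research explains the basic trick behind this: instead of one long exposure, the camera grabs a quick burst of short, underexposed frames (so bright areas never clip), merges them to average away the shadow noise, and then tone maps the result for display. The toy NumPy sketch below shows only that merge-then-tone-map idea; the frame count, noise level, and tone curve are invented for illustration, and the real pipeline aligns raw frames and is far more sophisticated.

import numpy as np
rng = np.random.default_rng(0)
# Toy scene in linear light: deep shadows next to a bright sky.
scene = np.concatenate([np.full(100, 0.02), np.full(100, 0.9)])
# A burst of short, underexposed frames: each is noisy, but nothing clips.
# Frame count and noise level here are made up for illustration.
burst = [scene + rng.normal(0.0, 0.05, scene.shape) for _ in range(8)]
# Averaging the aligned frames cuts shadow noise by roughly sqrt(8)...
merged = np.mean(burst, axis=0)
# ...then a simple gamma-style tone map brightens the shadows for display.
display = np.clip(merged, 0.0, 1.0) ** (1 / 2.2)
print(f"single-frame shadow noise: {np.std(burst[0][:100]):.3f}")
print(f"merged shadow noise:       {np.std(merged[:100]):.3f}")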

HDR+ produces beautiful images, and we have evolved the algorithm over the past year to utilize the Pixel 2’s application processor efficiently and enable the user to take multiple pictures in sequence by intelligently processing HDR+ in the background. In parallel with that engineering effort, we have also been working on creating capabilities which enable significantly greater computing power, beyond existing hardware, to bring HDR+ to third-party photography applications. To expand the reach of HDR+, to handle the most challenging imaging and machine learning applications, and to deliver lower-latency and even more power-efficient HDR+ processing, we have created Pixel Visual Core.

Pixel Visual Core is Google’s first custom-designed System on Chip (SoC) for consumer products. It is built into every Pixel 2, and in the coming months, we will turn it on through a software update to enable more applications to use Pixel 2’s camera for taking HDR+ quality pictures.

Let’s delve into some of the details. The centerpiece of Pixel Visual Core is the Google-designed Image Processing Unit (IPU)—a fully programmable, domain-specific processor designed from scratch to deliver maximum performance at low power. With eight Google-designed custom cores, each with 512 arithmetic logic units (ALUs), the IPU delivers raw performance of over 3 trillion operations per second on a mobile power budget. Using Pixel Visual Core, HDR+ can run 5x faster and at less than 1/10th the energy of running on the application processor (AP).

A key ingredient to the IPU’s efficiency is the tight coupling of hardware and software—our software controls many more details of the hardware than in a typical processor. Handing more control to the software makes the hardware simpler and more efficient, but it also makes the IPU challenging to program using traditional programming languages. To avoid this, the IPU leverages domain-specific languages that ease the burden on both developers and the compiler: Halide for image processing and TensorFlow for machine learning. A custom Google-made compiler optimizes the code for the underlying hardware.
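
To give a feel for what that means in practice, here is a rough illustration in TensorFlow (one of the two languages the release names; Halide is the other). This is ordinary TensorFlow, not IPU-specific code: the point is simply that the pipeline is expressed as a graph of array operations with no hardware details in it, which is what gives a custom compiler the freedom to map the work onto something like the IPU’s 4,096 ALUs.

import tensorflow as tf
# A 3x3 box blur written as TensorFlow ops. Nothing here says how the
# work is scheduled onto hardware; a domain-specific compiler decides that.
image = tf.random.uniform([1, 64, 64, 1])   # one 64x64 grayscale frame
kernel = tf.ones([3, 3, 1, 1]) / 9.0        # averaging kernel
blurred = tf.nn.conv2d(image, kernel, strides=1, padding="SAME")
print(blurred.shape)                        # (1, 64, 64, 1)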

In the coming weeks, we will enable Pixel Visual Core as a developer option in the developer preview of Android Oreo 8.1 (MR1). Later, we will enable it for all third-party apps using the Android Camera API, giving them access to the Pixel 2’s HDR+ technology. We can’t wait to see the beautiful HDR+ photography which you already get through your Pixel 2 camera also become available in your favorite photography apps.

HDR+ is the first application to run on Pixel Visual Core. As noted above, Pixel Visual Core is programmable, and we are already preparing the next set of applications. The great thing is that as we follow up with more new applications on Pixel Visual Core, Pixel 2 will continue to improve. We’ll keep rolling out other imaging and ML innovations over time—keep an eye out!

// Google
