Google’s Pixel 2 and Pixel 2 XL ship with a mysterious System on a Chip (SoC) called the Pixel Visual Core. It’s the company’s first custom co-processor for a mobile device, and it gives HDR+, the company’s image processing method for better photo quality, a dedicated place to run. The search giant says the chip is meant to give HDR+ headroom to improve further and to take on even more challenging imaging and machine learning workloads.

According to Google, while the chipset ships inside each and every Pixel 2 and Pixel 2 XL, it isn’t enabled out of the box. Rather, in the coming months, the company will release a software update that enables it and allows third-party camera apps to take advantage of the onboard HDR+ technology. As for specifics, we’re not exactly sure when the update will begin shipping, but we do know it’ll carry the Android 8.1 Oreo version number.
In terms of what the Pixel Visual Core packs, rather than attempting to rephrase it, I’ll let Google explain:
The centerpiece of Pixel Visual Core is the Google-designed Image Processing Unit (IPU)—a fully programmable, domain-specific processor designed from scratch to deliver maximum performance at low power. With eight Google-designed custom cores, each with 512 arithmetic logic units (ALUs), the IPU delivers raw performance of more than 3 trillion operations per second on a mobile power budget. Using Pixel Visual Core, HDR+ can run 5x faster and at less than one-tenth the energy than running on the application processor (AP). A key ingredient to the IPU’s efficiency is the tight coupling of hardware and software—our software controls many more details of the hardware than in a typical processor. Handing more control to the software makes the hardware simpler and more efficient, but it also makes the IPU challenging to program using traditional programming languages. To avoid this, the IPU leverages domain-specific languages that ease the burden on both developers and the compiler: Halide for image processing and TensorFlow for machine learning. A custom Google-made compiler optimizes the code for the underlying hardware.
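To make that last point concrete, here’s a minimal, hypothetical Halide sketch (assuming the open-source Halide C++ library from halide-lang.org). The toy brightening pipeline below is purely illustrative and is not Google’s actual HDR+ code:

    #include "Halide.h"

    int main() {
        using namespace Halide;
        Var x("x"), y("y");

        // Stand-in input: a synthetic gradient in place of a real camera frame.
        Func input("input");
        input(x, y) = cast<uint8_t>(x + y);

        // Algorithm: *what* to compute. Brighten each pixel by 50%, saturating at 255.
        Func brighter("brighter");
        brighter(x, y) = cast<uint8_t>(min(cast<int>(input(x, y)) * 3 / 2, 255));

        // Schedule: *how* to compute it. On a CPU this vectorizes the inner loop
        // and parallelizes across rows; a target like the IPU would get its own
        // mapping decisions from Google's custom compiler instead.
        brighter.vectorize(x, 16).parallel(y);

        Buffer<uint8_t> output = brighter.realize({640, 480});
        return 0;
    }

The notable design choice is that the algorithm and the schedule are written separately, which is what lets the same image processing code be retargeted to very different hardware, the IPU included.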
Google will also be opening up the technology through the Android Camera API in the near future, letting third-party camera apps achieve the same HDR+ image processing as the standard Pixel camera app.
Unfortunately, since all of these changes depend on hardware, original Pixel owners and non-Google phone users will be left out. While this stings, there’s still hope: the Google Camera app from the original Pixels, HDR+ image processing included, was recently ported to other Android devices. So while those users won’t be able to match the photo quality of the Pixel 2, they may very well get close.
We’ll let you know when Android 8.1 Oreo enters its developer beta stage.