Google has developed an AI capable of lighting your photos in a way that is simply amazing

Google is preparing an artificial intelligence that can clean noise from images without losing detail or quality. The relationship between mobile phones and night photography is anything but smooth. Having a camera in your pocket is a great advantage and one of the coolest things technology has given us in the last 20 years, but the truth is that shooting in low light does not always deliver ideal results.

In fact, if we take a picture in the street without natural light, it is normal for the sensor to generate a lot of electronic noise. There are many ways to reduce it; one of the most common, and the norm on many smartphones today, is to smooth the photo at the cost of fine detail. Google, however, is training an AI that promises to remove noise without sacrificing detail.

Detailed and pristine low-light images thanks to the Big G

Google is preparing the ultimate solution to noise in low-light photos. That, at least, is the Mountain View company's idea on paper. To that end it has launched an open-source project known as MultiNeRF, as reported by PetaPixel. Since digital noise and its consequences remain a major challenge for engineers, Google wants to solve the problem with the help of a neural network, whose first (and impressive) results you can see in the following video:

The neural network involved, known as NeRF (Neural Radiance Fields), was originally created to generate 3D scenes from sets of 2D images. Google chose it because, once a scene has been reconstructed in 3D, it is much easier to analyze the information an image contains: the network can effectively "move" through it. The MultiNeRF project documentation states its mission clearly:
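To make the 3D reconstruction idea concrete, here is a minimal sketch (not Google's code) of NeRF's core mechanism, volume rendering: a network predicts a density and a color at sample points along each camera ray, and the pixel color is an alpha-composited sum of those samples. The function and values below are purely illustrative.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one camera ray (NeRF-style rendering).

    densities: (N,) non-negative volume densities at each sample point
    colors:    (N, 3) RGB color predicted at each sample point
    deltas:    (N,) distance between consecutive samples
    """
    alphas = 1.0 - np.exp(-densities * deltas)        # opacity of each segment
    transmittance = np.cumprod(1.0 - alphas + 1e-10)  # light surviving past each sample
    transmittance = np.roll(transmittance, 1)
    transmittance[0] = 1.0                            # nothing blocks the first sample
    weights = transmittance * alphas                  # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)    # final RGB for this pixel

# A fully opaque red sample in the middle of the ray should dominate the pixel.
densities = np.array([0.0, 50.0, 0.0])
colors = np.array([[0.0, 0.0, 1.0],   # transparent blue sample
                   [1.0, 0.0, 0.0],   # dense red sample
                   [0.0, 1.0, 0.0]])  # green sample hidden behind the red one
deltas = np.ones(3)
pixel = composite_ray(densities, colors, deltas)      # ≈ [1, 0, 0]
```

Because the scene is represented this way, rendering it from a new viewpoint is just a matter of casting different rays, which is what makes the "moving through the image" behavior possible.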

We modified NeRF to train the AI directly on linear RAW images, preserving the full dynamic range of the scene. By rendering raw output images from the resulting NeRF, we can perform novel high dynamic range (HDR) view synthesis tasks. In addition to changing the camera's point of view, we can manipulate focus, exposure, and tone mapping after analyzing the image.
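What "manipulating exposure and tone mapping after the fact" means becomes clearer with a small sketch. Working in linear RAW space, exposure is just a multiplication and tone mapping is a curve applied afterward for display. The function names, the Reinhard-style curve, and the sample values below are assumptions for illustration, not the MultiNeRF code.

```python
import numpy as np

def adjust_exposure(linear, stops):
    """Brighten or darken a linear HDR image by a number of photographic stops."""
    return linear * (2.0 ** stops)

def tonemap(linear):
    """Simple Reinhard-style tone mapping from linear HDR values to [0, 1)."""
    return linear / (1.0 + linear)

scene = np.array([0.05, 0.5, 4.0])    # hypothetical linear radiance values
brighter = adjust_exposure(scene, 1)  # +1 stop doubles the captured light
display = tonemap(brighter)           # compress HDR range for the screen
```

This only works cleanly on linear data: once an image has been tone-mapped and quantized to 8 bits, the original dynamic range is gone, which is why training directly on RAW matters.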

In other words: the algorithm analyzes the raw data in the RAW file and uses artificial intelligence to work out what the photo would look like if there were no digital noise in the scene. The goal is to preserve as much detail as possible with as little noise as possible. For now, the AI that will carry out this entire process is in its early stages, although there is no doubt we would like to see it in the Google Pixel sooner rather than later.
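The reason combining many noisy views can reveal a noise-free scene comes down to a basic statistical fact, which the toy experiment below illustrates (this is the underlying principle, not Google's actual pipeline): averaging N independent noisy observations of the same value shrinks the noise by roughly a factor of sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = 0.2   # a dim pixel's true linear value (assumed)
noise_std = 0.1     # per-shot sensor noise (assumed)

# One noisy capture vs. the average of 25 noisy captures,
# simulated over 100,000 pixels to measure the residual noise.
one_shot = true_signal + rng.normal(0, noise_std, size=100_000)
many_shots = true_signal + rng.normal(0, noise_std, size=(25, 100_000)).mean(axis=0)

single_err = one_shot.std()    # close to 0.1
averaged_err = many_shots.std()  # close to 0.1 / sqrt(25) = 0.02
```

A NeRF trained on dozens of RAW frames is, in effect, aggregating evidence about each scene point across all of them, which is why it can recover detail that any single noisy frame hides.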

It is still too early to know whether it will be possible, but if so, it would not hurt for other manufacturers to jump on the bandwagon. In the meantime, don't hesitate to take a look at the phones with the best cameras on the market and the best photo-editing apps available for Android.

Brian Curry

Brian is the news author at Research Snipers which mainly covers Technology News, Microsoft News, Google News, Facebook, Apple, Huawei, Xiaomi, and other tech news.