Google I/O: Great improvements in all Google services

For the first time in three years, Sundar Pichai opens Google I/O in front of a large live audience. As usual, the Google boss sums up the most important innovations for services such as Search, Maps, and more. Here are the most exciting improvements at a glance.

Google AI learns 24 new languages

When it comes to improving its translation service, Google no longer depends solely on analyzing parallel texts, i.e. the same content in both languages. Artificial intelligence now makes it possible for the system to learn languages for which little such source material exists. The result: Google Translate supports 24 new languages spoken by a combined 300 million people.
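
Google has not published details of these models, but the expanded language list is reachable through the public Cloud Translation API. A minimal sketch, assuming a configured Google Cloud project with credentials and assuming the new languages have already rolled out to the API:

```python
# Minimal sketch: translating into one of the 24 newly added languages
# via the public Cloud Translation API. Assumes Google Cloud credentials
# are configured and that the language is already available in the API.
from google.cloud import translate_v2 as translate

client = translate.Client()
result = client.translate(
    "Where is the nearest train station?",
    target_language="lg",  # Luganda, one of the 24 new languages
)
print(result["translatedText"])
```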

Immersive View for Maps

Google Maps also benefits from advances in AI such as image recognition. Google uses it to detect more and more buildings in satellite images and add them to the map. Worldwide, the number of buildings shown grew by 20 percent within a year; in Africa, it increased fivefold over the same period.

All these developments enable Google to announce the new Immersive View feature for Maps. This is a computer-generated 3D rendering, which Google demonstrates at I/O with a rather impressive flyover of London. The technology also works indoors: Google shows a virtual drone flight through a restaurant – very exciting.

The option to choose the most environmentally friendly route, already available in the US, will be integrated in Europe this year.
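
Google has not said how its router weighs fuel consumption against travel time. Purely as an illustration, eco-routing can be thought of as picking the most fuel-efficient candidate route that is not drastically slower than the fastest one; the Route type, its fields, and the 10 percent tolerance below are hypothetical:

```python
# Hypothetical sketch of eco-friendly route selection: among candidate
# routes, take the one with the lowest estimated fuel use, as long as it
# is at most 10% slower than the fastest option. All names and numbers
# here are illustrative assumptions, not Google's actual logic.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    duration_min: float
    est_fuel_l: float  # estimated fuel consumption in liters

def pick_eco_route(routes: list[Route], max_slowdown: float = 0.10) -> Route:
    fastest = min(routes, key=lambda r: r.duration_min)
    limit = fastest.duration_min * (1 + max_slowdown)
    eligible = [r for r in routes if r.duration_min <= limit]
    return min(eligible, key=lambda r: r.est_fuel_l)

routes = [
    Route("Motorway", duration_min=42, est_fuel_l=4.8),
    Route("Trunk road", duration_min=45, est_fuel_l=4.1),
]
print(pick_eco_route(routes).name)  # "Trunk road": 7% slower, less fuel
```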

Auto-generated chapters for videos

YouTube is drastically increasing the number of videos for which Google automatically creates chapters. Thanks to multimodal technology from DeepMind, that number is expected to grow tenfold this year. In addition, auto-generated video transcripts are being expanded significantly and are coming to all Android and iOS devices.
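
DeepMind's multimodal model is not public, but the core idea of auto-chaptering, finding the points where a video changes topic, can be illustrated on the transcript alone. A hypothetical sketch using the open-source sentence-transformers library:

```python
# Hypothetical sketch of transcript-based chapter detection: start a new
# chapter wherever a transcript segment is unusually dissimilar to the
# previous one. DeepMind's real model is multimodal (video frames, audio
# and text); this text-only version with a fixed 0.5 cutoff is only an
# illustration.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

segments = [  # (timestamp in seconds, transcript text)
    (0, "welcome back everyone to the channel"),
    (15, "today we are unboxing the brand new phone"),
    (95, "now let's run some benchmarks on it"),
    (180, "finally, here is my overall verdict"),
]

embs = model.encode([text for _, text in segments])
embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)

chapter_starts = [segments[0][0]]
for i in range(1, len(embs)):
    if float(embs[i] @ embs[i - 1]) < 0.5:  # similarity drop = new topic
        chapter_starts.append(segments[i][0])
print(chapter_starts)  # timestamps of detected chapter boundaries
```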

Automatic overview in Documents

“You know that panicked feeling when you realize you have to read a 25-page document for a meeting and you haven’t read it?” said Sundar Pichai at I/O. In the future, Google wants to help here with an automatically generated summary in Docs.
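
Google's summarization model is not publicly available. As a rough stand-in, the same idea can be tried with an open-source model; a minimal sketch using Hugging Face transformers:

```python
# Rough stand-in for the Docs feature: abstractive summarization with an
# open-source model. Google's own model is not public; this only shows
# the summarize-a-long-document idea. The file name is a placeholder.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

document = open("meeting_notes.txt").read()  # the dreaded 25-page document
# Crude truncation: BART accepts roughly 1024 tokens per call; a real
# pipeline would chunk the document and summarize the partial summaries.
summary = summarizer(document[:3000], max_length=120, min_length=40)
print(summary[0]["summary_text"])
```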

Google Assistant recognizes when you look at it

In addition to the wake word, Google is introducing a new way to activate the Assistant: “Look and Talk”. The service activates as soon as the system recognizes the user’s face. Six machine-learning models evaluate more than 100 signals to detect whether the user really wants the Assistant’s attention. All data is processed on the device.
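
Google has not detailed the six models or the individual signals, but the gating logic can be sketched: fuse several per-signal confidences on the device and only wake the Assistant above a threshold. All signal names, weights, and the threshold below are hypothetical:

```python
# Hypothetical sketch of "Look and Talk" gating: several on-device models
# each emit a confidence value, and the Assistant only wakes up when the
# weighted combination clears a threshold. Signal names, weights, and the
# 0.8 threshold are illustrative; Google's system reportedly fuses over
# 100 signals from six models, entirely on-device.
def should_activate(signals: dict[str, float], threshold: float = 0.8) -> bool:
    weights = {
        "face_match": 0.25,      # enrolled user recognized
        "gaze_on_device": 0.25,  # user is looking at the display
        "proximity": 0.15,       # user is close enough to the device
        "head_orientation": 0.15,
        "speech_intent": 0.20,   # utterance sounds like a request
    }
    score = sum(weights[k] * signals.get(k, 0.0) for k in weights)
    return score >= threshold

print(should_activate({
    "face_match": 0.99, "gaze_on_device": 0.95, "proximity": 0.9,
    "head_orientation": 0.85, "speech_intent": 0.9,
}))  # True: the fused score (about 0.93) clears the threshold
```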

The Assistant should also finally understand a natural flow of speech much better. On stage, a presenter asks a question interrupted by “mhhh” and with vague instructions to play a song without naming it exactly. The Assistant masters this task thanks to the language model LaMDA 2, “our most advanced conversational AI yet”.
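
LaMDA 2 handles such disfluencies inside the model itself, but the basic preprocessing problem, stripping fillers like “mhhh” before intent parsing, is easy to sketch. The filler list and pattern below are illustrative only:

```python
# Hypothetical sketch of disfluency handling: strip filler sounds from a
# transcribed utterance before it reaches intent parsing. Conversational
# models like LaMDA 2 deal with this implicitly; the filler list and
# pattern here are illustrative assumptions.
import re

FILLERS = re.compile(r",?\s*\b(?:uh+|um+|mhh+|hmm+|erm*)\b,?", re.IGNORECASE)

def clean_utterance(text: str) -> str:
    text = FILLERS.sub("", text)
    return re.sub(r"\s{2,}", " ", text).strip()

print(clean_utterance("play, mhhh, that song from, um, that new movie"))
# -> "play that song from that new movie"
```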