Global Accessibility: Chrome Now Recognizes Typos In Web Addresses

Google’s Chrome browser now recognizes likely typos when users enter web addresses. This is just one of several features the development team is rolling out to make content easier and safer to access.

When users type a domain name into Chrome’s address bar, the browser now detects likely typos and suggests websites based on the corrected spelling. This is intended to improve accessibility for people with dyslexia, language learners, and anyone who simply mistypes. The company announced that the feature is rolling out immediately in the desktop versions of the browser and will be extended to mobile devices in the coming months.

Additionally, Google has unveiled updates to its Live Caption feature, which transcribes speech in real time. In the new version, users can also type a reply during a call and have it read aloud to the person they are talking to. While this feature will initially be available on the latest Pixel devices, it will later roll out to older Pixel phones and other Android devices as well. Google is also bringing an optimized caption box to Android tablets and will add Live Caption support for French, Italian, and German on the Pixel 4 and 5 as well as other Android devices.

Fewer barriers

Furthermore, Google has made its Google Maps feature “Accessible Places” generally available. Previously, users had to enable the option manually; a wheelchair icon would then mark locations that offer barrier-free access. This labeling now appears for all users without opting in. It helps not only wheelchair users but also, for example, people traveling with a stroller.

Finally, Google also announced a closed beta for some new features in its Lookout app for blind and visually impaired users. The AI-powered app can now generate descriptions of images, regardless of whether they include alt text or a caption. Users can then ask follow-up questions about these images, which the app attempts to answer using an advanced visual language model from Google DeepMind.