Google has launched a standalone app for Google Lens, its augmented-reality visual search tool. Google first introduced Lens as a feature inside Assistant and Photos. The Lens app only works on devices running Android Marshmallow (6.0) or higher.
How Lens works and what it offers
The technology, announced at the company’s I/O developer conference in 2017, is a machine learning-powered tool for identifying objects captured on camera, letting users search for information with the phone’s camera instead of typed queries. Upon opening the app, users are greeted with a viewfinder used to target objects, both living and non-living, for analysis with Lens. Point it at a book, for example, and Lens instantly surfaces its reviews, author, and other details. Similarly, aiming the camera at landmarks and buildings such as restaurants and monuments brings up information, prices, and reviews right on the phone’s screen. All of this appears instantly in front of the user with minimal effort: all they have to do is focus on the object through the phone screen.
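The recognize-then-look-up flow described above can be sketched as a toy pipeline. Everything below is illustrative: `classify_image` is a stand-in for Lens’s actual vision model, and `KNOWLEDGE_BASE` is a hand-made dictionary substituting for a real knowledge-graph lookup, not anything Google ships.

```python
# Illustrative mock of the Lens flow: recognize the object in frame,
# then fetch details about it from a knowledge base.

KNOWLEDGE_BASE = {
    "book:the_pragmatic_programmer": {
        "title": "The Pragmatic Programmer",
        "authors": ["Andrew Hunt", "David Thomas"],
        "rating": 4.3,
    },
    "landmark:eiffel_tower": {
        "title": "Eiffel Tower",
        "location": "Paris, France",
        "height_m": 330,
    },
}


def classify_image(image_bytes: bytes) -> str:
    """Stand-in for an on-device classifier mapping pixels to an entity id."""
    # A real implementation would run a neural network here; this mock
    # just routes on a magic prefix so the pipeline can be exercised.
    if image_bytes.startswith(b"BOOK"):
        return "book:the_pragmatic_programmer"
    return "landmark:eiffel_tower"


def lens_lookup(image_bytes: bytes) -> dict:
    """Recognize the object in the viewfinder, then return its details."""
    entity_id = classify_image(image_bytes)
    return KNOWLEDGE_BASE.get(entity_id, {"title": "Unknown object"})


if __name__ == "__main__":
    info = lens_lookup(b"BOOK...fake pixels...")
    print(info["title"], "by", ", ".join(info["authors"]))
```

The two-stage split (classify first, then look up) mirrors how a visual search product can keep the on-device model small while the detail data lives server-side.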
Google has also recently added a number of features to Lens, including:
- “smart text selection,” which lets users select text spotted on real-world objects in real time
- “remember this,” which lets users save an image along with its info for later use
- importing saved images into Lens for analysis
- identifying addresses
- connecting to Wi-Fi by pointing the camera at a label showing the network name and password; Lens reads the credentials and joins the network automatically
The purpose of releasing a standalone app was to give users quick access to Lens even when it is not built into their current Android devices. It also helps make new users aware of and accustomed to the feature, with those users acting as de facto beta testers. The Lens app offers the same functionality as Google Lens in Assistant and Photos, using artificial intelligence to deliver information about whatever the smartphone’s camera is focused on.