Google ML Kit brings on-device machine learning to AR apps

Modern smartphones have become “smart” in the true sense, with machine learning and AI built into many apps. This enables features such as text recognition, real-time translation, barcode scanning, precise tracking, and object detection. One prominent application is augmented reality apps that track facial features to overlay masks or beautification effects. Much of this is powered by ML Kit, which Google introduced in May 2018 and which is now used in more than 25,000 apps.

Until now, these features have relied on cloud-based processing, a consequence of ML Kit's dependence on the Firebase mobile and web development platform. That dependence raises questions about security and data privacy, and network latency can also be a concern.

To address this, Google has introduced a new ML Kit package that delivers the on-device APIs as a standalone SDK and no longer requires a Firebase project. The current cloud-based model remains available, but the transition from cloud processing to on-device inference is expected to become the norm.
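For developers, the switch mostly comes down to a dependency change: the standalone artifacts live under the com.google.mlkit group rather than Firebase's firebase-ml-vision. A minimal sketch of a build.gradle.kts dependencies block, with illustrative version numbers:

    dependencies {
        // Old: Firebase ML Kit, which requires a Firebase project
        // implementation("com.google.firebase:firebase-ml-vision:24.0.3")

        // New: standalone on-device ML Kit artifacts, no Firebase project needed
        implementation("com.google.mlkit:barcode-scanning:16.0.0")
        implementation("com.google.mlkit:face-detection:16.0.0")
    }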

To kick things off, two new APIs – entity extraction and pose detection – will be available in the ML Kit early access program. Entity extraction detects structured text such as phone numbers, addresses, tracking numbers, and dates and times, so that apps can make them actionable. Pose detection, on the other hand, tracks 33 skeletal points, including hand and foot landmarks, for body tracking in apps.
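As a rough sketch of how entity extraction can be called on Android (based on the com.google.mlkit:entity-extraction artifact as later released; the surface in the early access program may differ), the flow is to build an extractor for a language, download its on-device model if needed, and then annotate a string:

    import com.google.mlkit.nl.entityextraction.EntityExtraction
    import com.google.mlkit.nl.entityextraction.EntityExtractionParams
    import com.google.mlkit.nl.entityextraction.EntityExtractorOptions

    val extractor = EntityExtraction.getClient(
        EntityExtractorOptions.Builder(EntityExtractorOptions.ENGLISH).build()
    )

    // The on-device model is downloaded once and reused on later calls.
    extractor.downloadModelIfNeeded().addOnSuccessListener {
        val params = EntityExtractionParams.Builder(
            "Call 415-555-0132 tomorrow at 6pm about the order." // sample input
        ).build()
        extractor.annotate(params).addOnSuccessListener { annotations ->
            // Each annotation covers a text span and lists the entities found in it.
            for (annotation in annotations) {
                for (entity in annotation.entities) {
                    println("${annotation.annotatedText} -> type ${entity.type}")
                }
            }
        }
    }

Pose detection follows the same on-device pattern; a similarly hedged sketch (class names follow the later public release):

    import com.google.mlkit.vision.common.InputImage
    import com.google.mlkit.vision.pose.PoseDetection
    import com.google.mlkit.vision.pose.PoseLandmark
    import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions

    val detector = PoseDetection.getClient(
        PoseDetectorOptions.Builder()
            .setDetectorMode(PoseDetectorOptions.STREAM_MODE) // frame-by-frame video
            .build()
    )

    // `image` would come from the camera, e.g. InputImage.fromMediaImage(...).
    fun detectPose(image: InputImage) {
        detector.process(image).addOnSuccessListener { pose ->
            // One of the 33 tracked landmarks; each carries a 2D position.
            val leftWrist = pose.getPoseLandmark(PoseLandmark.LEFT_WRIST)
            leftWrist?.let { println("Left wrist at ${it.position}") }
        }
    }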

Developers who want to explore the new offering will have to migrate to the standalone SDK. For end users, it means a smoother, faster experience in apps that support this functionality, and because no network connection is required, such apps can be used more flexibly, including offline.
