
May 10, 2023

Build smarter Android apps with on-device Machine Learning


Posted by Thomas Ezan, Developer Relations

In the past year, the Android team made significant improvements to on-device machine learning to help developers create smarter apps with more features to process images, sound, and text. In the Google I/O talk Build smarter Android apps with on-device Machine Learning, David Miro-Llopis, Product Manager on ML Kit, and Thomas Ezan, Android Developer Relations Engineer, review new Android APIs and solutions and showcase applications using on-device ML.

Running ML processes on-device enables low latency, increases data privacy, enables offline support, and can reduce your cloud bill. Applications such as Lens AR Translate, or the document scanning feature available in Files in India, benefit from the advantages of on-device ML.

To deploy ML features on Android, developers have two options (a dependency sketch follows this list):

  • ML Kit: which offers production-ready ML solutions for common user flows, via easy-to-use APIs.
  • Android’s custom ML stack: which is built on top of TensorFlow Lite and provides control over the inference process and the user experience.
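The two options also differ in how you pull them into a project. A hypothetical module-level build file might declare either of these dependencies; the version numbers are assumptions, so check the current releases:

```kotlin
// build.gradle.kts (module) – illustrative only
dependencies {
    // Option 1: a turnkey ML Kit API, e.g. on-device translation
    implementation("com.google.mlkit:translate:17.0.1")

    // Option 2: Android's custom ML stack – TensorFlow Lite via Google Play Services
    implementation("com.google.android.gms:play-services-tflite-java:16.1.0")
}
```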

ML Kit released new APIs and improved existing features

Over the last year, the ML Kit team worked on both improving existing APIs and launching new ones: face mesh and document scanner. ML Kit is launching a new document scanner API in Q3 2023 that will provide a consistent scanning experience across apps on Android. Developers will be able to use it with only a few lines of code, without needing camera permission, and with low APK size impact (given that it will be distributed via Google Play Services). In a similar fashion, Google code scanner is now generally available and provides a consistent scanning experience across apps, without needing camera permission, via Google Play Services.
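To illustrate how little integration code the code scanner requires, here is a minimal Kotlin sketch using the Google code scanner client from Google Play Services; the result handling is an assumption about how an app might consume the scanned value:

```kotlin
import android.content.Context
import android.util.Log
import com.google.mlkit.vision.codescanner.GmsBarcodeScanning

// Launches the scanning UI provided by Google Play Services.
// No camera permission is needed because scanning runs inside Play Services.
fun scanCode(context: Context) {
    val scanner = GmsBarcodeScanning.getClient(context)
    scanner.startScan()
        .addOnSuccessListener { barcode ->
            // rawValue holds the decoded payload, e.g. a URL or product code
            Log.d("CodeScanner", "Scanned: ${barcode.rawValue}")
        }
        .addOnFailureListener { e ->
            // The user cancelled, or scanning failed
            Log.w("CodeScanner", "Scan failed", e)
        }
}
```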

[Image: a series of three photos of two girls smiling, showing how face mesh improves facial recognition]

Additionally, ML Kit improved the performance of the following APIs: barcode detection (by 17%), text recognition, digital ink recognition, pose detection, translation, and smart reply. ML Kit also integrated some APIs into Google Play Services, so you don’t have to bundle the models with your application. Many developers use ML Kit to easily integrate machine learning into their apps; for example, WPS uses ML Kit to translate text in 43 languages and save $65M a year.
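A translation flow like the one WPS ships can be assembled from ML Kit’s on-device translation API. The sketch below assumes an English-to-French pair and a caller-supplied onResult callback; the model download conditions are illustrative:

```kotlin
import com.google.mlkit.common.model.DownloadConditions
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

// Translates English text to French entirely on-device, downloading the
// language model over Wi-Fi first if it is not already present.
fun translateToFrench(text: String, onResult: (String) -> Unit) {
    val translator = Translation.getClient(
        TranslatorOptions.Builder()
            .setSourceLanguage(TranslateLanguage.ENGLISH)
            .setTargetLanguage(TranslateLanguage.FRENCH)
            .build()
    )
    val conditions = DownloadConditions.Builder().requireWifi().build()
    translator.downloadModelIfNeeded(conditions)
        .addOnSuccessListener {
            translator.translate(text)
                .addOnSuccessListener { translated -> onResult(translated) }
        }
    // Call translator.close() when the translator is no longer needed.
}
```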

Acceleration Service in Android’s custom ML stack is now in public beta

To support custom machine learning, the Android ML team is actively developing Android’s custom ML stack. Last year, TensorFlow Lite and GPU delegates were added to Google Play Services, which lets developers use TensorFlow Lite without bundling it with their app and provides automatic updates. Hardware acceleration improves inference performance, which in turn can significantly improve the user experience of your ML-enabled Android app. This year, the team is also announcing Acceleration Service, a new API enabling developers to pick the optimal hardware acceleration configuration at runtime. It is now in public beta, and developers can learn more and get started here.
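As a concrete starting point, the sketch below initializes the TensorFlow Lite runtime that ships in Google Play Services and runs a single inference. The model and I/O buffers are placeholders you would supply, and the Acceleration Service configuration itself is omitted since its API surface is still in beta:

```kotlin
import android.content.Context
import com.google.android.gms.tflite.java.TfLite
import org.tensorflow.lite.InterpreterApi
import org.tensorflow.lite.InterpreterApi.Options.TfLiteRuntime
import java.nio.ByteBuffer

// Runs one inference using the TensorFlow Lite runtime provided by Google
// Play Services. `model`, `input`, and `output` are app-supplied placeholders.
fun runInference(context: Context, model: ByteBuffer, input: ByteBuffer, output: ByteBuffer) {
    TfLite.initialize(context).addOnSuccessListener {
        // FROM_SYSTEM_ONLY selects the Play Services runtime instead of a bundled one
        val interpreter = InterpreterApi.create(
            model,
            InterpreterApi.Options().setRuntime(TfLiteRuntime.FROM_SYSTEM_ONLY)
        )
        interpreter.run(input, output)
        interpreter.close()
    }
}
```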

To learn more, watch the video: