
25 June 2020

Read Along: Grow child literacy with on-device ML design insights



Posted by Di Dang, Design Advocate

During our Machine Learning-themed week, we delved into an ML Kit x CameraX Codelab and learned how to train custom models and integrate them into an Android app. Beyond the technical considerations that go into using ML, it’s important that we design our ML-based apps in a way that leaves users feeling in control of the ML technology, and not the other way around. To help product creators understand some best practices for ML product decisions, the PAIR team published the People + AI Guidebook at Google I/O last year. Let’s take a look at some ML design considerations you can apply in your Android apps by learning from the example of Read Along.




Google recently launched Read Along, an Android app that uses on-device ML and a voice UI to help children learn to read anytime, anywhere, using just their voice. According to the UN Division of Sustainable Development Goals, more than 50% of children worldwide are not achieving minimum proficiency in reading. First launched in India as “Bolo”, Read Along is now available globally. We recently went behind the scenes with the Read Along team in this episode of Centered to learn how they made an ML- and voice-based app to improve child literacy.

Why Machine Learning and Voice UI?

Since using ML can be time- and cost-intensive, we need to find the intersection of ML strengths and user needs. To learn to read, children need time on task and one-on-one attention, which is a challenge in areas that lack access to teachers or educational materials. “In many parts of the world, there are only so many schools that can be built, only so many teachers can be trained. So first and foremost, it's a scale problem,” said Nitin Kashyap, Read Along’s product manager. This creates a unique opportunity for ML: to provide real-time reading feedback at scale, Read Along utilizes the Google Assistant’s text-to-speech and speech recognition capabilities. The Read Along team also runs these capabilities on-device to preserve children’s privacy: voice data is analyzed on the device without being sent to any Google servers, which also enables children to use Read Along offline.
A child using the Read Along app (demo)
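Read Along’s actual recognition pipeline isn’t public, but the same privacy-preserving pattern is available to any Android app: the platform SpeechRecognizer can be asked to prefer offline recognition (API 23+), so audio stays on the device when an offline language pack is installed. The class and wiring below are a hypothetical sketch of that pattern, not Read Along’s implementation:

```kotlin
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer

// Hypothetical sketch: prefer on-device speech recognition so audio is not
// sent to a server. Requires the RECORD_AUDIO permission and an installed
// offline recognition model for the user's language.
class OfflineRecognizer(context: Context) {

    private val recognizer = SpeechRecognizer.createSpeechRecognizer(context)

    fun startListening(onResult: (List<String>) -> Unit) {
        recognizer.setRecognitionListener(object : RecognitionListener {
            override fun onResults(results: Bundle?) {
                // Candidate transcriptions, best hypothesis first.
                val candidates = results
                    ?.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                    .orEmpty()
                onResult(candidates)
            }
            override fun onError(error: Int) { /* handle errors */ }
            // Remaining callbacks elided for brevity.
            override fun onReadyForSpeech(params: Bundle?) {}
            override fun onBeginningOfSpeech() {}
            override fun onRmsChanged(rmsdB: Float) {}
            override fun onBufferReceived(buffer: ByteArray?) {}
            override fun onEndOfSpeech() {}
            override fun onPartialResults(partialResults: Bundle?) {}
            override fun onEvent(eventType: Int, params: Bundle?) {}
        })

        val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
            putExtra(
                RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM
            )
            // API 23+: ask the recognizer to stay on-device.
            putExtra(RecognizerIntent.EXTRA_PREFER_OFFLINE, true)
        }
        recognizer.startListening(intent)
    }

    fun destroy() = recognizer.destroy()
}
```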

False positives vs. false negatives

Since ML-based systems are inherently probabilistic, they can generate “wrong” predictions in the form of false positives and false negatives. As we create ML-based applications, we need to decide which behavior to optimize for. Within the Read Along experience, a false positive means the child has misread a word, but the system fails to recognize this and does not provide corrective feedback. A false negative, on the other hand, is when a child reads a word correctly, but Read Along predicts the word was read incorrectly and prompts the child to try again. “We spent a little time to understand what really happens when the child gets false positive and false negative, and what impact does it have on the psychology, and also on the reading experience,” said Eshita Priyadarshini, Read Along’s UX Research Lead. “When the child is reading, we don't really tell him, ‘Oh, you got that word wrong. Why don't you read it again?’” By unpacking the impact of false positives and false negatives on the user, the Read Along team decided to optimize for recall, thereby accepting more false positives, which results in a user experience that feels more encouraging for children.
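This trade-off can be made concrete with a decision threshold. In the sketch below (all names, scores, and numbers are invented for illustration; this is not Read Along’s model), “the child read the word correctly” is the positive class, and lowering the acceptance threshold raises recall (fewer discouraging “try again” prompts) at the cost of precision (more missed corrections):

```kotlin
// Hypothetical illustration of the recall-vs-precision trade-off.
// `confidence` stands in for a recognizer score; the threshold is the knob.
data class Counts(var tp: Int = 0, var fp: Int = 0, var fn: Int = 0, var tn: Int = 0)

fun evaluate(scores: List<Pair<Double, Boolean>>, threshold: Double): Counts {
    // scores: (confidence the word was read correctly, ground truth)
    val c = Counts()
    for ((confidence, actuallyCorrect) in scores) {
        val predictedCorrect = confidence >= threshold
        when {
            predictedCorrect && actuallyCorrect -> c.tp++
            predictedCorrect && !actuallyCorrect -> c.fp++ // missed correction
            !predictedCorrect && actuallyCorrect -> c.fn++ // wrong "try again"
            else -> c.tn++
        }
    }
    return c
}

fun recall(c: Counts) = c.tp.toDouble() / (c.tp + c.fn)
fun precision(c: Counts) = c.tp.toDouble() / (c.tp + c.fp)

fun main() {
    // Made-up scores paired with whether the word was actually read correctly.
    val data = listOf(0.9 to true, 0.7 to true, 0.6 to false, 0.4 to true, 0.3 to false)
    // Lowering the threshold accepts more readings as correct:
    // recall rises, precision falls.
    for (t in listOf(0.8, 0.5, 0.2)) {
        val c = evaluate(data, t)
        println("threshold=$t recall=${"%.2f".format(recall(c))} precision=${"%.2f".format(precision(c))}")
    }
}
```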

Screenshot of People + AI Guidebook

To learn more about how the Read Along team made ML product decisions, watch the full Centered episode. For more guidance on how your cross-functional team (spanning UX, PM, and engineering) can come together to design ML-based applications, check out the People + AI Guidebook.