Posted by Platform Hardening Team
In Android 11 we continue to increase the security of the Android platform. We have moved to safer default settings, migrated to a hardened memory allocator, and expanded the use of compiler mitigations that defend against classes of vulnerabilities and frustrate exploitation techniques.
We’ve enabled forms of automatic memory initialization in both Android 11’s userspace and the Linux kernel. Uninitialized memory bugs occur in C/C++ when memory is used without having first been initialized to a known safe value. These types of bugs can be confusing, and even the term “uninitialized” is misleading. Uninitialized may seem to imply that a variable has a random value. In reality it isn’t random: it has whatever value was previously placed there. This value may be predictable or even attacker controlled. Unfortunately this behavior can result in serious vulnerabilities, such as information disclosure bugs (including ASLR bypasses) or control flow hijacking via a stack or heap spray. Another possible side effect of using uninitialized values is that advanced compiler optimizations may transform the code unpredictably, as this is considered undefined behavior by the relevant C standards.
In practice, uses of uninitialized memory are difficult to detect. Such errors may sit in the codebase unnoticed for years if the memory happens to be initialized with some "safe" value most of the time. When uninitialized memory results in a bug, it is often challenging to identify the source of the error, particularly if it is rarely triggered. Eliminating an entire class of such bugs is a lot more effective than hunting them down individually. Automatic stack variable initialization relies on a feature in the Clang compiler which allows local variables to be initialized with either zeros or a pattern. Initializing to zero provides safer defaults for strings, pointers, indexes, and sizes. The downsides of zero init are less-safe defaults for return values, and exposing fewer bugs where the underlying code relies on zero initialization. Pattern initialization tends to expose more bugs and is generally safer for return values, but less safe for strings, pointers, indexes, and sizes.
Automatic stack variable initialization is enabled throughout the entire Android userspace. During the development of Android 11, we initially selected pattern initialization in order to uncover bugs relying on zero init, and then moved to zero init after a few months for increased safety. Platform OS developers can build with `AUTO_PATTERN_INITIALIZE=true m` if they want help uncovering bugs relying on zero init.
Automatic stack and heap initialization were recently merged in the upstream Linux kernel. We have made these features available on earlier versions of Android’s kernel, including 4.14, 4.19, and 5.4. These features enforce initialization of local variables and heap allocations with known values that cannot be controlled by attackers and are useless when leaked. Both features result in a performance overhead, but also prevent undefined behavior, improving both stability and security.
For kernel stack initialization we adopted the CONFIG_INIT_STACK_ALL configuration option from upstream Linux. It currently relies on Clang pattern initialization for stack variables, although this is subject to change in the future.
Heap initialization is controlled by two boot-time flags, init_on_alloc and init_on_free, with the former wiping freshly allocated heap objects with zeroes (think s/kmalloc/kzalloc in the whole kernel) and the latter doing the same before the objects are freed (this helps to reduce the lifetime of security-sensitive data). init_on_alloc is a lot more cache-friendly and has smaller performance impact (within 2%), therefore it has been chosen to protect Android kernels.
In Android 11, Scudo replaces jemalloc as the default native allocator for Android. Scudo is a hardened memory allocator designed to help detect and mitigate memory corruption bugs in the heap.
Scudo does not fully prevent exploitation but it does add a number of sanity checks which are effective at strengthening the heap against some memory corruption bugs.
It also proactively organizes the heap in a way that makes exploitation of memory corruption more difficult, by reducing the predictability of the allocation patterns, and separating allocations by sizes.
In our internal testing, Scudo has already proven its worth by surfacing security and stability bugs that were previously undetected.
Android 11 introduces GWP-ASan, an in-production heap memory safety bug detection tool that's integrated directly into the native allocator Scudo. GWP-ASan probabilistically detects and provides actionable reports for heap memory safety bugs when they occur, works on 32-bit and 64-bit processes, and is enabled by default for system processes and system apps.
GWP-ASan is also available for developer applications via a one line opt-in in an app's AndroidManifest.xml, with no complicated build support or recompilation of prebuilt libraries necessary.
Continuing work on adopting the Arm Memory Tagging Extension (MTE) in Android, Android 11 includes support for kernel HWASAN, also known as Software Tag-Based KASAN. Userspace HWASAN is supported since Android 10.
KernelAddressSANitizer (KASAN) is a dynamic memory error detector designed to find out-of-bound and use-after-free bugs in the Linux kernel. Its Software Tag-Based mode is a software implementation of the memory tagging concept for the kernel. Software Tag-Based KASAN is available in 4.14, 4.19 and 5.4 Android kernels, and can be enabled with the CONFIG_KASAN_SW_TAGS kernel configuration option. Currently Tag-Based KASAN only supports tagging of slab memory; support for other types of memory (such as stack and globals) will be added in the future.
Compared to Generic KASAN, Tag-Based KASAN has significantly lower memory requirements (see this kernel commit for details), which makes it usable on dogfood testing devices. Another use case for Software Tag-Based KASAN is checking the existing kernel code for compatibility with memory tagging. As Tag-Based KASAN is based on similar concepts as the future in-kernel MTE support, making sure that kernel code works with Tag-Based KASAN will ease in-kernel MTE integration in the future.
We’ve continued to expand the compiler mitigations that have been rolled out in prior releases as well. This includes adding both integer and bounds sanitizers to some core libraries that were lacking them. For example, the libminikin fonts library and the libui rendering library are now bounds sanitized. We’ve hardened the NFC stack by implementing both integer overflow sanitizer and bounds sanitizer in those components.
In addition to hardening mitigations like sanitizers, we also continue to expand our use of CFI as an exploit mitigation. CFI has been enabled in Android’s networking daemon, DNS resolver, and more of our core JavaScript libraries like libv8 and the PacProcessor.
Thank you to Jeff Vander Stoep, Alexander Potapenko, Stephen Hines, Andrey Konovalov, Mitch Phillips, Ivan Lozano, Kostya Kortchinsky, Christopher Ferris, Cindy Zhou, Evgenii Stepanov, Kevin Deus, Peter Collingbourne, Elliott Hughes, Kees Cook and Ken Chen for their contributions to this post.
Posted by: Charmaine D’Silva, Product Lead, Android Privacy and Framework; Narayan Kamath, Engineering Lead, Android Privacy and Framework; Stephan Somogyi, Product Lead, Android Security; Sudhi Herle, Engineering Lead, Android Security
This blog post is part of a weekly series for #11WeeksOfAndroid. Each week we’re diving into a key area of Android so you don’t miss anything. This week, we spotlighted Privacy and Security; here’s a look at what you should know.
Privacy and security is core to how we design Android, and with every new release we increase our investment in this space. Android 11 continues to make important strides in these areas, and this week we’ll be sharing a series of updates and resources about Android privacy and security. But first, let’s take a quick look at some of the most important changes we’ve made in Android 11 to protect user privacy and make the platform more secure.
As shared in the “All things privacy in Android 11” video, we’re giving users even more control over sensitive permissions. Throughout the development of this release, we have engaged deeply and frequently with our developer community to design these features in a balanced way - amplifying user privacy while minimizing developer impact. Let’s go over some of these features:
One-time permission: In Android 10, we introduced a granular location permission that allows users to limit access to location only when an app is in use (aka foreground only). When presented with the new runtime permissions options, users choose foreground-only location more than 50% of the time. This demonstrated to us that users really wanted finer controls for permissions. So in Android 11, we’ve introduced one-time permissions that let users give an app access to the device microphone, camera, or location, just that one time. As an app developer, there are no changes that you need to make to your app for it to work with one-time permissions, and the app can request permissions again the next time the app is used. Learn more about building privacy-friendly apps with these new changes in this video.
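For readers who want to see what that looks like in code, here is a minimal Kotlin sketch using the AndroidX Activity Result API; the request code is identical whether the user grants the permission permanently or only for this one time (startCamera() and showCameraRationale() are hypothetical placeholders):

```kotlin
import android.Manifest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class CameraActivity : AppCompatActivity() {

    // Register the permission launcher. If the user taps "Only this time", the grant is
    // one-time; the app simply sees it as granted and may ask again on the next use.
    private val requestCameraPermission =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) startCamera() else showCameraRationale()
        }

    private fun onTakePhotoClicked() {
        requestCameraPermission.launch(Manifest.permission.CAMERA)
    }

    private fun startCamera() { /* hypothetical camera setup */ }
    private fun showCameraRationale() { /* hypothetical rationale UI */ }
}
```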
Background location: In Android 10 we added a background location usage reminder so users can see when apps are accessing device location in the background. Users who interacted with the reminder either downgraded or denied the location permission over 75% of the time. Many apps that requested background location could actually provide the same user experience by only accessing location while the app is visible to the user.
In Android 11, background location will require additional steps from the user beyond granting a runtime prompt. If your app needs background location, the system will ensure that the app first asks for foreground location. The app can then broaden its access to background location through a separate permission request, which will cause the system to take the user to Settings in order to complete the permission grant.
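As a rough sketch of that two-step flow in Kotlin (both permissions must also be declared in the manifest; startGeofencing() is a hypothetical placeholder):

```kotlin
import android.Manifest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class GeofenceActivity : AppCompatActivity() {

    // Step 2: ask for background access only after foreground access is granted.
    // On Android 11 this second request takes the user to Settings to finish the grant.
    private val requestBackground =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) startGeofencing()
        }

    // Step 1: ask for foreground location first.
    private val requestForeground =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) {
                requestBackground.launch(Manifest.permission.ACCESS_BACKGROUND_LOCATION)
            }
        }

    fun requestLocationAccess() {
        requestForeground.launch(Manifest.permission.ACCESS_FINE_LOCATION)
    }

    private fun startGeofencing() { /* hypothetical background-location work */ }
}
```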
In February, we announced that Google Play developers will need to get approval to access background location in their app to prevent misuse. We're giving developers more time to make changes and won't be enforcing the policy for existing apps until 2021. Check out this helpful video to find possible background location usage in your code.
Permissions auto-reset: Most users tend to download and install over 60 apps on their device but interact with only a third of these apps on a regular basis. If users haven’t used an app that targets Android 11 for an extended period of time, the system will “auto-reset” all of the granted runtime permissions associated with the app and notify the user. The app can request the permissions again the next time the app is used. If you have an app that has a legitimate need to retain permissions, you can prompt users to turn this feature OFF for your app in Settings.
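A hedged sketch of that opt-out prompt, assuming the API 30 Intent.ACTION_AUTO_REVOKE_PERMISSIONS action and the PackageManager.isAutoRevokeWhitelisted check; only apps with a legitimate need should do this:

```kotlin
import android.app.Activity
import android.content.Intent
import android.net.Uri
import android.os.Build

// Send the user to the auto-reset setting for this app so they can exempt it.
fun requestAutoResetExemption(activity: Activity) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R &&
        !activity.packageManager.isAutoRevokeWhitelisted
    ) {
        activity.startActivity(
            Intent(
                Intent.ACTION_AUTO_REVOKE_PERMISSIONS,
                Uri.fromParts("package", activity.packageName, null)
            )
        )
    }
}
```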
Data access auditing APIs: Android encourages developers to limit their access to sensitive data, even if they have been granted permission to do so. In Android 11, developers will have access to new APIs that will give them more transparency into their app’s usage of private and protected data. The APIs will enable apps to track when the system records the app’s access to private user data. Learn more about new tools in Android 11 to make your apps more private and stable.
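Here is a minimal sketch of the auditing callback in Kotlin; it simply logs each access, but a real app might attach attribution tags or record stack traces:

```kotlin
import android.app.AppOpsManager
import android.app.AsyncNotedAppOp
import android.app.SyncNotedAppOp
import android.content.Context
import android.os.Build
import android.util.Log

// Install a data access auditing callback (Android 11+). The system invokes it whenever
// this app, or code running on its behalf, accesses protected data such as location.
fun installDataAccessAuditing(context: Context) {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.R) return
    val appOps = context.getSystemService(AppOpsManager::class.java) ?: return
    appOps.setOnOpNotedCallback(context.mainExecutor, object : AppOpsManager.OnOpNotedCallback() {
        override fun onNoted(op: SyncNotedAppOp) = Log.d("DataAudit", "Sync access: ${op.op}")
        override fun onSelfNoted(op: SyncNotedAppOp) = Log.d("DataAudit", "Self access: ${op.op}")
        override fun onAsyncNoted(op: AsyncNotedAppOp) = Log.d("DataAudit", "Async access: ${op.op}")
    })
}
```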
Scoped Storage: In Android 10, we introduced scoped storage which provides a filtered view into external storage, giving access to app-specific files and media collections. This change protects user privacy by limiting broad access to shared storage in many ways including changing the storage permission to only give read access to photos, videos and music and improving app storage attribution. Since Android 10, we’ve incorporated developer feedback and made many improvements to help developers adopt scoped storage, including: updated permission UI to enhance user experience, direct file path access to media to improve compatibility with existing libraries, updated APIs for modifying media, Manage External Storage permission to enable select use cases that need broad files access, and protected external app directories. In Android 11, scoped storage will be mandatory for all apps that target API level 30. Many developers, like Viber, have made progress to adopt the new storage paradigm. Refer to this FAQ for common implementation questions. Learn more in this video and check out the developer documentation for further details.
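For a feel of what scoped storage means in practice, here is a small Kotlin sketch that reads shared images through MediaStore instead of raw file paths; no broad storage access is involved:

```kotlin
import android.content.ContentResolver
import android.content.ContentUris
import android.net.Uri
import android.provider.MediaStore

// Query the shared image collection via MediaStore and return content URIs,
// newest first. Individual items are then opened through the ContentResolver.
fun queryRecentImages(resolver: ContentResolver): List<Uri> {
    val collection = MediaStore.Images.Media.EXTERNAL_CONTENT_URI
    val projection = arrayOf(MediaStore.Images.Media._ID, MediaStore.Images.Media.DISPLAY_NAME)
    val uris = mutableListOf<Uri>()
    resolver.query(
        collection, projection, null, null,
        "${MediaStore.Images.Media.DATE_ADDED} DESC"
    )?.use { cursor ->
        val idColumn = cursor.getColumnIndexOrThrow(MediaStore.Images.Media._ID)
        while (cursor.moveToNext()) {
            uris += ContentUris.withAppendedId(collection, cursor.getLong(idColumn))
        }
    }
    return uris
}
```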
Google Play system updates: Google Play system updates were introduced with Android 10 as part of Project Mainline. Their main benefit is to increase the modularity and granularity of platform subsystems within Android so we can update core OS components without needing a full OTA update from your phone manufacturer. Earlier this year, thanks to Project Mainline, we were able to quickly fix a critical vulnerability in the media decoding subsystem. Android 11 adds new modules, and maintains the security properties of existing ones. For example, Conscrypt, which provides cryptographic primitives, maintained its FIPS validation in Android 11 as well.
BiometricPrompt API: Developers can now use the BiometricPrompt API to specify the biometric authenticator strength required by their app to unlock or access sensitive parts of the app. We are planning to add this to the Jetpack Biometric library to allow for backward compatibility and will share further updates on this work as it progresses.
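As a rough sketch of specifying authenticator strength with the Android 11 framework API (the title string and the follow-up action are placeholders):

```kotlin
import android.app.Activity
import android.hardware.biometrics.BiometricManager.Authenticators
import android.hardware.biometrics.BiometricPrompt
import android.os.CancellationSignal

// Require a Class 3 ("strong") biometric, or the device credential, before
// revealing sensitive content.
fun showStrongBiometricPrompt(activity: Activity) {
    val prompt = BiometricPrompt.Builder(activity)
        .setTitle("Unlock your notes")
        .setAllowedAuthenticators(
            Authenticators.BIOMETRIC_STRONG or Authenticators.DEVICE_CREDENTIAL)
        .build()

    prompt.authenticate(
        CancellationSignal(),
        activity.mainExecutor,
        object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                // Hypothetical follow-up: unlock the protected part of the app.
            }
        })
}
```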
Identity Credential API: This will unlock new use cases such as mobile driver’s licenses, National ID, and Digital ID. It’s being built by our security team to ensure this information is stored safely, using security hardware to secure and control access to the data, in a way that enhances user privacy as compared to traditional physical documents. We’re working with various government agencies and industry partners to make sure that Android 11 is ready for such digital-first identity experiences.
Thank you for your flexibility and feedback as we continue to build an increasingly more private and secure platform. You can learn about more features in the Android 11 Beta developer site. You can also learn about general best practices related to privacy and security.
Please follow Android Developers on Twitter and YouTube to catch helpful content and materials in this area all this week.
Resources
You can find the entire playlist of #11WeeksOfAndroid video content here, and learn more about each week here. We’ll continue to spotlight new areas each week, so keep an eye out and follow us on Twitter and YouTube. Thanks so much for letting us be a part of this experience with you!
Posted by Hoi Lam, Android Machine Learning
This blog post is part of a weekly series for #11WeeksOfAndroid. Each week we’re diving into a key area of Android so you don’t miss anything. Throughout this week, we covered various aspects of Android on-device machine learning (ML). Whatever stage of development you are at, whether you are just starting out or have an established app; whatever role you play in design, product, and engineering; and whatever your skill level, from beginner to expert, we have a wide range of ML tools for you.
“Focus on the user and all else will follow” is a Google mantra that becomes even more relevant in our machine learning age. Our Design Advocate, Di Dang, highlighted the importance of finding the unique intersection of user problems and ML strengths. Too often, teams are so keen on the idea of machine learning that they lose sight of their user needs.
Di outlined how the People + AI Guidebook can help you make ML product decisions and used the example of the Read Along app to illustrate topics like precision and recall, which are unique to ML design and development. Check out her interview with the Read Along team, together with your own team, for more inspiration.
When you decide that on-device machine learning is the solution, the easiest way to implement it will be through turnkey SDKs like ML Kit. Sophisticated Google-trained models and processing pipelines are offered through an easy to use interface in Kotlin / Java. ML Kit is designed and built for on-device ML: it works offline, offers enhanced privacy, unlocks high performance for real-time use cases and it is free. We recently made ML Kit a standalone SDK and it no longer requires a Firebase account. Just one line in your build.gradle file and you can start bringing ML functionality into your app.
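To give a sense of how little code is involved, here is a minimal Kotlin sketch that labels a bitmap with ML Kit's default on-device image labeling model (logging is a stand-in for whatever your app does with the results):

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

// Label an image with the default Google-trained on-device model.
// No Firebase project or network connection is required.
fun labelBitmap(bitmap: Bitmap) {
    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0)
    val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)
    labeler.process(image)
        .addOnSuccessListener { labels ->
            labels.forEach { label -> Log.d("MLKit", "${label.text}: ${label.confidence}") }
        }
        .addOnFailureListener { e -> Log.e("MLKit", "Labeling failed", e) }
}
```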
The team has also added new functionalities such as Jetpack lifecycle support and the option to use the face contour models via Google Play Services saving as much as 20MB in app size. Another much anticipated addition is the support for swapping Google models with your own for both Image Labeling as well as Object Detection and Tracking. This provides one of the easiest ways to add TensorFlow Lite models to your applications without interacting with ByteArray!
If the base model provided by ML Kit doesn’t quite fit the bill, what should developers do? The first port of call should be TensorFlow Hub, where ready-to-use TensorFlow Lite models from both Google and the wider community can be downloaded. From classifiers for 100,000 US supermarket products to tomato plant diseases, the choice is yours.
In addition to Firebase AutoML Vision Edge, you can also build your own model using TensorFlow Model Maker (image classification / text classification) with just a few lines of Python. Once you have a TensorFlow Lite model from either TensorFlow Hub, or the Model Maker, you can easily integrate it with your Android app using ML Kit Image Labelling or Object Detection and Tracking. If you prefer an open source solution, Android Studio 4.1 beta introduces ML model binding that helps wrap around the TensorFlow Lite model with an easy to use Kotlin / Java wrapper. Adding a custom model to your Android app has never been easier. Check out this blog for more details.
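As a hedged sketch of the ML Kit route with a custom model (the asset file name is a hypothetical placeholder for a model exported from Model Maker or downloaded from TensorFlow Hub):

```kotlin
import com.google.mlkit.common.model.LocalModel
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.custom.CustomImageLabelerOptions

// Wrap a bundled TensorFlow Lite classifier and run it through ML Kit Image Labeling,
// so the app never has to deal with ByteArrays or tensor shapes directly.
fun labelWithCustomModel(image: InputImage) {
    val localModel = LocalModel.Builder()
        .setAssetFilePath("flower_classifier.tflite")   // hypothetical asset
        .build()
    val options = CustomImageLabelerOptions.Builder(localModel)
        .setConfidenceThreshold(0.5f)
        .setMaxResultCount(3)
        .build()
    ImageLabeling.getClient(options).process(image)
        .addOnSuccessListener { labels ->
            labels.forEach { label ->
                android.util.Log.d("MLKit", "${label.text}: ${label.confidence}")
            }
        }
}
```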
From the examples of the Android Developer Challenge winners, it is obvious that on-device machine learning has come of age and ML functionalities once reserved for the cloud or supercomputers are now available on your Android phone. Take a step forward with us by trying out our codelabs of the day.
Also check out the ML Week learning pathway and take the quiz to get your very own ML badge.
Android on-device machine learning is a rapidly evolving platform. If you have any enhancement requests or feedback on how it could be improved, please let us know, together with your use case (TensorFlow Lite / ML Kit). The time for on-device ML is now.
Posted by Di Dang, Design Advocate
From our Machine Learning-themed week together, we’ve delved into an ML Kit x CameraX Codelab, and we learned how to train your own custom models and integrate them in your Android app. In addition to the technical considerations that go into using ML, it’s important that we design our ML-based apps in a way that enables our users to feel in control of the ML technology, and not the other way around. To help product creators understand some best practices for ML product decisions, the PAIR team published the People + AI Guidebook at Google I/O last year. Let’s take a look at some ML design considerations you can apply in your Android apps by learning from the example of Read Along.
Google recently launched Read Along, an Android app that uses on-device ML and voice UI to help children learn to read anytime, anywhere, using just their voice. According to the UN Division of Sustainable Development Goals, more than 50% of children worldwide are not achieving minimum proficiency in reading. First launched in India as “Bolo”, the “Read Along” app is now available globally. We recently went behind the scenes with the Read Along team in this episode of Centered, to learn how they made an ML- and voice-based app to improve child literacy.
Since ML-based systems are inherently probabilistic, they can generate “wrong” predictions in the form of false positives and false negatives. As we create ML-based applications, we need to decide which behavior to optimize for. Within the Read Along experience, a false positive denotes that the child has misread a word, though the system fails to recognize this and does not provide corrective feedback. On the other hand, an example of a false negative is when a child reads a word correctly, but Read Along predicts the word was read incorrectly, and thus prompts the child to try again. “We spent a little time to understand what really happens when the child gets false positive and false negative, and what impact does it have on the psychology, and also on the reading experience,” said Eshita Priyadarshini, Read Along’s UX Research Lead. “When the child is reading, we don't really tell him, ‘Oh, you got that word wrong. Why don't you read it again?’” By unpacking the impact of false positives and false negatives on the user, the Read Along team decided to optimize for recall, thereby increasing the number of false positives, which results in a user experience that feels more encouraging for children.
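As a quick refresher on the terms as they are generally defined: precision is the share of words the system accepts as correctly read that really were read correctly, true positives / (true positives + false positives), while recall is the share of correctly read words that the system accepts, true positives / (true positives + false negatives). Optimizing for recall therefore means rarely flagging a correctly read word as wrong, at the cost of occasionally letting a misread word slip through.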
To learn more about how the Read Along team made ML product decisions, check out the full Centered episode. For more guidance on how your cross-functional team (spanning UX, PM, and engineering) can come together to design ML-based applications, check out the People + AI Guidebook.
Posted by Xiaodao Wu, Developer Advocate
With the rise in quality content that’s keeping us glued to the big screen, it’s no surprise watch time on the TV continues to grow. As users spend more time in their living rooms, they are also looking to get more from their smart TVs and streaming devices. To help developers meet these needs, we are always working to support the latest Android features on Android TV.
Today, we are releasing an Android 11 Developer Preview for Android TV with many privacy, performance, accessibility and connectivity features. More information can be found on the Android 11 Developer Preview web page.
The Android 11 Developer Preview on TV is for developers only, not for consumer use. The image is for ADT-3 developer devices only and is available by manual download and flash. All user data on the ADT-3 device will be wiped after flashing. Once the device has been flashed to Android 11, you will not be able to go back to the previous Android 10 build.
Downloading of the system image and use of the device software is subject to the Google Terms of Service. By downloading, you agree to the Google Terms of Service and Privacy Policy. Your downloading of the system image and use of the device software may also be subject to certain third-party terms of service, which can be found in Settings or as otherwise provided.
The flash-all script uses fastboot and adb tools to upgrade the system. The latest version of fastboot is recommended; developers can find it in the Android SDK Platform-Tools package.
We encourage you to test your Android TV app on the Android 11 Developer Preview. If you have any feedback, please reach out to us. We’d love to hear from you.
Tune in to the Android Beyond Phones week of the #11WeeksOfAndroid on August 10th for even more developer resources from Android TV.
Yesterday, we talked about turnkey machine learning (ML) solutions with ML Kit. But what if that doesn’t completely address your needs and you need to tweak it a little? Today, we will discuss how to find alternative models, and how to train and use custom ML models in your Android app.
Crop disease models from the wider research community available on tfhub.dev
If the turnkey ML solutions don't suit your needs, TensorFlow Hub should be your first port of call. It is a repository of ML models from Google and the wider research community. The models on the site are ready for use in the cloud, in a web-browser or in an app on-device. For Android developers, the most exciting models are the TensorFlow Lite (TFLite) models that are optimized for mobile.
In addition to key vision models such as MobileNet and EfficientNet, the repository also boasts models powered by the latest research.
Many of these solutions were previously only available in the cloud, as the models are too large and too power intensive to run on-device. Today, you can run them on Android on-device, offline and live.
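As a rough sketch of the lowest-level path, running such a model with the TensorFlow Lite Interpreter directly in Kotlin (the model file name and the input and output shapes are hypothetical; check the model's page on tfhub.dev for the real ones):

```kotlin
import android.content.Context
import org.tensorflow.lite.Interpreter
import java.io.FileInputStream
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

// Memory-map a bundled TFLite model from assets and run one inference.
fun runTfliteModel(context: Context, input: Array<FloatArray>): Array<FloatArray> {
    val modelBuffer: MappedByteBuffer = context.assets.openFd("model.tflite").use { fd ->
        FileInputStream(fd.fileDescriptor).channel
            .map(FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength)
    }
    val output = Array(1) { FloatArray(1001) }   // hypothetical 1001-class output
    val interpreter = Interpreter(modelBuffer)
    interpreter.run(input, output)
    interpreter.close()
    return output
}
```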
Besides the large repository of base models, developers can also train their own models. Developer-friendly tools are available for many common use cases. In addition to Firebase’s AutoML Vision Edge, the TensorFlow team launched TensorFlow Lite Model Maker earlier this year to give developers more choice over the base model and to support more use cases. TensorFlow Lite Model Maker currently supports two common ML tasks: image classification and text classification.
The TensorFlow Lite Model Maker can run on your own developer machine or in Google Colab online machine learning notebooks. Going forward, the team plans to improve the existing offerings and to add new use cases.
New TFLite Model import screen in Android Studio 4.1 beta
Once you have selected a model or trained your model, there are new easy-to-use tools to help you integrate it into your Android app without having to convert everything into ByteArrays. The first new tool is ML Model binding with Android Studio 4.1. This lets developers import any TFLite model, read the input / output signature of the model, and use it with just a few lines of code that call the open source TensorFlow Lite Android Support Library.
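The generated wrapper depends on the model you import, so the following Kotlin sketch is hypothetical: FlowerModel stands in for whatever class Android Studio generates from your TFLite file and its metadata.

```kotlin
import android.content.Context
import android.graphics.Bitmap
import android.util.Log
import org.tensorflow.lite.support.image.TensorImage

// Use a hypothetical wrapper class generated by ML Model Binding for a flower classifier.
fun classifyFlower(context: Context, bitmap: Bitmap) {
    val model = FlowerModel.newInstance(context)              // generated wrapper (hypothetical)
    val outputs = model.process(TensorImage.fromBitmap(bitmap))
    val best = outputs.probabilityAsCategoryList.maxByOrNull { it.score }
    Log.d("TFLite", "Top label: ${best?.label}")
    model.close()
}
```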
Another way to implement a TensorFlow Lite model is via ML Kit. Starting in June, ML Kit no longer requires a Firebase project for on-device functionality. In addition, the image classification and object detection and tracking (ODT) APIs support custom models. The latter ODT offering is especially useful in use-cases where you need to separate out objects from a busy scene.
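A hedged sketch of that combination, attaching a custom classifier to ML Kit object detection and tracking (the asset file name is a hypothetical placeholder):

```kotlin
import com.google.mlkit.common.model.LocalModel
import com.google.mlkit.vision.objects.ObjectDetection
import com.google.mlkit.vision.objects.custom.CustomObjectDetectorOptions

// Detected objects in a live camera stream are classified with your own TFLite model.
val customModel = LocalModel.Builder()
    .setAssetFilePath("product_classifier.tflite")   // hypothetical asset
    .build()

val detectorOptions = CustomObjectDetectorOptions.Builder(customModel)
    .setDetectorMode(CustomObjectDetectorOptions.STREAM_MODE)
    .enableClassification()
    .setMaxPerObjectLabelCount(1)
    .build()

val objectDetector = ObjectDetection.getClient(detectorOptions)
// objectDetector.process(inputImage) then returns detected objects labeled by your model.
```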
So how should you choose between these three solutions? If you are trying to detect a product on a busy supermarket shelf, ML Kit object detection and tracking can help your user select a specific product for processing. The API then performs image classification on just the part of the image that contains the product, which results in better detection performance. On the other hand, if the scene or the object you are trying to detect takes up most of the input image, for example, a landmark such as Big Ben, using ML Model binding or the ML Kit image classification API might be more appropriate.
TensorFlow Hub bird detection model with ML Kit Object Detection & Tracking API
Finding, building and using custom models on Android has never been easier. As both Android and TensorFlow teams increase the coverage of machine learning use cases, please let us know how we can improve these tools for your use cases by filing an enhancement request with TensorFlow Lite or ML Kit.
Tomorrow, we will take a step back and focus on how to appropriately use and design for a machine learning first Android app. The content will be appropriate for the entire development team, so bring your product manager and designers along. See you next time.
Posted by Christiaan Prins, Product Manager, ML Kit and Shiyu Hu, Tech Lead Manager, ML Kit
Two years ago at I/O 2018 we introduced ML Kit, making it easier for mobile developers to integrate machine learning into their apps. Today, more than 25,000 applications on Android and iOS make use of ML Kit’s features. Now, we are introducing some changes that will make it even easier to use ML Kit. In addition, we have a new feature and a set of improvements we’d like to discuss.
ML Kit's APIs are built to help you tackle common challenges in the Vision and Natural Language domains. We make it easy to recognize text, scan barcodes, track and classify objects in real-time, do translation of text, and more.
The original version of ML Kit was tightly integrated with Firebase, and we heard from many of you that you wanted more flexibility when implementing it in your apps. As a result, we are now making all the on-device APIs available in a new standalone ML Kit SDK that no longer requires a Firebase project. You can still use both ML Kit and Firebase to get the best of both products if you choose to.
With this change, ML Kit is now fully focused on on-device machine learning, giving you access to the unique benefits that on-device ML offers over cloud-based ML.
Naturally, you still get access to Google’s on-device models and processing pipelines, all accessible through easy-to-use APIs, and offered at no cost.
All ML Kit resources can now be found on our new website where we made it a lot easier to access sample apps, API reference docs and our community channels that are there to help you if you have questions.
If you are using ML Kit for Firebase’s on-device APIs in your app today, we recommend you to migrate to the new standalone ML Kit SDK to benefit from new features and updates. For more information and step-by-step instructions to update your app, please follow our Migration guide. The cloud-based APIs, model deployment and AutoML Vision Edge remain available through Firebase Machine Learning.
Apart from making ML Kit easier to use, developers have also asked whether we can ship ML Kit through Google Play Services, resulting in a smaller app footprint and allowing the model to be reused between apps. Apart from Barcode scanning and Text recognition, we have now added Face detection / contour (model size: 20MB) to the list of APIs that support this functionality.
```groovy
// Face detection / Face contour model
// Delivered via Google Play Services outside your app's APK…
implementation 'com.google.android.gms:play-services-mlkit-face-detection:16.0.0'

// …or bundled with your app's APK
implementation 'com.google.mlkit:face-detection:16.0.0'
```
Android Jetpack Lifecycle support has been added to all APIs. Developers can use addObserver to automatically manage teardown of ML Kit APIs as the app goes through screen rotation or closure by the user / system. This makes CameraX integration easier. With this release, we are also recommending that developers adopt CameraX in their apps due to the ease of integration and image quality improvements (compared to Camera1) on a wide range of devices.
```kotlin
// ML Kit now supports Lifecycle
val recognizer = TextRecognizer.newInstance()
lifecycle.addObserver(recognizer)

// ...

// Just like CameraX
val camera = cameraProvider.bindToLifecycle(
    /* lifecycleOwner= */ this, cameraSelector, previewUseCase, analysisUseCase)
```
For an overview of all recent changes, check out the release notes for the new SDK.
To help you get started with the new ML Kit and its support for CameraX, we have created this code lab to Recognize, Identify Language and Translate text. If you have any questions regarding this code lab, please raise them at StackOverflow and tag it with [google-mlkit]. Our team will monitor this.
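As a rough sketch of the two natural-language steps in that flow, identifying the language of a recognized string and translating it to English on-device (logging stands in for your app's handling of the result):

```kotlin
import android.util.Log
import com.google.mlkit.nl.languageid.LanguageIdentification
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

// Identify the language of some recognized text, then translate it to English.
fun identifyAndTranslate(text: String) {
    LanguageIdentification.getClient().identifyLanguage(text)
        .addOnSuccessListener { languageCode ->
            if (languageCode == "und") return@addOnSuccessListener   // language not recognized
            val source = TranslateLanguage.fromLanguageTag(languageCode)
                ?: return@addOnSuccessListener
            val translator = Translation.getClient(
                TranslatorOptions.Builder()
                    .setSourceLanguage(source)
                    .setTargetLanguage(TranslateLanguage.ENGLISH)
                    .build())
            translator.downloadModelIfNeeded().addOnSuccessListener {
                translator.translate(text).addOnSuccessListener { translated ->
                    Log.d("MLKit", "Translated: $translated")
                }
            }
        }
}
```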
Through our early access program, developers have an opportunity to partner with the ML Kit team and get access to upcoming features. Two new APIs are now available as part of this program.
If you are interested, head over to our early access page for details.
ML Kit's turnkey solutions are built to help you tackle common challenges. However, if you need a more tailored solution, one that requires custom models, you previously had to build an implementation from scratch. To help, we are now providing the option to swap out the default Google models with a custom TensorFlow Lite model. We’re starting with the Image Labeling and Object Detection and Tracking APIs, which now support custom image classification models.
Tomorrow, we will dive a bit deeper into how to find or train a TensorFlow Lite model and use it either with ML Kit, or with Android Studio’s new ML binding functionality.
Subscriptions continue to be one of the fastest growing business models for apps in Google Play. As your subscription business evolves and becomes more sophisticated, our platform continues to evolve to better support your needs. Today we’re excited to tell you more about the new subscription capabilities we announced at the Android 11 Beta Launch, including promotional codes to help you acquire new subscribers and new opportunities to remind users of your value and win back churned users. Many of these capabilities are built on top of the Play Billing Library version 3.
In addition to the new capabilities, we are also making improvements to our existing platform. Over the past few years, we have launched many features, such as account hold, restore, and pause, which have been highly effective in reducing your voluntary and involuntary churn. We want to ensure that everyone can take advantage of them, which is why we are planning on changing the default settings for these features from optional to either mandatory or on by default starting on November 1, 2020. Additional details on these features and implementation requirements can be found at the end of this post.
Here’s everything that is changing about the subscriptions platform:
Last year at I/O 2019, we launched subscription one-time promotion codes, unique alphanumeric codes that can be distributed to individual users for redemption. Now, we have launched a new frictionless redemption flow which allows users to easily redeem the code, purchase the subscription, and install the app in the Play store in a few simple steps. This greatly simplifies the user experience by reducing the friction users go through to use your code. Since this subscription is started outside of your app, it is only available to developers who are using Billing Library 2.0 or higher.
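One practical consequence for your code: because a promo-code redemption can start the subscription outside your app, you should refresh purchases whenever the app returns to the foreground as well as listening for in-app purchase updates. A hedged sketch with Play Billing Library 3 (entitlement verification and acknowledgement are left as hypothetical placeholders):

```kotlin
import android.content.Context
import com.android.billingclient.api.BillingClient
import com.android.billingclient.api.BillingClientStateListener
import com.android.billingclient.api.BillingResult
import com.android.billingclient.api.Purchase
import com.android.billingclient.api.PurchasesUpdatedListener

class SubscriptionManager(context: Context) : PurchasesUpdatedListener {

    private val billingClient = BillingClient.newBuilder(context)
        .setListener(this)
        .enablePendingPurchases()
        .build()

    // Call from onResume: picks up purchases completed outside the app,
    // such as promo-code redemptions started from the Play Store.
    fun connectAndRefresh() {
        billingClient.startConnection(object : BillingClientStateListener {
            override fun onBillingSetupFinished(result: BillingResult) {
                if (result.responseCode == BillingClient.BillingResponseCode.OK) {
                    billingClient.queryPurchases(BillingClient.SkuType.SUBS)
                        .purchasesList?.forEach(::grantEntitlement)
                }
            }
            override fun onBillingServiceDisconnected() { /* retry with backoff */ }
        })
    }

    // Purchases made while the app is in the foreground arrive here.
    override fun onPurchasesUpdated(result: BillingResult, purchases: MutableList<Purchase>?) {
        purchases?.forEach(::grantEntitlement)
    }

    private fun grantEntitlement(purchase: Purchase) { /* hypothetical: verify and acknowledge */ }
}
```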
Finally, we’ve heard your feedback that requiring users to opt in to subscription price decreases was too restrictive. We are happy to announce that subscription price decreases will no longer require users to take action to opt in to keep their subscription. Users will be notified of an upcoming price decrease and be able to see the upcoming change in the Google Play subscriptions center.
Over the past few years, our platform has made strides in helping you keep your subscribers, through features aimed at decreasing both voluntary churn and involuntary churn (churn due to payment failure). For example, account hold has helped developers achieve 8% lower involuntary churn and 35% higher payment decline recovery rate compared to developers without account hold. Although these features are effective, retention may not be something that you are thinking about when starting out for the first time.
That’s why we are announcing updated defaults for several subscriptions features that have been up until now optional, which will take effect on November 1, 2020.
You may have to make modifications to your app or your server to handle these new features.
Although not every feature will require you to make engineering changes, we highly recommend that you test each feature before November 1. To make the transition easier, Google has enabled Account Hold, Pause, Restore, and Resubscribe for all license test accounts. Learn more about testing for subscriptions.
Posted by Jacob Lehrbaum, Director of Developer Relations, Android
Developers like you have always played an important role in Android innovation. Over 10 years ago, when we first launched the Android SDK, we also announced the Android Developer Challenge to reward model apps and highlight new ways of solving user problems. As Android pushes the boundaries of machine learning, 5G, foldables, and more, developers continue to help shape these new frontiers. To celebrate this work, we revived the challenge in 2019, with a focus on “Helpful Innovation,” powered by on-device machine learning.
We received hundreds of creative projects, and at the end of last year picked 10 winners who each combined a strong idea with a thirst to bring it to life. Since then, we’ve been working with those winners to help turn their ideas into reality. And today, we’re announcing the 10 winners: some are still at the beginning of their journey, but their apps are now ready for you to download and try out!
Increasingly, machine learning is becoming a more accessible tool to developers with limited to no background in the technology. In fact, for most of the winners of the Android Developer Challenge, this was their first foray into machine learning. That’s thanks in part to two key offerings from Google, which bring on-device machine learning into reach for millions of developers around the world.
The first is ML Kit. ML Kit brings Google’s on-device machine learning technologies to mobile app developers, so they can build customized and interactive experiences into their apps. This includes tools such as language translation, text recognition, object detection and more. Eskke, for instance, uses offline text recognition and barcode scanning from ML Kit so users can scan the QR code at a mobile money kiosk and quickly withdraw money. And MixPose uses ML Kit's forthcoming Pose detection API to detect each user’s yoga positions and movements, so teachers can provide feedback.
The other Google resource that many of the Android Dev Challenge winners used was TensorFlow Lite. This powerful machine learning framework can help run machine learning models on Android, iOS and IoT devices that would never normally be able to support them. Its set of tools can be used for all kinds of powerful neural network-related applications, from image detection to speech recognition, bringing the latest cutting-edge technology to the devices we carry around with us wherever we go. Trashly, for instance, uses a custom TensorFlow Lite model to report if an object is recyclable and how to recycle it.
Helpful innovation, such as the 10 winning apps in the Android Developer Challenge, has the potential to change the way we access, use, and interpret information, making it available when we need it, where we need it most. By working with these developers focused on helpful innovation, we hope to inspire the next wave of developers to unlock what’s possible with this new technology.
As we kick off the second week of #11WeeksOfAndroid, focused on Machine Learning, we will highlight new tools and resources available to Android developers. Here’s a taste of the rest of this week:
On Tuesday and Wednesday, we will also have a “codelab of the day” so get your Android Studio 4.1 beta today, block off an hour in your schedule and take this ML journey with us!
*The apps presented here are the projects of the developers individually, and not Google.
Posted by Dr. Stefan Frank, Senior Product Manager, Android System UI
This blog post is part of a weekly series for #11WeeksOfAndroid. Each week we’re diving into a key area of Android so you don’t miss anything. This week, we spotlight people & identity. Here's a look at what you should know.
One of the goals of Android 11 was making our phones more people-centric, because nothing matters more to people than connecting with loved ones. It is a core human need, especially under our current physical distancing constraints, when we need to be social more than ever before. Android 11 reimagines how we have conversations on our phones and adds new capabilities to help you maintain your identity across multiple devices.
We are announcing some new features in Android 11 that allow you to easily connect with your loved ones, friends, and business colleagues. At the center of this release are the Android Conversation Shortcut API and the Identity Services Library. These new tools empower instant connections: sharing funny pictures of your dogs with your best friend, telling your aunt about a tasty seafood recipe that you discovered, or congratulating an office colleague on her promotion. They also provide a new level of password management that makes it easier for your users to sign up and sign in.
One of our favorite features brings conversations from people that matter most to you right to the lock screen of your phone. You’ll easily recognize them by their avatar and instantly respond to your family, friends, or colleagues. These are people you truly want to connect to. We knew this feature would be useful, but the responses from our beta-testers made us smile. The decision to include the Conversation Shortcut API to improve the lives of our users was one of the easiest decisions for us to make in the release of Android 11.
Creating a bubble from an incoming notification and accessing the conversation from the bubble.
One of the new features building on top of shortcuts is the new conversation space at the top of your notifications. It focuses your attention on what matters most - your conversations. Right from these notifications the user can trigger another new feature in Android 11 - Bubbles. Bubbles are small representations of conversations floating over other content on the side of the screen, which can be expanded to allow quick access to conversations, without changing what you were doing on the device. They are super handy for carrying on conversations while using the device for other tasks.
The new conversations space showing how a conversation is marked as priority and will be displayed on the lock screen.
A long tap on conversation notifications enables the user to mark priority conversations in order to give special prominence to the most important people. Priority conversations will be displayed with their individual avatar right on the lock screen and move to the top of your notifications. They can even be set to break through do-not-disturb. Another use of the conversation shortcuts is that they are used as share targets in the system share sheet, which was already launched in Android 10.
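For app developers, the entry ticket to all of this is a long-lived conversation shortcut that the notification points at. Here is a minimal Kotlin sketch (ChatActivity, the "chats" channel, and the drawables are hypothetical placeholders, and pushDynamicShortcut assumes a recent androidx.core version):

```kotlin
import android.content.Context
import android.content.Intent
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat
import androidx.core.app.Person
import androidx.core.content.pm.ShortcutInfoCompat
import androidx.core.content.pm.ShortcutManagerCompat

// Publish a long-lived shortcut for the conversation and post a MessagingStyle
// notification tied to it, making the conversation eligible for the conversation
// space, the lock screen, and the share sheet.
fun postConversationNotification(context: Context, chatId: String, sender: Person, text: String) {
    val shortcut = ShortcutInfoCompat.Builder(context, chatId)
        .setLongLived(true)
        .setShortLabel(sender.name ?: "Chat")
        .setPerson(sender)
        .setIntent(Intent(context, ChatActivity::class.java).setAction(Intent.ACTION_VIEW))
        .build()
    ShortcutManagerCompat.pushDynamicShortcut(context, shortcut)

    val notification = NotificationCompat.Builder(context, "chats")
        .setSmallIcon(R.drawable.ic_chat)                     // hypothetical drawable
        .setShortcutId(chatId)                                // links notification to the shortcut
        .setStyle(
            NotificationCompat.MessagingStyle(sender)
                .addMessage(text, System.currentTimeMillis(), sender))
        .build()
    NotificationManagerCompat.from(context).notify(chatId.hashCode(), notification)
}
```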
Another focus of this week was identity. To tackle user and developer complexity that makes identity a challenge for developers, we've been working on One Tap and Block Store, part of our new Google Identity Services Library. One Tap is our new cross-platform sign-in mechanism for Web and Android, supporting and streamlining multiple types of credentials. Block Store is our new token-based sign-in mechanism that’s built on top of Backup and Restore. It allows you to keep your user signed in across Android devices.
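As a hedged sketch of what a One Tap request looks like in Kotlin (the server client ID and request code are placeholders, and the returned credential still needs to be handled in onActivityResult):

```kotlin
import android.app.Activity
import com.google.android.gms.auth.api.identity.BeginSignInRequest
import com.google.android.gms.auth.api.identity.Identity

// Ask One Tap to show sign-in options for Google accounts already used with this app.
fun beginOneTapSignIn(activity: Activity) {
    val request = BeginSignInRequest.builder()
        .setGoogleIdTokenRequestOptions(
            BeginSignInRequest.GoogleIdTokenRequestOptions.builder()
                .setSupported(true)
                .setServerClientId("YOUR_SERVER_CLIENT_ID")      // placeholder
                .setFilterByAuthorizedAccounts(true)
                .build())
        .build()

    Identity.getSignInClient(activity).beginSignIn(request)
        .addOnSuccessListener { result ->
            activity.startIntentSenderForResult(
                result.pendingIntent.intentSender, /* requestCode = */ 1001, null, 0, 0, 0)
        }
        .addOnFailureListener { /* fall back to your regular sign-in flow */ }
}
```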
We’re super excited about all these features, since they help all of us connect, communicate, and express ourselves to the people that we care about and to the apps we interact with, which is as important now as it’s ever been.
For a high level overview of the people centric functionality, we recommend that you check out the Android 11 launch highlight video on People. Earlier this week, we also launched a new talk on ‘Conversation Notifications’, where Artur describes how to implement the conversation shortcut and bubbles. There is also a great overview talk on the conversation additions and other System UI news from Dan. Finally, you can also listen to Chet’s podcasts where he interviews us on People and Bubbles.
If you’re interested in learning more about identity, we also published “Identity on Android: What’s new in sign-in” this week. In this video, Vishal explains the new libraries in the Google Identity Services Library: One Tap and Block Store.
Two of the teams that worked very early with us on these conversation specific topics are the Messenger team from Facebook and the direct messaging team from Twitter. Read the stories around both of these implementations here and here.
If you’re looking for an easy way to pick up the highlights of this week, check out the People and Identity pathway. A pathway is an ordered tutorial that allows users to complete a pre-defined module that culminates in a quiz. It may include codelabs, videos, articles and blog posts. A virtual badge is awarded to each user who passes the quiz. Test your knowledge of key takeaways about People and Identity to earn a limited edition badge.
Android 11 is the starting point of an ongoing focus on what matters most to users, people and conversations. Many of our partners in our ecosystem introduced amazing apps and services enabling these connections with people and conversations. We in Android want to elevate and surface these partners more prominently in order to support this goal. Thus if you are working on an app that fosters real time communication between people, we strongly encourage you to adopt the conversation shortcut based APIs for notifications, bubbles and sharing when targeting API 30 in order to put users’ conversations front and center and give them quick access to your app. The developer documentation can be found here.
For apps that handle user accounts, we encourage you to help our users avoid messy password hunting and forgotten credential processes by integrating One Tap to streamline credential management and Block Store to handle device updates. These integrations will work on phones back to Android M.
In the spirit of this week, I wish you meaningful and joyful connections with the people that matter to you and seamless experiences with your favorite apps. We hope you will help us in our journey of supporting these goals.
This blogpost is a collaboration between Google and Messenger from Facebook. Authored by Aaron Labiaga with support from Caleb Gomer and Samuel Guirado from Messenger.
Messenger is ubiquitous in the messaging app world and has pioneered the floating chat bubble. Bubbles help users keep conversations in view and accessible while multitasking, overlaying other UI elements in the foreground, and providing users easy access and visibility to their ongoing chats. Bubbles is one way Android 11 is making the platform more people-centric and expressive, reimagining the way we have conversations on our phones. Messenger’s early pioneering of the floating chat bubble, and the strong reception by users, helped lead to its native implementation in the framework.
The Bubbles API is built on top of the notifications API and, in Android 11, is exclusively focused on people. First introduced in Android 10, what was previously an opt-in feature is now on by default. To use bubbles, the developer must create BubbleMetadata, which is set on the notification. This metadata describes the Activity to launch when a bubble is clicked, along with various behaviors relevant to the expanded bubble. The Activity must be embeddable and resizable in order to be used in a bubble.
Notification bubbles are reserved for conversations with people in context: MessagingStyle notifications with a long-lived shortcut ID set. Please see the Bubbles code sample to learn how.
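That sample is the authoritative reference; the condensed Kotlin sketch below only illustrates the pieces this paragraph describes, with BubbleActivity, the "chats" channel, and the drawables as hypothetical placeholders:

```kotlin
import android.app.Notification
import android.app.PendingIntent
import android.app.Person
import android.content.Context
import android.content.Intent
import android.graphics.drawable.Icon

// Build a MessagingStyle notification that carries BubbleMetadata pointing at a
// resizable, embeddable BubbleActivity, tied to a long-lived conversation shortcut.
fun buildBubbleNotification(context: Context, shortcutId: String, sender: Person, message: String): Notification {
    val bubbleIntent = PendingIntent.getActivity(
        context, 0,
        Intent(context, BubbleActivity::class.java).setAction(Intent.ACTION_VIEW),
        PendingIntent.FLAG_UPDATE_CURRENT)

    val bubbleMetadata = Notification.BubbleMetadata.Builder(
            bubbleIntent, Icon.createWithResource(context, R.drawable.ic_avatar))
        .setDesiredHeight(600)
        .build()

    return Notification.Builder(context, "chats")
        .setSmallIcon(R.drawable.ic_chat)
        .setShortcutId(shortcutId)             // the long-lived conversation shortcut
        .setBubbleMetadata(bubbleMetadata)
        .setStyle(
            Notification.MessagingStyle(sender)
                .addMessage(message, System.currentTimeMillis(), sender))
        .build()
}
```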
The Messenger team shares their experience with the migration and prospect of the impact of the changes.
How was the migration to bubbles, technical challenges, scope, and impact on codebase?
Prior to Bubbles, Messenger used the SYSTEM_ALERT_WINDOW permission for its implementation of bubbles. It achieved our purpose, but hosting complex Android UI outside of Activities is challenging to implement and maintain. Using this natively supported API allowed us to build more traditional, Activity-based Android UI that works well in Bubbles and full screen. This new Bubbles-based chat experience is much simpler and more maintainable than our SAW-based one. We are excited that Android believes that it is a user experience that will help drive improvements in the conversation space.
Ensuring that bubble shortcuts were up-to-date with the latest state of the conversation thread was a technical challenge worth noting. Picture changed, conversation deleted, or contact blocked are events that require the bubble shortcut to be updated or even deleted. The Shortcuts API allows for easily registering/unregistering shortcuts and for querying and updating existing ones, which made this whole process very straightforward.
What are your future prospects on the impact of Messenger messages in the conversation space?
The conversation section will give our messages the right visibility. Given that conversation section ranks high in the notification drawer, we definitely want to be present in that space.
Bubbles are just one way that Android 11 puts people at the heart of the experience for Android; if you’re a messaging or chat app, you should consider using the Bubbles API to help your users as they multi-task. It’s great to see apps like Messenger navigate the openness of Android to create innovative new experiences, and we’re excited to make Bubbles a native experience in Android 11. For more information, please visit the Conversation API guidelines.