Posted by Janelle Kuhlman, Developer Relations Program Manager
For Women’s History Month, we’re celebrating a few of our Google Developer Experts. Meet Maryam Alhuthayfi, Android GDE. The GDE program team encourages qualified candidates who identify as women or non-binary to express interest in joining the community by completing this form.
Android GDE Maryam Alhuthayfi has loved programming since high school, when she learned programming in Visual Studio and basic website development.
“We didn't get much beyond that because there weren’t many Arabic resources,” she says. “That experience got me excited to dig deeper into technology. I wanted to know how the web functions, how software is made, and more about programming languages.”
Maryam studied computer science at university and majored in information systems. For her senior year graduation project, she and her team decided to build an Android application, her first experience with Android. She graduated with honors and landed a job as a web developer, but she kept thinking about getting back to being an Android developer.
She joined Women Techmakers in Saudi Arabia in 2019, when the group launched, to connect with, help, and support other women in tech. She got a job as an Android robotics developer and became a co-organizer of GDG Cloud Saudi, her local Google Developer Group. Now Maryam is a senior Android development specialist at Zain KSA, one of Saudi Arabia’s largest telecommunications companies, which she describes as “a dream come true,” and in January 2022, she became an Android GDE.
Maryam is the first Android GDE in the Middle East and the second in MENA. She contributes to the Android community by speaking about Android and Kotlin development in detail, and software development more generally. She maintains a blog and GitHub repository and gives numerous talks about Android development. She encourages Android developers to use Kotlin and Jetpack Compose, and she describes both as causing a major shift in her Android development path. She started the Kotlin Saudi User Group in 2020.
Maryam regularly mentors new Android developers and gives talks on Android for Women Techmakers and Women Who Code. She encourages Android developers at big companies like Accenture and Careem to join and contribute to the Android community.
Remembering how few Arabic resources she had as a high school student, Maryam creates both Arabic and English content to enrich Android learning resources. “I made sure those resources would be available to anyone who wants to learn Android development,” she says. “Locally, in collaboration with GDGs in Saudi Arabia, we host sessions throughout each month that cover Android, Flutter, and software development in general, and other exciting topics, like data analytics, cyber security, and machine learning.”
She regularly attends the Android developer hangouts led by Android GDE Madona Wambua and Android developer Matt McKenna to learn more and get inspired by other Android developers in the community.
In her full-time job, Maryam is immersed in her work on the official Zain KSA app.
“It’s my job and my team’s job to give our millions of customers the best experience they can have, and I’m pushing myself to the limit to achieve that,” she says. “I hope they like it.”
Maryam encourages other new developers, especially women, to share their knowledge.
“Communicate your knowledge; that makes you an expert, because people will ask you follow-up questions that might give you different perspectives on certain things and shift your focus to learning new things constantly,” she says. “You serve others by sharing your knowledge.”
Follow Maryam on Twitter at @Mal7othify | Learn more about Maryam on LinkedIn.
The Google Developers Experts program is a global network of highly experienced technology experts, influencers, and thought leaders who actively support developers, companies, and tech communities by speaking at events and publishing content.
The GDE program team encourages qualified candidates who identify as women or non-binary to express interest in joining the community by completing this form.
Posted by Mauricio Vergara, Product Marketing Manager, with contributions by Thousand Ant.
Lyft is singularly committed to app excellence. As a rideshare company — providing a vital, time-sensitive service to tens of millions of riders and hundreds of thousands of drivers — they have to be. At that scale, every slowdown, frozen frame, or crash of their app can waste thousands of users’ time. Even a minor hiccup can mean a flood of people riding with (or driving for) the competition. Luckily, Lyft’s development team keeps a close eye on their app’s performance. That’s how they first noticed a slowdown in the startup time of their drivers’ Android app.
They needed to get to the bottom of the problem quickly — figure out what it would take to resolve and then justify such an investment to their leadership. That meant answering a number of tough questions. Where was the bottleneck? How was it affecting user experience? How great a priority should it be for their team at that moment? Luckily, they had a powerful tool at their disposal that could help them find answers. With the help of Android vitals, a Google Play tool for improving app stability and performance on Android devices, they located the problem, made a case for prioritizing it to their leadership, and dedicated the right amount of resources to solving it. Here’s how they did it.
The first thing Lyft’s development team needed to do was figure out whether this was a pressing enough problem to convince their leadership to dedicate resources to it. Like any proposal to improve app quality, speeding up Lyft Driver’s start-up time had to be weighed out against other competing demands on developer bandwidth: introducing new product features, making architectural improvements, and improving data science. Generally, one of the challenges to convincing leadership to invest in app quality is that it can be difficult to correlate performance improvements with business metrics.
They turned to Android vitals to get an exact picture of what was at stake. Vitals gives developers access to data about the performance of their app, including app-not-responding errors, battery drainage, rendering, and app startup time. The current and historical performance of each metric is tracked on real devices and can be compared to the performance of other apps in the category. With the help of this powerful tool, the development team discovered that the Lyft Driver app startup time was 15–20% slower than 10 other apps in their category — a pressing issue.
Next, the team needed to establish the right scope for the project, one that would be commensurate with the slowdown’s impact on business goals and user experience. The data from Android vitals made the case clear, especially because it provided a direct comparison to competitors in the rideshare space. The development team estimated that a single developer working on the problem for one month would be enough to make a measurable improvement to app startup time.
Drawing on this wealth of data, and appealing to Lyft’s commitment to app excellence, the team made the case to their leadership. Demonstrating a clear opportunity to improve customer experience, a reasonably scoped and achievable goal, and clear-cut competitive intelligence, they got the go-ahead.
Lyft uses “Time to interact” as its primary startup metric (also known as Time to full display). To understand the factors that affect it, the Lyft team profiled each of the app’s launch stages, looking for the bottleneck. The Lyft Driver app starts up in four stages: (1) the application process starts; (2) “Activity” kicks off the UI rendering; (3) “Bootstrap” sends network requests for the data needed to render the home screen; and (4) “Display” opens the driver’s interface. Rigorous profiling revealed that the slowdown occurred in the third, bootstrapping, stage. With the bottleneck identified, the team took several steps to resolve it.
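For context, “Time to full display” is measured from process start until the app reports that its first screen is fully drawn. A minimal sketch of reporting that signal (the activity and callback names are illustrative, not Lyft’s code):

import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

class HomeActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Inflate the UI and kick off the data loads needed for the home screen here.
    }

    // Call this once the home screen is populated with real data so the system
    // (and tools such as Logcat or Macrobenchmark) can record time to full display.
    private fun onHomeScreenReady() {
        reportFullyDrawn()
    }
}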
First, they reduced unneeded network calls on the critical launch path. After decomposing their backend services, they could safely remove some network calls from the launch path entirely. When possible, they also chose to execute network calls asynchronously: if data was still required for the application to function but wasn’t needed during launch, those calls were made non-blocking so the launch could proceed without them. Remaining blocking network calls were moved safely to the background. Finally, they chose to cache data between sessions.
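The general pattern looks something like the sketch below, assuming Kotlin coroutines. All of the types and names here are hypothetical stand-ins rather than Lyft’s actual code: only the data needed to render the home screen stays on the blocking launch path, while everything else is fetched asynchronously and cached between sessions.

import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch

// Hypothetical types standing in for the real network and cache layers.
data class HomeScreenData(val greeting: String)
data class Profile(val name: String)

interface DriverApi {
    suspend fun fetchHomeScreenData(): HomeScreenData
    suspend fun fetchProfile(): Profile
}

interface DriverCache {
    suspend fun readProfile(): Profile?
    suspend fun writeProfile(profile: Profile)
}

class StartupDataLoader(
    private val appScope: CoroutineScope, // application-scoped
    private val api: DriverApi,
    private val cache: DriverCache
) {
    // Only the data strictly required to render the home screen blocks the launch path.
    suspend fun loadCriticalData(): HomeScreenData = api.fetchHomeScreenData()

    // Everything else is fetched off the critical path and cached between sessions,
    // so it never delays the "Bootstrap" stage.
    fun prefetchNonCriticalData() {
        appScope.launch(Dispatchers.IO) {
            cache.readProfile() ?: api.fetchProfile().also { cache.writeProfile(it) }
        }
    }
}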
These may sound like relatively small changes, but they resulted in a dramatic 21% reduction in app startup time. This led to a 5% increase in driver sessions in Lyft Driver. With the results in hand, the team had enough buy-in from leadership to create a dedicated mobile performance workstream and add an engineer to the effort as they continued to make improvements. The success of the initiative caught on across the organization, with several managers reaching out to explore how they could make further investments in app quality.
The success of these efforts contains several broader lessons, applicable to any organization.
As an app grows and the team grows with it, app excellence becomes more important than ever. Developers are often the first to recognize performance issues as they work closely on an app, but can find it difficult to raise awareness across an entire organization. Android vitals offers a powerful tool to do this. It provides a straightforward way to back up developer observations with data, making it easier to square performance metrics with business cases.
When starting your own app excellence initiative, it pays to first aim for small wins and build from there. Carefully pick actionable projects, which deliver significant results through an appropriate resource investment.
It’s also important to communicate early and often to involve the rest of the organization in the development team’s quality efforts. These constant updates about goals, plans, and results will help you keep your whole team on board.
Android vitals is just one of the many tools in the Android ecosystem designed to help understand and improve app startup time and overall performance. Another complementary tool, Jetpack Macrobenchmark, can help provide intelligence during development and testing on a variety of metrics. In contrast to Android vitals, which provides data from real users’ devices, Macrobenchmark allows you to benchmark and test specific areas of your code locally, including app startup time.
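For example, a minimal Macrobenchmark startup test might look like this (the package name is a placeholder):

import androidx.benchmark.macro.StartupMode
import androidx.benchmark.macro.StartupTimingMetric
import androidx.benchmark.macro.junit4.MacrobenchmarkRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class StartupBenchmark {
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    // Measures cold startup time, including time to initial display.
    @Test
    fun coldStartup() = benchmarkRule.measureRepeated(
        packageName = "com.example.myapp",
        metrics = listOf(StartupTimingMetric()),
        iterations = 5,
        startupMode = StartupMode.COLD
    ) {
        pressHome()
        startActivityAndWait()
    }
}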
The Jetpack App startup library provides a straightforward, performant way to initialize components at application startup. Developers can use this library to streamline startup sequences and explicitly set the order of initialization. Meanwhile, Reach and devices can help you understand your user and issue distribution to make better decisions about which specs to build for, where to launch, and what to test. The data from the tool allows your team to prioritize quality efforts and determine where improvements will have the greatest impact for the most users. Perfetto is another invaluable asset: an open-source system tracing tool which you can use to instrument your code and diagnose startup problems. In concert, these tools can help you keep your app running smoothly, your users happy, and your whole organization supportive of your quality efforts.
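As an illustration, here is a minimal App Startup sketch with an explicit initialization order; the components are hypothetical:

import android.content.Context
import androidx.startup.Initializer

class LoggerInitializer : Initializer<Unit> {
    override fun create(context: Context) {
        // Set up logging before anything else that depends on it.
    }
    override fun dependencies(): List<Class<out Initializer<*>>> = emptyList()
}

class AnalyticsInitializer : Initializer<Unit> {
    override fun create(context: Context) {
        // Initialize analytics via App Startup instead of Application.onCreate().
    }
    // Declare ordering explicitly: analytics runs only after the logger is ready.
    override fun dependencies(): List<Class<out Initializer<*>>> =
        listOf(LoggerInitializer::class.java)
}

Each Initializer is then declared in the manifest under the library’s InitializationProvider entry so the library can discover and run it at startup.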
If you’re interested in getting your own team on board for the pursuit of App Excellence (or in joining Lyft), check out our condensed case study for product owners and executives linked here.
Posted by Marwa Mabrouk, Android Camera Platform Product Manager
Android Camera is an exciting space. Camera is one of the top reasons consumers purchase a phone, and Android Camera empowers developers through a range of tools. Camera 2 is the framework API that has been included in Android since Android 5.0 (Lollipop), and CameraX is a Jetpack support library that runs on top of Camera 2 and is available to all Android developers. These solutions are meant to complement each other in addressing the needs of the Android Camera ecosystem.
For developers who are starting with Android Camera, refreshing their app to the latest APIs, or migrating their app from Camera 1, CameraX is the best tool to get started! CameraX offers key benefits that empower developers and address the complexities of the ecosystem.
For developers who are building highly specialized camera functionality that requires low-level control of the capture flow, and who are prepared to account for device variations, Camera 2 is the right choice.
Camera 2 is the common API that enables the camera hardware on every Android device and is deployed on all the billions of Android devices around the world in the market today. As a framework API, Camera 2 enables developers to utilize their deep knowledge of photography and device implementations. To ensure the quality of Camera 2, device manufacturers show compliance by testing their devices. Device variations do surface in the API based on the device manufacturer's choices, allowing custom features to take advantage of those variations on specific devices as they see fit.
To understand this more, let’s use an example comparing camera capture capabilities. Camera 2 offers special control of the individual capture pipeline for each of the cameras on the phone at the same time, in addition to very fine-grained manual settings. CameraX enables capturing high-resolution, high-quality photos and provides auto-white-balance, auto-exposure, and auto-focus functionality, in addition to simple manual camera controls.
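To illustrate the CameraX side, here is a rough sketch of binding a preview and photo capture to a lifecycle; CameraX handles auto-exposure, auto-focus, and auto-white-balance without any manual pipeline configuration (this is not a complete camera screen):

import android.content.Context
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageCapture
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.core.content.ContextCompat
import androidx.lifecycle.LifecycleOwner

fun startCamera(context: Context, lifecycleOwner: LifecycleOwner, preview: Preview) {
    val providerFuture = ProcessCameraProvider.getInstance(context)
    providerFuture.addListener({
        val cameraProvider = providerFuture.get()
        val imageCapture = ImageCapture.Builder()
            .setCaptureMode(ImageCapture.CAPTURE_MODE_MAXIMIZE_QUALITY)
            .build()
        // Rebind the use cases to the back camera for this lifecycle.
        cameraProvider.unbindAll()
        cameraProvider.bindToLifecycle(
            lifecycleOwner,
            CameraSelector.DEFAULT_BACK_CAMERA,
            preview,
            imageCapture
        )
    }, ContextCompat.getMainExecutor(context))
}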
Consider a couple of application examples. Samsung uses the Camera Framework API to help its advanced, pro-grade camera system capture studio-quality photos in various lighting conditions and settings on Samsung Galaxy devices. While the API is common, Samsung has enabled variations that are unique to each device’s capabilities and takes advantage of them in the camera app on each device. The Camera Framework API enables Samsung to reach into low-level camera capabilities and tailor its native app to the device.
In another example, Microsoft decided to integrate CameraX across all productivity apps where Microsoft Lens is used (such as Office, Outlook, and OneDrive) to ensure high-quality images in all of these applications. By switching to CameraX, the Microsoft Lens team was able not only to improve its developer experience thanks to the simpler API, but also to improve performance, increase developer productivity, and reduce time to market. You can learn more about this here.
This is a very exciting time for Android Camera, with many new features on both APIs:
As we move forward, we plan to share with you more details about the exciting features that we have planned for Android Camera. We look forward to engaging with you and hearing your feedback, through the CameraX mailing list: camerax-developers@android.com and the AOSP issue tracker.
Thank you for your continued interest in Android Camera, and we look forward to building amazing camera experiences for users in collaboration with you!
Posted by Max Bires, Software Engineer
Attestation has been mandated as a feature since Android 8.0. As releases have come and gone, it has become increasingly central to trust for a variety of features and services such as SafetyNet, Identity Credential, Digital Car Key, and a variety of third-party libraries. In light of this, it is time we revisited our attestation infrastructure to tighten up the security of our trust chain and increase the recoverability of device trust in the event of known vulnerabilities.
Starting in Android 12.0, we will be providing an option to replace in-factory private key provisioning with a combination of in-factory public key extraction and over-the-air certificate provisioning with short-lived certificates. This scheme will be mandated in Android 13.0. We call this new scheme Remote Key Provisioning.
Device manufacturers will no longer be provisioning attestation private keys directly to devices in the factory, removing the burden of having to manage secrets in the factory for attestation.
As described further below, the format, algorithms, and length of the certificate chain in an attestation will be changing. If a relying party has set up its certificate validation code to fit the legacy certificate chain structure very strictly, that code will need to be updated.
The two primary motivating factors for changing the way we provision attestation certificates to devices are to allow devices to be recovered post-compromise and to tighten up the attestation supply chain. In today’s attestation scheme, if a device model is found to be compromised in a way that affects the trust signal of an attestation, or if a key is leaked through some mechanism, the key must be revoked. Due to the increasing number of services that rely on the attestation key signal, this can have a large impact on the consumer whose device is affected.
This change allows us to stop provisioning to devices that are on known-compromised software, and remove the potential for unintentional key leakage. This will go a long way in reducing the potential for service disruption to the user.
A unique, static keypair is generated by each device, and the public portion of this keypair is extracted by the OEM in their factory. These public keys are then uploaded to Google servers, where they serve as the basis of trust for provisioning later. The private key never leaves the secure environment in which it is generated.
When a device is unboxed and connected to the internet, it will generate a certificate signing request for keys it has generated, signing it with the private key that corresponds to the public key collected in the factory. Backend servers will verify the authenticity of the request and then sign the public keys, returning the certificate chains. Keystore will then store these certificate chains, assigning them to apps whenever an attestation is requested.
This flow will happen regularly upon expiration of the certificates or exhaustion of the current key supply. The scheme is privacy preserving in that each application receives a different attestation key, and the keys themselves are rotated regularly. Additionally, Google backend servers are segmented such that the server which verifies the device’s public key does not see the attached attestation keys. This means it is not possible for Google to correlate attestation keys back to a particular device that requested them.
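From an app’s perspective, requesting an attested key is unchanged; a minimal sketch using the standard Keystore APIs (the alias and challenge are illustrative) shows where the remotely provisioned certificate chain surfaces:

import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyPairGenerator
import java.security.KeyStore
import java.security.cert.Certificate

// Generate a key with an attestation challenge; Keystore attaches the attestation
// certificate chain (now provisioned remotely) to the generated key.
fun generateAttestedKey(challenge: ByteArray): Array<Certificate> {
    val generator = KeyPairGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore"
    )
    generator.initialize(
        KeyGenParameterSpec.Builder("my_attested_key", KeyProperties.PURPOSE_SIGN)
            .setDigests(KeyProperties.DIGEST_SHA256)
            .setAttestationChallenge(challenge) // server-provided nonce
            .build()
    )
    generator.generateKeyPair()

    val keyStore = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
    // Relying parties should avoid hard-coding assumptions about the chain's
    // format, algorithms, or length, since these change under Remote Key Provisioning.
    return keyStore.getCertificateChain("my_attested_key")
}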
End users won’t notice any changes. Developers that leverage attestation will want to watch out for the following changes:
Posted by Don Turner, Developer Relations Engineer, and Francois Goldfain, Director of Android Media Framework
Today we're launching the Jetpack Core Performance library in alpha. This library enables you to easily understand what a device is capable of, and tailor your user experience accordingly. It does this by allowing you to obtain the device’s performance class on devices running Android 11 (API level 30) and above.
A performance class is a ranking that reflects both a device's level of performance and its overall capabilities. As such, it largely reflects the device’s hardware specifications, but also how it performs in certain real-world scenarios, verified by the Android Compatibility Test Suite.
The performance class requirements currently focus on media use cases. For example, a Media Performance Class 12 device is guaranteed to:
A device that meets these requirements can optimally handle many popular media use cases including the typical video pipelines in social media apps for capturing, encoding, and sharing.
As an app developer, this means you can reliably group devices with the same level of performance and tailor your app’s behavior to those different groups. This enables you to deliver an optimal experience to users with both more and less capable devices. Performance class requirements will expand with each major Android release, making it possible to easily target different levels of experience to the performance class range you find appropriate. For example, you might wish to tailor “more premium” and “more functional” experiences to certain performance classes.
To add performance class to your app, include the following dependency in your build.gradle:
implementation 'androidx.core:core-performance:1.0.0-alpha02'
Then use it to tailor your user experience. For example, to encode higher resolution video depending on Media Performance Class:
class OptimalVideoSettings(context: Context) {

    private val devicePerf: DevicePerformance = DevicePerformance.create(context)

    val encodeHeight by lazy {
        when (devicePerf.mediaPerformanceClass) {
            Build.VERSION_CODES.S -> 1080 // On performance class 12 use 1080p
            Build.VERSION_CODES.R -> 720  // On performance class 11 use 720p
            else -> 480
        }
    }

    val encodeFps by lazy {
        when (devicePerf.mediaPerformanceClass) {
            Build.VERSION_CODES.S -> 60 // On performance class 12 use 60 fps
            Build.VERSION_CODES.R -> 30 // On performance class 11 use 30 fps
            else -> 30
        }
    }
}
The Android device ecosystem is very diverse. The same application code can lead to very different behaviors depending on the device’s capabilities. For example, encoding a 4K video might take a few seconds on one device but a few minutes on another. User expectations also vary greatly based on the device they purchase. To provide an optimized user experience, it is common to group devices based on some criteria, such as RAM size or year released, then tailor your app's features for each group.
The problem with using an arbitrary value such as RAM size for grouping is that it provides no guarantees of a device's performance. There will also always be outliers that perform better or worse than expected within that group. Grouping on performance class solves this problem since it provides these guarantees, backed by real-world tests.
Manually testing devices that belong to different performance classes is one option to assess and identify the changes needed to balance functionalities and usability. However, the recommended approach to validate changes in the app experience is to run A/B tests and analyze their impact on app metrics. You can do this with the support of an experimentation platform such as Firebase. Providing the device’s performance class to the experimentation platform gives an additional performance dimension to the test results. This lets you identify the right optimizations for each class of device.
Snap has been using device clustering and A/B testing to fine tune their experience for Snapchatters. By leveraging performance class, Snapchat confidently identifies device capability in a scalable way and delivers an optimal experience. For example, the visual quality of shared videos is increased by using higher resolution and bitrate on Media Performance Class 12 devices than by default. As more devices are upgraded to meet Media Performance Class, Snapchat will run additional A/B tests and deploy features better optimized for the device capabilities.
The performance class requirements are developed in collaboration with leading developers and device manufacturers, who recognize the need for a simple, reliable, class-based system to allow app optimizations at scale.
In particular, Oppo, OnePlus, realme, Vivo and Xiaomi have been first to optimize their flagship devices to ensure that they meet the Media Performance Class 12 requirements. As a result, Build.VERSION.MEDIA_PERFORMANCE_CLASS returns Build.VERSION_CODES.S (the Android 12 API level) on the following devices:
The Jetpack Core Performance library was introduced to extend performance class to devices not yet running Android 12 or not advertising their eligible performance class at the time they passed the Android Compatibility Test Suite.
The library, which supports devices running Android 11 and above, aims to address this. It reports the performance class of many devices based on the test results collected during the device certification or through additional testing done by the Google team. We're adding new devices regularly, so make sure you’re using the latest version of the Core Performance library to get maximum device coverage.
When using Firebase as an experimentation platform for A/B tests, it's easy to send the device performance class to Firebase Analytics using a user property. Filtering the A/B test reports by performance class can indicate which experimental values led to the best metrics for each group of devices.
Here's an example of an A/B test which varies the encoding height of a video, and reports the performance class using a user property.
class MyApplication : Application() {

    private lateinit var devicePerf: DevicePerformance
    private lateinit var firebaseAnalytics: FirebaseAnalytics

    override fun onCreate() {
        super.onCreate()
        devicePerf = DevicePerformance.create(this)
        firebaseAnalytics = Firebase.analytics
        // Report the device's performance class as a user property so A/B test
        // results can be filtered by it.
        firebaseAnalytics.setUserProperty(
            "androidx.core.performance.DevicePerformance.mediaPerformanceClass",
            devicePerf.mediaPerformanceClass.toString()
        )
    }

    // remoteConfig refers to the app's Firebase Remote Config instance.
    fun getVideoEncodeHeight(): Long = remoteConfig.getLong("encode_height")
}
We'd love for you to try out the Core Performance library in your app. If you have any issues or feature requests please file them here.
Also, we'd be interested to hear any feedback you have on the performance class requirements. Are there specific performance criteria or hardware requirements that are important for your app's use cases? If so, please let us know using the Android issue tracker.
Posted by Sameer Samat, Vice President, Product Management
Mobile apps have transformed our lives. They help us stay informed and entertained, keep us connected to each other and have created new opportunities for billions of people around the world. We’re humbled by the role Google Play has played over the last 10 years in this global transformation.
We wouldn’t be here if it weren’t for the close partnership with our valued developers and using their feedback to keep evolving. For example, based on partner feedback and in response to competition, our pricing model has evolved to help all developers on our platform succeed and today 99% of developers qualify for a service fee of 15% or less.
Recently, a discussion has emerged around billing choice within app stores. We welcome this conversation and today we want to share an exciting pilot program we are working on in partnership with Play developers.
When users choose Google Play, it’s because they count on us to deliver a safe experience, and that includes in-app payment systems that protect users’ data and financial information. That’s why we built Google Play’s billing system to the highest standards for privacy and safety so users can be confident their sensitive payment data won’t be at risk when they make in-app purchases.
We think that users should continue to have the choice to use Play’s billing system when they install an app from Google Play. We also think it’s critical that alternative billing systems meet similarly high safety standards in protecting users’ personal data and sensitive financial information.
Building on our recent launch allowing an additional billing system alongside Play’s billing for users in South Korea and in line with our principles, we are announcing we will be exploring user choice billing in other select countries.
This pilot will allow a small number of participating developers to offer an additional billing option next to Google Play’s billing system and is designed to help us explore ways to offer this choice to users, while maintaining our ability to invest in the ecosystem. This is a significant milestone and the first on any major app store — whether on mobile, desktop, or game consoles.
We’ll be partnering with developers to explore different implementations of user-choice billing, starting with Spotify. As one of the world’s largest subscription developers with a global footprint and integrations across a wide range of device form factors, they’re a natural first partner. Together, we’ll work to innovate in how consumers make in-app purchases, deliver engaging experiences across multiple devices, and bring more consumers to the Android platform.
Spotify will be introducing Google Play’s billing system alongside their current billing system, and their perspective as our first partner will be invaluable. This pilot will help us to increase our understanding of whether and how user choice billing works for users in different countries and for developers of different sizes and categories.
Alex Norström, Chief Freemium Business Officer, commented: “Spotify is on a years-long journey to ensure app developers have the freedom to innovate and compete on a level playing field. We’re excited to be partnering with Google to explore this approach to payment choice and opportunities for developers, users and the entire internet ecosystem. We hope the work we’ll do together blazes a path that will benefit the rest of the industry.”
We understand this process will take time and require close collaboration with our developer community, but we’re thrilled about this first step and plan to share more in the coming months.
Posted by Dave Burke, VP of Engineering
Last month, we released the first developer preview of Android 13, built around our core themes of privacy and security, developer productivity, as well as tablets and large screen support. Today we’re sharing Android 13 Developer Preview 2 with more new features and changes for you to try in your apps. Your input helps us make Android a better platform for developers and users, so let us know what you think!
Today’s release also comes on the heels of the 12L feature drop moving to the Android Open Source Project (AOSP) last week, helping you better take advantage of the more than 250 million large screen Android devices. And to dive into Android 13, tablets, and our developer productivity investments in Jetpack Compose, check out the latest episode of #TheAndroidShow.
Before jumping into Developer Preview 2, let’s take a look at the other news from last week: we’ve officially released the 12L feature drop to AOSP and it’s rolling out to all supported Pixel devices over the next few weeks. 12L makes Android 12 even better on tablets, and includes updates like a new taskbar that lets users instantly drag and drop apps into split-screen mode, new large-screen layouts in the notification shade and lockscreen, and improved compatibility modes for apps. You can read more here.
Starting later this year, 12L will be available in planned updates on tablets and foldables from Samsung, Lenovo, and Microsoft, so now is the time to make sure your apps are ready. We highly recommend testing your apps in split-screen mode with windows of various sizes, trying it in different orientations, and checking the new compatibility mode changes if they apply. You can read more about 12L for developers here.
And the best part: the large screen features in 12L are foundational in Android 13, so you can develop and test on Android 13 knowing that you’re also covering your bases for tablets running Android 12L. We see large screens as a key surface for the future of Android, and we’re continuing to invest to give you the tools you need to build great experiences for tablets, Chromebooks, and foldables. You can learn more about how to get started optimizing for large screens, and make sure to check out our large screens developer resources.
Let’s dive into what’s new in today’s Developer Preview 2 of Android 13.
People want an OS and apps that they can trust with their most personal and sensitive information and the resources on their devices. Privacy and user trust are core to Android’s product principles, and in Android 13 we’re continuing to focus on building a responsible and high quality platform for all by providing a safer environment on the device and more controls to the user. Here’s what’s new in Developer Preview 2.
Notification permission - To help users focus on the notifications that are most important to them, Android 13 introduces a new runtime permission for sending notifications from an app: POST_NOTIFICATIONS. Apps targeting Android 13 will now need to request the notification permission from the user before posting notifications. For apps targeting Android 12 or lower, the system will handle the upgrade flow on your behalf. The flow will continue to be fine-tuned. To provide more context and control for your users, we encourage you to target Android 13 as early as possible and request the notification permission in your app. More here.
Notification permission dialog in Android 13.
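A minimal sketch of requesting the new runtime permission, assuming an AndroidX ActivityResult launcher (the class and callback names are illustrative):

import android.Manifest
import android.os.Build
import androidx.activity.ComponentActivity
import androidx.activity.result.contract.ActivityResultContracts

class MainActivity : ComponentActivity() {

    // Registers the system permission dialog launcher.
    private val requestNotificationPermission =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            // If denied, avoid posting notifications and consider explaining the benefit in-app.
        }

    private fun askForNotificationPermission() {
        if (Build.VERSION.SDK_INT >= 33) {
            requestNotificationPermission.launch(Manifest.permission.POST_NOTIFICATIONS)
        }
    }
}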
Developer downgradable permissions - Some apps may no longer require certain permissions that the user previously granted to enable a specific feature, or they may retain a sensitive permission from an older Android version. In Android 13, we’re providing a new API that lets your app protect user privacy by downgrading previously granted runtime permissions.
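For instance, a minimal sketch of releasing permissions an app no longer needs, assuming the new Context.revokeSelfPermissionsOnKill API (the permissions shown are only examples):

import android.Manifest
import android.content.Context
import android.os.Build

fun releaseUnneededPermissions(context: Context) {
    if (Build.VERSION.SDK_INT >= 33) {
        // Ask the system to revoke permissions the app no longer needs;
        // the revocation takes effect once the app's process is killed.
        context.revokeSelfPermissionsOnKill(
            listOf(
                Manifest.permission.ACCESS_FINE_LOCATION,
                Manifest.permission.CAMERA
            )
        )
    }
}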
Safer exporting of context-registered receivers - In Android 12 we required developers to declare the exportability of manifest-declared Intent receivers. In Android 13 we’re asking you to do the same for context-registered receivers as well, by adding either the RECEIVER_EXPORTED or RECEIVER_NOT_EXPORTED flag when registering receivers for non-system sources. This will help ensure that receivers aren’t available for other applications to send broadcasts to unless desired. While not required in Android 13, we recommend declaring exportability as a step toward securing your app.
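A minimal sketch of registering a non-exported receiver, assuming ContextCompat’s registerReceiver helper (the broadcast action is hypothetical):

import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.content.IntentFilter
import androidx.core.content.ContextCompat

val refreshReceiver = object : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        // Handle the broadcast.
    }
}

fun registerRefreshReceiver(context: Context) {
    // RECEIVER_NOT_EXPORTED keeps other apps from sending broadcasts to this receiver.
    ContextCompat.registerReceiver(
        context,
        refreshReceiver,
        IntentFilter("com.example.ACTION_REFRESH"),
        ContextCompat.RECEIVER_NOT_EXPORTED
    )
}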
In Android 13 we’re working to give you more tools to help you deliver a polished experience and better performance for users. Here are some of the updates in today’s release.
Improved Japanese text wrapping - TextViews can now wrap text by Bunsetsu (the smallest unit of words that sounds natural) or phrases -- instead of by character -- for more polished and readable Japanese applications. You can take advantage of this wrapping by using android:lineBreakWordStyle="phrase" with TextViews.
Japanese text wrapping with phrase style enabled (bottom) and without (top).
Improved line heights for non-Latin scripts - Android 13 improves the display of non-Latin scripts (such as Tamil, Burmese, Telugu, and Tibetan) by using a line height that’s adapted for each language. The new line heights prevent clipping and improve the positioning of characters. Your app can take advantage of these improvements just by targeting Android 13. Make sure to test your apps when using the new line spacing, since changes may affect your UI in non-Latin languages.
Improved line height for non-Latin scripts in apps targeting Android 13 (bottom).
Text Conversion APIs - People who speak languages like Japanese and Chinese use phonetic lettering input methods, which often slow down searching and features like auto-completion. In Android 13, apps can call the new text conversion API so users can find what they're looking for faster and easier. Previously, for example, searching required a Japanese user to (1) input Hiragana as the phonetic pronunciation of their search term (i.e. a place or an app name), (2) use the keyboard to convert the Hiragana characters to Kanji, (3) re-search using the Kanji characters to (4) get their search results. With the new text conversion API, Japanese users can type in Hiragana and immediately see Kanji search results live, skipping steps 2 and 3.
Color vector fonts - Android 13 adds rendering support for COLR version 1 (spec, intro video) fonts and updates the system emoji to the COLRv1 format. COLRv1 is a new, highly compact font format that renders quickly and crisply at any size. For most apps this will just work; the system handles everything. You can opt in to COLRv1 for your app starting in Developer Preview 2. If your app implements its own text rendering and uses the system's fonts, we recommend opting in and testing emoji rendering. Learn more about COLRv1 in the Chrome announcement.
COLRv1 vector emoji (left) and bitmap emoji.
Bluetooth LE Audio - Low Energy (LE) Audio is the next-generation wireless audio built to replace Bluetooth classic and enable new use cases and connection topologies. It will allow users to share and broadcast their audio to friends and family, or subscribe to public broadcasts for information, entertainment, or accessibility. It’s designed to ensure that users can receive high fidelity audio without sacrificing battery life and be able to seamlessly switch between different use cases that were not possible with Bluetooth Classic. Android 13 adds built-in support for LE Audio, so developers should get the new capabilities for free on compatible devices.
MIDI 2.0 - Android 13 adds support for the new MIDI 2.0 standard, including the ability to connect MIDI 2.0 hardware through USB. This updated standard offers features such as increased resolution for controllers, better support for non-Western intonation, and more expressive performance using per-note controllers.
With each platform release, we’re working to make updates faster and smoother by prioritizing app compatibility as we roll out new platform versions. In Android 13 we’ve made app-facing changes opt-in to give you more time, and we’ve updated our tools and processes to help you get ready sooner.
With Developer Preview 2, we’re well into the release and continuing to improve overall stability, so now is the time to try the new features and changes and give us your feedback. We’re especially looking for input on our APIs, as well as details on how the platform changes affect your apps. Please visit the feedback page to share your thoughts with us or report issues.
It’s also a good time to start your compatibility testing and identify any work you’ll need to do. We recommend doing the work early, so you can release a compatible update by Android 13 Beta 1. There’s no need to change your app’s targetSdkVersion at this time, but we do recommend using the behavior change toggles in Developer Options to get a preliminary idea of how your app might be affected by opt-in changes in Android 13.
As we reach Platform Stability in June 2022, all of the app-facing system behaviors, SDK/NDK APIs, and non-SDK lists will be finalized. At that point, you can wind up your final compatibility testing and release a fully compatible version of your app, SDK, or library. More on the timeline for developers is here.
App compatibility toggles in Developer Options.
The Developer Preview has everything you need to try the Android 13 features, test your apps, and give us feedback. You can get started today by flashing a device system image to a Pixel 6 Pro, Pixel 6, Pixel 5a 5G, Pixel 5, Pixel 4a (5G), Pixel 4a, Pixel 4 XL, or Pixel 4 device. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio Dolphin. For even broader testing, GSI images are available. If you’ve already installed a preview build to your Pixel device, you’ll automatically get this update and all later previews and Betas over the air. More details on how to get Android 13 are here.
For complete information, visit the Android 13 developer site.
Posted by The Android Team
Large screens are growing in reach, with now over 250M active Android tablets, foldables, and ChromeOS devices. As demand continues to accelerate, we’re seeing users doing more than ever on large screens, from socializing and playing games, to multitasking and getting things done. To help people get the most from their devices, we're making big changes in Google Play to enable users to discover and engage with high quality apps and games.
We’ll be introducing three main updates to the store: ranking and promotability changes, alerts for low quality apps, and device-specific ratings and reviews.
Ranking and Promotability Changes
We recently published our large screen app quality guidelines in addition to our core app quality guidelines to provide guidance on creating great user experiences on large screens. These guidelines encompass a holistic set of features, from basic compatibility requirements such as portrait and landscape support, to more differentiated requirements like keyboard and stylus capabilities. In the coming months, we’ll be updating our featuring and ranking logic in Play on large screen devices to prioritize high-quality apps and games based on these app quality guidelines. This will affect how apps are surfaced in search results and recommendations on the homepage, with the goal of helping users find the apps that are best optimized for their device. We will also be deepening our investment in editorial content across Play to highlight apps that have been optimized for large screens.
Alerts for Users Installing Low Quality Apps
For apps that don’t meet basic compatibility requirements, we’ll be updating current alerts to users on large screens to help set expectations for how apps will look and function post-install. This will help notify users about apps that may not work well on their large screen devices. We are working to provide additional communications on this change, so stay tuned for further updates later this year.
Device-Specific Ratings and Reviews
Lastly, as we previously announced, users will soon be able to see ratings and reviews split by device type (e.g. tablets and foldables, Chrome OS, Wear, or Auto) to help them make better decisions about the apps that are right for them. Where applicable, the default rating shown in Play will be that of the device type the user is using, to provide a better sense of the app experience on their device. To preview your ratings and reviews by device, you can view your device-type breakdown in Play Console today.
Analyze your ratings and reviews breakdown by device type to plan large screen optimizations
Developers optimizing for large screens are already seeing positive impact to user engagement and retention. To help you get started, here are some tips and resources for optimizing your app for large screens:
Use the Device Type filter to select one or multiple device types to analyze in Reach and Devices
See device type breakdowns of your user and issue distributions to optimize your current title or plan your next title
The features in Play will roll out gradually over the coming months, so we encourage you to get a head start in planning for large screen app quality enhancements ahead of these changes. Along the way, we will continue collecting feedback to understand how we can best support large screen optimizations that improve consumer experiences and empower developers to build better apps.
Posted by Lauren Mytton, Product Manager, Google Play
Quality is foundational to your game or app’s success on Google Play, and Android vitals in Google Play Console is a great way to track how your app is performing. In fact, over 80% of the top one thousand developers check Android vitals at least once a month to monitor and troubleshoot their technical quality, and many visit daily. While the Android vitals overview in Play Console lets you check your app or game’s quality at a glance, many developers have told us that they want to work with their vitals data outside Play Console, too. Some of your use cases include:
Starting today, these use cases are now possible with the new Play Developer Reporting API.
The Play Developer Reporting API allows developers to work with app-level data from their developer accounts outside Play Console. In this initial launch, you get access to the four core Android vitals stability and battery metrics: crash rate, ANR rate, excessive wake-up rate, and stuck background wake-lock rate, along with crash and ANR issues and stack traces. You can also view anomalies, breakdowns (including new country filters in Vitals), and three years of metric history.
Set up access to the new Play Developer Reporting API from the API Access page in Play Console.
To enable the API, you must be an owner of your developer account in Play Console. Then you can set up access in minutes from the API Access page in Play Console. Our documentation covers everything you need to know to get started.
You can find sample requests in the API documentation, along with a list of available endpoints (for both alpha and beta releases).
Once you have enabled the API, you may wish to send some requests manually to get a sense of the API resources and operation before implementing more complex solutions. This can also help you establish query times, which will vary depending on the amount of data being processed. Queries over long time ranges, across many dimensions, and/or against very large apps will take longer to execute.
Most of our metric sets are refreshed once a day. To avoid wasting resources and request quota, we recommend you use the provided methods to check for data freshness and verify that new data is available before issuing a query.
Thank you to all the developers who requested this feature. We hope it helps you continue to improve your apps and games. To learn more about Android vitals and the Play Developer Reporting API, view our session from the Google for Games Developer Summit.
Posted by Greg Hartrell, Product Director, Games on Play/Android
Over the years, we’ve seen that apps and games are not just experiences - they’re businesses - led by talented people like yourselves. So it's our goal to continue supporting your businesses to reach even greater potential. At our recent Google for Games Developer Summit, we shared how teams across Google have been continuing to build the next generation of services, tools and features to help you create and monetize high quality experiences, more programs tailored to your needs, and more educational resources with best practices.
We want to help you throughout the game development lifecycle, by making it easier to develop high quality games and deliver these great experiences to growing audiences and devices.
Easier to bring your game to more screens
To enable games on new screens and devices, we want to help you meet players where they are, giving them the convenience of playing games wherever they choose.
Easier to develop high quality games
We’re committed to supporting you as you build high quality Android games, by continuing to focus on tools and SDKs that simplify development and provide insights about your game, while also partnering with game engines, including homegrown native C/C++ engines. Last year, we released the Android Game Development Kit (AGDK), a set of tools and libraries to help make Android game development more efficient, and we have made several updates based on developer feedback.
More tools to help you succeed on Google Play
The Play Console is an invaluable resource to help in your game lifecycle, with tools and insights to assist before and after launch.
Learn more about everything we shared at the Google for Games Developer Summit and by visiting g.co/android/games for additional resources and documentation. We remain committed to supporting the developer ecosystem and greatly appreciate your continued feedback and investment in creating high quality game experiences for players around the world.
Posted by Simona Stojanovic, Android Developer Relations Engineer
Now that our MAD Skills series on Jetpack DataStore is complete, let’s do a quick wrap up of all the things we’ve covered in each episode:
We started with the basics of Jetpack DataStore — how it works and the changes and improvements it brings compared to SharedPreferences. We also discussed how to decide between its two implementations, Preferences and Proto DataStore, as well as how to choose between DataStore and Room.
Check out the blog post and the video:
Go deeper into Preferences DataStore: how to create it, how to read and write data, and how to handle exceptions, all of which should, hopefully, provide you with enough information to decide whether it’s the right choice for your app.
Here’s the blog post and the video:
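As a quick refresher, here is a minimal Preferences DataStore sketch covering creation, reading, and writing (the store and key names are made up for illustration):

import android.content.Context
import androidx.datastore.core.DataStore
import androidx.datastore.preferences.core.Preferences
import androidx.datastore.preferences.core.edit
import androidx.datastore.preferences.core.intPreferencesKey
import androidx.datastore.preferences.preferencesDataStore
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.map

// Create the DataStore once, at the top level of a Kotlin file.
val Context.settingsDataStore: DataStore<Preferences> by preferencesDataStore(name = "settings")

val COUNTER_KEY = intPreferencesKey("example_counter")

// Read: expose the stored value as a Flow with a default.
fun counterFlow(context: Context): Flow<Int> =
    context.settingsDataStore.data.map { prefs -> prefs[COUNTER_KEY] ?: 0 }

// Write: edit() applies the change transactionally.
suspend fun incrementCounter(context: Context) {
    context.settingsDataStore.edit { prefs ->
        prefs[COUNTER_KEY] = (prefs[COUNTER_KEY] ?: 0) + 1
    }
}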
Learn about Proto DataStore: how to create it, how to read and write data, and how to handle exceptions, to better understand the scenarios that make Proto a great choice.
If you prefer reading, here’s the blog post, otherwise, here’s the video:
Episode 4 introduces several different concepts related to DataStore to understand how it works under the hood, so that you have everything at your disposal to use it in a production environment. We focus on: Kotlin Data class serialization, synchronous work, and dependency injection with Hilt.
Take a look at our blogs and video:
DataStore and dependency injection
DataStore and Kotlin serialization
DataStore and synchronous work
Finally, in the fifth episode of our Jetpack DataStore series, we cover two additional concepts around DataStore: DataStore-to-DataStore migrations and testing. Hopefully, this will provide you with all the information you need to add DataStore to your app successfully.
Check out the blogs and the video:
DataStore and data migration
DataStore and testing
Posted by Andrew Flynn & Jon Boekenoogen, Tech leads on Google Play
In 2020, Google Play Store engineering leadership made the big decision to revamp its entire storefront tech stack. The existing code was 10+ years old and had incurred tremendous tech debt over countless Android platform releases and feature updates. We needed new frameworks that would scale to the hundreds of engineers working on the product while not negatively impacting developer productivity, user experience, or the performance of the store itself.
We laid out a multi-year roadmap to update everything in the store from the network layer all the way to the pixel rendering. As part of this we also wanted to adopt a modern, declarative UI framework that would satisfy our product goals around interactivity and user delight. After analyzing the landscape of options, we made the bold (at the time) decision to commit to Jetpack Compose, which was still in pre-Alpha.
Since that time, the Google Play Store and Jetpack Compose teams at Google have worked extremely closely together to release and polish a version of Jetpack Compose that meets our specific needs. In this article we'll cover our approach to migration as well as the challenges and benefits we found along the way, to share some insight into what adopting Compose can be like for an app with many contributors.
When we were considering Jetpack Compose for our new UI rendering layer, our top two priorities were:
We have been writing UI code using Jetpack Compose for over a year now and enjoy how it makes UI development simpler.
We love that writing UI requires much less code, sometimes up to 50%. This is made possible by Compose being a declarative UI framework and harnessing Kotlin’s conciseness. Custom drawing and layouts are now simple function calls instead of View subclasses with N method overrides.
Using the Ratings Table as an example:
With Views, this table consists of:
With Compose, this table consists of:
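The Play Store’s actual table isn’t reproduced here, but purely as an illustration of the pattern, a hypothetical ratings table reduces to a couple of small composable functions instead of custom View subclasses:

import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.Row
import androidx.compose.foundation.layout.fillMaxWidth
import androidx.compose.material.LinearProgressIndicator
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier

// One row of the table: a star-count label and a proportional bar.
@Composable
fun RatingRow(stars: Int, fraction: Float, modifier: Modifier = Modifier) {
    Row(modifier = modifier.fillMaxWidth(), verticalAlignment = Alignment.CenterVertically) {
        Text(text = "$stars")
        LinearProgressIndicator(progress = fraction, modifier = Modifier.weight(1f))
    }
}

// The whole table is just another function that calls RatingRow for each entry.
@Composable
fun RatingsTable(distribution: List<Pair<Int, Float>>) {
    Column {
        distribution.forEach { (stars, fraction) ->
            RatingRow(stars = stars, fraction = fraction)
        }
    }
}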
Now you might be thinking: this all sounds great, but what about library dependencies that provide Views? It's true, not all library owners have implemented Compose-based APIs, especially when we first migrated. However, Compose provides easy View interoperability with its ComposeView and AndroidView APIs. We successfully integrated with popular libraries like ExoPlayer and YouTube’s Player in this fashion.
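For reference, the interop pattern looks roughly like this; the wrapped View here is a plain TextView standing in for a library-provided player view:

import android.widget.TextView
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.viewinterop.AndroidView

@Composable
fun LegacyPlayerCard(title: String, modifier: Modifier = Modifier) {
    AndroidView(
        modifier = modifier,
        factory = { context -> TextView(context) }, // create the View once
        update = { view -> view.text = title }      // re-run whenever the inputs change
    )
}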
The Play Store and Jetpack Compose teams worked closely together to make sure Compose could run as fast and be as jank-free as the View framework. Due to how Compose is bundled within the app (rather than being included as part of the Android framework), this was a tall order. Rendering individual UI components on the screen was fast, but end to end times of loading the entire Compose framework into memory for apps was expensive.
One of the largest Compose adoption performance improvements for the Play Store came from the development of Baseline Profiles. While cloud profiles help improve app startup time and have been available for some time now, they are only available for API 28+ and are not as effective for apps with frequent (weekly) release cadences. To combat this, the Play Store and Android teams worked together on Baseline Profiles: a developer-defined, bundled profile that app owners can specify. They ship with your app, are fully compatible with cloud profiles and can be defined both at the app-level of specificity and library-level (Compose adopters will get this for free!). By rolling out baseline profiles, Play Store saw a decrease in initial page rendering time on its search results page of 40%. That’s huge!
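Baseline Profiles can be generated from a Macrobenchmark-style test that walks the critical user journey; a rough sketch follows (exact rule and method names depend on the library version, and the package name is a placeholder):

import androidx.benchmark.macro.junit4.BaselineProfileRule
import org.junit.Rule
import org.junit.Test

class BaselineProfileGenerator {
    @get:Rule
    val baselineProfileRule = BaselineProfileRule()

    @Test
    fun generate() = baselineProfileRule.collect(packageName = "com.example.myapp") {
        // Exercise the startup journey so its code paths are compiled ahead of time.
        pressHome()
        startActivityAndWait()
    }
}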
Re-using UI components is a core mechanic of what makes Compose performant for rendering, particularly in scrolling situations. Compose does its best to skip recomposition for composables that it knows can be skipped (e.g. they are immutable), but developers can also force composables to be treated as skippable if all parameters meet the @Stable annotation requirements. The Compose compiler also provides a handy guide on what is preventing specific functions from being skippable. While creating heavily re-used UI components in Play Store that were used frequently in scrolling situations, we found that unnecessary recompositions were adding up to missed frame times and thus jank. We built a Modifier to easily spot these recompositions in our debug settings as well. By applying these techniques to our UI components, we were able to reduce jank by 10-15%.
Recomposition visualization Modifier in action. Blue (no recompositions), Green (1 recomposition).
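The Play Store’s own debug Modifier isn’t public, but a minimal sketch of the general idea, tinting an element based on how many times it has recomposed, might look like this:

import androidx.compose.foundation.border
import androidx.compose.runtime.remember
import androidx.compose.ui.Modifier
import androidx.compose.ui.composed
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.unit.dp

// A plain counter, deliberately not a State object, so incrementing it
// does not itself schedule another recomposition.
private class RecompositionCounter { var count = 0 }

// Debug-only: draws a colored border that changes as the element recomposes.
fun Modifier.recompositionHighlight(): Modifier = composed {
    val counter = remember { RecompositionCounter() }
    counter.count++
    val color = when {
        counter.count <= 1 -> Color.Blue  // composed once
        counter.count == 2 -> Color.Green // one recomposition
        else -> Color.Red                 // recomposing repeatedly: worth investigating
    }
    border(width = 2.dp, color = color)
}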
Another key component to optimizing Compose for the Play Store app was having a detailed, end-to-end migration strategy for the entire app. During initial integration experiments, we ran into the Two Stack Problem: running both Compose and View rendering within a single user session was very memory intensive, especially on lower-end devices. This cropped up both during rollouts of the code on the same page, but also when two different pages (for example, the Play Store home page and the search results page) were each on a different stack. In order to ameliorate this startup latency, it was important for us to have a concrete plan for the order and timeline of pages migrating to Compose. Additionally, we found it helpful to add short-term pre-warming of common classes as stop-gaps until the app is fully migrated over.
Because Compose is unbundled from the Android framework, the overhead of our team contributing directly to Jetpack Compose is much lower, resulting in fast turnaround times for improvements that benefit all developers. We were able to collaborate with the Jetpack Compose team to launch features like LazyList item type caching, as well as move quickly on lightweight fixes for issues like extra object allocations.
The Play Store’s adoption of Compose has been a boon for our team’s developer happiness, and a big step-up for code quality and health. All new Play Store features are built on top of this framework, and Compose has been instrumental in unlocking better velocity and smoother landings for the app. Due to the nature of our Compose migration strategy, we haven’t been able to measure things like APK size changes or build speed as closely, but all signs that we can see look very positive!
Compose is the future of Android UI development, and from the Play Store’s point of view, we couldn’t be happier about that!