Posted by Serban Constantinescu, Product Manager

Pixel 7 and Pixel 7 Pro are the first Android phones to support only 64-bit apps. This configuration drops OS support for 32-bit code, reducing memory usage, improving performance, and enhancing security. Over time, we expect this device configuration to become commonplace.
Thanks to the efforts and collaboration of the entire Android community, our ecosystem is ready. Transitioning Android devices to 64-bit-only required many changes across the platform, tooling, Play, and, of course, your apps. We started by introducing 64-bit support in 2014, announced policy changes in 2017, and began requiring 64-bit support for Google Play apps in 2019.
64-bit apps run faster because they have access to extra registers and instructions that aren't available to 32-bit apps. In addition, newer CPUs deliver up to 25% better performance when running 64-bit code or even drop support for 32-bit code altogether.
64-bit can help improve security. The bigger address space makes defenses like ASLR more effective and the spare bits can be used to protect control flow integrity. These countermeasures may reduce the chance an intruder can take control of your device.
Removing support for 32-bit code saves up to 150MB of RAM, which was used by the OS even when not running 32-bit apps. These memory savings result in fewer out-of-memory conditions meaning less jank and fewer background app kills.
Developers targeting 64-bit have access to better tools such as HWASan for detecting memory errors and improving the quality of an app.
64-bit-only device configurations halve the CTS testing time. Combined with GKI, vendors can update devices faster and more easily.
With 64-bit-only devices now reaching users, we encourage developers to start paying extra attention to testing their apps and updates for 64-bit-only devices. To support this, Google Play now provides pre-launch reports that are run on 64-bit-only devices to detect and report compatibility problems.
Note: While 64-bit-only devices will grow in popularity with phones joining Android Auto in this group, 32-bit-only devices will continue to be important for Android Go, Android TV, and Android Wear. Please continue supporting 32-bit ABIs; Google Play will continue serving 32-bit apps to 32-bit-only devices.
Posted by Niharika Arora, Developer Relations Engineer
In Part 1 and Part 2 of our “Optimizing for Android Go” blog series, we discussed why you should consider building for Android Go and how to optimize your app to perform well on Go devices. In this blog, we will talk about the tools that helped Google teams optimize the performance of their apps.
PSS is useful for the operating system when it wants to know how much memory is used by all processes since pages don’t get counted multiple times. PSS takes a long time to calculate because the system needs to determine which pages are shared and by how many processes. RSS doesn't distinguish between shared and non-shared pages (making it faster to calculate) and is better for tracking changes in memory allocation.
So, which method should you choose? The choice depends on how the shared memory is used. For example, if the shared memory is used only by your application, use the RSS approach; if the shared memory is mostly attributable to Google Play services, use the USS approach. For more understanding, please read here.
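For a quick runtime check, you can also read your app's own PSS from code with android.os.Debug. A minimal sketch (this only covers the calling process):

import android.os.Debug

// Fills a MemoryInfo object with memory stats for the calling process.
val memoryInfo = Debug.MemoryInfo()
Debug.getMemoryInfo(memoryInfo)
// Total proportional set size (PSS) for this process, in kB.
val totalPssKb = memoryInfo.totalPss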
adb logcat | grep hprof
adb pull <output-file-name>
This will pull the generated file to your machine for analysis.
To get info on the native heap, read here:
https://perfetto.dev/docs/data-sources/native-heap-profiler
To learn about the Java heap, read here:
https://perfetto.dev/docs/data-sources/java-heap-profiler
3. Understand low-memory killer
In Android, the low memory killer picks a process and kills it when the device is running low on RAM; the thresholds can be tuned by OEMs. Killing the process frees all the memory it was using. But what if the low memory killer kills a process the user cares about? Android keeps a priority list of applications, and the low memory killer chooses which app to remove based on that list. Read more here.
You can run this command to inspect the priorities:
adb shell dumpsys activity oom
To check stats on the low memory killer:
adb shell dumpsys activity lmk
For more information, please check the Perfetto documentation for Memory.
This is one of the best tools to find where all your app's memory is consumed. Use Perfetto to get information about memory-management events from the kernel. Dive deeper into how to profile the native and Java heaps here.
The Memory Profiler is a component in the Android Profiler that helps you identify memory leaks and memory churn that can lead to stutter, freezes, and even app crashes. It shows a real time graph of your app's memory use and lets you capture a heap dump, force garbage collections, and track memory allocations. To learn more about inspecting performance, please check MAD skills videos here.
You may want to observe how your app's memory is divided between different types of RAM allocation with the following adb command:
adb shell dumpsys meminfo <package_name|pid> [-d]
You can view the following seven memory categories with Meminfo:
Key memory terms:
Note:
The showmap command provides a much more detailed breakdown of memory than meminfo. It lists the names and sizes of the memory maps used by a process. This is a summary of the information available at /proc/<pid>/smaps, which is the primary source of information used in dumpsys meminfo, except for some graphics memory.
$ adb root
$ adb shell pgrep <process>
Output - process id
$ adb shell showmap <process id>
Sample output:
Common memory mappings are:
Malloc debug is a method of debugging native memory problems. It can help detect memory corruption, memory leaks, and use after free issues. You can check this documentation for more understanding and usage.
Beginning with API level 27 (Android 8.1), the Android NDK supports Address Sanitizer (ASan), a fast compiler-based tool for detecting memory bugs in native code. ASan detects stack and heap buffer overflows and underflows, heap use after free, stack use outside scope, and double or wild frees.
ASan runs on both 32-bit and 64-bit ARM, plus x86 and x86-64. ASan's CPU overhead is roughly 2x, code size overhead is between 50% and 2x, and the memory overhead is large (dependent on your allocation patterns, but on the order of 2x). To learn more, read here.
The Google Camera team used ASan and automated the process so that it ran regularly and reported any ASan issues back to them as alerts, and found it a really convenient way to fix memory issues missed during code authoring and review.
Once you have a complete app startup trace, look at the trace and measure the time taken for major operations like bindApplication, activityStart, etc., and look at where the overall time is spent.
This is your app’s home page, and performance will often depend on how this page loads. For most apps, there is a lot of data displayed on this page, spanning multiple layouts and processes running in the background. Check the home activity layout and specifically look at the Choreographer.onDraw method of the home activity.
The App Startup library provides a straightforward, performant way to initialize components at application startup. Both library developers and app developers can use App Startup to streamline startup sequences and explicitly set the order of initialization. Instead of defining separate content providers for each component you need to initialize, App Startup allows you to define component initializers that share a single content provider. This can significantly improve app startup time. To find how to use it in your app, refer here.
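As a rough illustration of the pattern (mirroring the WorkManager example from the App Startup documentation), an initializer looks like this:

import android.content.Context
import androidx.startup.Initializer
import androidx.work.Configuration
import androidx.work.WorkManager

// Initializes WorkManager at app startup via App Startup's shared content provider.
class WorkManagerInitializer : Initializer<WorkManager> {
    override fun create(context: Context): WorkManager {
        WorkManager.initialize(context, Configuration.Builder().build())
        return WorkManager.getInstance(context)
    }

    // This initializer does not depend on any other initializers.
    override fun dependencies(): List<Class<out Initializer<*>>> = emptyList()
}

Initializers are then registered as meta-data entries under the library's InitializationProvider in the manifest, or discovered automatically through dependencies().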
Baseline Profiles are a list of classes and methods included in an APK used by Android Runtime (ART) during installation to pre-compile critical paths to machine code. This is a form of profile guided optimization (PGO) that lets apps optimize startup, reduce jank, and improve performance for end users. Profile rules are compiled into a binary form in the APK, in assets/dexopt/baseline.prof.
During installation, ART performs Ahead-of-time (AOT) compilation of methods in the profile, resulting in those methods executing faster. If the profile contains methods used in app launch or during frame rendering, the user experiences faster launch times and/or reduced jank. For more information on usage and advantages, refer here.
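As a hedged sketch of how a profile can be generated with the Macrobenchmark library (exact method names have changed across library versions; recent versions expose collect(), and "com.example.app" is an illustrative package name), a generator test looks roughly like this:

import androidx.benchmark.macro.junit4.BaselineProfileRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

// Runs on a device and records the classes and methods used during app launch.
@RunWith(AndroidJUnit4::class)
class BaselineProfileGenerator {
    @get:Rule
    val baselineProfileRule = BaselineProfileRule()

    @Test
    fun generate() = baselineProfileRule.collect(packageName = "com.example.app") {
        startActivityAndWait()
    }
}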
You can use the CPU Profiler to inspect your app’s CPU usage and thread activity in real time while interacting with your app, or you can inspect the details in recorded method traces, function traces, and system traces. The detailed information that the CPU Profiler records and shows is determined by which recording configuration you choose:
The Debug API gives apps the ability to start and stop recording CPU profiling programmatically, so you can then inspect the results in the CPU Profiler. It provides information about tracing and allocation counts, for example via startMethodTracing() and stopMethodTracing().
Debug.startMethodTracing("sample") - Starts recording a trace log with the name you provide.
Debug.stopMethodTracing() - Stops recording the trace; the system buffers the generated trace data until the application calls this method.
Usage
The sketch below shows how these calls can be used for app instrumentation.
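A minimal sketch, assuming a hypothetical loadHomeFeed() operation you want to trace:

import android.os.Debug

fun traceHomeFeedLoad() {
    // A relative name like "sample" is written as sample.trace under the app's
    // package-specific directory on external storage.
    Debug.startMethodTracing("sample")
    try {
        loadHomeFeed()   // hypothetical expensive operation under measurement
    } finally {
        Debug.stopMethodTracing()
    }
}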
<profileable android:shell=["true" | "false"] android:enabled=["true" | "false"] />
The Jetpack Microbenchmark library allows you to quickly benchmark your Android native code (Kotlin or Java) from within Android Studio. The library handles warmup, measures your code performance and allocation counts, and outputs benchmarking results to both the Android Studio console and a JSON file with more detail. Read more here.
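A minimal sketch of a benchmark class, assuming a benchmark module set up as described in the library documentation (the sorting workload is purely illustrative):

import androidx.benchmark.junit4.BenchmarkRule
import androidx.benchmark.junit4.measureRepeated
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class SortBenchmark {
    @get:Rule
    val benchmarkRule = BenchmarkRule()

    @Test
    fun sortShuffledList() {
        val input = (0 until 1_000).shuffled()
        benchmarkRule.measureRepeated {
            input.sorted()   // code under measurement; warmup and timing handled by the rule
        }
    }
}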
No user wants to download a large APK that consumes a lot of their network or Wi-Fi bandwidth and, most importantly, storage space on their device.
The size of your APK has an impact on how fast your app loads, how much memory it uses, and how much power it consumes. Reducing your app's download size enables more users to download your app.
The Android Size Analyzer tool is an easy way to identify and implement many strategies for reducing the size of your app. It is available both as an Android Studio plugin and as a standalone JAR.
The lint tool, a static code analyzer included in Android Studio, detects resources in your res/ folder that your code doesn't reference. When the lint tool discovers a potentially unused resource in your project, it prints a message like the following example.
Note: Libraries that you add to your code may include unused resources. Gradle can automatically remove resources on your behalf if you enable shrinkResources in your app's build.gradle file.
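As a quick sketch, enabling resource shrinking for the release build type in a module's build.gradle looks like this (resource shrinking only works together with code shrinking):

android {
    buildTypes {
        release {
            // shrinkResources requires minifyEnabled to be on as well.
            minifyEnabled true
            shrinkResources true
        }
    }
}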
In Android 12 (API level 31), the NDK ImageDecoder API has been expanded to decode all frames and timing data from images that use the animated GIF and animated WebP file formats. When it was introduced in Android 11, this API decoded only the first image from animations in these formats.
Use ImageDecoder instead of third-party libraries to further decrease APK size and benefit from future updates related to security and performance.
For more details on the API, refer to the API reference and the sample on GitHub.
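The post references the NDK (C) API; the framework offers an equivalent ImageDecoder class you can use from Kotlin. A minimal sketch, where "animation.webp" and the ImageView are illustrative:

import android.graphics.ImageDecoder
import android.graphics.drawable.AnimatedImageDrawable
import android.widget.ImageView

fun showAnimatedImage(imageView: ImageView) {
    // Decode an animated GIF/WebP from app assets and start playback.
    val source = ImageDecoder.createSource(imageView.context.assets, "animation.webp")
    val drawable = ImageDecoder.decodeDrawable(source)
    imageView.setImageDrawable(drawable)
    (drawable as? AnimatedImageDrawable)?.start()
}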
The aapt tool can optimize the image resources placed in res/drawable/ with lossless compression during the build process. For example, the aapt tool can convert a true-color PNG that does not require more than 256 colors to an 8-bit PNG with a color palette. Doing so results in an image of equal quality but a smaller memory footprint. Read more here.
Note: Please check the Android developer documentation for all the useful tools that can help you identify and fix such performance issues.
This part of the blog captures the tools used by Google to identify and fix performance issues in their apps. They saw great improvements in their metrics. Most Android Go apps could benefit from applying the strategies described above. Optimize and make your app delightful and fast for your users!
Posted by The Android Team

CameraX is an Android Jetpack library that makes it easy to incorporate camera functionality directly in your Android app. That’s why we focus heavily on device compatibility out-of-the-box, so you can focus on what makes your app unique.
In this post, we’ll look at three ways CameraX makes developers’ lives easier when it comes to device compatibility. First, we’ll take a peek into our CameraX Test Lab where we test over 150 physical phones every day. Second, we’ll look at Quirks, the mechanism CameraX uses to automatically handle device inconsistencies. Third, we’ll discuss the ways CameraX makes it easier to develop apps for foldable phones.
We built the CameraX Test Lab to ensure CameraX works on the Android devices most people have in their pockets. The Test Lab opened in 2019 with 52 phone models. Today, the Test Lab has 150 phone models. We prioritize devices with the most daily active users over the past 28 days (28DAUs) and devices that leverage a diverse range of systems on a chip (SoCs). The Test Lab currently covers over 750 million 28DAUs. We also test many different Android versions, going back to Android 5.1 (Lollipop).
To generate reliable test results, each phone model has its own test enclosure to control for light and other environmental factors. Each enclosure contains two phones of the same model to simplify testing the front and back cameras. On the opposite side of the test enclosure from the phones, there’s a high-resolution test chart. This chart has many industry-standard tests for camera attributes like color correctness, resolution, sharpness, and dynamic range. The chart also has some specific elements for functional tests like face detection.
When you adopt CameraX in your app, you get the assurance of this continuous testing across many devices and API levels. Additionally, we’re continuously making improvements to the Test Lab, including adding new phones based on market trends to ensure that the majority of your users are well represented. See our current test device list for the latest inventory in our Test Lab.
Google provides a Camera Image Test Suite so that OEMs’ cameras meet a baseline of consistency. Still, when dealing with the wide range of devices that run Android, there can be differences in the end user camera experience. CameraX includes an abstraction layer, called Quirks, to remove these variations in behavior so that CameraX behaves consistently across all devices with no effort from app developers.
We find these quirks based on our own manual testing, the Test Lab’s automatic testing, and bug reports filed in our public CameraX issue tracker. As of today, CameraX has over 30 Quirks that automatically fix behavior inconsistencies for developers. Here are a few examples:
These are just a few examples of how CameraX automatically handles quirky device behavior. We will continue to add more corrections as we find them, so app developers won’t have to deal with these one-offs on their own. If you find inconsistent behavior on a device you’re testing, you can file an issue in the CameraX component detailing the behavior and the device it’s happening on.
Foldables continue to be the fastest growing smartphone form factor. Their flexibility in screen size adds complexity to camera development. Here are a few ways that CameraX simplifies the development of camera apps on foldables.
CameraX’s Preview use case handles differences between the aspect ratio of the camera and the aspect ratio of the screen. With traditional phone and tablet form factors, this difference should be small because Section 7.5.5 of the Android Compatibility Definition Document requires that the “long dimension of the camera aligns with the screen’s long dimension.” However, with foldable devices the screen aspect ratio can change, so this relationship might not always hold. With CameraX you can always preserve aspect ratio by filling the PreviewView (which may crop the preview image) or fitting the image into the PreviewView (which may result in letterboxing or pillarboxing). Set PreviewView.ScaleType to specify which method to use.
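For example (a minimal sketch, assuming a PreviewView obtained from your layout):

import androidx.camera.view.PreviewView

fun configurePreview(previewView: PreviewView) {
    // FILL_CENTER fills the PreviewView and may crop the preview image;
    // FIT_CENTER fits the whole image and may letterbox or pillarbox it.
    previewView.scaleType = PreviewView.ScaleType.FILL_CENTER
}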
The increase in foldable devices also increases the possibility that your app may be used in a multi-window environment. CameraX is set up for multi-window support out-of-the-box. CameraX handles all aspects of lifecycle management for you, including the multi-window case where other apps can take priority access of singleton resources, such as the microphone or camera. This means no additional effort is required from app developers when using CameraX in a multi-window environment.
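For instance, binding a Preview use case to an Activity's lifecycle is all that's needed; CameraX then acquires and releases the camera as the lifecycle changes. A minimal sketch, assuming a Preview use case built elsewhere:

import androidx.appcompat.app.AppCompatActivity
import androidx.camera.core.CameraSelector
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.core.content.ContextCompat

fun AppCompatActivity.bindCamera(preview: Preview) {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
    cameraProviderFuture.addListener({
        val cameraProvider = cameraProviderFuture.get()
        cameraProvider.unbindAll()
        // CameraX releases the camera automatically when this lifecycle stops,
        // for example when another app takes priority access in multi-window.
        cameraProvider.bindToLifecycle(this, CameraSelector.DEFAULT_BACK_CAMERA, preview)
    }, ContextCompat.getMainExecutor(this))
}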
We’re always looking for more ways to improve CameraX to make it even easier to use. With respect to foldables, for example, we’re exploring ways to let developers call setTargetResolution() without having to take into account the different configurations a foldable device can be in. Keep an eye on this blog and our CameraX release notes for updates on new features!
We have a number of resources to help you get started with CameraX. The best starting place is our CameraX codelab. If you want to dig a bit deeper with CameraX, check out our camera code samples, ranging from a basic app to more advanced features like camera extensions. For an overview of everything CameraX has to offer, see our CameraX documentation. If you have any questions, feel free to reach out to us on our CameraX discussion group.
Posted by Lidia Gaymond, Product Manager, Google Play

Powered by Android App Bundles, Google Play gives all developers the benefits of modern Android distribution. As the Android ecosystem expands, it’s more important than ever to know how your app is being delivered to different devices.
Delivery insights help you better understand and analyze your app’s delivery performance and what contributes to it, and take action to optimize the experience for your users. Here are five recent Play Console updates you can use to get more insight into your delivery performance.
1. When you release your app, you’ll now see its expected app size and update size at the point of release creation, so you can determine if the size change from the previous release is acceptable.
2. If you use advanced Play delivery tools, such as Play Asset Delivery or Play Feature Delivery, detailed information about how these are shipped to users is now available on the Statistics page and in the Delivery tab in App bundle explorer. Understanding how your feature modules and asset packs are used can help you make better decisions about further modularization and uncover usage patterns across your users.
3. When analyzing your existing release, you can now see how many users are on it to help you assess the “freshness” of your install base and how quickly users migrate to new releases. To improve your update rate, consider using the In-app updates API.
4. For a deeper dive into your individual app version performance, you can find information about your download size per device model, most common update sizes, and install base in App bundle explorer.
5. All of these features are also available in your App Dashboard, where you can track these measurements over time alongside other app metrics.
We hope these changes will help you make more informed decisions about your app development and provide you with a detailed view of how your app is being delivered to end user devices.
Posted by Nick Butcher, Developer Relations Engineer
The Android Developer Summit is back, and the first stop on the world tour just finished! We focused on the latest developments in Modern Android Development: our set of libraries, tools, and guidance that make it faster and easier to build amazing Android apps. Here is a recap of the top 3 announcements from the conference.
The October ‘22 stable release of Jetpack Compose brings support for staggered grids, snapping behaviors, pull to refresh, drawing text directly to canvas and many bug fixes and performance improvements. It also includes the first stable release of the Compose Material 3 library, helping you to build fresh, beautiful apps with updated components and styling.
We’ve also released a new Gradle BOM to simplify how you specify compose dependencies and launched the first alphas of Compose for Android TV—in our continued efforts to bring the joys of writing Compose UIs to all Android form factors.
If you haven’t tried them yet, Baseline Profiles are a powerful way to improve app startup and runtime performance—without changing a single line of app code—and we’ve seen them yield up to 40% faster startup times.
With Jetpack Benchmark 1.1 and Android Gradle Plugin 7.3 both reaching stable, the toolchain for generating a profile is now completely stable and ready to integrate into your app and start seeing the speed benefits. See the “Making Apps Blazing Fast with Baseline Profiles” talk for all the details.
A solid app needs a strong architecture and we’ve been bikeshedding on your behalf to provide detailed guidance on building scalable, testable, high quality apps. Check out the talks and new docs on Modularization, State holders & State Production, UI Events, Flow collection, building Offline first apps and more.
Posted by Gurupreet Singh, Developer Advocate, Android
Today marks the first stable release of Compose Material 3. The library allows you to build Jetpack Compose UIs with Material Design 3, the next evolution of Material Design. You can start using Material Design 3 in your apps today!
Note: The terms "Material Design 3", "Material 3", and "M3" are used interchangeably.
You can start using Material Design 3 in your apps by adding the Compose Material 3 dependency to your build.gradle files:
// Add dependency in module build.gradle
implementation "androidx.compose.material3:material3:$material3_version"
Dynamic color derives from the user’s wallpaper. The colors can be applied to apps and the system UI.
Reply dynamic theming from wallpaper (left) and default app theming (right)
The ColorScheme class provides builder functions to create both dynamic and custom light and dark color schemes:
Theme.kt
// Dynamic color is available on Android 12+
val dynamicColor = Build.VERSION.SDK_INT >= Build.VERSION_CODES.S
val colorScheme = when {
    dynamicColor && darkTheme -> dynamicDarkColorScheme(LocalContext.current)
    dynamicColor && !darkTheme -> dynamicLightColorScheme(LocalContext.current)
    darkTheme -> darkColorScheme(...)
    else -> lightColorScheme(...)
}

MaterialTheme(
    colorScheme = colorScheme,
    typography = typography,
    shapes = shapes
) {
    // M3 App content
}
Switch(
    checked = isChecked,
    onCheckedChange = { /* ... */ },
    thumbContent = {
        Icon(
            imageVector = Icons.Default.Check,
            contentDescription = stringResource(id = R.string.switch_check)
        )
    },
)
ModalNavigationDrawer(
    drawerContent = {
        ModalDrawerSheet(
            drawerShape = MaterialTheme.shapes.small,
            drawerContainerColor = MaterialTheme.colorScheme.primaryContainer,
            drawerContentColor = MaterialTheme.colorScheme.onPrimaryContainer,
            drawerTonalElevation = 4.dp,
        ) {
            DESTINATIONS.forEach { destination ->
                NavigationDrawerItem(
                    selected = selectedDestination == destination.route,
                    onClick = { ... },
                    icon = { ... },
                    label = { ... }
                )
            }
        }
    }
) {
    // Screen content
}
CenterAlignedTopAppBar(
    title = { Text(stringResource(R.string.top_stories)) },
    scrollBehavior = scrollBehavior,
    navigationIcon = { /* Navigation Icon */ },
    actions = { /* App bar actions */ }
)
val typography = Typography(
    titleLarge = TextStyle(
        fontWeight = FontWeight.SemiBold,
        fontSize = 22.sp,
        lineHeight = 28.sp,
        letterSpacing = 0.sp
    ),
    titleMedium = TextStyle(
        fontWeight = FontWeight.SemiBold,
        fontSize = 16.sp,
        lineHeight = 24.sp,
        letterSpacing = 0.15.sp
    ),
    ...
)
bodyLarge = TextStyle(
    fontWeight = FontWeight.Normal,
    fontFamily = FontFamily.SansSerif,
    fontStyle = FontStyle.Italic,
    fontSize = 16.sp,
    lineHeight = 24.sp,
    letterSpacing = 0.15.sp,
    baselineShift = BaselineShift.Subscript
)
Each shape has a default value but you can override it:
val shapes = Shapes(
    extraSmall = RoundedCornerShape(4.dp),
    small = RoundedCornerShape(8.dp),
    medium = RoundedCornerShape(12.dp),
    large = RoundedCornerShape(16.dp),
    extraLarge = RoundedCornerShape(28.dp)
)
// Add dependency in module build.gradle
implementation "androidx.compose.material3:material3-window-size-class:$material3_version"
See the Reply Compose sample to learn more about adaptive apps and the window size classes implementation.
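A minimal sketch of reading the window size class inside an Activity's setContent, using calculateWindowSizeClass and WindowWidthSizeClass from the material3-window-size-class artifact (the layout composables are hypothetical, and the API may require opting in to ExperimentalMaterial3WindowSizeClassApi):

setContent {
    // Recomputed on configuration changes, e.g. folding or unfolding the device.
    val windowSizeClass = calculateWindowSizeClass(this)
    when (windowSizeClass.widthSizeClass) {
        WindowWidthSizeClass.Compact -> CompactLayout()   // e.g. phone in portrait
        WindowWidthSizeClass.Medium -> MediumLayout()     // e.g. unfolded foldable
        else -> ExpandedLayout()                          // e.g. tablet or large window
    }
}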
M3 components, like top app bars, navigation drawers, navigation bars, and navigation rails, include built-in support for window insets. These components, when used independently or with Scaffold, will automatically handle insets determined by the status bar, navigation bar, and other parts of the system UI.
Scaffold(
    contentWindowInsets = WindowInsets(16.dp)
) {
    // Scaffold content
}