Posted by Kanika Sachdeva, Product Manager, Google Play
At Google Play, we’re committed to providing a positive, safe environment for children and families. Over the last few years, we’ve helped parents find family-friendly content through the Designed for Families program and empowered them to set digital ground rules for their families with Family Link parental controls.
After taking input from users and developers, we are evolving our Google Play policies to provide additional protections for children and families. These policy changes build on our existing efforts to ensure that apps for children have appropriate content, show suitable ads, and handle personally identifiable information correctly; they also reduce the chance that apps not intended for children could unintentionally attract them.
Over the next few months, we will continue to roll out additional features that will help parents make informed choices before they install apps for their kids.
We are asking every developer to thoughtfully consider whether children are part of their target audience.
As part of the new policy, all developers must complete the new target audience and content section of the Google Play Console.
The new target audience and content section of the Google Play Console.
For most developers, the target audience does not include children and this section should be relatively quick to complete. If children are part of your target audience, we will ask you follow-up questions.
We will use the information you provide in the Google Play Console, along with our own review of your app marketing assets, to categorize your app and apply policies according to the following target audience groups: children, children and older users, older users.*
We recommend you review our new policies, developer guide, and this training before starting the target audience and content section so that you clearly understand the implications of your answers.
These changes affect every developer on Play, so if your app is already live on the Google Play store, we want to give you time to make any necessary updates. Below are the key dates to keep in mind:
We’re committed to providing the resources you need to understand and implement these changes. You can view more information on the Android developers website and access training on our new policies on Google Play's Academy for App Success. We have also increased our staffing and improved our communications for app review and appeals processes to help you get timely decisions and understand any changes that are needed.
Thanks in advance for the work you are putting in. We will continue to listen to your feedback and use it to improve the way we roll out these updates and communicate with the developer community.
*Note: The word “children” can mean different things in different locales and in different contexts. It is important that you determine what obligations and/or age-based restrictions may apply for the countries where you target your app.
Posted by Patricia Correa, Director, Platforms & Ecosystems Developer Marketing
Back in March we opened submissions for the Indie Games Showcase, an international competition for game studios from Europe*, South Korea, and Japan that are constantly pushing the boundaries of storytelling, visual excellence, and creativity in mobile.
We were once again impressed by the diversity and creativity that the indie community is bringing to mobile, and we’re happy to announce the 20 finalists.
Check out the local websites to learn more about the finalists and the events.
AntVentor by LoopyMood (Ukraine)
CHUCHEL by Amanita Design (Czech Republic)
#DRIVE by Pixel Perfect Dude (Poland)
Fly THIS! by Northplay (Denmark)
Fobia by Tapteek (Russia)
G30 - A Memory Maze by Ivan Kovalov (Russia)
Gold Peaks by Afterburn (Poland)
Grayland by 1DER Entertainment (Slovakia)
Hexologic by MythicOwl (Poland)
Lucid Dream Adventure by Dali Games (Poland)
OCO by SPECTRUM48 (United Kingdom)
Ordia by Loju (United Kingdom)
Peep by Taw (Russia)
Photographs by EightyEight Games (United Kingdom)
Rest in Pieces by Itatake (Sweden)
Returner Zhero by Fantastic, yes (Denmark)
see/saw by Kamibox (Germany)
STAP by Overhead Game Studio (United Kingdom)
Tesla vs. Lovecraft by 10tons (Finland)
Tiny Room Stories: Town Mystery by Kiary games (Russia)
ALTER EGO by 株式会社カラメルカラム
Infection - 感染 - by CanvasSoft
Jumpion - Make a two-step jump ! by Comgate
Lunch Time Fish by SoftFunk HULABREAKS
MeltLand by an individual developer
ReversEstory by an individual developer
キグルミキノコ Q-bit -第一章- by an individual developer
SumoRoll - Road to the Yokozuna by Studio Kingmo
Escape Game: The Little Prince by 株式会社 Jammsworks
Kamiori - カミオリ by TeamOrigami
Bear's Restaurant by an individual developer
クマムシさん惑星 ミクロの地球最強伝説 by Ars Edutainment
ゴリラ!ゴリラ!ゴリラ! by Gang Gorilla Games
Girl x Sun - Terasene - Tower defence & Novel game by SleepingMuseum
タシテケス by an individual developer
Destination: Dragons! by GAME GABURI
Cute cat's cake shop by an individual developer
Persephone by Momo-pi
Hamcorollin' by illuCalab.
Food Truck Pup: Cooking Chef by 合同会社ゲームスタート
다크타운 - 온라인 by 초콜릿소프트
Bad 2 Bad: Extinction by Dawinstone
셧더펑 : 슈팅액션 by Take Five Games
Cartoon Craft by Studio NAP
Catch Idle by HalftimeStudio
Hexagon Dungeon by Bleor Games
Hexonia by Togglegear
Mahjong - Magic Fantasy by Aquagamez
Maze Cube by IAMABOY
Road to Valor: World War II by Dreamotion Inc.
Onslot Car by Wondersquad
ROOMS: The Toymaker's Mansion by HandMade Game
Rhythm Star: Music Adventure by Anbsoft
7Days - Decide your story by Buff Studio
Seoul2033: Backer by Banjiha Games
Super Jelly Pop by STARMONSTER
UNLINK Daily Puzzle by Supershock
몬스터파크 온라인 by OVENCODE
WhamBam Warriors by DrukHigh
언노운 나이츠 by teamarex
We will welcome all finalists at events in London, Seoul, and Tokyo, where they will showcase their games to an audience of players, press and industry experts, for a chance to win the top prizes.
The events are open to the public, so if you would like to meet these games developers, try out their creations, and help choose the winners, sign up on the regional websites.
Congratulations to all finalists!
* The competition is open to developers from the following European countries and Israel: Austria, Belgium, Belarus, Czech Republic, Denmark, Finland, France, Germany, Italy, Netherlands, Norway, Poland, Romania, Russia, Slovakia, Spain, Sweden, Ukraine, and the United Kingdom (including Northern Ireland).
Posted by Peiyong Lin, Software Engineer
Android is now at the point where an sRGB color gamut with 8 bits per color channel is not enough to take full advantage of today's display and camera technology. At Android we have been working to make wide color photography happen end to end, i.e., more bits and bigger gamuts. This means users will eventually be able to capture the richness of a scene, share wide color pictures with friends, and view those wide color pictures on their phones. And now with Android Q, it's getting really close to reality: wide color photography is coming to Android. So it's very important for applications to be wide color gamut ready. This article shows how you can test whether your application is wide color gamut ready and wide color gamut capable, and the steps you need to take to prepare for wide color gamut photography.
But before we dive in, why wide color photography? Display panels and camera sensors on mobile are getting better and better every year. More and more newly released phones will be shipped with calibrated display panels, some are wide color gamut capable. Modern camera sensors are capable of capturing scenes with a wider range of color outside of sRGB and thus produce wide color gamut pictures. And when these two come together, it creates an end-to-end photography experience with more vibrant colors of the real world.
At a technical level, this means there will be pictures coming to your application with an ICC profile that is not sRGB but some other wider color gamut: Display P3, Adobe RGB, etc. For consumers, this means their photos will look more realistic.
Display P3
sRGB
Above are the Display P3 version and the sRGB version of the same scene. If you are reading this article on a calibrated, wide color gamut capable display, you will notice a significant difference between them.
There are two kinds of tests you can perform to know whether your application is prepared or not. One is what we call color correctness tests, the other is wide color tests.
A wide color gamut ready application manages color proactively: when given an image, it always checks the color space and converts it according to its own ability to show wide color gamut. Even if the application can't handle wide color gamut, it can still show the sRGB portion of the image correctly, without color distortion.
Below is a color correct example of an image with Display P3 ICC profile.
However, if your application is not color correct, it will typically end up manipulating or displaying the image without converting the color space, resulting in color distortion. For example, you may get the image below, where the colors are washed out and everything looks distorted.
A wide color gamut capable application can show colors outside of the sRGB color space when given wide color gamut images. Here's an image you can use to test whether your application is wide color gamut capable: if it is, a red Android logo will show up. Note that you must run this test on a wide color gamut capable device, for example a Pixel 3 or Samsung Galaxy S10.
To prepare for wide color gamut photography, your application must at least pass the wide color gamut ready test (the color correctness test). If your application passes it, that's awesome! If it doesn't, here are the steps to make it wide color gamut ready.
The key to being prepared and future proof is that your application should never assume the sRGB color space for external images it receives. Your application must check the color space of decoded images and convert them when necessary. Failing to do so will result in color distortion or the color profile being discarded somewhere in your pipeline.
You must be at least color correct. If your application doesn't adopt wide color gamut, you are very likely to just want to decode every image to sRGB color space. You can do that by either using BitmapFactory or ImageDecoder.
In API level 26, we added inPreferredColorSpace in BitmapFactory.Options, which allows you to specify the target color space you want the decoded bitmap to have. Let's say you want to decode a file; below is the snippet you are very likely to use in order to manage the color:
final BitmapFactory.Options options = new BitmapFactory.Options();
// Decode this file to sRGB color space.
options.inPreferredColorSpace = ColorSpace.get(Named.SRGB);
Bitmap bitmap = BitmapFactory.decodeFile(FILE_PATH, options);
In Android P (API level 28), we introduced ImageDecoder, a modernized approach to decoding images. If you upgrade your app to API level 28 or beyond, we recommend using it instead of the BitmapFactory and BitmapFactory.Options APIs.
Below is a snippet to decode the image to an sRGB bitmap using ImageDecoder#decodeBitmap API.
ImageDecoder.Source source = ImageDecoder.createSource(FILE_PATH);
try {
    bitmap = ImageDecoder.decodeBitmap(source, new ImageDecoder.OnHeaderDecodedListener() {
        @Override
        public void onHeaderDecoded(ImageDecoder decoder,
                ImageDecoder.ImageInfo info, ImageDecoder.Source source) {
            decoder.setTargetColorSpace(ColorSpace.get(Named.SRGB));
        }
    });
} catch (IOException e) {
    // handle exception.
}
ImageDecoder also has the advantage of letting you know the encoded color space of the bitmap before you get the final bitmap, by passing an ImageDecoder.OnHeaderDecodedListener and checking ImageDecoder.ImageInfo#getColorSpace(). So, depending on how your application handles color spaces, you can check the encoded color space of the content and set the target color space accordingly.
ImageDecoder.Source source = ImageDecoder.createSource(FILE_PATH);
try {
    bitmap = ImageDecoder.decodeBitmap(source, new ImageDecoder.OnHeaderDecodedListener() {
        @Override
        public void onHeaderDecoded(ImageDecoder decoder,
                ImageDecoder.ImageInfo info, ImageDecoder.Source source) {
            ColorSpace cs = info.getColorSpace();
            // Do something...
        }
    });
} catch (IOException e) {
    // handle exception.
}
For more detailed usage you can check out the ImageDecoder APIs here.
Typical bad practices include, but are not limited to, assuming that every decoded image is sRGB and uploading a bitmap to a GL texture without checking its color space. All of these cause the same severe, user-visible result: color distortion. For example, below is a code snippet that results in an application that is not color correct:
// This is bad, don't do it!
final BitmapFactory.Options options = new BitmapFactory.Options();
final Bitmap bitmap = BitmapFactory.decodeFile(FILE_PATH, options);
glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES31.GL_RGBA, bitmap.getWidth(),
        bitmap.getHeight(), 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLUtils.texSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, bitmap,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE);
There's no color space check before the bitmap is uploaded as a texture, so the application ends up with the distorted image below from the color correctness test.
Besides the changes above, which you must make in order to handle images correctly, if your application is heavily image based you will want to take additional steps to display these images in their full vibrant range by enabling wide color gamut mode in your manifest or creating Display P3 surfaces.
To enable the wide color gamut in your activity, set the colorMode attribute to wideColorGamut in your AndroidManifest.xml file. You need to do this for each activity for which you want to enable wide color mode.
android:colorMode="wideColorGamut"
You can also set the color mode programmatically in your activity by calling the setColorMode(int) method and passing in COLOR_MODE_WIDE_COLOR_GAMUT.
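For example, here is a minimal Kotlin sketch of opting an activity's window into wide color gamut at runtime (the activity name is just a placeholder):

import android.app.Activity
import android.content.pm.ActivityInfo
import android.os.Bundle

class GalleryActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Equivalent to android:colorMode="wideColorGamut" in the manifest;
        // Window#setColorMode(int) is available on API 26 and higher.
        window.colorMode = ActivityInfo.COLOR_MODE_WIDE_COLOR_GAMUT
    }
}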
To render wide color gamut content, besides having wide color content, you will also need to create a wide color gamut surface to render to. In OpenGL, for example, your application must first check for the Display P3 EGL extensions (such as EGL_EXT_gl_colorspace_display_p3_passthrough, used in the snippet below).
And then, request the Display P3 as the color space when creating your surfaces, as shown in the following code snippet:
private static final int EGL_GL_COLORSPACE_DISPLAY_P3_PASSTHROUGH_EXT = 0x3490;

public EGLSurface createWindowSurface(EGL10 egl, EGLDisplay display,
        EGLConfig config, Object nativeWindow) {
    EGLSurface surface = null;
    try {
        int attribs[] = {
            EGL_GL_COLORSPACE_KHR, EGL_GL_COLORSPACE_DISPLAY_P3_PASSTHROUGH_EXT,
            egl.EGL_NONE
        };
        surface = egl.eglCreateWindowSurface(display, config, nativeWindow, attribs);
    } catch (IllegalArgumentException e) {}
    return surface;
}
Also check out our post for more details on how you can adopt wide color gamut in native code.
Finally, if you own or maintain an image decoding/encoding library, it will need to at least pass the color correctness tests as well. To modernize your library, there are two things we strongly recommend when you extend its APIs to manage color: let callers request a target color space when decoding, and preserve (or correctly convert) the image's color profile when encoding.
After you finish, go back to the above section and perform the two color tests.
Posted by Florina Muntenescu & Wojtek Kaliciński, Developer Advocates, Android
Last week at Google I/O, we announced a big step: Android development will become increasingly Kotlin-first. It’s a language that many of you already love: over 50% of professional Android developers now use Kotlin, and it’s the fastest-growing language on GitHub. As part of this announcement, many new Jetpack APIs and features will be offered first in Kotlin. So if you’re starting a new project, you should try writing it in Kotlin; code written in Kotlin often means much less code for you–less code to type, test, and maintain.
To help you dive deeper into Kotlin, we're happy to announce a new program we're launching together with JetBrains: Kotlin/Everywhere, a series of community-driven events focusing on the potential of Kotlin on all platforms. We aim to help you learn the essentials and best practices of using Kotlin everywhere, be it Android, back end, front end, or other platforms.
Join the Kotlin/Everywhere global event series between June and December 2019.
Whether you are a developer, a speaker, a member of a Kotlin User Group or Google Developer Group, or any other community leader, join us. Anyone interested in learning Kotlin and its ecosystem, sharing knowledge, and hosting a Kotlin-focused event is welcome to attend.
If you are a developer wanting to learn more about Kotlin, or a speaker excited to share your Kotlin experience with others, you can find events near you to join. Just go to the map on the website. More events will be added over time.
If you want to host an event in your city, you can begin by checking out the detailed organizers’ guide. It will help you to decide on the format and what kind of support you might need. All the necessary tips and tricks, materials, and branding assets are inside. Go ahead and submit your event on the official web page.
Besides the detailed organizers’ guide, we also provide you with resources such as content, codelabs, and guidance to help you maximize your success. You can also apply for support: we have speakers from Google/JetBrains and can help by providing funding for venue, food and drinks, swag, or other. We will also list your event on the official website.
Still have questions? Ask them at our hangout sessions for organizers on May 16 and 17.
Let us know if you want to take part! Apply at kotl.in/everywhere
Posted by Dan Galpin
Developing Android Apps with Kotlin, developed by Google together with Udacity, is our newly-released, free, self-paced online course. You'll learn how to build Android apps using industry-standard tools and libraries in the Kotlin programming language.
Android development fundamentals are taught in the context of an architecture that provides the scaffolding for robust, maintainable applications. The course covers why and how to use Android Jetpack components such as Room for databases, WorkManager for background processing, the Navigation component, and more. You'll use popular community libraries to simplify common tasks such as Glide for image loading, Retrofit for networking, and Moshi for JSON parsing. The course teaches key Kotlin features such as coroutines to help you write your app code more quickly and concisely.
As you work through the course, you'll build fun and interesting apps, such as a Mars photo gallery, a trivia game, a sleep tracker and much more.
This course is intended for people who have programming experience and are comfortable with Kotlin basics. If you're new to the Kotlin language, we recommend taking the Udacity Kotlin Bootcamp course first.
The course is available free, online at Udacity; take it in your own time at your own pace.
Come learn how to build Android apps in Kotlin with us at https://www.udacity.com/course/ud9012.
Posted by Rene Mayrhofer and Xiaowen Xin, Android Security & Privacy Team
With every new version of Android, one of our top priorities is raising the bar for security. Over the last few years, these improvements have led to measurable progress across the ecosystem, and 2018 was no different.
In the 4th quarter of 2018, we had 84% more devices receiving a security update than in the same quarter the prior year. At the same time, no critical security vulnerabilities affecting the Android platform were publicly disclosed without a security update or mitigation available in 2018, and we saw a 20% year-over-year decline in the proportion of devices that installed a Potentially Harmful App. In the spirit of transparency, we released this data and more in our Android Security & Privacy 2018 Year In Review.
But now you may be asking, what’s next?
Today at Google I/O we lifted the curtain on all the new security features being integrated into Android Q. We plan to go deeper on each feature in the coming weeks and months, but first wanted to share a quick summary of all the security goodness we’re adding to the platform.
Storage encryption is one of the most fundamental (and effective) security technologies, but current encryption standards require devices to have cryptographic acceleration hardware. Because of this requirement, many devices are not capable of using storage encryption. The launch of Adiantum, which we announced in February, changes that in the Android Q release. Adiantum is designed to run efficiently without specialized hardware, and can work across everything from smart watches to internet-connected medical devices.
Our commitment to the importance of encryption continues with the Android Q release. All compatible Android devices newly launching with Android Q are required to encrypt user data, with no exceptions. This includes phones, tablets, televisions, and automotive devices. This will ensure the next generation of devices are more secure than their predecessors, and allow the next billion people coming online for the first time to do so safely.
However, storage encryption is just one half of the picture, which is why we are also enabling TLS 1.3 support by default in Android Q. TLS 1.3 is a major revision to the TLS standard finalized by the IETF in August 2018. It is faster, more secure, and more private. TLS 1.3 can often complete the handshake in fewer roundtrips, making the connection time up to 40% faster for those sessions. From a security perspective, TLS 1.3 removes support for weaker cryptographic algorithms, as well as some insecure or obsolete features. It uses a newly-designed handshake which fixes several weaknesses in TLS 1.2. The new protocol is cleaner, less error prone, and more resilient to key compromise. Finally, from a privacy perspective, TLS 1.3 encrypts more of the handshake to better protect the identities of the participating parties.
Android utilizes a strategy of defense-in-depth to ensure that individual implementation bugs are insufficient for bypassing our security systems. We apply process isolation, attack surface reduction, architectural decomposition, and exploit mitigations to render vulnerabilities more difficult or impossible to exploit, and to increase the number of vulnerabilities needed by an attacker to achieve their goals.
In Android Q, we have applied these strategies to security critical areas such as media, Bluetooth, and the kernel. We describe these improvements more extensively in a separate blog post, but some highlights include moving software codecs into a constrained sandbox and expanding compiler-based protections such as bounds and integer sanitizers, Control Flow Integrity, Shadow Call Stack, and execute-only memory.
Android Pie introduced the BiometricPrompt API to help apps utilize biometrics, including face, fingerprint, and iris. Since the launch, we’ve seen a lot of apps embrace the new API, and now with Android Q, we’ve updated the underlying framework with robust support for face and fingerprint. Additionally, we expanded the API to support additional use-cases, including both implicit and explicit authentication.
In the explicit flow, the user must take an action to proceed, such as tapping their finger on the fingerprint sensor. If they're using face or iris to authenticate, the user must tap an additional button to proceed. The explicit flow is the default and should be used for all high-value transactions such as payments.
The implicit flow does not require an additional user action. It provides a lighter-weight, more seamless experience for transactions that are readily and easily reversible, such as sign-in and autofill.
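As an illustration, here is a minimal Kotlin sketch of requesting the implicit flow with the framework BiometricPrompt on Android Q; the activity and authCallback parameters stand in for your own activity and callback:

import android.app.Activity
import android.content.DialogInterface
import android.hardware.biometrics.BiometricPrompt
import android.os.CancellationSignal

fun showBiometricSignIn(
    activity: Activity,
    authCallback: BiometricPrompt.AuthenticationCallback
) {
    val prompt = BiometricPrompt.Builder(activity)
        .setTitle("Sign in")
        .setNegativeButton("Cancel", activity.mainExecutor,
            DialogInterface.OnClickListener { _, _ -> /* user cancelled */ })
        // false requests the implicit flow; keep the default (true) for
        // high-value transactions such as payments.
        .setConfirmationRequired(false)
        .build()
    prompt.authenticate(CancellationSignal(), activity.mainExecutor, authCallback)
}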
Another handy new feature in BiometricPrompt is the ability to check whether a device supports biometric authentication before invoking BiometricPrompt. This is useful when your app wants to show an “enable biometric sign-in” or similar item in its sign-in page or in-app settings menu. To support this, we’ve added a new BiometricManager class; you can call its canAuthenticate() method to determine whether the device supports biometric authentication and whether the user is enrolled.
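A minimal Kotlin sketch of that check on an Android Q device (showBiometricSignInSetting() is a hypothetical function in your settings screen):

import android.content.Context
import android.hardware.biometrics.BiometricManager

fun maybeShowBiometricSetting(context: Context) {
    val biometricManager = context.getSystemService(BiometricManager::class.java)
    if (biometricManager?.canAuthenticate() == BiometricManager.BIOMETRIC_SUCCESS) {
        // The device has biometric hardware and the user is enrolled.
        showBiometricSignInSetting()
    }
}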
Beyond Android Q, we are looking to add Electronic ID support for mobile apps, so that your phone can be used as an ID, such as a driver’s license. Apps like these have significant security requirements and involve integration between the client application on the holder’s mobile phone, a reader/verifier device, and the issuing authority’s backend systems used for license issuance, updates, and revocation.
This initiative requires expertise around cryptography and standardization from the ISO and is being led by the Android Security and Privacy team. We will be providing APIs and a reference implementation of HALs for Android devices in order to ensure the platform provides the building blocks for similar security and privacy sensitive applications. You can expect to hear more updates from us on Electronic ID support in the near future.
Acknowledgements: This post leveraged contributions from Jeff Vander Stoep and Shawn Willden
Posted by Jeff Vander Stoep, Android Security & Privacy Team and Chong Zhang, Android Media Team
Android Q Beta versions are now publicly available. Among the various new features introduced in Android Q are some important security hardening changes. While exciting new security features are added in each Android release, hardening generally refers to security improvements made to existing components.
When prioritizing platform hardening, we analyze data from a number of sources, including our vulnerability rewards program (VRP). Past security issues provide useful insight into which components can use additional hardening. Android publishes monthly security bulletins which include fixes for all the high/critical severity vulnerabilities in the Android Open Source Project (AOSP) reported through our VRP. While fixing vulnerabilities is necessary, we also get a lot of value from the metadata: analysis of the location and class of vulnerabilities. With this insight we can apply the following strategies to our existing components: isolate them in constrained sandboxes, prevent entire classes of vulnerabilities with compiler-based sanitizers, and make the remaining bugs harder to exploit with mitigations such as CFI, Shadow Call Stack, and XOM.
Here’s a look at high severity vulnerabilities by component and cause from 2018:
Most of Android’s vulnerabilities occur in the media and Bluetooth components. Use-after-free (UAF), integer overflows, and out-of-bounds (OOB) reads/writes comprise 90% of vulnerabilities, with OOB being the most common.
In Android Q, we moved software codecs out of the main mediacodec service into a constrained sandbox. This is a big step forward in our effort to improve security by isolating various media components into less privileged sandboxes. As Mark Brand of Project Zero points out in his Return To Libstagefright blog post, constrained sandboxes are not where an attacker wants to end up. In 2018, approximately 80% of the critical/high severity vulnerabilities in media components occurred in software codecs, meaning further isolating them is a big improvement. Due to the increased protection provided by the new mediaswcodec sandbox, these same vulnerabilities will receive a lower severity based on Android’s severity guidelines.
The following figure shows an overview of the evolution of media services layout in the recent Android releases.
With this move, we now have the two primary sources for media vulnerabilities tightly sandboxed within constrained processes. Software codecs are similar to extractors in that they both have extensive code parsing bitstreams from untrusted sources. Once a vulnerability is identified in the source code, it can be triggered by sending a crafted media file to media APIs (such as MediaExtractor or MediaCodec). Sandboxing these two services allows us to reduce the severity of potential security vulnerabilities without compromising performance.
In addition to constraining riskier codecs, a lot of work has also gone into preventing common types of vulnerabilities.
Incorrect or missing memory bounds checking on arrays account for about 34% of Android’s userspace vulnerabilities. In cases where the array size is known at compile time, LLVM’s bound sanitizer (BoundSan) can automatically instrument arrays to prevent overflows and fail safely.
BoundSan instrumentation
BoundSan is enabled in 11 media codecs and throughout the Bluetooth stack for Android Q. By optimizing away a number of unnecessary checks, we reduced the performance overhead to less than 1%. BoundSan has already found and prevented potential vulnerabilities in codecs and Bluetooth.
Android pioneered the production use of sanitizers in Android Nougat when we first started rolling out integer sanitization (IntSan) in the media frameworks. This work has continued with each release and has been very successful in preventing otherwise exploitable vulnerabilities. For example, new IntSan coverage in Android Pie mitigated 11 critical vulnerabilities. Enabling IntSan is challenging because overflows are generally benign and unsigned integer overflows are well defined and sometimes intentional. This is quite different from the bound sanitizer, where OOB reads/writes are always unintended and often exploitable. Enabling IntSan has been a multi-year project, but with Q we have fully enabled it across the media frameworks with the inclusion of 11 more codecs.
IntSan Instrumentation
IntSan works by instrumenting arithmetic operations to abort when an overflow occurs. This instrumentation can have an impact on performance, so evaluating the impact on CPU usage is necessary. In cases where performance impact was too high, we identified hot functions and individually disabled IntSan on those functions after manually reviewing them for integer safety.
BoundSan and IntSan are considered strong mitigations because (where applied) they prevent the root cause of memory safety vulnerabilities. The class of mitigations described next target common exploitation techniques. These mitigations are considered to be probabilistic because they make exploitation more difficult by limiting how a vulnerability may be used.
LLVM’s Control Flow Integrity (CFI) was enabled in the media frameworks, Bluetooth, and NFC in Android Pie. CFI makes code reuse attacks more difficult by protecting the forward edges of the call graph, such as function pointers and virtual functions. Android Q uses LLVM’s Shadow Call Stack (SCS) to protect return addresses, protecting the backward edge of the control flow graph. SCS accomplishes this by storing return addresses in a separate shadow stack, whose location is protected from leakage by keeping it in the x18 register, which is now reserved by the compiler.
SCS Instrumentation
SCS has negligible performance overhead and a small memory increase due to the separate stack. In Android Q, SCS has been turned on in portions of the Bluetooth stack and is also available for the kernel. We’ll share more on that in an upcoming post.
Like SCS, eXecute-Only Memory (XOM) aims to make common exploitation techniques more expensive. It does so by strengthening the protections already provided by address space layout randomization (ASLR), which in turn makes code reuse attacks more difficult by requiring attackers to first leak the location of the code they intend to reuse. This often means that an attacker now needs two vulnerabilities, a read primitive and a write primitive, where previously just a write primitive was necessary to achieve their goals. XOM protects against leaks (memory disclosures of code segments) by making code unreadable. Attempts to read execute-only code result in the process aborting safely.
Tombstone from a XOM abort
Starting in Android Q, platform-provided AArch64 code segments in binaries and libraries are loaded as execute-only. Not all devices will immediately receive the benefit as this enforcement has hardware dependencies (ARMv8.2+) and kernel dependencies (Linux 4.9+, CONFIG_ARM64_UAO). For apps with a targetSdkVersion lower than Q, Android’s zygote process will relax the protection in order to avoid potential app breakage, but 64 bit system processes (for example, mediaextractor, init, vold, etc.) are protected. XOM protections are applied at compile-time and have no memory or CPU overhead.
Scudo is a dynamic heap allocator designed to be resilient against heap-related vulnerabilities such as use-after-free, double free, and heap buffer overflow.
Scudo does not prevent exploitation but rather proactively manages memory in a way to make exploitation more difficult. It is configurable on a per-process basis depending on performance requirements. Scudo is enabled in extractors and codecs in the media frameworks.
Tombstone from Scudo aborts
AOSP makes use of a number of open source projects to build and secure Android, and Google is actively contributing back to these projects in a number of security-critical areas.
Thank you to Ivan Lozano, Kevin Deus, Kostya Kortchinsky, Kostya Serebryany, and Mike Antares for their contributions to this post.
Posted by Karen Ng, Group Product Manager and Jisha Abubaker, Product Manager, Android
Last year, we launched Android Jetpack, a collection of software components designed to accelerate Android development and make writing high-quality apps easier. Jetpack was built with you in mind -- to take the hardest, most common developer problems on Android and make your lives easier.
Jetpack has seen incredible adoption and momentum. Today, 80% of the top 1,000 apps in the Play store are using Jetpack. We’ve also heard feedback from so many of you across our early access developer programs and user studies, as well as Reddit, Stack Overflow, and Slack, that has helped shape these APIs. Very humbly, thank you.
Today, we are excited to share with you 11 Jetpack libraries that can be used in development now and an early-development, open-source project called Jetpack Compose to simplify UI development.
CameraX
We've heard from many of you that developing camera apps or integrating camera functionality within your existing apps is hard. With the new CameraX library, we want to enable you to create great camera-driven experiences in your application without worrying about the underlying device behavior. The API is backwards compatible to Android 5.0 (API 21) or higher, ensuring that the same code works on most devices in the market. While it leverages the capabilities of camera2, it uses a simpler, use case-based approach that is lifecycle-aware, eliminating a significant amount of boilerplate code compared to camera2. Finally, it enables you to access the same functionality as the native camera app on supported devices through optional Extensions, which enable features like Portrait, Night, HDR, and Beauty.
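To give a feel for the use case-based approach, here is a rough Kotlin sketch of binding a preview use case to a lifecycle. The class names below follow a later stable CameraX release rather than the alpha announced here, so treat it as illustrative only:

import android.content.Context
import androidx.camera.core.CameraSelector
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat
import androidx.lifecycle.LifecycleOwner

fun startCameraPreview(context: Context, owner: LifecycleOwner, previewView: PreviewView) {
    val providerFuture = ProcessCameraProvider.getInstance(context)
    providerFuture.addListener({
        val cameraProvider = providerFuture.get()
        // A Preview use case rendered into a PreviewView from your layout.
        val preview = Preview.Builder().build().also {
            it.setSurfaceProvider(previewView.surfaceProvider)
        }
        // Binding to the lifecycle lets CameraX open and close the camera for you.
        cameraProvider.bindToLifecycle(owner, CameraSelector.DEFAULT_BACK_CAMERA, preview)
    }, ContextCompat.getMainExecutor(context))
}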
LiveData and Lifecycles w/ coroutines
We heard you loud and clear and agree that LiveData must support your common one-shot asynchronous operations. With Lifecycle & LiveData KTX, you can do so with Kotlin coroutines that are lifecycle-aware. Kotlin coroutines have been well received by the developer community for how they simplify the way concurrency is handled within Android apps. We want to simplify it even further and enable you to use them safely by offering coroutine scopes tied to lifecycles, coroutine dispatchers that are lifecycle-aware, and support for simple asynchronous chains with the new liveData builder.
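For instance, a minimal sketch of the liveData builder inside a ViewModel might look like this (UserRepository, User, and loadUser() are hypothetical names):

import androidx.lifecycle.LiveData
import androidx.lifecycle.ViewModel
import androidx.lifecycle.liveData
import kotlinx.coroutines.Dispatchers

class UserViewModel(private val repository: UserRepository) : ViewModel() {
    // The block runs in a coroutine that is active only while this LiveData
    // has active observers, and is cancelled automatically.
    val user: LiveData<User> = liveData(Dispatchers.IO) {
        emit(repository.loadUser()) // loadUser() is a suspend function
    }
}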
Benchmark
The Benchmark library provides a quick way to benchmark your app code, whether it is written in Kotlin, the Java programming language, or native code. We use this library to continuously benchmark the Jetpack libraries we release to ensure we do not introduce any latency into your code. You can now do the same right within your development environment in Android Studio, easily measuring database queries, view inflation, or a RecyclerView scroll. The library takes care of what is needed to provide reliable and consistent results, such as handling warm-up periods, removing outliers, and locking CPU clocks.
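A minimal sketch of a benchmark test looks something like this (package names follow the library's stable release, and doWork() stands in for whatever you want to measure):

import androidx.benchmark.junit4.BenchmarkRule
import androidx.benchmark.junit4.measureRepeated
import org.junit.Rule
import org.junit.Test

class MyBenchmark {
    @get:Rule
    val benchmarkRule = BenchmarkRule()

    @Test
    fun measureSomeWork() {
        // The library handles warm-up, outlier removal, and clock locking.
        benchmarkRule.measureRepeated {
            doWork() // hypothetical code under measurement
        }
    }
}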
Security
To maximize security of an application’s data at-rest, the new Security library implements security best practices for you. It provides strong security that balances encryption with performance for consumer apps like banking and chat. It also provides a maximum level of security for apps that require a hardware-backed keystore with user presence and simplifies many operations including key generation and validation.
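For example, here is a minimal sketch of encrypting SharedPreferences with the library (the preferences file name is arbitrary):

import android.content.Context
import android.content.SharedPreferences
import androidx.security.crypto.EncryptedSharedPreferences
import androidx.security.crypto.MasterKeys

fun createEncryptedPrefs(context: Context): SharedPreferences {
    // A master key kept in the Android keystore.
    val masterKeyAlias = MasterKeys.getOrCreate(MasterKeys.AES256_GCM_SPEC)
    return EncryptedSharedPreferences.create(
        "secret_shared_prefs",      // file name
        masterKeyAlias,
        context,
        EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
        EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
    )
}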
ViewModel with SavedState
ViewModel provided an easy way to save your UI data in the event of a configuration change, but it did not save your app state in the event of process death, so many of you have been relying on SavedInstanceState alongside ViewModel. With the ViewModel with SavedState module, you can eliminate boilerplate code and gain the benefits of using both ViewModel and SavedState, with simple APIs to save and retrieve data right from your ViewModel.
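A minimal sketch of a ViewModel using SavedStateHandle (SearchViewModel and the "query" key are placeholders):

import androidx.lifecycle.SavedStateHandle
import androidx.lifecycle.ViewModel

class SearchViewModel(private val state: SavedStateHandle) : ViewModel() {
    // Values stored in the handle survive both configuration changes and process death.
    var query: String
        get() = state.get<String>("query") ?: ""
        set(value) = state.set("query", value)
}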
ViewPager2
ViewPager2, the next generation of ViewPager, is now based on RecyclerView and supports vertical scrolling and RTL (Right-to-Left) layouts. It also provides a much easier way to listen for page data changes with registerOnPageChangeCallback.
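A minimal sketch of that listener (viewPager is a placeholder for a ViewPager2 from your layout):

import androidx.viewpager2.widget.ViewPager2

fun observePageChanges(viewPager: ViewPager2) {
    viewPager.registerOnPageChangeCallback(object : ViewPager2.OnPageChangeCallback() {
        override fun onPageSelected(position: Int) {
            // React to the newly selected page.
        }
    })
}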
ConstraintLayout 2.0
ConstraintLayout 2.0 brings new optimizations and new ways of customizing layouts with the addition of helper classes. As part of ConstraintLayout 2.0, MotionLayout provides an easy way to manage motion and widget animation in your applications. You can easily describe transitions between layouts and animations of properties. MotionLayout is fully declarative in XML, allowing you to describe even complex transitions without requiring any code.
Biometrics Prompt
Users are accustomed to biometric credentials on their phones, but if your app requires a biometric login, it is important to make sure that users are provided a consistent and safe way to enter their credentials. The Biometrics library provides a simple system prompt giving the user a trustworthy experience.
Enterprise
With the Jetpack Enterprise library, your managed enterprise apps can send feedback back to Enterprise Mobility Management providers in the form of keyed app states, while taking advantage of backwards compatibility with managed configurations.
Android for Cars
With the Android for Cars libraries, you can provide your users a driver-optimized version of your app that will be automatically installed onto the vehicle’s infotainment system in vehicles equipped with the Android Automotive OS. It also allows your apps to work with the Android Auto app, providing the driver-optimized version anytime on their device.
And in case you missed it, we announced stable releases of Jetpack WorkManager (background processing) and Jetpack Navigation (in-app navigation) just a few months ago.
Today, we open-sourced an early preview of Jetpack Compose, a new unbundled toolkit designed to simplify UI development by combining a reactive programming model with the conciseness and ease-of-use of Kotlin. We have always done our best work when we did it with you - our developer community. That’s why we decided to develop Jetpack Compose in the open, starting today.
In that vein, we took a step back and chatted with many of you. We heard strong feedback from developers that they like the modern, reactive APIs that Flutter, React Native, Litho, and Vue.js represent. We also heard that developers love Kotlin, with over 53% of professional Android developers using it and with 20% higher language satisfaction ratings than the Java programming language. Kotlin has become the fastest-growing language in terms of number of contributors on GitHub.
So, we decided to invest in the reactive approach to declarative programming and create an easier way to build UIs with Kotlin.
We are building Compose with a few core principles:
A Compose application is made up of composable functions that transform application data into a UI hierarchy. A function is all you need to create a new UI component. To create a composable function, just add the @Composable annotation to it. Under the hood, Compose uses a custom Kotlin compiler plug-in, so when the underlying data changes, the composable functions can be re-invoked to generate an updated UI hierarchy. The simple example below prints a string to the screen.
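A minimal sketch of such a composable (import paths here follow a later Compose release, not necessarily the early preview):

import androidx.compose.material.Text
import androidx.compose.runtime.Composable

@Composable
fun Greeting(name: String) {
    // Compose re-invokes this function whenever `name` changes.
    Text(text = "Hello $name!")
}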
We know that adopting any new framework is a big change for existing projects and codebases, which is why we’ve designed Compose like all of Jetpack -- with individual components that you can adopt at your own pace and are compatible with existing views.
If you want to learn more about Jetpack Compose or download its source to try it for yourself, check out http://d.android.com/jetpackcompose
We'd love to hear from you as we iterate on this exciting future together. Send us feedback by posting comments below, and please file any bugs you run into on AOSP or directly through the feedback buttons in the Android Studio Jetpack Compose build in AOSP. Since this is an early preview, we do not recommend trying this on any production projects.
Happy Jetpacking!
Posted by Kobi Glick, Product Lead, Google Play
Read this in العَرَبِيَّة, Bahasa Indonesia, Deutsch, español (Latinoamérica), le français, português do Brasil, tiếng Việt, русский язы́к, ไทย, Türkçe, 한국어, 中文 (简体), 中文 (繁體), or 日本語.
Over the last 10 years, we’ve worked together to build an incredible ecosystem with more than 2.5 billion active Android devices in over 190 countries. This would not be possible without you and all the fantastic apps and games you’ve built that entertain, help, and educate people around the world.
Every month, you upload more than 750,000 APKs and app bundles to the Play Console. We’ve been amazed by your enthusiasm, and it’s been our privilege to help you grow your business. This year, we want to help you go even further. So today at Google I/O, we're announcing new tools and features to help you develop, release, and grow your apps and games — many of them based on your feedback and suggestions.
Last year we introduced Android's new publishing format, the Android App Bundle, and an entirely new dynamic delivery framework on Google Play. There are now over 80,000 apps and games using app bundles in production, with an average size savings of 20%. As a result of those savings, apps have seen up to 11% install uplift. As the future of app delivery, we’re excited to share these latest enhancements to the Android App Bundle.
Dynamic features are out of beta and available to all developers, including these new delivery options:
During our beta program, many developers implemented interesting use cases with dynamic features. Netflix, for example, now delivers their customer support functionality as a dynamic feature to users who visit the support center. By making functionality available only to users who need it, Netflix reported a 33% reduction in app size. You can learn more in the video below.
We heard you loud and clear: testing bundles is hard. But with the new internal app sharing, you can now share test builds in a matter of seconds. Just upload your app bundle to Google Play and get a download URL to share with your testers. You don’t need to worry about version codes, signing keys, or most other validations that your production releases need to conform to.
In addition to efficiency and modularity, the Android App Bundle also now offers increased security with the launch of app signing key upgrade for new installs. With this feature, you can upgrade the cryptographic strength of your signing key for new installs and their updates on Google Play. Many developers sign their apps with keys generated a long time ago, and this new feature is the only backwards-compatible way to increase their strength.
Although auto-updates reach many users, you told us it was still challenging to get some users to update your apps. Now that our new in-app updates API is in general availability, users will be able to update without ever leaving your app. During our early access program, many developers used our API to create a polished upgrade flow, resulting in a median acceptance rate of about 50%.
The API currently supports two flows: an immediate, full-screen flow for updates the user must install before continuing, and a flexible flow that downloads the update in the background while the user keeps using the app.
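As an illustration, a minimal Kotlin sketch of the immediate flow with the Play Core library's in-app updates API might look like this (REQUEST_CODE_UPDATE is an arbitrary request code your app defines):

import android.app.Activity
import com.google.android.play.core.appupdate.AppUpdateManagerFactory
import com.google.android.play.core.install.model.AppUpdateType
import com.google.android.play.core.install.model.UpdateAvailability

private const val REQUEST_CODE_UPDATE = 17362

fun checkForImmediateUpdate(activity: Activity) {
    val appUpdateManager = AppUpdateManagerFactory.create(activity)
    appUpdateManager.appUpdateInfo.addOnSuccessListener { info ->
        if (info.updateAvailability() == UpdateAvailability.UPDATE_AVAILABLE &&
            info.isUpdateTypeAllowed(AppUpdateType.IMMEDIATE)
        ) {
            // Launches Play's full-screen update UI on top of the app.
            appUpdateManager.startUpdateFlowForResult(
                info, AppUpdateType.IMMEDIATE, activity, REQUEST_CODE_UPDATE
            )
        }
    }
}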
The right data can help you improve your app performance and grow your business. That’s why we’re excited to tell you about new metrics and insights that will help you better measure your app health and analyze your performance.
We’re also making big changes to another key source of performance data: your user reviews. Many of you told us that you want a rating that reflects a more current version of your app, not what it was years ago — and we agree. So instead of a lifetime cumulative value, your Google Play Store rating will be recalculated to give more weight to your most recent ratings. Users won’t see the updated rating in the Google Play Store until August, but you can preview your new rating in the Google Play Console today.
Every day, developers respond to more than 100,000 reviews in the Play Console, and when they do, we’ve seen that users update their rating by +0.7 stars on average. So in addition to the ratings change, we're making it easier to respond to reviews with suggested replies. When you go to respond to a user, you’ll see three suggested replies which have been created automatically based on the content of the review. You can choose to send one as-suggested, customize a suggestion for more personalization, or create your own message from scratch. Suggested replies are available in English now with additional languages coming later.
Your store listing is where users come to learn more about your app or game and decide whether to install. It’s important real estate, so we’re releasing new features that let you optimize your store listing to address different moments in the user lifecycle.
Learn more about these and other Google Play features at Google I/O. Join us live or watch later on the Android Developers YouTube channel.
You can also take your skills and knowledge to the next level with our e-learning courses on Google Play’s Academy for App Success, and sign up for our newsletter to stay up to date with our latest features and updates.