Posted by Bram Bonné, Senior Software Engineer, Android Platform Security & Chad Brubaker, Staff Software Engineer, Android Platform Security
Android is committed to keeping users, their devices, and their data safe. One of the ways that we keep data safe is by protecting network traffic that enters or leaves an Android device with Transport Layer Security (TLS).
Android 7 (API level 24) introduced the Network Security Configuration in 2016, allowing app developers to configure the network security policy for their app through a declarative configuration file. To ensure apps are safe, apps targeting Android 9 (API level 28) or higher automatically have a policy set by default that prevents unencrypted traffic for every domain.
Today, we’re happy to announce that 80% of Android apps are encrypting traffic by default. The percentage is even greater for apps targeting Android 9 and higher, with 90% of them encrypting traffic by default.
Percentage of apps that block cleartext by default.
Since November 1, 2019, all app updates, as well as all new apps on Google Play, must target at least Android 9. As a result, we expect these numbers to continue improving. Network traffic from these apps is secure by default, and any use of unencrypted connections is the result of an explicit choice by the developer.
The latest releases of Android Studio and Google Play’s pre-launch report warn developers when their app includes a potentially insecure Network Security Configuration (for example, when they allow unencrypted traffic for all domains or when they accept user provided certificates outside of debug mode). This encourages the adoption of HTTPS across the Android ecosystem and ensures that developers are aware of their security configuration.
Example of a warning shown to developers in Android Studio.
Example of a warning shown to developers as part of the pre-launch report.
For apps targeting Android 9 and higher, the out-of-the-box default is to encrypt all network traffic in transit and trust only certificates issued by an authority in the standard Android CA set without requiring any extra configuration. Apps can provide an exception to this only by including a separate Network Security Config file with carefully selected exceptions.
If your app needs to allow traffic to certain domains, it can do so by including a Network Security Config file that only includes these exceptions to the default secure policy. Keep in mind that you should be cautious about the data received over insecure connections as it could have been tampered with in transit.
<network-security-config>
    <base-config cleartextTrafficPermitted="false" />
    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="true">insecure.example.com</domain>
        <domain includeSubdomains="true">insecure.cdn.example.com</domain>
    </domain-config>
</network-security-config>
If your app needs to be able to accept user-specified certificates for testing purposes (for example, connecting to a local server during testing), make sure to wrap your <certificates src="user"/> element inside a <debug-overrides> element. This ensures the connections in the production version of your app are secure.
<network-security-config>
    <debug-overrides>
        <trust-anchors>
            <certificates src="user"/>
        </trust-anchors>
    </debug-overrides>
</network-security-config>
If your library directly creates secure/insecure connections, make sure that it honors the app's cleartext settings by checking isCleartextTrafficPermitted before opening any cleartext connection.
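For illustration, here is a minimal Kotlin sketch of how a library might perform that check before opening a cleartext connection. The helper name and error handling are ours; the NetworkSecurityPolicy calls are the standard Android APIs.

import android.security.NetworkSecurityPolicy
import java.net.HttpURLConnection
import java.net.URL

// Minimal sketch: refuse to open an http:// connection if the app's Network
// Security Config does not permit cleartext traffic to this host.
fun openConnectionRespectingPolicy(url: URL): HttpURLConnection {
    if (url.protocol == "http") {
        val permitted = NetworkSecurityPolicy.getInstance()
            .isCleartextTrafficPermitted(url.host)
        check(permitted) { "Cleartext traffic to ${url.host} is not permitted by this app" }
    }
    return url.openConnection() as HttpURLConnection
}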
Android’s built-in networking libraries and other popular HTTP libraries such as OkHttp or Volley have built-in Network Security Config support.
Giles Hogben, Nwokedi Idika, Android Platform Security, Android Studio and Pre-Launch Report teams
Billions of people rely on their Android-powered devices to securely store their sensitive information. A vital component of the Android security stack is the key attestation system. Android devices since Android 7.0 are able to generate an attestation certificate that attests to the security properties of the device’s hardware and software. OEMs producing devices with Android 8.0 or higher must install a batch attestation key provided by Google on each device at the time of manufacturing.
These keys might need to be revoked for a number of reasons including accidental disclosure, mishandling, or suspected extraction by an attacker. When this occurs, the affected keys must be immediately revoked to protect users. The security of any Public-Key Infrastructure system depends on the robustness of the key revocation process.
All of the attestation keys issued so far include an extension that embeds a certificate revocation list (CRL) URL in the certificate. We found that the CRL (and online certificate status protocol) system was not flexible enough for our needs. So we set out to replace the revocation system for Android attestation keys with something that is flexible and simple to maintain and use.
Our solution is a single TLS-secured URL (https://android.googleapis.com/attestation/status) that returns a list containing all revoked Android attestation keys. This list is encoded in JSON and follows a strict format defined by JSON schema. Only keys that have non-valid status appear in the list, so it is not an exhaustive list of all issued keys.
This system allows us to express more nuance about the status of a key and the reason for the status. A key can have a status of REVOKED or SUSPENDED, where revoked is permanent and suspended is temporary. The reason for the status is described as either KEY_COMPROMISE, CA_COMPROMISE, SUPERSEDED, or SOFTWARE_FLAW. A complete, up-to-date list of statuses and reasons can be found in the developer documentation.
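As a rough sketch of how a server or test tool might consume this list, the Kotlin snippet below fetches the JSON and looks up a certificate's serial number. The exact JSON shape shown here is our assumption; treat the developer documentation as the authoritative schema.

import java.net.URL
import org.json.JSONObject

// Sketch only: assumes the list has the shape
// {"entries": {"<serial number, lowercase hex>": {"status": "...", "reason": "..."}}}.
// Perform network calls off the main thread and cache the response appropriately.
fun attestationKeyStatus(serialNumberHex: String): JSONObject? {
    val body = URL("https://android.googleapis.com/attestation/status").readText()
    val entries = JSONObject(body).optJSONObject("entries") ?: return null
    // A key that is absent from the list is not known to be revoked or suspended.
    return entries.optJSONObject(serialNumberHex.lowercase())
}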
The CRL URLs embedded in existing batch certificates will continue to operate. Going forward, attestation batch certificates will no longer contain a CRL extension. The status of these legacy certificates will also be included in the attestation status list, so developers can safely switch to using the attestation status list for both current and legacy certificates. An example of how to correctly verify Android attestation keys is included in the Key Attestation sample.
Posted by Adam Bacchus, Sebastian Porst, and Patrick Mutchler — Android Security & Privacy
We’re constantly looking for ways to further improve the security and privacy of our products, and the ecosystems they support. At Google, we understand the strength of open platforms and ecosystems, and that the best ideas don’t always come from within. It is for this reason that we offer a broad range of vulnerability reward programs, encouraging the community to help us improve security for everyone. Today, we’re expanding on those efforts with some big changes to Google Play Security Reward Program (GPSRP), as well as the launch of the new Developer Data Protection Reward Program (DDPRP).
We are increasing the scope of GPSRP to include all apps in Google Play with 100 million or more installs. These apps are now eligible for rewards, even if the app developers don’t have their own vulnerability disclosure or bug bounty program. In these scenarios, Google helps responsibly disclose identified vulnerabilities to the affected app developer. This opens the door for security researchers to help hundreds of organizations identify and fix vulnerabilities in their apps. If the developers already have their own programs, researchers can collect rewards directly from them on top of the rewards from Google. We encourage app developers to start their own vulnerability disclosure or bug bounty program to work directly with the security researcher community.
Vulnerability data from GPSRP helps Google create automated checks that scan all apps available in Google Play for similar vulnerabilities. Affected app developers are notified through the Play Console as part of the App Security Improvement (ASI) program, which provides information on the vulnerability and how to fix it. Over its lifetime, ASI has helped more than 300,000 developers fix more than 1,000,000 apps on Google Play. In 2018 alone, the program helped over 30,000 developers fix over 75,000 apps. The downstream effect means that those 75,000 vulnerable apps are not distributed to users until the issue is fixed.
To date, GPSRP has paid out over $265,000 in bounties. Recent scope and reward increases have resulted in $75,500 in rewards across July & August alone. With these changes, we anticipate even further engagement from the security research community to bolster the success of the program.
Today, we are also launching the Developer Data Protection Reward Program. DDPRP is a bounty program, in collaboration with HackerOne, meant to identify and mitigate data abuse issues in Android apps, OAuth projects, and Chrome extensions. It recognizes the contributions of individuals who help report apps that are violating Google Play, Google API, or Google Chrome Web Store Extensions program policies.
The program aims to reward anyone who can provide verifiable and unambiguous evidence of data abuse, in a model similar to Google’s other vulnerability reward programs. In particular, the program aims to identify situations where user data is being used or sold unexpectedly, or repurposed in an illegitimate way without user consent. If data abuse is identified related to an app or Chrome extension, that app or extension will accordingly be removed from Google Play or the Google Chrome Web Store. In the case of an app developer abusing access to Gmail restricted scopes, their API access will be removed. While no reward table or maximum reward is listed at this time, depending on impact, a single report could net a bounty as large as $50,000.
As 2019 continues, we look forward to seeing what researchers find next. Thank you to the entire community for contributing to keeping our platforms and ecosystems safe. Happy bug hunting!
Posted by Rene Mayrhofer and Xiaowen Xin, Android Security & Privacy Team
With every new version of Android, one of our top priorities is raising the bar for security. Over the last few years, these improvements have led to measurable progress across the ecosystem, and 2018 was no different.
In the 4th quarter of 2018, we had 84% more devices receiving a security update than in the same quarter the prior year. At the same time, no critical security vulnerabilities affecting the Android platform were publicly disclosed without a security update or mitigation available in 2018, and we saw a 20% year-over-year decline in the proportion of devices that installed a Potentially Harmful App. In the spirit of transparency, we released this data and more in our Android Security & Privacy 2018 Year In Review.
But now you may be asking, what’s next?
Today at Google I/O we lifted the curtain on all the new security features being integrated into Android Q. We plan to go deeper on each feature in the coming weeks and months, but first wanted to share a quick summary of all the security goodness we’re adding to the platform.
Storage encryption is one of the most fundamental (and effective) security technologies, but current encryption standards require that devices have cryptographic acceleration hardware. Because of this requirement, many devices are not capable of using storage encryption. The launch of Adiantum, which we announced in February, changes that in the Android Q release. Adiantum is designed to run efficiently without specialized hardware and can work across everything from smart watches to internet-connected medical devices.
Our commitment to the importance of encryption continues with the Android Q release. All compatible Android devices newly launching with Android Q are required to encrypt user data, with no exceptions. This includes phones, tablets, televisions, and automotive devices. This will ensure the next generation of devices are more secure than their predecessors, and allow the next billion people coming online for the first time to do so safely.
However, storage encryption is just one half of the picture, which is why we are also enabling TLS 1.3 support by default in Android Q. TLS 1.3 is a major revision to the TLS standard finalized by the IETF in August 2018. It is faster, more secure, and more private. TLS 1.3 can often complete the handshake in fewer roundtrips, making the connection time up to 40% faster for those sessions. From a security perspective, TLS 1.3 removes support for weaker cryptographic algorithms, as well as some insecure or obsolete features. It uses a newly-designed handshake which fixes several weaknesses in TLS 1.2. The new protocol is cleaner, less error prone, and more resilient to key compromise. Finally, from a privacy perspective, TLS 1.3 encrypts more of the handshake to better protect the identities of the participating parties.
Android utilizes a strategy of defense-in-depth to ensure that individual implementation bugs are insufficient for bypassing our security systems. We apply process isolation, attack surface reduction, architectural decomposition, and exploit mitigations to render vulnerabilities more difficult or impossible to exploit, and to increase the number of vulnerabilities needed by an attacker to achieve their goals.
In Android Q, we have applied these strategies to security-critical areas such as media, Bluetooth, and the kernel. We describe these improvements more extensively in a separate blog post, but some highlights include a constrained sandbox for software codecs, broader use of compiler-based sanitizers in security-critical components, Shadow Call Stack protection for return addresses, eXecute-Only Memory for AArch64 binaries, and the Scudo hardened allocator.
Android Pie introduced the BiometricPrompt API to help apps utilize biometrics, including face, fingerprint, and iris. Since the launch, we’ve seen a lot of apps embrace the new API, and now with Android Q, we’ve updated the underlying framework with robust support for face and fingerprint. Additionally, we expanded the API to support additional use-cases, including both implicit and explicit authentication.
In the explicit flow, the user must perform an action to proceed, such as tap their finger to the fingerprint sensor. If they’re using face or iris to authenticate, then the user must click an additional button to proceed. The explicit flow is the default flow and should be used for all high-value transactions such as payments.
Implicit flow does not require an additional user action. It is used to provide a lighter-weight, more seamless experience for transactions that are readily and easily reversible, such as sign-in and autofill.
Another handy new feature in BiometricPrompt is the ability to check if a device supports biometric authentication prior to invoking BiometricPrompt. This is useful when the app wants to show an “enable biometric sign-in” or similar item in their sign-in page or in-app settings menu. To support this, we’ve added a new BiometricManager class. You can now call the canAuthenticate() method in it to determine whether the device supports biometric authentication and whether the user is enrolled.
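Here is a minimal Kotlin sketch of that check using the platform BiometricManager introduced in Android Q; the helper name is ours.

import android.content.Context
import android.hardware.biometrics.BiometricManager

// Sketch: returns true only if the device has biometric hardware and the user is enrolled.
fun canShowBiometricSignIn(context: Context): Boolean {
    val biometricManager = context.getSystemService(BiometricManager::class.java)
    return biometricManager?.canAuthenticate() == BiometricManager.BIOMETRIC_SUCCESS
}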
Beyond Android Q, we are looking to add Electronic ID support for mobile apps, so that your phone can be used as an ID, such as a driver’s license. Apps such as these have a lot of security requirements and involve integration between the client application on the holder’s mobile phone, a reader/verifier device, and issuing authority backend systems used for license issuance, updates, and revocation.
This initiative requires expertise around cryptography and standardization from the ISO and is being led by the Android Security and Privacy team. We will be providing APIs and a reference implementation of HALs for Android devices in order to ensure the platform provides the building blocks for similar security and privacy sensitive applications. You can expect to hear more updates from us on Electronic ID support in the near future.
Acknowledgements: This post leveraged contributions from Jeff Vander Stoep and Shawn Willden
Posted by Jeff Vander Stoep, Android Security & Privacy Team and Chong Zhang, Android Media Team
Android Q Beta versions are now publicly available. Among the various new features introduced in Android Q are some important security hardening changes. While exciting new security features are added in each Android release, hardening generally refers to security improvements made to existing components.
When prioritizing platform hardening, we analyze data from a number of sources, including our vulnerability rewards program (VRP). Past security issues provide useful insight into which components can use additional hardening. Android publishes monthly security bulletins which include fixes for all the high/critical severity vulnerabilities in the Android Open Source Project (AOSP) reported through our VRP. While fixing vulnerabilities is necessary, we also get a lot of value from the metadata: analysis of the location and class of vulnerabilities. With this insight we can apply targeted hardening strategies to our existing components.
Here’s a look at high severity vulnerabilities by component and cause from 2018:
Most of Android’s vulnerabilities occur in the media and Bluetooth components. Use-after-free (UAF), integer overflows, and out-of-bounds (OOB) reads/writes comprise 90% of vulnerabilities, with OOB being the most common.
In Android Q, we moved software codecs out of the main mediacodec service into a constrained sandbox. This is a big step forward in our effort to improve security by isolating various media components into less privileged sandboxes. As Mark Brand of Project Zero points out in his Return To Libstagefright blog post, constrained sandboxes are not where an attacker wants to end up. In 2018, approximately 80% of the critical/high severity vulnerabilities in media components occurred in software codecs, meaning further isolating them is a big improvement. Due to the increased protection provided by the new mediaswcodec sandbox, these same vulnerabilities will receive a lower severity based on Android’s severity guidelines.
The following figure shows an overview of the evolution of media services layout in the recent Android releases.
With this move, we now have the two primary sources for media vulnerabilities tightly sandboxed within constrained processes. Software codecs are similar to extractors in that they both have extensive code parsing bitstreams from untrusted sources. Once a vulnerability is identified in the source code, it can be triggered by sending a crafted media file to media APIs (such as MediaExtractor or MediaCodec). Sandboxing these two services allows us to reduce the severity of potential security vulnerabilities without compromising performance.
In addition to constraining riskier codecs, a lot of work has also gone into preventing common types of vulnerabilities.
Incorrect or missing memory bounds checking on arrays account for about 34% of Android’s userspace vulnerabilities. In cases where the array size is known at compile time, LLVM’s bound sanitizer (BoundSan) can automatically instrument arrays to prevent overflows and fail safely.
BoundSan instrumentation
BoundSan is enabled in 11 media codecs and throughout the Bluetooth stack for Android Q. By optimizing away a number of unnecessary checks the performance overhead was reduced to less than 1%. BoundSan has already found/prevented potential vulnerabilities in codecs and Bluetooth.
Android pioneered the production use of sanitizers in Android Nougat when we first started rolling out integer sanitization (IntSan) in the media frameworks. This work has continued with each release and has been very successful in preventing otherwise exploitable vulnerabilities. For example, new IntSan coverage in Android Pie mitigated 11 critical vulnerabilities. Enabling IntSan is challenging because overflows are generally benign and unsigned integer overflows are well defined and sometimes intentional. This is quite different from the bound sanitizer, where OOB reads/writes are always unintended and often exploitable. Enabling IntSan has been a multi-year project, but with Q we have fully enabled it across the media frameworks with the inclusion of 11 more codecs.
IntSan Instrumentation
IntSan works by instrumenting arithmetic operations to abort when an overflow occurs. This instrumentation can have an impact on performance, so evaluating the impact on CPU usage is necessary. In cases where performance impact was too high, we identified hot functions and individually disabled IntSan on those functions after manually reviewing them for integer safety.
BoundSan and IntSan are considered strong mitigations because (where applied) they prevent the root cause of memory safety vulnerabilities. The class of mitigations described next target common exploitation techniques. These mitigations are considered to be probabilistic because they make exploitation more difficult by limiting how a vulnerability may be used.
LLVM’s Control Flow Integrity (CFI) was enabled in the media frameworks, Bluetooth, and NFC in Android Pie. CFI makes code reuse attacks more difficult by protecting the forward edges of the call graph, such as function pointers and virtual functions. Android Q uses LLVM’s Shadow Call Stack (SCS) to protect return addresses, protecting the backward edge of the control flow graph. SCS accomplishes this by storing return addresses in a separate shadow stack, which is protected from leakage by storing its location in the x18 register, which is now reserved by the compiler.
SCS Instrumentation
SCS has negligible performance overhead and a small memory increase due to the separate stack. In Android Q, SCS has been turned on in portions of the Bluetooth stack and is also available for the kernel. We’ll share more on that in an upcoming post.
Like SCS, eXecute-Only Memory (XOM) aims at making common exploitation techniques more expensive. It does so by strengthening the protections already provided by address space layout randomization (ASLR), which in turn makes code reuse attacks more difficult by requiring attackers to first leak the location of the code they intend to reuse. This often means that an attacker now needs two vulnerabilities, a read primitive and a write primitive, where previously just a write primitive was necessary in order to achieve their goals. XOM protects against leaks (memory disclosures of code segments) by making code unreadable. Attempts to read execute-only code result in the process aborting safely.
Tombstone from a XOM abort
Starting in Android Q, platform-provided AArch64 code segments in binaries and libraries are loaded as execute-only. Not all devices will immediately receive the benefit as this enforcement has hardware dependencies (ARMv8.2+) and kernel dependencies (Linux 4.9+, CONFIG_ARM64_UAO). For apps with a targetSdkVersion lower than Q, Android’s zygote process will relax the protection in order to avoid potential app breakage, but 64 bit system processes (for example, mediaextractor, init, vold, etc.) are protected. XOM protections are applied at compile-time and have no memory or CPU overhead.
Scudo is a dynamic heap allocator designed to be resilient against heap-related vulnerabilities such as use-after-free, double-free, and heap buffer overflows.
Scudo does not prevent exploitation but rather proactively manages memory in a way to make exploitation more difficult. It is configurable on a per-process basis depending on performance requirements. Scudo is enabled in extractors and codecs in the media frameworks.
Tombstone from Scudo aborts
AOSP makes use of a number of Open Source Projects to build and secure Android. Google is actively contributing back to these projects in a number of security-critical areas.
Thank you to Ivan Lozano, Kevin Deus, Kostya Kortchinsky, Kostya Serebryany, and Mike Antares for their contributions to this post.
Posted by Patrick Mutchler and Meghan Kelly, Android Security & Privacy Team
Helping Android app developers build secure apps, free of known vulnerabilities, means helping the overall ecosystem thrive. This is why we launched the Application Security Improvement Program five years ago, and why we're still so invested in its success today.
When an app is submitted to the Google Play store, we scan it to determine if a variety of vulnerabilities are present. If we find something concerning, we flag it to the developer and then help them to remedy the situation.
Think of it like a routine physical. If there are no problems, the app runs through our normal tests and continues on the process to being published in the Play Store. If there is a problem, however, we provide a diagnosis and next steps to get back to healthy form.
Over its lifetime, the program has helped more than 300,000 developers to fix more than 1,000,000 apps on Google Play. In 2018 alone, the program helped over 30,000 developers fix over 75,000 apps. The downstream effect means that those 75,000 vulnerable apps are not distributed to users with the same security issues present, which we consider a win.
The App Security Improvement program covers a broad range of security issues in Android apps. These can be as specific as security issues in certain versions of popular libraries (ex: CVE-2015-5256) and as broad as unsafe TLS/SSL certificate validation.
We are continuously expanding this program's capabilities by improving the existing checks and launching checks for more classes of security vulnerability. In 2018, we deployed warnings for six additional classes of security vulnerability.
Ensuring that we're continuing to evolve the program as new exploits emerge is a top priority for us. We are continuing to work on this throughout 2019.
Keeping Android users safe is important to Google. We know that app security is often tricky and that developers can make mistakes. We hope to see this program grow in the years to come, helping developers worldwide build apps users can truly trust.
Posted by Rahul Mishra and Tom Watkins, Android Security & Privacy Team
In 2018, Google Play Protect made Android devices running Google Play some of the most secure smartphones available, scanning over 50 billion apps every day for harmful behavior.
Android devices can genuinely improve people's lives through our accessibility features, Google Assistant, digital wellbeing, Family Link, and more — but we can only do this if they are safe and secure enough to earn users' long term trust. This is Google Play Protect's charter and we're encouraged by this past year's advancements.
Google Play Protect is the technology we use to ensure that any device shipping with the Google Play Store is secured against potentially harmful applications (PHAs). It is made up of a giant backend scanning engine to aid our analysts in sourcing and vetting applications made available on the Play Store, and built-in protection that scans apps on users' devices, immobilizing PHAs and warning users.
This technology protects over 2 billion devices in the Android ecosystem every day.
On by default
We strongly believe that security should be a built-in feature of every device, not something a user needs to find and enable. When security features function at their best, most users do not need to be aware of them. To this end, we are pleased to announce that Google Play Protect is now enabled by default to secure all new devices, right out of the box. The user is notified that Google Play Protect is running, and has the option to turn it off whenever desired.
New and rare apps
Android is deployed in many diverse ways across many different users. We know that the ecosystem would not be as powerful and vibrant as it is today without an equally diverse array of apps to choose from. But installing new apps, especially from unknown sources, can carry risk.
Last year we launched a new feature that notifies users when they are installing apps that are new or rarely installed in the ecosystem. In these scenarios, the feature shows a warning, giving users pause to consider whether they want to trust this app, and advising them to take additional care and check the source of installation. Once Google has fully analyzed the app and determined that it is not harmful, the notification will no longer display. In 2018, this warning showed around 100,000 times per day.
Context is everything: warning users on launch
It's easy to misunderstand alerts when presented out of context. We're trained to click through notifications without reading them and get back to what we were doing as quickly as possible. We know that providing timely and context-sensitive alerts to users is critical for them to be of value. We recently enabled a security feature first introduced in Android Oreo which warns users when they are about to launch a potentially harmful app on their device.
This new warning dialog provides in-context information about which app the user is about to launch, why we think it may be harmful and what might happen if they open the app. We also provide clear guidance on what to do next. These in-context dialogs ensure users are protected even if they accidentally missed an alert.
Auto-disabling apps
Google Play Protect has long been able to disable the most harmful categories of apps on users' devices automatically, providing robust protection where we believe harm will be done.
In 2018, we extended this coverage to apps installed from Play that were later found to have violated Google Play's policies, e.g. on privacy, deceptive behavior or content. These apps have been suspended and removed from the Google Play Store.
This does not remove the app from the user's device, but it does notify the user and prevents them from opening the app accidentally. The notification gives the option to remove the app entirely.
Keeping the Android ecosystem secure is no easy task, but we firmly believe that Google Play Protect is an important security layer that's used to protect users' devices and their data while maintaining the freedom, diversity and openness that makes Android, well, Android.
Acknowledgements: This post leveraged contributions from Meghan Kelly and William Luh.
Posted by Edward Cunningham, Android Security & Privacy Team
In a previous blog we described how API behavior changes advance the security and privacy protections of Android, and include user experience improvements that prevent apps from accidentally overusing resources like battery and memory.
Since November 2018, all app updates on Google Play have been required to target API level 26 (Android 8.0) or higher. Thanks to the efforts of thousands of app developers, Android users now enjoy more apps using modern APIs than ever before, bringing significant security and privacy benefits. For example, during 2018 over 150,000 apps added support for runtime permissions, giving users granular control over the data they share.
Today we're providing more information about the Google Play requirements for 2019, and announcing some changes that affect apps distributed via other stores.
Google Play requirements for 2019
In order to provide users with the best Android experience possible, the Google Play Console will continue to require that apps target a recent API level:
August 2019: New apps are required to target API level 28 (Android 9) or higher.
November 2019: Updates to existing apps are required to target API level 28 (Android 9) or higher.
Existing apps that are not receiving updates are unaffected and can continue to be downloaded from the Play Store. Apps can still use any minSdkVersion, so there is no change to your ability to build apps for older Android versions.
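For reference, targeting a recent API level is typically a one-line change in the module's build configuration. The sketch below uses the Gradle Kotlin DSL of that era; the version numbers are illustrative only.

// Module-level build.gradle.kts (values are examples)
android {
    compileSdkVersion(28)
    defaultConfig {
        minSdkVersion(14)      // unchanged: older Android versions remain supported
        targetSdkVersion(28)   // Android 9 (Pie), meeting the 2019 Play requirement
    }
}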
For a list of changes introduced in Android 9 Pie, check out our page on behavior changes for apps targeting API level 28+.
Apps distributed via other stores
Targeting a recent API level is valuable regardless of how an app is distributed. In China, major app stores from Huawei, OPPO, Vivo, Xiaomi, Baidu, Alibaba, and Tencent will be requiring that apps target API level 26 (Android 8.0) or higher in 2019. We expect many others to introduce similar requirements – an important step to improve the security of the app ecosystem.
Over 95% of spyware we detect outside of the Play Store intentionally targets API level 22 or lower, avoiding runtime permissions even when installed on recent Android versions. To protect users from malware, and support this ecosystem initiative, Google Play Protect will warn users when they attempt to install APKs from any source that do not target a recent API level:
These Play Protect warnings will show only if the app's targetSdkVersion is lower than the device API level. For example, a user with a device running Android 6.0 (Marshmallow) will be warned when installing any new APK that targets API level 22 or lower. Users with devices running Android 8.0 (Oreo) or higher will be warned when installing any new APK that targets API level 25 or lower.
Prior to August, Play Protect will start showing these warnings on devices with Developer options enabled to give advance notice to developers of apps outside of the Play Store. To ensure compatibility across all Android versions, developers should make sure that new versions of any apps target API level 26+.
Existing apps that have been released (via any distribution channel) and are not receiving updates will be unaffected – users will not be warned when installing them.
Getting started
For advice on how to change your app’s target API level, take a look at the migration guide and this talk from I/O 2018: Migrate your existing app to target Android Oreo and above.
We're extremely grateful to the Android developers worldwide who have already updated their apps to deliver security improvements for their users. We look forward to making great progress together in 2019.
Posted by Andrew Ahn, Product Manager, Google Play
Google Play is committed to providing a secure and safe platform for billions of Android users on their journey discovering and experiencing the apps they love and enjoy. To deliver against this commitment, we worked last year to improve our abuse detection technologies and systems, and significantly increased our team of product managers, engineers, policy experts, and operations leaders to fight against bad actors.
In 2018, we introduced a series of new policies to protect users from new abuse trends, detected and removed malicious developers faster, and stopped more malicious apps from entering the Google Play Store than ever before. The number of rejected app submissions increased by more than 55 percent, and we increased app suspensions by more than 66 percent. These increases can be attributed to our continued efforts to tighten policies to reduce the number of harmful apps on the Play Store, as well as our investments in automated protections and human review processes that play critical roles in identifying and enforcing on bad apps.
In addition to identifying and stopping bad apps from entering the Play Store, our Google Play Protect system now scans over 50 billion apps on users' devices each day to make sure apps installed on the device aren't behaving in harmful ways. With such protection, apps from Google Play are eight times less likely to harm a user's device than Android apps from other sources.
Here are some areas we've been focusing on in the last year and that will continue to be a priority for us in 2019:
Protecting users' data and privacy is a critical factor in building user trust. We've long required developers to limit their device permission requests to what's necessary to provide the features of an app. Also, to help users understand how their data is being used, we've required developers to provide prominent disclosures about the collection and use of sensitive user data. Last year, we rejected or removed tens of thousands of apps that weren't in compliance with Play's policies related to user data and privacy.
In October 2018, we announced a new policy restricting the use of the SMS and Call Log permissions to a limited number of cases, such as where an app has been selected as the user's default app for making calls or sending text messages. We've recently started to remove apps from Google Play that violate this policy. We plan to introduce additional policies for device permissions and user data throughout 2019.
We find that over 80% of severe policy violations are conducted by repeat offenders and abusive developer networks. When malicious developers are banned, they often create new accounts or buy developer accounts on the black market in order to come back to Google Play. We've further enhanced our clustering and account matching technologies, and by combining these technologies with the expertise of our human reviewers, we've made it more difficult for spammy developer networks to gain installs by blocking their apps from being published in the first place.
As mentioned in last year's blog post, we fought against hundreds of thousands of impersonators, apps with inappropriate content, and Potentially Harmful Applications (PHAs). In a continued fight against these types of apps, not only do we apply advanced machine learning models to spot suspicious apps, we also conduct static and dynamic analyses, intelligently use user engagement and feedback data, and leverage skilled human reviews, which have helped in finding more bad apps with higher accuracy and efficiency.
Despite our enhanced and added layers of defense against bad apps, we know bad actors will continue to try to evade our systems by changing their tactics and cloaking bad behaviors. We will continue to enhance our capabilities to counter such adversarial behavior, and work relentlessly to provide our users with a secure and safe app store.
Posted by Vikrant Nanda and René Mayrhofer, Android Security & Privacy Team
There is no better time to talk about Android dessert releases than the holidays because who doesn't love dessert? And what is one of our favorite desserts during the holiday season? Well, pie of course.
In all seriousness, pie is a great analogy because of how the various ingredients turn into multiple layers of goodness: right from the software crust on top to the hardware layer at the bottom. Read on for a summary of security and privacy features introduced in Android Pie this year.
Making Android more secure requires a combination of hardening the platform and advancing anti-exploitation techniques.
With Android Pie, we updated File-Based Encryption to support external storage media (such as expandable storage cards). We also introduced support for metadata encryption where hardware support is present. With filesystem metadata encryption, a single key present at boot time encrypts whatever content is not encrypted by file-based encryption (such as directory layouts, file sizes, permissions, and creation/modification times).
Android Pie also introduced a BiometricPrompt API that apps can use to provide biometric authentication dialogs (such as a fingerprint prompt) on a device in a modality-agnostic fashion. This functionality creates a standardized look, feel, and placement for the dialog. This kind of standardization gives users more confidence that they're authenticating against a trusted biometric credential checker.
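As an illustration, a minimal Kotlin sketch of showing the dialog with the platform API might look like this; the strings and callback handling are ours.

import android.content.Context
import android.hardware.biometrics.BiometricPrompt
import android.os.CancellationSignal

// Sketch: show the system biometric dialog and react to a successful authentication.
fun showBiometricPrompt(context: Context) {
    val prompt = BiometricPrompt.Builder(context)
        .setTitle("Confirm your identity")
        .setDescription("Authenticate to continue")
        .setNegativeButton("Cancel", context.mainExecutor) { _, _ ->
            // User tapped the negative button.
        }
        .build()

    prompt.authenticate(
        CancellationSignal(),
        context.mainExecutor,
        object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(
                result: BiometricPrompt.AuthenticationResult
            ) {
                // Proceed with the protected action.
            }
        }
    )
}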
New protections and test cases for the Application Sandbox help ensure all non-privileged apps targeting Android Pie (and all future releases of Android) run in stronger SELinux sandboxes. By providing per-app cryptographic authentication to the sandbox, this protection improves app separation, prevents overriding safe defaults, and (most significantly) prevents apps from making their data widely accessible.
With Android Pie, we expanded our compiler-based security mitigations, which instrument runtime operations to fail safely when undefined behavior occurs.
Control Flow Integrity (CFI) is a security mechanism that disallows changes to the original control flow graph of compiled code. In Android Pie, it has been enabled by default within the media frameworks and other security-critical components, such as for Near Field Communication (NFC) and Bluetooth protocols. We also implemented support for CFI in the Android common kernel, continuing our efforts to harden the kernel in previous Android releases.
Integer Overflow Sanitization is a security technique used to mitigate memory corruption and information disclosure vulnerabilities caused by integer operations. We've expanded our use of Integer Overflow sanitizers by enabling their use in libraries where complex untrusted input is processed or where security vulnerabilities have been reported.
One of the highlights of Android Pie is Android Protected Confirmation, the first major mobile OS API that leverages a hardware-protected user interface (Trusted UI) to perform critical transactions completely outside the main mobile operating system. Developers can use this API to display a trusted UI prompt to the user, requesting approval via a physical protected input (such as a button on the device). The resulting cryptographically signed statement allows the relying party to reaffirm that the user would like to complete a sensitive transaction through their app.
We also introduced support for a new Keystore type that provides stronger protection for private keys by leveraging tamper-resistant hardware with dedicated CPU, RAM, and flash memory. StrongBox Keymaster is an implementation of the Keymaster hardware abstraction layer (HAL) that resides in a hardware security module. This module is designed and required to have its own processor, secure storage, True Random Number Generator (TRNG), side-channel resistance, and tamper-resistant packaging.
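A minimal Kotlin sketch of opting a key into StrongBox is shown below; the alias and fallback behavior are illustrative, not a prescribed pattern.

import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import android.security.keystore.StrongBoxUnavailableException
import java.security.KeyPairGenerator

// Sketch: request a StrongBox-backed EC signing key, falling back to the regular
// hardware-backed Keystore when the device has no StrongBox.
fun generateSigningKey(alias: String, strongBox: Boolean = true) {
    val spec = KeyGenParameterSpec.Builder(alias, KeyProperties.PURPOSE_SIGN)
        .setDigests(KeyProperties.DIGEST_SHA256)
        .setIsStrongBoxBacked(strongBox)   // API 28+: route the key to StrongBox Keymaster
        .build()
    val generator = KeyPairGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore")
    try {
        generator.initialize(spec)
        generator.generateKeyPair()
    } catch (e: StrongBoxUnavailableException) {
        if (strongBox) generateSigningKey(alias, strongBox = false) else throw e
    }
}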
Other Keystore features (as part of Keymaster 4) include Keyguard-bound keys, Secure Key Import, 3DES support, and version binding. Keyguard-bound keys enable use restriction so as to protect sensitive information. Secure Key Import facilitates secure key use while protecting key material from the application or operating system. You can read more about these features in our recent blog post as well as the accompanying release notes.
User privacy has been boosted with several behavior changes, such as limiting the access background apps have to the camera, microphone, and device sensors. New permission rules and permission groups have been created for phone calls, phone state, and Wi-Fi scans, as well as restrictions around information retrieved from Wi-Fi scans. We have also added associated MAC address randomization, so that a device can use a different network address when connecting to a Wi-Fi network.
On top of that, Android Pie added support for encrypting Android backups with the user's screen lock secret (that is, PIN, pattern, or password). By design, this means that an attacker would not be able to access a user's backed-up application data without specifically knowing their passcode. Auto backup for apps has been enhanced by providing developers a way to specify conditions under which their app's data is excluded from auto backup. For example, Android Pie introduces a new flag to determine whether a user's backup is client-side encrypted.
As part of a larger effort to move all web traffic away from cleartext (unencrypted HTTP) and towards being secured with TLS (HTTPS), we changed the defaults for Network Security Configuration to block all cleartext traffic. We're protecting users with TLS by default, unless you explicitly opt-in to cleartext for specific domains. Android Pie also adds built-in support for DNS over TLS, automatically upgrading DNS queries to TLS if a network's DNS server supports it. This protects information about IP addresses visited from being sniffed or intercepted on the network level.
We believe that the features described in this post advance the security and privacy posture of Android, but you don't have to take our word for it. Year after year our continued efforts are demonstrably resulting in better protection as evidenced by increasing exploit difficulty and independent mobile security ratings. Now go and enjoy some actual pie while we get back to preparing the next Android dessert release!
Acknowledgements: This post leveraged contributions from Chad Brubaker, Janis Danisevskis, Giles Hogben, Troy Kensinger, Ivan Lozano, Vishwath Mohan, Frank Salim, Sami Tolvanen, Lilian Young, and Shawn Willden.
Posted by Lilian Young and Shawn Willden, Android Security; and Frank Salim, Google Pay
The Android Keystore provides application developers with a set of cryptographic tools that are designed to secure their users' data. Keystore moves the cryptographic primitives available in software libraries out of the Android OS and into secure hardware. Keys are protected and used only within the secure hardware to protect application secrets from various forms of attacks. Keystore gives applications the ability to specify restrictions on how and when the keys can be used.
Android Pie introduces new capabilities to Keystore. We will be discussing two of these new capabilities in this post. The first enables restrictions on key use so as to protect sensitive information. The second facilitates secure key use while protecting key material from the application or operating system.
There are times when a mobile application receives data but doesn't need to immediately access it if the user is not currently using the device. Sensitive information sent to an application while the device screen is locked must remain secure until the user wants access to it. Android Pie addresses this by introducing keyguard-bound cryptographic keys. When the screen is locked, these keys can be used in encryption or verification operations, but are unavailable for decryption or signing. If the device is currently locked with a PIN, pattern, or password, any attempt to use these keys will result in an invalid operation. Keyguard-bound keys protect the user's data while the device is locked, and are only available when the user needs them.
Keyguard binding and authentication binding both function in similar ways, except with one important difference. Keyguard binding ties the availability of keys directly to the screen lock state while authentication binding uses a constant timeout. With keyguard binding, the keys become unavailable as soon as the device is locked and are only made available again when the user unlocks the device.
It is worth noting that keyguard binding is enforced by the operating system, not the secure hardware. This is because the secure hardware has no way to know when the screen is locked. Hardware-enforced Android Keystore protection features, like authentication binding, can be combined with keyguard binding for a higher level of security. Furthermore, since keyguard binding is an operating system feature, it's available to any device running Android Pie.
Keys for any algorithm supported by the device can be keyguard-bound. To generate or import a key as keyguard-bound, call setUnlockedDeviceRequired(true) on the KeyGenParameterSpec or KeyProtection builder object at key generation or import.
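For example, here is a minimal Kotlin sketch of generating a keyguard-bound AES key; the alias and cipher parameters are illustrative.

import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import javax.crypto.KeyGenerator

// Sketch: generate an AES key that is unusable for decryption while the device is locked.
fun generateKeyguardBoundKey(alias: String) {
    val spec = KeyGenParameterSpec.Builder(
        alias,
        KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
    )
        .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
        .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
        .setUnlockedDeviceRequired(true)   // keyguard-bound: unavailable while locked
        .build()
    val keyGenerator = KeyGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")
    keyGenerator.init(spec)
    keyGenerator.generateKey()
}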
Secure Key Import is a new feature in Android Pie that allows applications to provision existing keys into Keystore in a more secure manner. The origin of the key, a remote server that could be sitting in an on-premise data center or in the cloud, encrypts the secure key using a public wrapping key from the user's device. The encrypted key in the SecureKeyWrapper format, which also contains a description of the ways the imported key is allowed to be used, can only be decrypted in the Keystore hardware belonging to the specific device that generated the wrapping key. Keys are encrypted in transit and remain opaque to the application and operating system, meaning they're only available inside the secure hardware into which they are imported.
Secure Key Import is useful in scenarios where an application intends to share a secret key with an Android device, but wants to prevent the key from being intercepted or from leaving the device. Google Pay uses Secure Key Import to provision some keys on Pixel 3 phones, to prevent the keys from being intercepted or extracted from memory. There are also a variety of enterprise use cases, such as S/MIME encryption keys being recovered from a Certificate Authority's escrow so that the same key can be used to decrypt emails on multiple devices.
To take advantage of this feature, please review this training article. Please note that Secure Key Import is a secure hardware feature, and is therefore only available on select Android Pie devices. To find out if the device supports it, applications can generate a KeyPair with PURPOSE_WRAP_KEY.
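One way to probe for support is sketched in Kotlin below; the alias and builder options are our assumptions, and the exact parameters your wrapping key needs may differ.

import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyPairGenerator

// Sketch: treat successful generation of a PURPOSE_WRAP_KEY pair as a signal
// that Secure Key Import is available on this device.
fun supportsSecureKeyImport(alias: String): Boolean = try {
    val spec = KeyGenParameterSpec.Builder(alias, KeyProperties.PURPOSE_WRAP_KEY)
        .setDigests(KeyProperties.DIGEST_SHA256)
        .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_RSA_OAEP)
        .build()
    val generator = KeyPairGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_RSA, "AndroidKeyStore")
    generator.initialize(spec)
    generator.generateKeyPair()
    true
} catch (e: Exception) {
    false
}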
Posted by Mo Yu, Damien Octeau, and Chuangang Ren, Android Security & Privacy Team
In a previous blog post, we talked about using machine learning to combat Potentially Harmful Applications (PHAs). This blog post covers how Google uses machine learning techniques to detect and classify PHAs. We'll discuss the challenges in the PHA detection space, including the scale of data, the correct identification of PHA behaviors, and the evolution of PHA families. Next, we will introduce two of the datasets that make the training and implementation of machine learning models possible: app analysis data and Google Play data. Finally, we will present some of the approaches we use, including logistic regression and deep neural networks.
Detecting PHAs is challenging and requires a lot of resources. Our security experts need to understand how apps interact with the system and the user, analyze complex signals to find PHA behavior, and evolve their tactics to stay ahead of PHA authors. Every day, Google Play Protect (GPP) analyzes over half a million apps, which makes a lot of new data for our security experts to process.
Leveraging machine learning helps us detect PHAs faster and at a larger scale. We can detect more PHAs just by adding additional computing resources. In many cases, machine learning can find PHA signals in the training data without human intervention. Sometimes, those signals are different than signals found by security experts. Machine learning can take better advantage of this data, and discover hidden relationships between signals more effectively.
There are two major parts of Google Play Protect's machine learning protections: the data and the machine learning models.
The quality and quantity of the data used to create a model are crucial to the success of the system. For the purpose of PHA detection and classification, our system mainly uses two anonymous data sources: data from analyzing apps and data from how users experience apps.
Google Play Protect analyzes every app that it can find on the internet. We created a dataset by decomposing each app's APK and extracting PHA signals with deep analysis. We execute various processes on each app to find particular features and behaviors that are relevant to the PHA categories in scope (for example, SMS fraud, phishing, privilege escalation). Static analysis examines the different resources inside an APK file while dynamic analysis checks the behavior of the app when it's actually running. These two approaches complement each other. For example, dynamic analysis requires the execution of the app regardless of how obfuscated its code is (obfuscation hinders static analysis), and static analysis can help detect cloaking attempts in the code that may in practice bypass dynamic analysis-based detection. In the end, this analysis produces information about the app's characteristics, which serve as a fundamental data source for machine learning algorithms.
In addition to analyzing each app, we also try to understand how users perceive that app. User feedback (such as the number of installs, uninstalls, user ratings, and comments) collected from Google Play can help us identify problematic apps. Similarly, information about the developer (such as the certificates they use and their history of published apps) contribute valuable knowledge that can be used to identify PHAs. All these metrics are generated when developers submit a new app (or new version of an app) and by millions of Google Play users every day. This information helps us to understand the quality, behavior, and purpose of an app so that we can identify new PHA behaviors or identify similar apps.
In general, our data sources yield raw signals, which then need to be transformed into machine learning features for use by our algorithms. Some signals, such as the permissions that an app requests, have a clear semantic meaning and can be directly used. In other cases, we need to engineer our data to make new, more powerful features. For example, we can aggregate the ratings of all apps that a particular developer owns, so we can calculate a rating per developer and use it to validate future apps. We also employ several techniques to focus on interesting data. To create compact representations for sparse data, we use embedding. To help streamline the data to make it more useful to models, we use feature selection. Depending on the target, feature selection helps us keep the most relevant signals and remove irrelevant ones.
By combining our different datasets and investing in feature engineering and feature selection, we improve the quality of the data that can be fed to various types of machine learning models.
Building a good machine learning model is like building a skyscraper: quality materials are important, but a great design is also essential. Like the materials in a skyscraper, good datasets and features are important to machine learning, but a great algorithm is essential to identify PHA behaviors effectively and efficiently.
We train models to identify PHAs that belong to a specific category, such as SMS-fraud or phishing. Such categories are quite broad and contain a large number of samples given the number of PHA families that fit the definition. Alternatively, we also have models focusing on a much smaller scale, such as a family, which is composed of a group of apps that are part of the same PHA campaign and that share similar source code and behaviors. On the one hand, having a single model to tackle an entire PHA category may be attractive in terms of simplicity but precision may be an issue as the model will have to generalize the behaviors of a large number of PHAs believed to have something in common. On the other hand, developing multiple PHA models may require additional engineering efforts, but may result in better precision at the cost of reduced scope.
We use a variety of modeling techniques to modify our machine learning approach, including supervised and unsupervised ones.
One supervised technique we use is logistic regression, which has been widely adopted in the industry. These models have a simple structure and can be trained quickly. Logistic regression models can be analyzed to understand the importance of the different PHA and app features they are built with, allowing us to improve our feature engineering process. After a few cycles of training, evaluation, and improvement, we can launch the best models in production and monitor their performance.
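To make the idea concrete, here is a toy Kotlin sketch of logistic regression scoring; the features, weights, and threshold are entirely hypothetical and unrelated to the production models.

import kotlin.math.exp

// Illustrative only: a toy logistic regression scorer over hypothetical app features.
fun phaScore(features: DoubleArray, weights: DoubleArray, bias: Double): Double {
    require(features.size == weights.size)
    val z = bias + features.indices.sumOf { features[it] * weights[it] }
    return 1.0 / (1.0 + exp(-z))   // probability-like score in [0, 1]
}

fun main() {
    // Hypothetical features: [requests SMS permission, developer reputation, install-count signal]
    val score = phaScore(doubleArrayOf(1.0, 0.2, 0.7), doubleArrayOf(2.1, -1.5, 0.8), -1.0)
    println("PHA score: $score")   // a higher score would flag the app for closer review
}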
For more complex cases, we employ deep learning. Compared to logistic regression, deep learning is good at capturing complicated interactions between different features and extracting hidden patterns. The millions of apps in Google Play provide a rich dataset, which is advantageous to deep learning.
In addition to our targeted feature engineering efforts, we experiment with many aspects of deep neural networks. For example, a deep neural network can have multiple layers and each layer has several neurons to process signals. We can experiment with the number of layers and neurons per layer to change model behaviors.
We also adopt unsupervised machine learning methods. Many PHAs use similar abuse techniques and tricks, so they look almost identical to each other. An unsupervised approach helps define clusters of apps that look or behave similarly, which allows us to mitigate and identify PHAs more effectively. We can automate the process of categorizing that type of app if we are confident in the model or can request help from a human expert to validate what the model found.
PHAs are constantly evolving, so our models need constant updating and monitoring. In production, models are fed with data from recent apps, which help them stay relevant. However, new abuse techniques and behaviors need to be continuously detected and fed into our machine learning models to be able to catch new PHAs and stay on top of recent trends. This is a continuous cycle of model creation and updating that also requires tuning to ensure that the precision and coverage of the system as a whole matches our detection goals.
As part of Google's AI-first strategy, our work leverages many machine learning resources across the company, such as tools and infrastructures developed by Google Brain and Google Research. In 2017, our machine learning models successfully detected 60.3% of PHAs identified by Google Play Protect, covering over 2 billion Android devices. We continue to research and invest in machine learning to scale and simplify the detection of PHAs in the Android ecosystem.
This work was developed in joint collaboration with Google Play Protect, Safe Browsing and Play Abuse teams with contributions from Andrew Ahn, Hrishikesh Aradhye, Daniel Bali, Hongji Bao, Yajie Hu, Arthur Kaiser, Elena Kovakina, Salvador Mandujano, Melinda Miller, Rahul Mishra, Sebastian Porst, Monirul Sharif, Sri Somanchi, Sai Deep Tetali, and Zhikun Wang.
Posted by Janis Danisevskis, Information Security Engineer, Android Security
In Android Pie, we introduced Android Protected Confirmation, the first major mobile OS API that leverages a hardware protected user interface (Trusted UI) to perform critical transactions completely outside the main mobile operating system. This Trusted UI protects the choices you make from fraudulent apps or a compromised operating system. When an app invokes Protected Confirmation, control is passed to the Trusted UI, where transaction data is displayed and user confirmation of that data's correctness is obtained.
Once confirmed, your intention is cryptographically authenticated and unforgeable when conveyed to the relying party, for example, your bank. Protected Confirmation increases the bank's confidence that it acts on your behalf, providing a higher level of protection for the transaction.
Protected Confirmation also provides additional security relative to other forms of secondary authentication, such as a One Time Password or Transaction Authentication Number. These mechanisms can be frustrating for mobile users, and they fail to protect against a compromised device that can corrupt transaction data or intercept one-time confirmation text messages.
Once the user approves a transaction, Protected Confirmation digitally signs the confirmation message. Because the signing key never leaves the Trusted UI's hardware sandbox, neither app malware nor a compromised operating system can fool the user into authorizing anything. Protected Confirmation signing keys are created using Android's standard AndroidKeyStore API. Before an app can start using Android Protected Confirmation for end-to-end secure transactions, it must enroll the public Keystore key and its Keystore attestation certificate with the remote relying party. The attestation certificate certifies that the key can only be used to sign Protected Confirmations.
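As a sketch of how this flow might look for an app on API level 28 and higher, the Kotlin example below creates an AndroidKeyStore signing key with setUserConfirmationRequired(true), exposes its attestation chain for enrollment with the relying party, and then presents a ConfirmationPrompt and signs the confirmed message. The key alias, prompt text, and extra data are hypothetical, and the enrollment protocol itself is application-specific and omitted.

import android.content.Context
import android.security.ConfirmationCallback
import android.security.ConfirmationPrompt
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyPairGenerator
import java.security.KeyStore
import java.security.Signature
import java.security.cert.Certificate
import java.security.spec.ECGenParameterSpec
import java.util.concurrent.Executor

private const val KEY_ALIAS = "protected_confirmation_key" // hypothetical alias

// 1. Create a signing key that can only sign user-confirmed messages.
fun createConfirmationKey(attestationChallenge: ByteArray) {
    val spec = KeyGenParameterSpec.Builder(KEY_ALIAS, KeyProperties.PURPOSE_SIGN)
        .setAlgorithmParameterSpec(ECGenParameterSpec("secp256r1"))
        .setDigests(KeyProperties.DIGEST_SHA256)
        .setUserConfirmationRequired(true) // bind the key to Protected Confirmation
        .setAttestationChallenge(attestationChallenge) // challenge from the relying party
        .build()
    KeyPairGenerator.getInstance(KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore").run {
        initialize(spec)
        generateKeyPair()
    }
}

// 2. The attestation chain is what the app enrolls with the relying party;
//    it certifies that the key can only sign Protected Confirmations.
fun attestationChain(): Array<Certificate> {
    val keyStore = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
    return keyStore.getCertificateChain(KEY_ALIAS)
}

// 3. Show the Trusted UI and sign the message the user confirmed.
fun confirmTransaction(context: Context, executor: Executor, extraData: ByteArray) {
    if (!ConfirmationPrompt.isSupported(context)) return // not all devices have the Trusted UI

    val prompt = ConfirmationPrompt.Builder(context)
        .setPromptText("Send 100 USD to Alice?") // hypothetical transaction text
        .setExtraData(extraData) // e.g. a nonce supplied by the relying party
        .build()

    prompt.presentPrompt(executor, object : ConfirmationCallback() {
        override fun onConfirmed(dataThatWasConfirmed: ByteArray) {
            val keyStore = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
            val entry = keyStore.getEntry(KEY_ALIAS, null) as KeyStore.PrivateKeyEntry
            val signature = Signature.getInstance("SHA256withECDSA").run {
                initSign(entry.privateKey)
                update(dataThatWasConfirmed)
                sign()
            }
            // Send dataThatWasConfirmed plus signature to the relying party.
        }
        override fun onDismissed() { /* user declined the transaction */ }
        override fun onCanceled() { /* prompt was canceled by the app */ }
        override fun onError(e: Throwable) { /* handle failure */ }
    })
}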
There are many possible use cases for Android Protected Confirmation. At Google I/O 2018, the What's new in Android security session showcased partners planning to leverage Android Protected Confirmation in a variety of ways, including Royal Bank of Canada person to person money transfers; Duo Security, Nok Nok Labs, and ProxToMe for user authentication; and Insulet Corporation and Bigfoot Biomedical, for medical device control.
Insulet, a leading global manufacturer of tubeless patch insulin pumps, has demonstrated how it can modify its FDA-cleared Omnipod DASH™ Insulin Management System in a test environment to leverage Protected Confirmation to confirm the amount of insulin to be injected. This technology holds the promise of improved quality of life and reduced cost by enabling a person with diabetes to use their convenient, familiar, and secure smartphone for control rather than having to rely on a secondary, obtrusive, and expensive remote control device. (Note: The Omnipod DASH™ System is not cleared for use with the Pixel 3 mobile device or Protected Confirmation.)
This work is fulfilling an important need in the industry. Since smartphones do not fit the mold of an FDA-approved medical device, we've been working with the FDA as part of DTMoSt, an industry-wide consortium, to define a standard for phones to safely control medical devices, such as insulin pumps. A technology like Protected Confirmation plays an important role in gaining higher assurance of user intent and medical safety.
To integrate Protected Confirmation into your app, check out the Android Protected Confirmation training article. Android Protected Confirmation is an optional feature in Android Pie. Because it has low-level hardware dependencies, Protected Confirmation may not be supported by all devices running Android Pie. Google Pixel 3 and 3 XL devices are the first to support Protected Confirmation, and we are working closely with other manufacturers to adopt this market-leading security innovation on more devices.
Posted by Nagendra Modadugu and Bill Richardson, Google Device Security Group
At the Made by Google event last week, we talked about the combination of AI + Software + Hardware to help organize your information. To better protect that information at a hardware level, our new Pixel 3 and Pixel 3 XL devices include a Titan M chip. We briefly introduced Titan M and some of its benefits on our Keyword Blog, and with this post we dive into some of its technical details.
Titan M is a second-generation, low-power security module designed and manufactured by Google, and is a part of the Titan family. As described in the Keyword Blog post, Titan M performs several security-sensitive functions, including protecting the boot process with Verified Boot, securing the lock screen, and backing the StrongBox Keystore.
Including Titan M in Pixel 3 devices substantially reduces the attack surface. Because Titan M is a separate chip, the physical isolation mitigates against entire classes of hardware-level exploits such as Rowhammer, Spectre, and Meltdown. Titan M's processor, caches, memory, and persistent storage are not shared with the rest of the phone's system, so side channel attacks like these—which rely on subtle, unplanned interactions between internal circuits of a single component—are nearly impossible. In addition to its physical isolation, the Titan M chip contains many defenses to protect against external attacks.
Titan M is not just a hardened security microcontroller; it reflects a full-lifecycle approach to security with Pixel devices in mind. Titan M's security takes into consideration everything from the features visible to Android down to the lowest-level physical and electrical circuit design, and extends beyond each physical device to our supply chain and manufacturing processes. At the physical level, we incorporated essential features optimized for the mobile experience: low power usage, low latency, hardware crypto acceleration, tamper detection, and secure, timely firmware updates. We improved and invested in the supply chain for Titan M by creating a custom provisioning process, which gives us transparency and control starting from the earliest silicon stages.
Finally, in the interest of transparency, the Titan M firmware source code will be publicly available soon. While Google holds the root keys necessary to sign Titan M firmware, it will be possible to reproduce binary builds based on the public source for the purpose of binary transparency.
Titan (left) and Titan M (right)
Titan M's CPU is an ARM Cortex-M3 microprocessor specially hardened against side-channel attacks and augmented with defensive features to detect and respond to abnormal conditions. The Titan M CPU core also exposes several control registers, which can be used to restrict access to chip configuration settings and peripherals. Once powered on, Titan M verifies the signature of its flash-based firmware using a public key built into the chip's silicon. If the signature is valid, the flash is locked so it can't be modified, and only then does the firmware begin executing.
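Titan M's boot code is firmware running on the chip itself, but the verify-then-lock flow described above can be sketched conceptually. The Kotlin illustration below only mirrors that logic; the signature scheme, key encoding, and the lockFlash/execute callbacks are assumptions, not Titan M's actual implementation.

import java.security.KeyFactory
import java.security.Signature
import java.security.spec.X509EncodedKeySpec

// Conceptual verify-then-lock boot check: verify the firmware image against
// a public key fixed in silicon, lock the flash, and only then execute.
fun verifiedBoot(
    firmwareImage: ByteArray,
    firmwareSignature: ByteArray,
    publicKeyInSilicon: ByteArray, // stands in for the key baked into the chip
    lockFlash: () -> Unit,
    execute: (ByteArray) -> Unit
) {
    val publicKey = KeyFactory.getInstance("RSA")
        .generatePublic(X509EncodedKeySpec(publicKeyInSilicon))

    val valid = Signature.getInstance("SHA256withRSA").run {
        initVerify(publicKey)
        update(firmwareImage)
        verify(firmwareSignature)
    }

    if (valid) {
        lockFlash() // flash can no longer be modified
        execute(firmwareImage) // only then does the firmware start running
    }
    // If verification fails, the firmware never executes.
}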
Titan M also features several hardware accelerators: AES, SHA, and a programmable big number coprocessor for public key algorithms. These accelerators are flexible and can be initialized either with keys provided by firmware or with chip-specific, hardware-bound keys generated by the Key Manager module. Chip-specific keys are generated internally from entropy derived from the True Random Number Generator (TRNG), and thus such keys are never available outside the chip over its entire lifetime.
While implementing Titan M firmware, we had to take many system constraints into consideration. For example, packing as many security features as possible into Titan M's 64 Kbytes of RAM required all firmware to execute exclusively off the stack. And to reduce flash wear, RAM contents can be preserved even during low-power mode, when most hardware modules are turned off.
The diagram below provides a high-level view of the chip components described here.
At the heart of our implementation of Titan M are two broader trends: transparency and building a platform for future innovation.
Transparency around every step of the design process — from logic gates to boot code to the applications — gives us confidence in the defenses we're providing for our users. We know what's inside, how it got there, how it works, and who can make changes.
Custom hardware allows us to provide new features, capabilities, and performance not readily available in off-the-shelf components. These changes allow higher assurance use cases like two-factor authentication, medical device control, P2P payments, and others that we will help develop down the road.
As more of our lives are bound up in our phones, keeping those phones secure and trustworthy is increasingly important. Google takes that responsibility seriously. Titan M is just the latest step in our continuing efforts to improve the privacy and security of all our users.