Last November, we released Fast Pair with the Jaybird Tarah Bluetooth headphones. Since then, we’ve engaged with dozens of OEMs, ODMs, and silicon partners to bring Fast Pair to even more devices. Last month, we held a talk at I/O announcing 10+ certified devices, support for Qualcomm’s Smart Headset Development Kit, and upcoming experiences for Fast Pair devices.
Fast Pair makes pairing seamless across Android phones. This year, we are introducing additional features to improve Bluetooth device management.
• Find My Device. Fast Pair devices will soon be surfaced in the Find My Device app and website, allowing users to easily track down lost devices. Headset owners can view the location and time of last use, as well as unpair the buds or ring them to locate them when they are in range.
• Connected Device Details. In Android Q, Fast Pair devices will have an enhanced Bluetooth device details page to centralize management and key settings. This includes links to Find My Device, Assistant settings (if available), and additional OEM-specified settings that will link to the OEM’s companion app.
Compatible Devices
Below is a list of devices that were showcased during our I/O talk:
Interested in Fast Pair?
If you are interested in creating Fast Pair compatible Bluetooth devices, please take a look at:
Once you have selected devices to integrate, head to our Nearby Devices console to register your product. Reach out to us at fast-pair-integrations@google.com if you have any questions.
Posted by Don Turner, Developer Advocate for Android Media
In Android Q there's a new API which allows applications to capture the audio of other applications. It's called the AudioPlaybackCapture API and it enables some important use cases for easier content sharing and accessibility.
Some examples include accessibility features such as live captioning, as well as new ways of sharing in-app content.
There may be some situations where a developer wishes to disallow the capture of their app's audio. This article explains how audio capture works for users and how developers can disallow their app's audio from being captured if they need to.
In order to capture the audio of other apps, the user must grant the RECORD_AUDIO permission to the app doing the capturing.
RECORD_AUDIO permission dialog
Additionally, before a capture session can be started the capturing app must call MediaProjectionManager.createScreenCaptureIntent(). This will display the following dialog to the user:
Screen capture intent dialog
The user must tap "Start now" in order for a capturing session to be started. This will allow both video and audio to be captured.
Cast icon showing red in status bar
During a capture session the cast icon is shown in red in the status bar.
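To make the flow concrete, here is a minimal sketch of how a capturing app might start a session once the user has approved the dialog. The request code and variable names are hypothetical, and error handling is omitted:

// Launch the consent dialog (REQUEST_MEDIA_PROJECTION is an arbitrary request code).
MediaProjectionManager mpm =
        (MediaProjectionManager) getSystemService(Context.MEDIA_PROJECTION_SERVICE);
startActivityForResult(mpm.createScreenCaptureIntent(), REQUEST_MEDIA_PROJECTION);

// Later, in onActivityResult(), once the user has tapped "Start now":
MediaProjection projection = mpm.getMediaProjection(resultCode, data);
AudioPlaybackCaptureConfiguration config =
        new AudioPlaybackCaptureConfiguration.Builder(projection)
                .addMatchingUsage(AudioAttributes.USAGE_MEDIA) // capture media audio only
                .build();
AudioRecord record = new AudioRecord.Builder()
        .setAudioFormat(new AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setSampleRate(44100)
                .setChannelMask(AudioFormat.CHANNEL_IN_MONO)
                .build())
        .setAudioPlaybackCaptureConfig(config)
        .build();
record.startRecording();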
Whether your app's audio can be captured by default depends on your target API. The default behaviour is summarized below:
• Apps targeting API 29 (Android Q) or higher: audio from players with usage MEDIA, GAME, or UNKNOWN can be captured by default.
• Apps targeting API 28 or lower: audio cannot be captured by default; capture must be enabled explicitly in the manifest (see the end of this article).
There may be situations where an app wishes to disallow its audio from being captured by other apps, for example because the audio contains copyrighted material.
An app's audio capturing policy can be set either for all audio or for each individual audio player.
To disallow capture of all audio by third party apps you can do either of the following:

In your AndroidManifest.xml, set the allowAudioPlaybackCapture flag on the application element:

<application
    ...
    android:allowAudioPlaybackCapture="false"/>

Or, programmatically, before any audio is played:

AudioManager.setAllowedCapturePolicy(ALLOW_CAPTURE_BY_SYSTEM)
To disallow capture for an individual player you can set the capture policy for the audio player when it is built by calling:
AudioAttributes.Builder.setAllowedCapturePolicy(ALLOW_CAPTURE_BY_SYSTEM)
This approach may be useful if your app plays content with differing licenses, for example a mix of copyrighted and royalty-free content.
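As a minimal sketch (assuming a MediaPlayer-based player; the variable names are illustrative), the per-player policy might be applied like this when the player is built:

AudioAttributes attrs = new AudioAttributes.Builder()
        .setUsage(AudioAttributes.USAGE_MEDIA)
        .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
        // Only system apps and components may capture this player's audio.
        .setAllowedCapturePolicy(AudioAttributes.ALLOW_CAPTURE_BY_SYSTEM)
        .build();
MediaPlayer player = new MediaPlayer();
player.setAudioAttributes(attrs);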
By default, system apps and components are allowed to capture an app's audio if its usage is MEDIA, GAME, or UNKNOWN, as this enables important accessibility use cases, such as live captioning.
In rare cases where a developer wishes to disallow audio capture for system apps as well they can do so in a similar way to the approach for third party apps. Note that this will also disallow capture by third party apps.
This can only be done programmatically by running the following code before any audio is played:
AudioManager.setAllowedCapturePolicy(ALLOW_CAPTURE_BY_NONE)
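For example, a minimal sketch of this global opt-out (the context variable is assumed to be available):

AudioManager audioManager =
        (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
// Disallow capture by all other apps, including system components.
audioManager.setAllowedCapturePolicy(AudioAttributes.ALLOW_CAPTURE_BY_NONE);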
To disallow capture for an individual player you can set the capture policy for the audio player when it is built:
AudioAttributes.Builder.setAllowedCapturePolicy(ALLOW_CAPTURE_BY_NONE)
If your app is targeting API 28 or below and you would like to enable audio capture, add android:allowAudioPlaybackCapture="true" to your app's AndroidManifest.xml.
If you would like to disallow some or all of your audio from being captured then update your app according to the instructions above.
For more information check out the Audio Playback Capture API documentation.
Posted by Seang Chau (VP Engineering)
Last year we announced Fast Pair, a set of specs that make it easier to connect Bluetooth headsets and speakers to Android devices.
Today, we're making it easier for people to connect Fast Pair compatible accessories to devices associated with the same Google Account. Fast Pair will connect accessories to a user's current and future Android phones (6.0+), and we're adding support for Chromebooks in 2019.
Fast Pair provides stress-free Bluetooth pairing for your Android phone.
We have been working closely with dozens of manufacturers, many of which are bringing new Fast Pair compatible devices to market over the coming months. This includes Jaybird, who is already selling the Tarah Wireless Sport Headphones, as well as upcoming products from prominent brands such as Anker SoundCore, Bose, and many more.
The Jaybird Tarah, Fast Pair compatible sport headphones already available in the market.
We also want to make it easy for manufacturers to ship compatible products with minimal additional engineering effort. We collaborated with industry-leading Bluetooth audio companies such as Airoha Technology Corp., BES, and Qualcomm Technologies International, Ltd. (QTIL) to add native Fast Pair support to their software development kits.
If you are a manufacturer interested in creating Fast Pair compatible Bluetooth devices, just head to our Nearby Devices console to register your product and validate that it has correctly implemented the Fast Pair specs.
Posted by Don Turner, Developer Advocate, Android Audio Framework
This week we released the first production-ready version of Oboe, a C++ library for building real-time audio apps. Oboe provides the lowest possible audio latency across the widest range of Android devices, along with several other benefits.
Oboe takes advantage of the improved performance and features of AAudio on Oreo MR1 (API 27+) whilst maintaining backward compatibility (using OpenSL ES) on API 16+. It's kind of like AndroidX for native audio.
Using Oboe you can create an audio stream in just 3 lines of code (vs 50+ lines in OpenSL ES):
AudioStreamBuilder builder;
AudioStream *stream = nullptr;
Result result = builder.openStream(&stream);
Take a look at the short video introduction:
Check out the documentation, code samples and API reference. There's even a codelab which shows you how to build a rhythm-based game.
If you have any issues, please file them here. We'd love to hear how you get on.
Posted by Mark Koudritsky, software engineer
There is a new addition to the arsenal of instruments used by the Android and ChromeOS teams in the battle to measure and minimize touch and audio latency: the WALT Latency Timer.
When you use a mobile device, you expect it to respond instantly to your touch or voice: the more immediate the response, the more you feel directly connected to the device. Over the past few years, we have been trying to measure, understand, and reduce latency in our Chromebook and Android products.
Before we can reduce latency, we must first understand where it comes from. In the case of tapping a touchscreen, the time for a response includes the touch-sensing hardware and driver, the application, and the display and graphics output. For a voice command, there is time spent in sampling input audio, the application, and in audio output. Sometimes we have a mixture of these (for example, a piano app would include touch input and audio output).
Most previous work to study latency has focused on measuring a single round-trip latency number. For example, to measure audio latency, an app would measure the time from app to speaker/mic and back to the app using the Dr. Rick O'Rang loopback audio dongle together with an appropriate app, such as the Dr. Rick O'Rang Loopback app or the Superpowered Mobile Audio Latency Test App. Similarly, the TouchBot uses a fast camera to measure the round-trip delay from physical touch until a change on the screen is visible. While valuable, the problem with such a setup is that it is very difficult to break the latency down into input and output components.
An important innovation in WALT (a descendant of QuickStep) is that it synchronizes an external hardware clock with the Android device or Chromebook to within a millisecond. This allows it to measure input and output latencies separately as opposed to measuring a round-trip latency.
WALT is simple. The parts cost less than $50 and with some basic hobby electronics skills, you can build it yourself.
We’ve been using WALT within Google for Nexus and Chromebook development. We’re now opening this tool to app developers and anyone who wants to precisely measure real-world latencies. We hope that having readily accessible tools will help the industry as a whole improve and make all our devices more responsive to touch and voice.
Posted by Alice Ching, Google Engineer
We are pleased to announce the release of Games in Motion, an open source game sample to demonstrate how developers can make fun games using Google Fit and Android Wear. Do you ever go on a jog and feel like there is a lack of incentive to help you run better? What if you were a secret agent and had to use your speed and your nifty gadget to complete missions?
With Games in Motion, you can enhance your exercise with missions and actions on your Android Wear device, while logging your jogs to the cloud.
Games in Motion is written in the Java programming language using Android Studio. It demonstrates multiple Android technologies.
You can download the latest open source release from GitHub. We hope to inspire similar Android games, where multiple different form factors are combined for a fun experience.
Posted by Kristan Uccello, Google Developer Relations
It’s rude to talk during a presentation: it disrespects the speaker and annoys the audience. If your application doesn’t respect the rules of audio focus, then it’s disrespecting other applications and annoying the user. If you have never heard of audio focus, you should take a look at the Android developer training material.
With multiple apps potentially playing audio it's important to think about how they should interact. To avoid every music app playing at the same time, Android uses audio focus to moderate audio playback—your app should only play audio when it holds audio focus. This post provides some tips on how to handle changes in audio focus properly, to ensure the best possible experience for the user.
Audio focus should not be requested when your application starts (don’t get greedy); instead, delay requesting it until your application is about to do something with an audio stream. By requesting audio focus through the AudioManager system service, an application can use one of the AUDIOFOCUS_GAIN* constants (see Table 1) to indicate the desired level of focus.
Listing 1. Requesting audio focus.
1. AudioManager am = (AudioManager) mContext.getSystemService(Context.AUDIO_SERVICE);
2.
3. int result = am.requestAudioFocus(mOnAudioFocusChangeListener,
4.         // Hint: the music stream.
5.         AudioManager.STREAM_MUSIC,
6.         // Request permanent focus.
7.         AudioManager.AUDIOFOCUS_GAIN);
8. if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
9.     mState.audioFocusGranted = true;
10. } else if (result == AudioManager.AUDIOFOCUS_REQUEST_FAILED) {
11.     mState.audioFocusGranted = false;
12. }
In line 7 above, you can see that we have requested permanent audio focus. An application could instead request transient focus using AUDIOFOCUS_GAIN_TRANSIENT, which is appropriate when using the audio system for less than 45 seconds.
Alternatively, the app could use AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK, which is appropriate when the use of the audio system may be shared with another application that is currently playing audio (e.g. for playing a "keep it up" prompt in a fitness application and expecting background music to duck during the prompt). The app requesting AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK should not use the audio system for more than 15 seconds before releasing focus.
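For instance, the request from Listing 1 could be adapted to ask for transient focus with ducking; a sketch using the same AudioManager and listener:

int result = am.requestAudioFocus(mOnAudioFocusChangeListener,
        AudioManager.STREAM_MUSIC,
        // Transient focus; other apps may keep playing at reduced volume.
        AudioManager.AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK);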
In order to handle audio focus change events, an application should create an instance of OnAudioFocusChangeListener. In the listener, the application will need to handle the AUDIOFOCUS_GAIN* and AUDIOFOCUS_LOSS* events (see Table 1). It should be noted that AUDIOFOCUS_GAIN has some nuances, which are highlighted in Listing 2, below.
Listing 2. Handling audio focus changes.
1. mOnAudioFocusChangeListener = new AudioManager.OnAudioFocusChangeListener() {
2.
3.     @Override
4.     public void onAudioFocusChange(int focusChange) {
5.         switch (focusChange) {
6.             case AudioManager.AUDIOFOCUS_GAIN:
7.                 mState.audioFocusGranted = true;
8.
9.                 if (mState.released) {
10.                     initializeMediaPlayer();
11.                 }
12.
13.                 switch (mState.lastKnownAudioFocusState) {
14.                     case UNKNOWN:
15.                         if (mState.state == PlayState.PLAY && !mPlayer.isPlaying()) {
16.                             mPlayer.start();
17.                         }
18.                         break;
19.                     case AudioManager.AUDIOFOCUS_LOSS_TRANSIENT:
20.                         if (mState.wasPlayingWhenTransientLoss) {
21.                             mPlayer.start();
22.                         }
23.                         break;
24.                     case AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK:
25.                         restoreVolume();
26.                         break;
27.                 }
28.
29.                 break;
30.             case AudioManager.AUDIOFOCUS_LOSS:
31.                 mState.userInitiatedState = false;
32.                 mState.audioFocusGranted = false;
33.                 teardown();
34.                 break;
35.             case AudioManager.AUDIOFOCUS_LOSS_TRANSIENT:
36.                 mState.userInitiatedState = false;
37.                 mState.audioFocusGranted = false;
38.                 mState.wasPlayingWhenTransientLoss = mPlayer.isPlaying();
39.                 mPlayer.pause();
40.                 break;
41.             case AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK:
42.                 mState.userInitiatedState = false;
43.                 mState.audioFocusGranted = false;
44.                 lowerVolume();
45.                 break;
46.         }
47.         mState.lastKnownAudioFocusState = focusChange;
48.     }
49. };
AUDIOFOCUS_GAIN is used in two distinct scopes of an application's code. First, it can be used when registering for audio focus, as shown in Listing 1. This does NOT translate to an event for the registered OnAudioFocusChangeListener; on a successful audio focus request, the listener will NOT receive an AUDIOFOCUS_GAIN event for the registration.
AUDIOFOCUS_GAIN is also used in the implementation of an OnAudioFocusChangeListener as an event condition. As stated above, the AUDIOFOCUS_GAIN event will not be triggered on audio focus requests. Instead, the AUDIOFOCUS_GAIN event will occur only after an AUDIOFOCUS_LOSS* event has occurred. This is the only constant in the set shown in Table 1 that is used in both scopes.
There are four cases that need to be handled by the focus change listener. When the application receives an AUDIOFOCUS_LOSS, this usually means it will not be getting its focus back. In this case the app should release assets associated with the audio system and stop playback. As an example, imagine a user is playing music using an app and then launches a game which takes audio focus away from the music app. There is no predictable time for when the user will exit the game. More likely, the user will navigate to the home launcher (leaving the game in the background) and launch yet another application, or return to the music app, causing a resume which would then request audio focus again.
However, another case exists that warrants some discussion: there is a difference between losing audio focus permanently (as described above) and temporarily. When an application receives an AUDIOFOCUS_LOSS_TRANSIENT, it should suspend its use of the audio system until it receives an AUDIOFOCUS_GAIN event. When the AUDIOFOCUS_LOSS_TRANSIENT occurs, the application should make a note that the loss is temporary; that way, on audio focus gain, it can reason about what the correct behavior should be (see lines 13-27 of Listing 2).
Sometimes an app loses audio focus (receives an AUDIOFOCUS_LOSS) and the interrupting application terminates or otherwise abandons audio focus. In this case the last application that had audio focus may receive an AUDIOFOCUS_GAIN event. On the subsequent AUDIOFOCUS_GAIN event the app should check whether it is receiving the gain after a temporary loss, and can thus resume use of the audio system, or whether it is recovering from a permanent loss and should set up for playback.
If an application will only be using the audio capabilities for a short time (less than 45 seconds), it should use an AUDIOFOCUS_GAIN_TRANSIENT focus request and abandon focus after it has completed its playback or capture. Audio focus is handled as a stack on the system — as such the last process to request audio focus wins.
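When that short playback or capture completes, the app should hand focus back so the previous holder can resume; a minimal sketch, reusing the AudioManager and listener from Listing 1:

// Abandon audio focus so the interrupted app can regain it.
am.abandonAudioFocus(mOnAudioFocusChangeListener);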
When audio focus has been gained this is the appropriate time to create a MediaPlayer or MediaRecorder instance and allocate resources. Likewise when an app receives AUDIOFOCUS_LOSS it is good practice to clean up any resources allocated. Gaining audio focus has three possibilities that also correspond to the three audio focus loss cases in Table 1. It is a good practice to always explicitly handle all the loss cases in the OnAudioFocusChangeListener.
Table 1. Audio focus gain and loss implications.
• AUDIOFOCUS_GAIN: permanent gain of audio focus; as an event, indicates focus has been (re)gained.
• AUDIOFOCUS_GAIN_TRANSIENT: temporary gain of focus for a short time (used only when requesting focus).
• AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK: temporary gain while allowing other apps to keep playing at reduced volume (used only when requesting focus).
• AUDIOFOCUS_LOSS: permanent loss of focus; stop playback and release audio resources (listener event only).
• AUDIOFOCUS_LOSS_TRANSIENT: temporary loss of focus; pause playback and expect a subsequent gain (listener event only).
• AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK: temporary loss of focus; playback may continue at a lowered volume (listener event only).
Note: AUDIOFOCUS_GAIN is used in two places. When requesting audio focus it is passed in as a hint to the AudioManager, and it is used as an event case in the OnAudioFocusChangeListener. The transient gain events are only used when requesting audio focus. The loss events are only used in the OnAudioFocusChangeListener.
Table 2. Audio stream types.
• STREAM_ALARM: the stream for alarms
• STREAM_DTMF: the stream for DTMF tones
• STREAM_MUSIC: the stream for music playback
• STREAM_NOTIFICATION: the stream for notifications
An app will request audio focus (see an example in the sample source code linked below) from the AudioManager (Listing 1, line 1). The three arguments it provides are an audio focus change listener object (optional), a hint as to which audio stream to use (Table 2; most apps should use STREAM_MUSIC), and the type of audio focus from Table 1, column 1. Only if audio focus is granted by the system (AUDIOFOCUS_REQUEST_GRANTED) should the app proceed with any initialization (see Listing 1, line 9).
Note: The system will not grant audio focus (AUDIOFOCUS_REQUEST_FAILED) if there is a phone call currently in progress, and the application will not receive AUDIOFOCUS_GAIN after the call ends.
Within an implementation of OnAudioFocusChangeListener, what to do when the application receives an onAudioFocusChange() event is summarized in Table 3.
In the cases of losing audio focus be sure to check that the loss is in fact final. If the app receives an AUDIOFOCUS_LOSS_TRANSIENT or AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK it can hold onto the media resources it has created (don’t call release()) as there will likely be another audio focus change event very soon thereafter. The app should take note that it has received a transient loss using some sort of state flag or simple state machine.
If an application were to request permanent audio focus with AUDIOFOCUS_GAIN and then receive an AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK, an appropriate action for the application would be to lower its stream volume (make sure to store the original volume state somewhere) and then raise the volume upon receiving an AUDIOFOCUS_GAIN event (see Figure 1, below).
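As an illustration, the lowerVolume() and restoreVolume() helpers referenced in Listing 2 might look like the following sketch (the mPlayer field and the duck level are assumptions):

private static final float DUCK_VOLUME = 0.1f; // assumed duck level

private void lowerVolume() {
    if (mPlayer != null) {
        // Duck both channels; playback continues quietly.
        mPlayer.setVolume(DUCK_VOLUME, DUCK_VOLUME);
    }
}

private void restoreVolume() {
    if (mPlayer != null) {
        // Return to full player volume after regaining focus.
        mPlayer.setVolume(1.0f, 1.0f);
    }
}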
Table 3. Appropriate actions by focus change type.
• AUDIOFOCUS_GAIN: resume playback, or restore the original volume if it was lowered while ducking.
• AUDIOFOCUS_LOSS: stop playback and release the audio resources.
• AUDIOFOCUS_LOSS_TRANSIENT: pause playback, keep the media resources, and note the transient loss.
• AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK: lower the stream volume (storing the original level) and keep playing.
Understanding how to be a good audio citizen on an Android device means respecting the system's audio focus rules and handling each case appropriately. Try to make your application behave in a consistent manner and not negatively surprise the user. There is a lot more to discuss about the audio system on Android, and in the material below you will find some additional discussion.
Example source code is available here:
https://android.googlesource.com/platform/development/+/master/samples/RandomMusicPlayer
[This post is by Ian Ni-Lewis, a Developer Advocate who devotes most of his time to making Android games more awesome. — Tim Bray]
Making a game on Android is easy. Making a great game for a mobile, multitasking, often multi-core, multi-purpose system like Android is trickier. Even the best developers frequently make mistakes in the way they interact with the Android system and with other applications — mistakes that don’t affect the quality of gameplay, but which affect the quality of the user’s experience in other ways.
A truly great Android game knows how to play nice: how to fit seamlessly into the system of apps, services, and UI features that run on Android devices. In this multi-part series of posts, Android Developer Relations engineers who specialize in games explain what it takes to make your game play nice.
One of the most awesome things about Android is that it can do so much stuff in the background. But when apps aren’t careful about their background behaviors, it can get annoying. Take my own personal pet peeve: game audio that doesn’t know when to quit.
I’m on the bus to work, passing the time with a great Android game. I’m completely entranced by whatever combination of birds, ropes, and ninjas is popular this week. Suddenly I panic: I’ve almost missed my stop! I leap up, quickly locking my phone as I shove it into a pocket.
I arrive breathless at my first meeting of the day. The boss, perhaps sensing my vulnerability, asks me a tough question. Not tough enough to stump me, though — I’ve got the answer to that right here on my Android phone! I whip out my phone and press the unlock button... and the room dissolves in laughter as a certain well-known game ditty blares out from the device.
The initial embarrassment is bad enough, but what’s this? I can’t even mute the thing! The phone is showing the lock screen and the volume buttons are inactive. My stress level is climbing and it takes me three tries to successfully type in my unlock code. Finally I get the thing unlocked, jam my finger on the home button and breathe a sigh of relief as the music stops. But the damage is done — my boss is glowering and for the rest of the week my co-workers make video game noises whenever they pass my desk.
It’s a common mistake: the developer of the game assumed that if the game received an onResume() message, it was safe to resume audio. The problem is that onResume() doesn’t necessarily mean your app is visible — only that it’s active. In the case of a locked phone, onResume() is sent as soon as the screen turns on, even though the phone’s display is on the lock screen and the volume buttons aren’t enabled.
Fixing this is trickier than it sounds. Some games wait for onWindowFocusChanged() instead of onResume(), which works pretty well on Gingerbread. But on Honeycomb and higher, onWindowFocusChanged() is sent when certain foreground windows — like, ironically, the volume control display window — take focus. The result is that when the user changes the volume, all of the sound is muted. Not the developer’s original intent!
Waiting for both onResume() and onWindowFocusChanged() seems like a possible fix, and it works pretty well in a large number of cases. But even this approach has its Achilles’ heel: if the device falls asleep on its own, or if the user locks the phone and then immediately unlocks it, your app may not receive any focus changed messages at all.
Here’s the easy two-step way to avoid user embarrassment:
Pause the game (and all sound effects) whenever you receive an onPause() message. When gameplay is interrupted — whether because the phone is locked, or the user received a call, or for some other reason — the game should be paused.
After the game is paused, require user input to continue. The biggest mistake most game developers make is to automatically restart gameplay and audio as soon as the user returns to the game. This isn’t just a question of solving the “music over lock screen” issue. Users like to come back to a paused game. It’s no fun to switch back to a game, only to realize you’re about to die because gameplay has resumed before you expected it.
Some game designers don’t like the idea of pausing the background music when the game is paused. If you absolutely must resume music as soon as your game regains focus, then you should do the following (a sketch follows the list):
• Pause playback when you receive onPause().
• When you receive onResume():
  • If you have previously received an onWindowFocusChanged(false) message, wait for an onWindowFocusChanged(true) message to arrive before resuming playback.
  • If you have not previously received an onWindowFocusChanged(false) message, then resume audio immediately.
• Test thoroughly!
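Here is a minimal sketch of that logic in an Activity, assuming hypothetical pauseAudio() and resumeAudio() helpers:

public class GameActivity extends Activity {
    // True if we have seen a focus loss that has not yet been regained.
    private boolean mLostFocus = false;

    @Override
    protected void onPause() {
        super.onPause();
        pauseAudio(); // always silence audio when gameplay is interrupted
    }

    @Override
    protected void onResume() {
        super.onResume();
        if (!mLostFocus) {
            resumeAudio(); // no focus loss seen, safe to resume immediately
        }
        // Otherwise, wait for onWindowFocusChanged(true) below.
    }

    @Override
    public void onWindowFocusChanged(boolean hasFocus) {
        super.onWindowFocusChanged(hasFocus);
        if (hasFocus && mLostFocus) {
            mLostFocus = false;
            resumeAudio(); // focus is back, resume the music
        } else if (!hasFocus) {
            mLostFocus = true;
        }
    }

    private void pauseAudio() { /* pause music and sound effects */ }
    private void resumeAudio() { /* resume background music */ }
}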
Fixing audio embarrassments is almost always a quick and easy process. Take the time to do it right, and your users will thank you.