Posted by Ellie Powers, Google Play team
Fueled by the Nexus 7 and other great devices, more than 70 million Android tablets have been activated. Thousands of developers have already designed their apps to look great on tablets, and with the holidays fast approaching, we’re making it even easier for the next wave of tablet owners to discover great apps and games.
Last year, Google Play added a “designed for tablets” section, where users can easily discover apps that look great on their 7” and 10” tablets. This section includes only apps and games that meet the criteria and guidelines we established last year. (Here’s an overview if you missed it.) Developers who invest the time to meet the criteria are seeing great results; Remember The Milk, for example, saw an 83% increase in tablet downloads after being featured in this section (see the whole story here).
On November 21, 2013, the Play Store made a series of changes so it’s even easier for tablet users to find the apps that are best for their devices. First, by default, users browsing Google Play on a tablet will now see apps and games that are designed for tablets in the top lists (Top Paid, Top Free, Top Grossing, Top New Paid, Top New Free, and Trending). Tablet users will still be able to switch the view so they can see all apps or games if they choose. Also starting November 21, apps and games that do not meet the “designed for tablets” criteria will be marked as “designed for phones” for users who browse the Play Store on tablets.
You’ll want to make sure that your app is designed for tablets; read more about how to do this at the end of this blog post.
If you want to be sure your app is included in the “Designed for tablets” view, go to the Developer Console to check your tablet optimization tips. If you see any issues listed there, you’ll need to address them in your app and upload a new binary for distribution. If there are no issues listed, your app is eligible to be included in the “Designed for tablets” view in the top lists.
Also, make sure to read the full tablet quality checklist to understand how to build outstanding tablet experiences.
Every day, thousands of Android developers are taking advantage of the tremendous Android tablet opportunity. The flood of new users coupled with the increased screen size means new user experiences, more engagement, and more monetization opportunities. We’re excited to see what you do!
Posted by Greg Hartrell, Google Play Games team
Mobile games are on fire right now; in fact, three out of every four Android users are playing games. Earlier in the year we launched Google Play Games — Google’s platform for gaming across Android, iOS, and the web — to help you take advantage of this wave of users. Building on Google Play Services, you can quickly add new social features to your games, for richer game experiences that drive user acquisition and engagement across platforms.
Today we’re announcing three new features in Google Play Games that make it easier to understand what players are doing in your game, manage your game features more effectively, and store more game data in the Google cloud.
Now you can see stats about your game’s player activity in Google Play Games right in the Google Play Developer Console. You can see how many players have signed into your game through Google, the percentage of players who unlocked an achievement, and how many scores are posted to your leaderboards.
Did you mangle the ID for an achievement or leaderboard? Forget to hit the publish button? Do you know if your game is getting throttled because you accidentally called a method in a tight loop? Fear not! New alerts will now show up in the Developer Console to warn you when these mistakes happen, and guide you quickly to the answers on how to fix them.
Cloud Save is one of our most popular features for game developers, providing up to 512KB of data per user, per game, since it was introduced. You asked for more storage, and we are delivering on that request. Starting October 14th, 2013, you’ll be able to store up to 256KB per slot, for a total of 1MB per user. Game saves have never been happier!
If you want to learn more about what Google Play Games offers and how to get started, take a look at the Google Play Games Services developer documentation.
A key part of growing your app’s installed base is knowing more about your users — how they discover your app, what devices they use, what they do when they use your app, and how often they return to it. Understanding your users is now made easier through a new integration between Google Analytics and the Google Play Developer Console.
Starting today, you can link your Google Analytics account with your Google Play Developer Console to get powerful new insights into your app’s user acquisition and engagement. In Google Analytics, you’ll get a new report highlighting which campaigns are driving the most views, installs, and new users in Google Play. In the Developer Console, you’ll get new app stats that let you easily see your app’s engagement based on Analytics data.
This combined data can help you take your app business to the next level, especially if you’re using multiple campaigns or monetizing through advertisements and in-app products that depend on high engagement. Linking Google Analytics to your Developer Console is straightforward — the sections below explain the new types of data you can get and how to get started.
Once you’ve linked your Analytics account to your Developer Console, you’ll see a new report in Google Analytics called Google Play Referral Flow. This report details each of your campaigns and the user traffic that they drive. For each campaign, you can see how many users viewed your listing page in Google Play and how many went on to install your app and ultimately launch it on their mobile devices.
With this data you can track the effectiveness of a wide range of campaigns — such as blogs, news articles, and ad campaigns — and get insight into which marketing activities are most effective for your business. You can find the Google Play report by going to Google Analytics and clicking on Acquisitions > Google Play > Referral Flow.
If you’re already using Google Analytics, you know how important it is to see how users are interacting with your app. How often do they launch it? How much do they do with it? What are they doing inside the app?
Once you link your Analytics account, you’ll be able to see your app’s engagement data from Google Analytics right in the Statistics page in your Developer Console, where two new engagement metrics are available from the drop-down menu at the top of the page.
These engagement metrics are integrated with your other app statistics, so you can analyze them further across other dimensions, such as by country, language, device, Android version, app version, and carrier.
To get started, you first need to integrate Google Analytics into your app. If you haven’t done this already, download the Google Analytics SDK for Android and then take a look at the developer documentation to learn how to add Analytics to your app. Once you’ve integrated Analytics into your app, upload the app to the Developer Console.
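For reference, here is a minimal sketch of what that integration looked like with the EasyTracker class from the Google Analytics SDK for Android available at the time; the exact class and method names depend on your SDK version, so treat them as assumptions and follow the Analytics documentation for specifics.

import android.app.Activity;
import com.google.analytics.tracking.android.EasyTracker;

public class MainActivity extends Activity {

    @Override
    protected void onStart() {
        super.onStart();
        // Start an Analytics session for this activity
        // (assumes analytics.xml in res/values defines your tracking ID).
        EasyTracker.getInstance(this).activityStart(this);
    }

    @Override
    protected void onStop() {
        super.onStop();
        // End the Analytics session for this activity.
        EasyTracker.getInstance(this).activityStop(this);
    }
}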
Next, you’ll need to link your Developer Console to Google Analytics. To do this, go to the Developer Console and select the app. At the bottom of the Statistics page, you’ll see directions about how to complete the linking. The process takes just a few moments.
That’s it! You can now see both the Google Play Referral Flow report in Google Analytics and the new engagement metrics in the Developer Console.
Posted by Hak Matsuda and Dirk Dougherty, Android Developer Relations team
If you develop a performance-intensive 3D game, you’re always looking for ways to give users richer graphics, higher frame rates, and better responsiveness. You also want to conserve the user’s battery and keep the device from getting too warm during play. To help you optimize in all of these areas, consider taking advantage of the hardware scaler that’s available on almost all Android devices in the market today.
Virtually all modern Android devices use a CPU/GPU chipset that includes a hardware video scaler. Android provides the higher-level integration and makes the scaler available to apps through standard Android APIs, from Java or native (C++) code. To take advantage of the hardware scaler, all you have to do is render to a fixed-size graphics buffer, rather than using the system-provided default buffers, which are sized to the device's full screen resolution.
When you render to a fixed-size buffer, the device hardware does the work of scaling your scene up (or down) to match the device's screen resolution, including making any adjustments to aspect ratio. Typically, you would create a fixed-size buffer that's smaller than the device's full screen resolution, which lets you render more efficiently — especially on today's high-resolution screens.
Using the hardware scaler is more efficient for several reasons. First, hardware scalers are extremely fast and can produce great visual results through multi-tap and other algorithms that reduce artifacts. Second, because your app is rendering to a smaller buffer, the computation load on the GPU is reduced and performance improves. Third, with less computation work to do, the GPU runs cooler and uses less battery. And finally, you can choose what size rendering buffer you want to use, and that buffer can be the same on all devices, regardless of the actual screen resolution.
In a mobile GPU, the pixel fill rate is one of the major sources of performance bottlenecks for performance-intensive games. With newer phones and tablets offering higher and higher screen resolutions, rendering your 2D or 3D graphics on those devices can significantly reduce your frame rate. The GPU hits its maximum fill rate, and with so many pixels to fill, your frame rate drops.
Power consumed in the GPU at different rendering resolutions, across several popular chipsets in use on Android devices. (Data provided by Qualcomm).
To avoid these bottlenecks, you need to reduce the number of pixels that your game is drawing in each frame. There are several techniques for achieving that, such as using depth-prepass optimizations and others, but a really simple and effective way is making use of the hardware scaler.
Instead of rendering to a full-size buffer that could be as large as 2560x1600, your game can instead render to a smaller buffer — for example 1280x720 or 1920x1080 — and let the hardware scaler expand your scene without any additional cost and minimal loss in visual quality.
A performance-intensive game tends to consume a lot of battery and generate a lot of heat. The game’s power consumption and thermal behavior matter to users, and they are important considerations for developers as well.
As shown in the diagram, the power consumed in the device GPU increases significantly as rendering resolution rises. In most cases, heavy use of the GPU will end up reducing the device’s battery life.
In addition, as CPU/GPU rendering load increases, heat is generated that can make the device uncomfortable to hold. The heat can even trigger CPU/GPU speed adjustments designed to cool the CPU/GPU, and these in turn can throttle the processing power that’s available to your game.
The hardware scaler helps on both fronts: because you are rendering to a smaller buffer, the GPU spends less energy rendering and generates less heat.
Android gives you easy access to the hardware scaler through standard APIs, available from your Java code or from your native (C++) code through the Android NDK.
All you need to do is use the APIs to create a fixed-size buffer and render into it. You don’t need to consider the actual size of the device screen; however, if you want to preserve the original aspect ratio, you can either match the aspect ratio of the buffer to that of the screen or adjust your rendering into the buffer.
From your Java code, you access the scaler through SurfaceView, introduced in API level 1. Here’s how you would create a fixed-size buffer at 1280x720 resolution:
surfaceView = new GLSurfaceView(this);
surfaceView.getHolder().setFixedSize(1280, 720);
If you want to use the scaler from native code, you can do so through the NativeActivity class, introduced in Android 2.3 (API level 9). Here’s how you would create a fixed-size buffer at 1280x720 resolution using NativeActivity:
int32_t ret = ANativeWindow_setBuffersGeometry(window, 1280, 720, 0);
Once you specify a size for the buffer, the hardware scaler is enabled and your rendering to that window benefits automatically.
If you use a fixed-size graphics buffer, it's important to choose a size that balances visual quality on your targeted devices against the performance and efficiency gains.
For most performance-intensive 3D games that use the hardware scaler, the recommended rendering size is 1080p. As illustrated in the diagram, 1080p is a sweet spot that balances visual quality, frame rate, and power consumption. If you are satisfied with 720p, you can of course use that size for even more efficient operation.
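As an illustration only, here is one way you might cap the render target at 1080p (or 720p) while preserving the display's aspect ratio; the helper class below is hypothetical, and the exact policy for choosing a maximum height is entirely up to your game.

import android.graphics.Point;
import android.opengl.GLSurfaceView;
import android.view.Display;

public class FixedBufferHelper {

    // Hypothetical helper: cap the fixed-size buffer at maxHeight (e.g. 1080 or 720)
    // while keeping the display's aspect ratio so the scaler doesn't distort the scene.
    public static void applyFixedSize(GLSurfaceView surfaceView, Display display, int maxHeight) {
        Point screen = new Point();
        display.getSize(screen);                               // full screen resolution in pixels

        int bufferHeight = Math.min(screen.y, maxHeight);
        int bufferWidth = screen.x * bufferHeight / screen.y;  // preserve aspect ratio

        surfaceView.getHolder().setFixedSize(bufferWidth, bufferHeight);
    }
}

You would call a helper like this once, for example from your activity's onCreate(), before rendering begins.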
If you’d like to take advantage of the hardware scaler in your app, take a look at the class documentation for SurfaceView or NativeActivity, depending on whether you are rendering through the Android framework or native APIs.
The RenderScript Support Library lets you take advantage of the latest RenderScript features on devices running Android 2.2 and later.
One of the requests we hear most commonly from developers is to enable more devices to run the latest features of RenderScript. Over the past several releases of Android, we’ve added a ton of functionality to the RenderScript runtime, but the runtime's dependence on the core Android platform version has limited the range of devices that can support that new functionality. We’ve been working on a solution to this since last year, and we’re now ready to share it with all Android developers.
Today we're announcing a new RenderScript Support Library and updated SDK tools that together let you take advantage of RenderScript on platform versions all the way back to Android 2.2 (Froyo).
With ADT v22.2, SDK Tools v22.2, and Android Build Tools v18.1.0, apps targeting Android 2.2 and later can now make use of almost all of the functionality available natively in RenderScript with Android 4.3. This includes access to the newest RenderScript features such as high-performance intrinsics and the new performance optimizations available to scripts.
Using the RenderScript Support Library is straightforward. Once you've updated ADT and your SDK tools, there are only two things that you have to do to start using RenderScript in your apps:

1. Use the Support Library classes in the android.support.v8.renderscript package instead of android.renderscript. In most cases this simply means changing your import statements to:

import android.support.v8.renderscript.*;

2. Update your project.properties file to target the latest RenderScript toolchain and enable support mode:

renderscript.target=18
renderscript.support.mode=true
sdk.buildtools=18.1.0
That’s it! With the RenderScript Support Library, you can continue to use the same APIs from your app as with the native RenderScript package (with a few minor exceptions that we’ll talk about below), and you can use the same features in your own scripts as you would with the latest RenderScript toolchain.
For complete details on how to set up the RenderScript Support Library, see Accessing RenderScript Java APIs.
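For example, a Gaussian blur works the same way through the Support Library as it does natively; the only real difference is the package you import. A minimal sketch, assuming you've already made the project.properties changes above:

import android.content.Context;
import android.graphics.Bitmap;
import android.support.v8.renderscript.Allocation;
import android.support.v8.renderscript.Element;
import android.support.v8.renderscript.RenderScript;
import android.support.v8.renderscript.ScriptIntrinsicBlur;

public class BlurCompat {

    // Applies a Gaussian blur using the Support Library classes; the API mirrors
    // the native android.renderscript package but runs back to Android 2.2.
    public static void blur(Context context, Bitmap in, Bitmap out, float radius) {
        RenderScript rs = RenderScript.create(context);
        Allocation inAlloc = Allocation.createFromBitmap(rs, in);
        Allocation outAlloc = Allocation.createFromBitmap(rs, out);

        ScriptIntrinsicBlur blurScript = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
        blurScript.setRadius(radius);   // blur radius in pixels, up to 25
        blurScript.setInput(inAlloc);
        blurScript.forEach(outAlloc);

        outAlloc.copyTo(out);
        rs.destroy();
    }
}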
If you'd like to use the RenderScript Support Library in your app, there are a few things you should know. In particular, the Allocation.USAGE_IO_INPUT and Allocation.USAGE_IO_OUTPUT flags are not currently supported when using the Support Library.
We’re really pleased with how the RenderScript Support Library has turned out. We've already seen how it performs in a shipping app — it's been part of the photo editor in the Google+ Android app since May 2013, and it’s definitely proven itself in a large and widely used application. We hope you’ll be happy with it too.
Posted by R. Jason Sams, Android RenderScript Tech Lead
RenderScript has a very powerful ability called Intrinsics. Intrinsics are built-in functions that perform well-defined operations often seen in image processing. Intrinsics can be very helpful to you because they provide extremely high-performance implementations of standard functions with a minimal amount of code.
RenderScript intrinsics will usually be the fastest possible way for a developer to perform these operations. We’ve worked closely with our partners to ensure that the intrinsics perform as fast as possible on their architectures — often far beyond anything that can be achieved in a general-purpose language.
Table 1. RenderScript intrinsics and the operations they provide.

ScriptIntrinsicConvolve3x3: applies a 3x3 convolution to an allocation.
ScriptIntrinsicConvolve5x5: applies a 5x5 convolution to an allocation.
ScriptIntrinsicBlur: applies a Gaussian blur with a configurable radius.
ScriptIntrinsicYuvToRGB: converts a YUV buffer to RGB.
ScriptIntrinsicColorMatrix: applies a 4x4 color matrix to an allocation.
ScriptIntrinsicBlend: blends two allocations together.
ScriptIntrinsicLUT: applies a per-channel lookup table.
ScriptIntrinsic3DLUT: applies a color cube lookup table.
Your application can use one of these intrinsics with very little code. For example, to perform a Gaussian blur, the application can do the following:
RenderScript rs = RenderScript.create(theActivity);
ScriptIntrinsicBlur theIntrinsic = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
Allocation tmpIn = Allocation.createFromBitmap(rs, inputBitmap);
Allocation tmpOut = Allocation.createFromBitmap(rs, outputBitmap);
theIntrinsic.setRadius(25.f);
theIntrinsic.setInput(tmpIn);
theIntrinsic.forEach(tmpOut);
tmpOut.copyTo(outputBitmap);
This example creates a RenderScript context and a Blur intrinsic. It then uses the intrinsic to perform a Gaussian blur with a 25-pixel radius on the allocation. The default implementation of blur uses carefully hand-tuned assembly code, but on some hardware it will instead use hand-tuned GPU code.
What do developers get from the tuning that we’ve done? On the new Nexus 7, running that same 25-pixel radius Gaussian blur on a 1.6 megapixel image takes about 176ms. A simpler intrinsic like the color matrix operation takes under 4ms. The intrinsics are typically 2-3x faster than a multithreaded C implementation and often 10x+ faster than a Java implementation. Pretty good for eight lines of code.
Figure 1. Performance gains with RenderScript intrinsics, relative to equivalent multithreaded C implementations.
Applications that need additional functionality can mix these intrinsics with their own RenderScript kernels. An example of this would be an application that is taking camera preview data, converting it from YUV to RGB, adding a vignette effect, and uploading the final image to a SurfaceView for display.
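A rough sketch of that pipeline's processing stage is shown below. ScriptC_vignette stands in for a class generated from a hypothetical vignette.rs kernel in your project, so its name and kernel signature are assumptions, not part of the platform API.

import android.content.Context;
import android.graphics.Bitmap;
import android.renderscript.Allocation;
import android.renderscript.Element;
import android.renderscript.RenderScript;
import android.renderscript.ScriptIntrinsicYuvToRGB;

public class PreviewPipeline {

    // yuvAlloc holds the raw camera preview bytes; rgbAlloc and outAlloc are
    // U8_4 allocations sized to the preview frame.
    // (In a real app, create the context, scripts, and allocations once, not per frame.)
    public static void processFrame(Context context, Allocation yuvAlloc,
            Allocation rgbAlloc, Allocation outAlloc, Bitmap outputBitmap) {
        RenderScript rs = RenderScript.create(context);

        // Intrinsic stage: convert the YUV preview frame to RGBA.
        ScriptIntrinsicYuvToRGB yuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
        yuvToRgb.setInput(yuvAlloc);
        yuvToRgb.forEach(rgbAlloc);

        // Custom stage: hypothetical kernel generated from vignette.rs.
        ScriptC_vignette vignette = new ScriptC_vignette(rs);
        vignette.forEach_root(rgbAlloc, outAlloc);

        // Copy the result out for display (or bind outAlloc to a Surface).
        outAlloc.copyTo(outputBitmap);
    }
}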
In this example, we’ve got a stream of data flowing between a source device (the camera) and an output device (the display) with a number of possible processors along the way. Today, these operations can all run on the CPU, but as architectures become more advanced, using other processors becomes possible.
For example, the vignette operation can happen on a compute-capable GPU (like the ARM Mali T604 in the Nexus 10), while the YUV to RGB conversion could happen directly on the camera’s image signal processor (ISP). Using these different processors could significantly improve power consumption and performance. As more of these processors become available, future Android updates will enable RenderScript to run on them, and applications written for RenderScript today will begin to make use of those processors transparently, without any additional work for developers.
Intrinsics give developers a powerful tool they can leverage with minimal effort to achieve great performance across a wide variety of hardware. They can be mixed and matched with general-purpose developer code, allowing great flexibility in application design. So next time you have performance issues with image manipulation, I hope you give them a look to see if they can help.
Posted by Kristan Uccello, Google Developer Relations
It’s rude to talk during a presentation: it disrespects the speaker and annoys the audience. If your application doesn’t respect the rules of audio focus, then it’s disrespecting other applications and annoying the user. If you have never heard of audio focus, you should take a look at the Android developer training material.
With multiple apps potentially playing audio it's important to think about how they should interact. To avoid every music app playing at the same time, Android uses audio focus to moderate audio playback—your app should only play audio when it holds audio focus. This post provides some tips on how to handle changes in audio focus properly, to ensure the best possible experience for the user.
Audio focus should not be requested when your application starts (don’t get greedy); instead, delay the request until your application is about to do something with an audio stream. By requesting audio focus through the AudioManager system service, an application can use one of the AUDIOFOCUS_GAIN* constants (see Table 1) to indicate the desired level of focus.
Listing 1. Requesting audio focus.
1.  AudioManager am = (AudioManager) mContext.getSystemService(Context.AUDIO_SERVICE);
2.
3.  int result = am.requestAudioFocus(mOnAudioFocusChangeListener,
4.      // Hint: the music stream.
5.      AudioManager.STREAM_MUSIC,
6.      // Request permanent focus.
7.      AudioManager.AUDIOFOCUS_GAIN);
8.  if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
9.      mState.audioFocusGranted = true;
10. } else if (result == AudioManager.AUDIOFOCUS_REQUEST_FAILED) {
11.     mState.audioFocusGranted = false;
12. }
In line 7 above, you can see that we have requested permanent audio focus. An application could instead request transient focus using AUDIOFOCUS_GAIN_TRANSIENT which is appropriate when using the audio system for less than 45 seconds.
Alternatively, the app could use AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK, which is appropriate when the use of the audio system may be shared with another application that is currently playing audio (e.g. for playing a "keep it up" prompt in a fitness application and expecting background music to duck during the prompt). The app requesting AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK should not use the audio system for more than 15 seconds before releasing focus.
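The request itself looks just like Listing 1 apart from the focus hint; for example:

int result = am.requestAudioFocus(mOnAudioFocusChangeListener,
        AudioManager.STREAM_MUSIC,
        // Transient focus; other apps may keep playing at a lowered ("ducked") volume.
        AudioManager.AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK);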
In order to handle audio focus change events, an application should create an instance of OnAudioFocusChangeListener. In the listener, the application will need to handle the AUDIOFOCUS_GAIN* event and the AUDIOFOCUS_LOSS* events (see Table 1). It should be noted that AUDIOFOCUS_GAIN has some nuances, which are highlighted in Listing 2, below.
Listing 2. Handling audio focus changes.
1.  mOnAudioFocusChangeListener = new AudioManager.OnAudioFocusChangeListener() {
2.
3.      @Override
4.      public void onAudioFocusChange(int focusChange) {
5.          switch (focusChange) {
6.              case AudioManager.AUDIOFOCUS_GAIN:
7.                  mState.audioFocusGranted = true;
8.
9.                  if (mState.released) {
10.                     initializeMediaPlayer();
11.                 }
12.
13.                 switch (mState.lastKnownAudioFocusState) {
14.                     case UNKNOWN:
15.                         if (mState.state == PlayState.PLAY && !mPlayer.isPlaying()) {
16.                             mPlayer.start();
17.                         }
18.                         break;
19.                     case AudioManager.AUDIOFOCUS_LOSS_TRANSIENT:
20.                         if (mState.wasPlayingWhenTransientLoss) {
21.                             mPlayer.start();
22.                         }
23.                         break;
24.                     case AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK:
25.                         restoreVolume();
26.                         break;
27.                 }
28.
29.                 break;
30.             case AudioManager.AUDIOFOCUS_LOSS:
31.                 mState.userInitiatedState = false;
32.                 mState.audioFocusGranted = false;
33.                 teardown();
34.                 break;
35.             case AudioManager.AUDIOFOCUS_LOSS_TRANSIENT:
36.                 mState.userInitiatedState = false;
37.                 mState.audioFocusGranted = false;
38.                 mState.wasPlayingWhenTransientLoss = mPlayer.isPlaying();
39.                 mPlayer.pause();
40.                 break;
41.             case AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK:
42.                 mState.userInitiatedState = false;
43.                 mState.audioFocusGranted = false;
44.                 lowerVolume();
45.                 break;
46.         }
47.         mState.lastKnownAudioFocusState = focusChange;
48.     }
49. };
AUDIOFOCUS_GAIN is used in two distinct scopes of an application's code. First, it can be used when requesting audio focus, as shown in Listing 1. This does NOT translate to an event for the registered OnAudioFocusChangeListener; that is, on a successful audio focus request the listener will NOT receive an AUDIOFOCUS_GAIN event for the registration.
AUDIOFOCUS_GAIN is also used in the implementation of an OnAudioFocusChangeListener as an event condition. As stated above, the AUDIOFOCUS_GAIN event will not be triggered on audio focus requests. Instead, the AUDIOFOCUS_GAIN event will occur only after an AUDIOFOCUS_LOSS* event has occurred. This is the only constant in the set shown in Table 1 that is used in both scopes.
There are four cases that need to be handled by the focus change listener. When the application receives AUDIOFOCUS_LOSS, this usually means it will not be getting its focus back. In this case the app should release assets associated with the audio system and stop playback. As an example, imagine a user is playing music using an app and then launches a game, which takes audio focus away from the music app. There is no predictable time for when the user will exit the game. More likely, the user will navigate to the home launcher (leaving the game in the background) and launch yet another application, or return to the music app, causing a resume, which would then request audio focus again.
However, another case exists that warrants some discussion. There is a difference between losing audio focus permanently (as described above) and temporarily. When an application receives AUDIOFOCUS_LOSS_TRANSIENT, it should suspend its use of the audio system until it receives an AUDIOFOCUS_GAIN event. When the AUDIOFOCUS_LOSS_TRANSIENT occurs, the application should make a note that the loss is temporary; that way, on audio focus gain, it can reason about what the correct behavior should be (see lines 13-27 of Listing 2).
Sometimes an app loses audio focus (receives an AUDIOFOCUS_LOSS) and the interrupting application terminates or otherwise abandons audio focus. In this case the last application that held audio focus may receive an AUDIOFOCUS_GAIN event. On that subsequent AUDIOFOCUS_GAIN event, the app should check whether it is receiving the gain after a temporary loss, and can thus resume use of the audio system, or whether it is recovering from a permanent loss and needs to set up for playback again.
If an application will only be using the audio capabilities for a short time (less than 45 seconds), it should use an AUDIOFOCUS_GAIN_TRANSIENT focus request and abandon focus after it has completed its playback or capture. Audio focus is handled as a stack on the system — as such the last process to request audio focus wins.
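Abandoning focus is a single call on the same AudioManager instance used in Listing 1; a minimal sketch:

// Give up audio focus once playback or capture is finished so the next
// app on the focus stack can take over.
am.abandonAudioFocus(mOnAudioFocusChangeListener);
mState.audioFocusGranted = false;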
When audio focus has been gained, this is the appropriate time to create a MediaPlayer or MediaRecorder instance and allocate resources. Likewise, when an app receives AUDIOFOCUS_LOSS, it is good practice to clean up any resources it has allocated. Gaining audio focus has three possibilities, which correspond to the three audio focus loss cases in Table 1. It is a good practice to always explicitly handle all the loss cases in the OnAudioFocusChangeListener.
Table 1. Audio focus gain and loss implications.

AUDIOFOCUS_GAIN: gain audio focus for an indefinite duration.
AUDIOFOCUS_GAIN_TRANSIENT: gain audio focus for a short time.
AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK: gain audio focus for a short time, while allowing the previous focus holder to keep playing at a lowered (“ducked”) volume.
AUDIOFOCUS_LOSS: audio focus has been lost for an indefinite duration; stop playback and release audio resources.
AUDIOFOCUS_LOSS_TRANSIENT: audio focus has been lost for a short time; pause playback but keep resources, as focus will likely return soon.
AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK: audio focus has been lost for a short time, but playback may continue at a lowered volume.
Note: AUDIOFOCUS_GAIN is used in two places. When requesting audio focus, it is passed in as a hint to the AudioManager, and it is also used as an event case in the OnAudioFocusChangeListener. The other gain events are only used when requesting audio focus. The loss events are only used in the OnAudioFocusChangeListener.
Table 2. Audio stream types.

STREAM_ALARM: the audio stream for alarms.
STREAM_DTMF: the audio stream for DTMF tones.
STREAM_MUSIC: the audio stream for music and media playback (the stream most apps should use).
STREAM_NOTIFICATION: the audio stream for notifications.
An app requests audio focus (see an example in the sample source code linked below) from the AudioManager (Listing 1, line 1). The three arguments it provides are an audio focus change listener object (optional), a hint as to which audio stream to use (Table 2; most apps should use STREAM_MUSIC), and the type of audio focus from Table 1, column 1. Only if audio focus is granted by the system (AUDIOFOCUS_REQUEST_GRANTED) should the app handle any initialization (see Listing 1, line 9).
Note: The system will not grant audio focus (AUDIOFOCUS_REQUEST_FAILED) if there is a phone call currently in progress, and the application will not receive AUDIOFOCUS_GAIN after the call ends.
What your application should do when its OnAudioFocusChangeListener receives an onAudioFocusChange() event is summarized in Table 3.
When losing audio focus, be sure to check that the loss is in fact final. If the app receives an AUDIOFOCUS_LOSS_TRANSIENT or AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK, it can hold onto the media resources it has created (don’t call release()), as there will likely be another audio focus change event very soon. The app should take note that it has received a transient loss using some sort of state flag or simple state machine.
If an application requests permanent audio focus with AUDIOFOCUS_GAIN and then receives AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK, an appropriate action would be to lower its stream volume (making sure to store the original volume state somewhere) and then raise the volume again upon receiving an AUDIOFOCUS_GAIN event (see Figure 1, below).
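A minimal sketch of that ducking behavior, assuming the MediaPlayer (mPlayer) from Listing 2 and an arbitrary 20% duck level, could implement the lowerVolume() and restoreVolume() helpers like this:

private static final float DUCK_VOLUME = 0.2f; // arbitrary "ducked" level

private void lowerVolume() {
    // Keep playing, but quietly, while another app holds transient focus.
    mPlayer.setVolume(DUCK_VOLUME, DUCK_VOLUME);
}

private void restoreVolume() {
    // Focus regained: return to full volume.
    mPlayer.setVolume(1.0f, 1.0f);
}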
Table 3. Appropriate actions by focus change type.

AUDIOFOCUS_GAIN: resume playback (or restore volume after ducking) and re-initialize any resources released on a previous loss.
AUDIOFOCUS_LOSS: stop playback, release audio resources, and abandon audio focus.
AUDIOFOCUS_LOSS_TRANSIENT: pause playback but keep resources; note the transient loss so playback can resume on the next AUDIOFOCUS_GAIN.
AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK: continue playing at a lowered volume, restoring it on the next AUDIOFOCUS_GAIN.
Understanding how to be a good audio citizen application on an Android device means respecting the system's audio focus rules and handling each case appropriately. Try to make your application behave in a consistent manner and not negatively surprise the user. There is a lot more that can be talked about within the audio system on Android and in the material below you will find some additional discussions.
Example source code is available here:
https://android.googlesource.com/platform/development/+/master/samples/RandomMusicPlayer