Posted by Paul Lammertsma, Developer Advocate
Technology has changed the way media and entertainment are accessed and consumed in the home. While the living room experience is evolving with the addition of smart devices, TVs remain the largest and most frequently used screen for watching content.
When Android TV was first introduced in 2014, we set out to bring the best of Android into the connected home on the TV. We worked closely with the developer community to grow our content and app ecosystem and bring users the content they want. Since then, we’ve seen tremendous momentum with OEM and operator partners as well as consumer adoption worldwide.
Today, we are bringing Android API level 29, with the recent performance and security updates made in Android 10, to Android TV. We’re excited to provide faster updates through Project Treble and more secure storage with encrypted user data. TLS 1.3 is now enabled by default, which brings performance benefits and keeps the platform current with the TLS standard. In addition, Android 10 includes hardening for several security-critical areas of the platform.
To make sure developers have the ability to build and test Android TV app implementations on Android 10 prior to rollout, we’re introducing a new, developer-focused streaming media device called ADT-3.
With a quad-core A53, 2GB of DDR3 memory, and 4Kp60 HDR HDMI 2.1 output, we’ve designed this pre-certified TV dongle, kept current with updates and security patches, to help developers design for the next generation of Android TV devices. By providing a way to test on physical, up-to-date hardware, developers can better validate their Android TV app’s compatibility.
ADT-3 will be made available to developers in the coming months for purchase online through an OEM partner.
Posted by Benjamin Baxter, Developer Advocate and Bacon Connoisseur
All TVs have the same problem with keyboard input: It is very cumbersome to hunt and peck for each letter using a D-pad with a remote. And if you make a mistake, trying to correct it compounds the problem.
APIs like Smart Lock and Autofill can ease users' frustrations, but for certain flows, like login, you need to collect complex input that is difficult for users to enter with the on-screen keyboard.
With the Nearby Connections API, you can use a second screen to gather input from the user with less friction.
From the documentation:
"Nearby Connections is an offline peer-to-peer socket model for communication based on advertising and discovering devices in proximity.
Usage of the API falls into two phases: pre-connection, and post-connection.
In the pre-connection phase, Advertisers advertise themselves, while Discoverers discover nearby Advertisers and send connection requests. A connection request from a Discoverer to an Advertiser initiates a symmetric authentication flow that results in both sides independently accepting (or rejecting) the connection request.
After a connection request is accepted by both sides, the connection is established and the devices enter the post-connection phase, during which both sides can exchange data."
In most cases the TV is the advertiser and the phone is the discoverer. In the example below, the assumed second device is a phone. The API and patterns described in this article are not limited to a phone. For example, a tablet could also be the second screen device.
There are many times when keyboard input is required. Authenticating users and collecting billing information (like zip codes and the name on a card) are common cases. The example below walks through a login flow that uses a second screen to show how Nearby Connections can help reduce friction.
1. The user opens your app on their TV and needs to log in. You can show a screen of options similar to the setup flow for a new TV.
2. After the user chooses to login with their phone, the TV should start advertising and send the user to the associated login app on their phone, which should start discovering.
There are a variety of solutions to open the app on the phone. As an example, Android TV's setup flow has the user open the corresponding app on their mobile device. Initiating the hand-off is more a UX concern than a technology concern.
3. The phone app should display the advertising TV and prompt the user to initiate the connection. After the (encrypted -- see Security Considerations below for more on this) connection is established the TV can stop advertising and the phone can stop discovering.
"Advertising/Discovery using Nearby Connections for hours on end can affect a device's battery. While this is not usually an issue for a plugged-in TV, it can be for mobile devices, so be conscious about stopping advertising and discovery once they're no longer needed."
4. Next, the phone can start collecting the user's input. Once the user enters their login information, the phone should send it to the TV in a BYTES payload over the secure connection.
5. When the TV receives the message it should send an ACK (using a BYTES payload) back to the phone to confirm delivery.
6. When the phone receives the ACK, it can safely close the connection.
The following diagram summarizes the sequence of events:
Nearby Connections needs location permissions to be able to discover nearby devices. Be transparent with your users. Tell them why they need to grant the location permission on their phone.
Since the TV is advertising, it does not need location permissions.
After the user chooses to login on the phone, the TV should start advertising. This is a very simple process with the Nearby API.
override fun onGuidedActionClicked(action: GuidedAction?) {
    super.onGuidedActionClicked(action)
    if (action == loginAction) {
        // Update the UI so the user knows to check their phone
        navigationFlowCallback.navigateToConnectionDialog()
        doStartAdvertising(requireContext()) { payload ->
            handlePayload(payload)
        }
    }
}
When the user clicks a button, update the UI to tell them to look at their phone to continue. Be sure to offer a way to cancel the remote login and try manually with the cumbersome onscreen keyboard.
This example uses a GuidedStepFragment but the same UX pattern applies to whatever design you choose.
Advertising is straightforward. You need to supply a name, a service id (typically the package name), and a `ConnectionLifecycleCallback`.
You also need to choose a strategy that both the TV and the phone use. Since it is possible that the user has multiple TVs (living room, bedroom, etc.), the best strategy to use is P2P_CLUSTER.
Then start advertising. The onSuccessListener and onFailureListener tell you whether or not the device was able to start advertising; they do not indicate that a device has been discovered.
fun doStartAdvertising(context: Context) {
    Nearby.getConnectionsClient(context).startAdvertising(
        context.getString(R.string.tv_name),
        context.packageName,
        connectionLifecycleCallback,
        AdvertisingOptions.Builder().setStrategy(Strategy.P2P_CLUSTER).build()
    )
        .addOnSuccessListener {
            Log.d(LoginStepFragment.TAG, "We are advertising!")
        }
        .addOnFailureListener {
            Log.d(LoginStepFragment.TAG, "We cannot start advertising.")
            Toast.makeText(context, "We cannot start advertising.", Toast.LENGTH_LONG)
                .show()
        }
}
The real magic happens in the `connectionLifecycleCallback` that is triggered when devices start to initiate a connection. The TV should accept the handshake from the phone (after performing the necessary authentication -- see Security Considerations below for more) and supply a payload listener.
val connectionLifecycleCallback = object : ConnectionLifecycleCallback() {

    override fun onConnectionInitiated(
        endpointId: String,
        connectionInfo: ConnectionInfo
    ) {
        Log.d(TAG, "Connection initialized to endpoint: $endpointId")
        // Make sure to authenticate using `connectionInfo.authenticationToken`
        // before accepting
        Nearby.getConnectionsClient(context)
            .acceptConnection(endpointId, payloadCallback)
    }

    override fun onConnectionResult(
        endpointId: String,
        connectionResolution: ConnectionResolution
    ) {
        Log.d(TAG, "Received result from connection: ${connectionResolution.status.statusCode}")
        doStopAdvertising()
        when (connectionResolution.status.statusCode) {
            ConnectionsStatusCodes.STATUS_OK -> {
                Log.d(TAG, "Connected to endpoint: $endpointId")
                otherDeviceEndpointId = endpointId
            }
            else -> {
                otherDeviceEndpointId = null
            }
        }
    }

    override fun onDisconnected(endpointId: String) {
        Log.d(TAG, "Disconnected from endpoint: $endpointId")
        otherDeviceEndpointId = null
    }
}
The payloadCallback listens for the phone to send the login information needed. After receiving the login information, the connection is no longer needed. We go into more detail later in the Ending the Conversation section.
Nearby Connections does not require the user's consent, but the location permission must be granted in order for discovery with Nearby Connections to work its magic. (It uses BLE scanning under the covers.) After opening the app on the phone, start by prompting the user for the location permission if it has not already been granted, on devices running Marshmallow and higher.
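To make that concrete, here is a minimal sketch, inside the phone Activity, of checking and requesting the location permission before kicking off discovery. REQUEST_LOCATION and startDiscovering() are placeholder names for this example, and the exact permission Nearby Connections requires (coarse vs. fine location) depends on the Play services and platform version you target.

private val REQUEST_LOCATION = 1

private fun checkLocationPermissionThenDiscover() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.ACCESS_COARSE_LOCATION)
        == PackageManager.PERMISSION_GRANTED) {
        startDiscovering() // placeholder for the discovery code shown below
    } else {
        // Explain to the user why the permission is needed, then request it.
        ActivityCompat.requestPermissions(
            this,
            arrayOf(Manifest.permission.ACCESS_COARSE_LOCATION),
            REQUEST_LOCATION)
    }
}

override fun onRequestPermissionsResult(
    requestCode: Int,
    permissions: Array<out String>,
    grantResults: IntArray
) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults)
    if (requestCode == REQUEST_LOCATION &&
        grantResults.firstOrNull() == PackageManager.PERMISSION_GRANTED) {
        startDiscovering()
    }
}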
Once the permission is granted, start discovering, confirm the connection, collect the credentials, and send a message to the TV app.
Discovering is as simple as advertising. You need a service id (typically the package name -- this should be the same on the Discoverer and Advertiser for them to see each other), a name, and an `EndpointDiscoveryCallback`. Similar to the TV code, the flow is triggered by callbacks based on the connection status.
Nearby.getConnectionsClient(context).startDiscovery(
    context.packageName,
    mobileEndpointDiscoveryCallback,
    DiscoveryOptions.Builder().setStrategy(Strategy.P2P_CLUSTER).build()
)
    .addOnSuccessListener {
        // We're discovering!
        Log.d(TAG, "We are discovering!")
    }
    .addOnFailureListener {
        // We were unable to start discovering.
        Log.d(TAG, "We cannot start discovering!")
    }
The Discoverer's listeners are similar to the Advertiser's success and failure listeners; they signal if the request to start discovery was successful or not.
Once you discover an advertiser, the `EndpointDiscoveryCallback` is triggered. You need to keep track of the other endpoint so you know where to send the payload (e.g., the user's credentials) later.
val mobileEndpointDiscoveryCallback = object : EndpointDiscoveryCallback() {

    override fun onEndpointFound(
        endpointId: String,
        discoveredEndpointInfo: DiscoveredEndpointInfo
    ) {
        // An endpoint was found!
        Log.d(TAG, "An endpoint was found, ${discoveredEndpointInfo.endpointName}")
        Nearby.getConnectionsClient(context)
            .requestConnection(
                context.getString(R.string.phone_name),
                endpointId,
                connectionLifecycleCallback)
    }

    override fun onEndpointLost(endpointId: String) {
        // A previously discovered endpoint has gone away.
        Log.d(TAG, "An endpoint was lost, $endpointId")
    }
}
One of the devices must initiate the connection. Since the Discoverer has a callback for endpoint discovery, it makes sense for the phone to request the connection to the TV.
The phone asks for a connection supplying a `connectionLifecycleCallback` which is symmetric to the callback in the TV code.
val connectionLifecycleCallback = object : ConnectionLifecycleCallback() {

    override fun onConnectionInitiated(
        endpointId: String,
        connectionInfo: ConnectionInfo
    ) {
        Log.d(TAG, "Connection initialized to endpoint: $endpointId")
        // Make sure to authenticate using `connectionInfo.authenticationToken`
        // before accepting
        Nearby.getConnectionsClient(context)
            .acceptConnection(endpointId, payloadCallback)
    }

    override fun onConnectionResult(
        endpointId: String,
        connectionResolution: ConnectionResolution
    ) {
        Log.d(TAG, "Connection result from endpoint: $endpointId")
        when (connectionResolution.status.statusCode) {
            ConnectionsStatusCodes.STATUS_OK -> {
                Log.d(TAG, "Connected to endpoint: $endpointId")
                otherDeviceEndpointId = endpointId

                waitingIndicator.visibility = View.GONE
                emailInput.editText?.isEnabled = true
                passwordInput.editText?.isEnabled = true

                Nearby.getConnectionsClient(this@MainActivity).stopDiscovery()
            }
            else -> {
                otherDeviceEndpointId = null
            }
        }
    }

    override fun onDisconnected(endpointId: String) {
        Log.d(TAG, "Disconnected from endpoint: $endpointId")
        otherDeviceEndpointId = null
    }
}
Once the connection is established, stop discovery to avoid keeping this battery-intensive operation running longer than needed. The example stops discovery after the connection is established, but it is possible for a user to leave the activity before that happens. Be sure to stop the discovery/advertising in onStop() on both the TV and phone.
override fun onStop() {
    super.onStop()
    Nearby.getConnectionsClient(this).stopDiscovery()
}
Just like the TV app, when you accept the connection you supply a payload callback. The callback listens for messages from the TV app, such as the ACK described above, to clean up the connection.
After the devices are connected, the user can use the keyboard and send their authentication information to the TV by calling `sendPayload()`.
fun sendCredentials() {
    val email = emailInput.editText?.text.toString()
    val password = passwordInput.editText?.text.toString()
    val creds = "$email:$password"

    val payload = Payload.fromBytes(creds.toByteArray())
    Log.d(TAG, "sending payload: $creds")
    otherDeviceEndpointId?.let { endpointId ->
        Nearby.getConnectionsClient(this)
            .sendPayload(endpointId, payload)
    }
}
After the phone sends the payload to the TV (and the login is successful), there is no reason for the devices to remain connected. The TV can initiate the disconnection with a simple shutdown protocol.
The TV should send an ACK to the phone after it receives the credential payload.
val payloadCallback = object : PayloadCallback() {

    override fun onPayloadReceived(endpointId: String, payload: Payload) {
        if (payload.type == Payload.Type.BYTES) {
            payload.asBytes()?.let {
                val body = String(it)
                Log.d(TAG, "A payload was received: $body")

                // Validate that this payload contains the login credentials, and process them.

                val ack = Payload.fromBytes(ACK_PAYLOAD.toByteArray())
                Nearby.getConnectionsClient(context).sendPayload(endpointId, ack)
            }
        }
    }

    override fun onPayloadTransferUpdate(
        endpointId: String,
        update: PayloadTransferUpdate
    ) {
    }
}
The phone should have a `PayloadCallback` that initiates a disconnection in response to the ACK. This is also a good time to reset the UI to show an authenticated state.
private val payloadCallback = object : PayloadCallback() {

    override fun onPayloadReceived(endpointId: String, payload: Payload) {
        if (payload.type == Payload.Type.BYTES) {
            payload.asBytes()?.let {
                val body = String(it)
                Log.d(TAG, "A payload was received: $body")

                if (body == ACK_PAYLOAD) {
                    waitingIndicator.visibility = View.VISIBLE
                    waitingIndicator.text = getString(R.string.login_successful)

                    emailInput.editText?.isEnabled = false
                    passwordInput.editText?.isEnabled = false
                    loginButton.isEnabled = false

                    Nearby.getConnectionsClient(this@MainActivity)
                        .disconnectFromEndpoint(endpointId)
                }
            }
        }
    }

    override fun onPayloadTransferUpdate(
        endpointId: String,
        update: PayloadTransferUpdate
    ) {
    }
}
For security (especially since we're sending sensitive information like login credentials), it's strongly recommended that you authenticate the connection by showing a code and having the user confirm that the two devices being connected are the intended ones. Without this, the connection established by Nearby Connections is encrypted but not authenticated, and is susceptible to man-in-the-middle attacks. The documentation goes into greater detail on how to authenticate a connection.
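As an illustration of that authentication step, the accepting side could display the token from onConnectionInitiated and wait for the user to confirm before accepting. The dialog below is a hypothetical sketch (not a built-in Nearby UI); in a real app you would show the same token on both devices so the user can compare them.

override fun onConnectionInitiated(
    endpointId: String,
    connectionInfo: ConnectionInfo
) {
    // Show the short authentication token and let the user compare it with the
    // token displayed on the other device before accepting. AlertDialog is used
    // purely for illustration.
    AlertDialog.Builder(context)
        .setTitle("Connect to ${connectionInfo.endpointName}?")
        .setMessage("Confirm this code appears on both devices: ${connectionInfo.authenticationToken}")
        .setPositiveButton("Accept") { _, _ ->
            Nearby.getConnectionsClient(context)
                .acceptConnection(endpointId, payloadCallback)
        }
        .setNegativeButton("Reject") { _, _ ->
            Nearby.getConnectionsClient(context)
                .rejectConnection(endpointId)
        }
        .show()
}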
There are many times when a user needs to supply input to a TV app. The Nearby Connections API provides a way to offload the hardship of an onscreen, D-pad-driven keyboard to the easy and familiar keyboard on the user's phone.
What use cases do you have where a second screen would simplify your user's life? Leave a comment or send me (@benjamintravels) or Varun (@varunkapoor, Team Lead for Nearby Connections) a tweet to continue the discussion.
At Google I/O 2017, we announced a redesign of the Android TV home screen. We expanded the recommendation row concept so that each app can have its own row (or multiple rows) of content on the home screen. Since the release of the new home screen, we have seen increased adoption of the new recommendation channels for Android Oreo across a wide variety of apps.
With more and more apps surfacing high-quality recommendations using the new channels, the legacy recommendation row in the new home screen on Android O will be phased out over the next year.
Currently, when an app creates recommendations with the legacy notification-based API, the content is added to a channel for that app. The channel may already exist if there was recommended content for it when you upgraded from Android N (or below). If there is no channel for the app, it will be automatically generated for you. In either case, the user can't add or remove programs from the channel, but they can move, hide, and show the channel. When an app starts to use the new API to add its own channels, the system removes the auto-generated channel and the app takes over control of the display of its content.
Over the next year, we will phase out the automatic generation of channels. Instead of generating multiple channels, one for each app's legacy recommendations, we will insert one channel for all legacy recommendations. This channel will appear at the bottom of the channel list. Users can move or remove it. When a user upgrades to Android O, the previous recommendation row from Android N (and below) becomes a channel at the bottom of the home screen.
If you have not updated your app to post content to the new channels on the home screen, take a look at our documentation, codelab, and sample to get started.
We look forward to more and more apps taking advantage of the new changes in the home screen. We love to hear your feedback, so please visit the Android TV Developer Community on G+ to share your thoughts and ideas.
In its continuous effort to improve user experience, the Android platform has introduced strict limitations on background services starting in API level 26. Basically, unless your app is running in the foreground, the system will stop all of your app's background services within minutes.
As a result of these restrictions on background services, JobScheduler jobs have become the de facto solution for performing background tasks. For people familiar with services, JobScheduler is generally straightforward to use, except in a few cases, one of which we shall explore presently.
Imagine you are building an Android TV app. Since channels are very important to TV apps, your app should be able to perform at least five different background operations on channels: publish a channel, add programs to a channel, send logs about a channel to your remote server, update a channel's metadata, and delete a channel. Prior to Android 8.0 (Oreo), each of these five operations could be implemented within background services. Starting in API 26, however, you must be judicious in deciding which should be plain old background Services and which should be JobServices.
In the case of a TV app, of the five operations mentioned above, only channel publication can be a plain old background service. For some context, channel publication involves three steps: first, the user clicks a button to start the process; second, the app starts a background operation to create and submit the publication; and third, the user gets a UI to confirm subscription. So as you can see, publishing channels requires user interaction and therefore a visible Activity. Hence, ChannelPublisherService could be an IntentService that handles the background portion. The reason you should not use a JobService here is that a JobService introduces a delay in execution, whereas user interaction usually requires an immediate response from your app.
For the other four operations, however, you should use JobServices; that's because all of them may execute while your app is in the background. So respectively, you should have ChannelProgramsJobService, ChannelLoggerJobService, ChannelMetadataJobService, and ChannelDeletionJobService.
Since all four JobServices above deal with Channel objects, it would be convenient to use the channelId as the jobId for each one of them. But because of the way JobServices are designed in the Android framework, you can't. The following is the official description of jobId:
Application-provided id for this job. Subsequent calls to cancel, or jobs created with the same jobId, will update the pre-existing job with the same id. This ID must be unique across all clients of the same uid (not just the same package). You will want to make sure this is a stable id across app updates, so probably not based on a resource ID.
What the description is telling you is that even though you are using 4 different Java objects (i.e., JobServices), you still cannot use the same channelId as their jobIds. You don't get credit for class-level namespacing.
This indeed is a real problem. You need a stable and scalable way to relate a channelId to its set of jobIds. The last thing you want is to have different channels overwriting each other's operations because of jobId collisions. Were jobId of type String instead of Integer, the solution would be easy: jobId= "ChannelPrograms" + channelId for ChannelProgramsJobService, jobId= "ChannelLogs" + channelId for ChannelLoggerJobService, etc. But since jobId is an Integer and not a String, you have to devise a clever system for generating reusable jobIds for your jobs. And for that, you can use something like the following JobIdManager.
JobIdManager is a class that you tweak according to your app's needs. For this TV app, the basic idea is to use a single channelId across all jobs dealing with Channels. To clarify, let's first look at the code for this sample JobIdManager class, and then discuss it.
public class JobIdManager {

    public static final int JOB_TYPE_CHANNEL_PROGRAMS = 1;
    public static final int JOB_TYPE_CHANNEL_METADATA = 2;
    public static final int JOB_TYPE_CHANNEL_DELETION = 3;
    public static final int JOB_TYPE_CHANNEL_LOGGER = 4;

    public static final int JOB_TYPE_USER_PREFS = 11;
    public static final int JOB_TYPE_USER_BEHAVIOR = 21;

    @IntDef(value = {
            JOB_TYPE_CHANNEL_PROGRAMS, JOB_TYPE_CHANNEL_METADATA,
            JOB_TYPE_CHANNEL_DELETION, JOB_TYPE_CHANNEL_LOGGER,
            JOB_TYPE_USER_PREFS, JOB_TYPE_USER_BEHAVIOR
    })
    @Retention(RetentionPolicy.SOURCE)
    public @interface JobType {
    }

    // 16 - 1 for short. Adjust per your needs.
    private static final int JOB_TYPE_SHIFTS = 15;

    public static int getJobId(@JobType int jobType, int objectId) {
        if (0 < objectId && objectId < (1 << JOB_TYPE_SHIFTS)) {
            return (jobType << JOB_TYPE_SHIFTS) + objectId;
        } else {
            String err = String.format("objectId %s must be between %s and %s",
                    objectId, 0, (1 << JOB_TYPE_SHIFTS));
            throw new IllegalArgumentException(err);
        }
    }
}
As you can see, JobIdManager simply combines a prefix with a channelId to get a jobId. This elegant simplicity, however, is just the tip of the iceberg. Let's consider the assumptions and caveats beneath.
First insight: you must be able to coerce channelId into a Short, so that when you combine channelId with a prefix you still end up with a valid Java Integer. Now of course, strictly speaking, it does not have to be a Short. As long as your prefix and channelId combine into a non-overflowing Integer, it will work. But margin is essential to sound engineering. So unless you truly have no choice, go with a Short coercion. One way you can do this in practice, for objects with large IDs on your remote server, is to define a key in your local database or content provider and use that key to generate your jobIds.
Second insight: your entire app ought to have only one JobIdManager class. That class should generate jobIds for all your app's jobs: whether those jobs have to do with Channels, Users, or Cats and Dogs. The sample JobIdManager class points this out: not all JOB_TYPEs have to do with Channel operations. One job type has to do with user prefs and one with user behavior. The JobIdManager accounts for them all by assigning a different prefix to each job type.
Third insight: for each -JobService in your app, you must have a unique and final JOB_TYPE_ prefix. Again, this must be an exhaustive one-to-one relationship.
The following code snippet from ChannelProgramsJobService demonstrates how to use a JobIdManager in your project. Whenever you need to schedule a new job, you generate the jobId using JobIdManager.getJobId(...).
import android.app.job.JobInfo;
import android.app.job.JobParameters;
import android.app.job.JobScheduler;
import android.app.job.JobService;
import android.content.ComponentName;
import android.content.Context;
import android.os.PersistableBundle;
import android.util.Log;

public class ChannelProgramsJobService extends JobService {

    private static final String CHANNEL_ID = "channelId";
    . . .

    public static void schedulePeriodicJob(Context context,
                                           final int channelId,
                                           String channelName,
                                           long intervalMillis,
                                           long flexMillis) {
        JobInfo.Builder builder = scheduleJob(context, channelId);
        builder.setPeriodic(intervalMillis, flexMillis);

        JobScheduler scheduler =
                (JobScheduler) context.getSystemService(Context.JOB_SCHEDULER_SERVICE);
        if (JobScheduler.RESULT_SUCCESS != scheduler.schedule(builder.build())) {
            //todo what? log to server as analytics maybe?
            Log.d(TAG, "could not schedule program updates for channel " + channelName);
        }
    }

    private static JobInfo.Builder scheduleJob(Context context, final int channelId) {
        ComponentName componentName =
                new ComponentName(context, ChannelProgramsJobService.class);
        final int jobId = JobIdManager
                .getJobId(JobIdManager.JOB_TYPE_CHANNEL_PROGRAMS, channelId);
        PersistableBundle bundle = new PersistableBundle();
        bundle.putInt(CHANNEL_ID, channelId);

        JobInfo.Builder builder = new JobInfo.Builder(jobId, componentName);
        builder.setPersisted(true);
        builder.setExtras(bundle);
        builder.setRequiredNetworkType(JobInfo.NETWORK_TYPE_ANY);
        return builder;
    }

    ...
}
Footnote: Thanks to Christopher Tate and Trevor Johns for their invaluable feedback
adb shell am start -a "android.search.action.GLOBAL_SEARCH" --es query \"The Incredibles\"
import static android.support.v4.content.IntentCompat.EXTRA_START_PLAYBACK;

public class SearchableActivity extends Activity {

    @Override
    protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        if (getIntent() != null) {
            // Retrieve video from getIntent().getData().

            boolean startPlayback =
                    getIntent().getBooleanExtra(EXTRA_START_PLAYBACK, false);
            Log.d(TAG, "Should start playback? " + (startPlayback ? "yes" : "no"));

            if (startPlayback) {
                // Start playback.
                startActivity(...);
            } else {
                // Show details for movie.
                startActivity(...);
            }
        }
        finish();
    }
}
adb shell 'am start -a android.intent.action.VIEW --ez android.intent.extra.START_PLAYBACK true -d <URI> -f 0x14000000'
adb shell 'am start -a android.intent.action.VIEW --ez android.intent.extra.START_PLAYBACK true -d content://com.example.android.assistantplayback/video/2 -n com.example.android.assistantplayback/.SearchableActivity -f 0x14000000'
public class MyMediaSessionCallback extends MediaSessionCompat.Callback {

    private final PlaybackTransportControlGlue<?> mGlue;

    public MyMediaSessionCallback(PlaybackTransportControlGlue<?> glue) {
        mGlue = glue;
    }

    @Override
    public void onPlay() {
        Log.d(TAG, "MediaSessionCallback: onPlay()");
        mGlue.play();
        updateMediaSessionState(...);
    }

    @Override
    public void onPause() {
        Log.d(TAG, "MediaSessionCallback: onPause()");
        mGlue.pause();
        updateMediaSessionState(...);
    }

    @Override
    public void onSeekTo(long position) {
        Log.d(TAG, "MediaSessionCallback: onSeekTo()");
        mGlue.seekTo(position);
        updateMediaSessionState(...);
    }

    @Override
    public void onStop() {
        Log.d(TAG, "MediaSessionCallback: onStop()");
        // Handle differently based on your use case.
    }

    @Override
    public void onSkipToNext() {
        Log.d(TAG, "MediaSessionCallback: onSkipToNext()");
        playAndUpdateMediaSession(...);
    }

    @Override
    public void onSkipToPrevious() {
        Log.d(TAG, "MediaSessionCallback: onSkipToPrevious()");
        playAndUpdateMediaSession(...);
    }
}
Android TV brings rich app experiences and entertainment to the biggest screen in your house, and with Android O, we’re making it even easier for users to access content from their favorite apps. We’ve built a new, content-centric home screen experience for Android TV, and we're bringing the Google Assistant to the platform as well. These features put content that users want to access a few clicks, or spoken words, away.
The new Android TV home screen organizes video content into channels and programs in a way that’s familiar to TV viewers. Each Android TV app can publish multiple channels, which are represented as rows of programs on the home screen. Apps add relevant programs on each channel, and update these programs and channels as users access content or when new content is available. To help engage users, programs can include a video preview, which is automatically played when a user focuses on a program. Users can configure which channels they wish to see on the home screen, and the ordering of channels, so the themes and shows they’re interested in are quick and easy to access.
In addition to channels for your app, the top of the new Android TV home screen includes a quick launch bar for users' favorite apps and a special Watch Next channel. This channel contains programs based on the viewing habits of the user.
The APIs for creating and maintaining channels and programs are part of the TvProvider APIs, which are distributed as an Android Support Library module with Android O. To get started using these APIs, visit the Android O Developer Preview site for an overview, and try out the Android TV Channels and Programs codelab for a first-hand experience building an Android TV app for Android O.
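For a rough idea of what publishing a channel looks like in code, here is a minimal sketch using the TvProvider support library (android.support.media.tv). The display name, deep-link URIs, and artwork URL are placeholders, and error handling is omitted.

import android.content.ContentUris
import android.content.Context
import android.net.Uri
import android.support.media.tv.Channel
import android.support.media.tv.PreviewProgram
import android.support.media.tv.TvContractCompat

fun publishChannel(context: Context): Long {
    val channel = Channel.Builder()
        .setType(TvContractCompat.Channels.TYPE_PREVIEW)
        .setDisplayName("New Releases")                         // placeholder
        .setAppLinkIntentUri(Uri.parse("myapp://channels/new")) // placeholder
        .build()

    val channelUri = context.contentResolver.insert(
        TvContractCompat.Channels.CONTENT_URI, channel.toContentValues())
    val channelId = ContentUris.parseId(requireNotNull(channelUri))

    // Ask the system to make this channel visible on the home screen.
    TvContractCompat.requestChannelBrowsable(context, channelId)
    return channelId
}

fun publishProgram(context: Context, channelId: Long) {
    val program = PreviewProgram.Builder()
        .setChannelId(channelId)
        .setType(TvContractCompat.PreviewPrograms.TYPE_MOVIE)
        .setTitle("Example Movie")                                     // placeholder
        .setPosterArtUri(Uri.parse("https://example.com/poster.png")) // placeholder
        .setIntentUri(Uri.parse("myapp://programs/example-movie"))    // placeholder
        .build()

    context.contentResolver.insert(
        TvContractCompat.PreviewPrograms.CONTENT_URI, program.toContentValues())
}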
Later this year, Nexus Players will receive the new Android TV home experience as an OTA update. If you wish to build and test apps for the new interface today, however, you can use the Android TV emulator or Nexus Player device images that are part of the latest Android O Developer Preview.
The Google Assistant on Android TV, coming later this year, will allow users to quickly find and access content using their voice. Because the Assistant is context-aware, it can help users narrow down what content to play. Users will also be able to access the Assistant to control playback, even while a video or music is playing. And since the Assistant can control compatible smart home devices, a simple voice request can dim the lights to create an ideal movie viewing environment. When the Google Assistant comes to Android TV, it will launch in the US on Android devices running M, N, and O.
We're looking forward to seeing how developers take advantage of the new Android TV home screen. We welcome feedback, so please visit the Android TV Developer Community on G+ to share your thoughts and ideas!
Posted by: Dave Burke, VP of Engineering
With billions of Android devices around the world, Android has surpassed our wildest expectations. Today at Google I/O, we showcased a number of ways we’re pushing Android forward, with the O Release, new tools for developers to help create more performant apps, and an early preview of a project we call Android Go -- a new experience that we’re building for entry-level devices.
Posted by Nick Felker and Sachit Mishra, Developer Programs Engineers
The TV Input Framework (TIF) on Android TV makes it easy for third-party app developers to create their own TV channels with any type of linear media. It introduces a new way for apps to engage with users with a high-quality channel surfing experience, and it gives users a single interface to browse and watch all of their channels.
To help developers get started with building TV channels, we have created the TV Input Framework Companion Library, which includes a number of helper methods and classes to make the development process as easy as possible.
This library provides standard classes to set up a background task that updates the program guide and an interface that helps integrate your media player with the playback controller, and it supports the new TV Recording APIs that are available in Android Nougat. It includes everything you need to start showing your content in your Android TV's live TV app.
(Note: source from android-tv-sample-inputs sample)
To get started, take a look at the sample app and documentation. The sample demonstrates how to extend this library to create custom channels and manage video playback. Developers can immediately get started with the sample app by updating the XMLTV file with their own content or dynamically creating channels in the SampleJobService.
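To give a rough idea of the dynamic approach, the sketch below extends the library's EpgSyncJobService the way the sample's SampleJobService does, returning the channels and programs to sync. It assumes the companion library's Channel, Program, and EpgSyncJobService classes; the method shapes follow the sample as described and may need adjusting to the current library API, and the channel and program values are placeholders.

// Hypothetical sketch modeled on the sample's SampleJobService.
class MyJobService : EpgSyncJobService() {

    override fun getChannels(): List<Channel> {
        // Describe the channels this TV input provides.
        return listOf(
            Channel.Builder()
                .setDisplayName("My Playlist")   // placeholder
                .setOriginalNetworkId(1)
                .build())
    }

    override fun getProgramsForChannel(
        channelUri: Uri,
        channel: Channel,
        startMs: Long,
        endMs: Long
    ): List<Program> {
        // Return the programs to show in the guide for this channel and time window.
        return listOf(
            Program.Builder()
                .setTitle("Episode 1")           // placeholder
                .setStartTimeUtcMillis(startMs)
                .setEndTimeUtcMillis(startMs + TimeUnit.MINUTES.toMillis(30))
                .build())
    }
}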
You can include this library in your app by copying the library directory from the sample into your project root directory. Then, add the following to your project's settings.gradle file:
include ':library'
In your app's build.gradle file, add the following to your dependencies:
compile project(':library')
Android TV continues to grow, and whether your app has on-demand or live media, TIF is a great way to keep users engaged with your content. One partner, for example, Haystack TV, recently integrated TIF into their app, and it now accounts for 16% of watch time for new users on Android TV.
Check out our TV developer site to learn more about Android TV, and join our developer community on Google+ at g.co/androidtvdev to discuss this library and other topics with TV developers.
Posted by Josh Gordon, Developer Advocate
Channel surfing is a popular way of watching TV. You pick up the remote, lean back, and flip through channels to see what’s on. On Android TV, app developers can create their own channel-like experiences using the TV Input Framework.
To the user, the channels you create look and feel just like regular TV channels. But behind the scenes, they stream video over the internet. For example, you can create a channel from a video playlist.
Watch this DevByte for an overview of how to build a channel, and see the sample app and developer training for more info. The sample shows how to work with a variety of media formats, including HLS, MPEG-DASH, and HTTP Progressive.
If you already have an app that streams video, consider also making your content available as a channel. It’s a great opportunity to increase engagement. We’re excited to see what you develop, and look forward to seeing your content on the big screen!
Lily Sheringham, Developer Marketing at Google Play
Editor’s note: This is another post in our series featuring tips from developers finding success on Google Play. This week, we’re sharing advice from Telltale Games on how to create a successful game on Android TV. -Ed.
With new Android hardware being released from the likes of Sony, Sharp, and Philips amongst others, Android TV and Google Play can help you bring your game to users right in their living rooms through a big screen experience.
The recent Marshmallow update for Android TV makes it easier than ever to extend your new or existing games and apps for TV. It's important to understand how your game is presented in the user interface and how it can help users get to the content they want quickly.
Telltale Games is a US-founded game developer and publisher based in San Francisco, California. They’re well known for the popular series ‘The Walking Dead’ and ‘Game of Thrones’, which was created in partnership with HBO.
Zac Litton, VP of Technology at Telltale Games, shares his tips for creating and launching your games with Android TV.
With the recently released Android TV codelab and online class from Udacity, you can learn how to bring your existing mobile game to Android TV in just four hours. Find out more about how to build games for Android TV and how to publish them using familiar tools and processes in Google Play.
Posted by Anirudh Dewani, Developer Advocate
Android 6.0 introduces a new runtime permission model that gives users more granular control over the permissions requested by their apps and leads to faster app installs. Users can also revoke these permissions from Settings at any time. If an app running on the M Preview supports the new permissions model, the user does not have to grant any permissions when they install or upgrade the app. Developers should check for permissions that require a runtime grant from users, and request them if the app doesn’t already have them.
To list all permissions that require runtime grant from users on Android 6.0 -
adb shell pm list permissions -g -d
Apps should generally request as few permissions as possible. Voice search is an integral part of the Android TV content discovery experience. When using the internal SpeechRecognizer to enable voice search, apps must declare the RECORD_AUDIO permission in the manifest. RECORD_AUDIO requires an explicit user grant at runtime in Android 6.0. When using the Android TV Leanback support library, apps can eliminate the need to request RECORD_AUDIO at runtime by using SpeechRecognitionCallback instead of SpeechRecognizer.
Commit from Android TV Leanback Sample repository.
mFragment = (SearchFragment) getFragmentManager()
        .findFragmentById(R.id.search_fragment);

if (!USE_INTERNAL_SPEECH_RECOGNIZER) {
    mSpeechRecognitionCallback = new SpeechRecognitionCallback() {
        @Override
        public void recognizeSpeech() {
            if (DEBUG) Log.v(TAG, "recognizeSpeech");
            // ACTION_RECOGNIZE_SPEECH
            startActivityForResult(mFragment.getRecognizerIntent(), REQUEST_SPEECH);
        }
    };
    mFragment.setSpeechRecognitionCallback(mSpeechRecognitionCallback);
}
When SpeechRecognitionCallback is set, the Android Leanback support library lets your activity process the voice search action instead of using the internal SpeechRecognizer. The app can then use RecognizerIntent to support speech recognition.
Posted by Maru Ahues, Media Developer Advocate
When it comes to TV, content is king. But to enjoy great content, you first need to find it. We created Android TV with that in mind: a truly smart TV should deliver interesting content to users. Today, EPIX® joins a growing list of apps that use the Android TV platform to make it easy to enjoy movies, TV shows, sports highlights, music videos and more.
Think of your favorite movie. Now try to locate it in one of your streaming apps. If you have a few apps to choose from, it might take some hunting before you can watch that movie. With Android TV, we want to make it easier to be entertained. Finding ‘Teenage Mutant Ninja Turtles’ should be as easy as picking up the remote, saying ‘Teenage Mutant Ninja Turtles’ and letting the TV find it.
Searching for ‘Teenage Mutant Ninja Turtles’ shows results from Google Play and EPIX
You can drive users directly to content within your app by making it searchable from the Android TV search interface. Join app developers like EPIX, Sky News, YouTube, and Hulu Plus who are already making content discovery a breeze.
When users want suggestions for content, the recommendations row on Android TV helps them quickly access relevant content right from the home screen. Recommendations are based on the user’s recent and frequent usage behaviors, as well as content preferences.
Recommendations from installed apps, like EPIX, appear in the Android TV home screen
Android TV allows developers to create recommendations for movies, TV shows, music and other types of content. Your app can provide recommendations to users to help get your content noticed. As an example, EPIX shows Hollywood movies, NBA Game Time serves up basketball highlights, Washington Post offers video summaries of world events, and YouTube suggests videos based on your subscriptions and viewing history.
With less than one year since the consumer launch of Android TV, we’re already building upon a simpler, smarter and more personalized TV experience, and we can’t wait to see what you create.
Posted by Joshua Gordon, Developer Advocate
Haystack TV is a small, six-person startup with an ambitious goal: personalize the news. Traditionally, watching news on TV means viewing a list of stories curated by the network. Wouldn’t it be better if you could watch a personalized news channel, based on interesting YouTube stories?
Haystack already had a mobile app, but entering the living room space seemed daunting. Although “Smart TVs” have been on the market for a while, they remain challenging for developers to work with. Many hardware OEMs have proprietary platforms, but Android TV is different. It’s an open ecosystem with great developer resources. Developers can reach millions of users with familiar Android APIs. If you have an existing Android app, it’s easy to bring it to the living room.
Two weeks was all it took for Haystack TV to bring their mobile app to Android TV. That includes building an immersive, cinematic UI (a task greatly simplified by the Android framework). Since launching on Android TV, Haystack TV’s viewership is growing at 40% per month. Previously, users were spending about 40 minutes watching content on mobile per week. Now that’s up to 80 minutes in the living room. Their longest engagements are through Chromecast and Android TV.
Hear from Daniel Barreto, CEO of Haystack TV, on developing for Android TV
Haystack TV’s success on Android TV is a great example of how the Android multi-form factor developer experience shines. Once you’ve learned the ropes of writing Android apps, developing for another form factor (Wear, Auto, TV) is simple.
Haystack TV’s UI is smooth and cinematic. How were they able to build a great one so quickly? Developing an immersive UI/UX with Android TV is surprisingly easy. The Leanback support library provides fragments for browsing content, showing a details screen, and search. You can use these to get transitions and animations almost for free. To learn more about building UIs for Android TV, watch the Using the Leanback Library DevByte and check out the code samples.
Browsing recommended stories
The recommendations row is a central feature of the Android TV home screen. It’s the first thing users see when they turn on their TVs. You can surface content to appear on the recommendations row by implementing the recommendation service. For example, your app can suggest videos your users will want to watch next (say, the next episode in a series, or a related news story). This is great for getting noticed and increasing engagements.
Haystack’s content on the recommendations row
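As a rough sketch of how this pre-Oreo recommendation model works, a recommendation is essentially a specially categorized notification posted from your recommendation service. In the hypothetical helper below, the notification id, artwork bitmap, small icon resource, and PendingIntent are all placeholders.

fun postRecommendation(context: Context, id: Int, title: String, description: String,
                       poster: Bitmap, pendingIntent: PendingIntent) {
    val notification = NotificationCompat.Builder(context)
        .setCategory(NotificationCompat.CATEGORY_RECOMMENDATION)
        .setContentTitle(title)
        .setContentText(description)
        .setLargeIcon(poster)
        .setSmallIcon(R.drawable.ic_recommendation)  // placeholder resource
        .setContentIntent(pendingIntent)             // opens playback or details
        .setLocalOnly(true)
        .setOngoing(true)
        .setStyle(NotificationCompat.BigPictureStyle().bigPicture(poster))
        .build()

    val manager =
        context.getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager
    manager.notify(id, notification)
}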
How can users find their favorite movie or show from a library of thousands? On Android TV, they can search for it using their voice. This is much faster and more relaxing than typing on the screen with a remote control! In addition to providing in-app search, your app can surface content to appear on the global search results page. The framework takes care of speech recognition for you and delivers the result to your app as a plain text string.
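For the in-app side, a minimal sketch using the Leanback SearchSupportFragment (SearchFragment works the same way) might look like the following. loadQueryResults() and the adapter contents are placeholders for your own catalog lookup.

class VideoSearchFragment : SearchSupportFragment(),
    SearchSupportFragment.SearchResultProvider {

    private val rowsAdapter = ArrayObjectAdapter(ListRowPresenter())

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setSearchResultProvider(this)
    }

    override fun getResultsAdapter(): ObjectAdapter = rowsAdapter

    override fun onQueryTextChange(newQuery: String): Boolean {
        loadQueryResults(newQuery)  // update results as the user speaks or types
        return true
    }

    override fun onQueryTextSubmit(query: String): Boolean {
        loadQueryResults(query)
        return true
    }

    private fun loadQueryResults(query: String) {
        // Query your catalog and populate rowsAdapter with ListRow items (placeholder).
    }
}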
Android TV makes it possible for small startups to create apps for the living room. There are extensive developer resources. For an overview, watch the Introduction to Android TV DevByte. For details, see the developer training docs. Watch this episode of Coffee with a Googler to learn more about the vision for the platform. To get started on your app, visit developer.android.com/tv.
Posted by Greg Hartrell, Senior Product Manager of Google Play Games
Everyone has a gaming-ready device in their pocket today. In fact, of the one billion Android users in more than 190 countries, three out of four are gamers. This allows game developers to reach a global audience and build a successful business. Over the past year, we paid out more than $7 billion to developers distributing apps and games on Google Play.
At our Developer Day during the Game Developers Conference (GDC) taking place this week, we announced a set of new features for Google Play Games and AdMob to power great gaming. Rolling out over the next few weeks, these launches can help you better measure and monetize your games.
“Player Analytics has helped me hone in on BombSquad’s shortcomings, right the ship, and get to a point where I can financially justify making the games I want to make.”
Eric Froemling, BombSquad developer
Google Play Games is a set of services that help game developers reach and engage their audience. To further that effort, we’re introducing Player Analytics, giving developers access to powerful analytics reports to better measure overall business success and understand in-game player behavior. Launching in the next few weeks in the Google Play Developer Console, the new tool will give indie developers and big studios better insight into how their players are progressing, spending, and churning; access to critical metrics like ARPPU and sessions per user; and assistance setting daily revenue targets.
BombSquad, created by a one-person game studio in San Francisco, was able to more than double its revenue per user on Google Play after implementing design changes informed by beta testing Player Analytics.
After optimizing your game for performance, it’s important to build a smarter monetization experience tailored to each user. That’s why we’re announcing three important updates to the AdMob platform:
"Atari creates great game experiences for our broad audience. We're happy to be partnering with Google and be the first games company to take part in the native ads beta and help monetize games in a way that enhances our users' experience."
Todd Shallbetter, Chief Operating Officer, Atari
Last year, we launched Android TV as a way to bring Android into the living room, optimizing games for the big screen. The OEM ecosystem is growing, with smart TVs and micro-consoles announced from partners like Sony, TPVision/Philips and Razer.
To make gaming even more dynamic on Android TV, we’re launching the Nearby Connections API with the upcoming update of Google Play services. With this new protocol, games can seamlessly connect smartphones and tablets as second-screen controls to the game running on your TV. Beach Buggy Racing is a fun and competitive multiplayer racing game on Android TV that plans to use Nearby Connections in their summer release, and we are looking forward to more living room multiplayer games taking advantage of mobile devices as second screen controls.
At Google I/O last June, we also unveiled Google Cardboard with the goal of making virtual reality (VR) accessible to everyone. With Cardboard, we are giving game developers more opportunities to build unique and immersive experiences from nothing more than a piece of cardboard and your smartphone. The Cardboard SDKs for Android and Unity enable you to easily build VR apps or adapt your existing app for VR.
Visit us at the Google booth #502 on the Expo floor to get hands-on experience with Project Tango, Niantic Labs and Cardboard starting on Wednesday, March 4. Our teams from AdMob, AdWords, Analytics, Cloud Platform and Firebase will also be available to answer any of your product questions.
For more information on what we’re doing at GDC, please visit g.co/dev/gdc2015.