Today we are releasing updates to multiple components of the Android SDK:
Android 2.0.1 is a minor update to Android 2.0. This update includes several bug fixes and behavior changes, such as application resource selection based on API level and changes to the value of some Bluetooth-related constants. For more detailed information, please see the Android 2.0.1 release notes.
To differentiate its behavior from Android 2.0, the API level of Android 2.0.1 is 6. All Android 2.0 devices will be updated to 2.0.1 before the end of the year, so developers will no longer need to support Android 2.0 at that time. Of course, developers of applications affected by the behavior changes should start compiling and testing their apps immediately.
We are also providing an update to the Android 1.6 SDK component. Revision 2 includes fixes to the compatibility mode for applications that don't support multiple screen sizes, as well as SDK fixes. Please see the Android 1.6, revision 2 release notes for the full list of changes.
Finally, we are also releasing an update to the SDK Tools, now in revision 4. This is a minor update with mostly bug fixes in the SDK Manager. A new version of the Eclipse plug-in that embeds those fixes is also available. For complete details, please see the SDK Tools, revision 4 and ADT 0.9.5 release notes.
One more thing: you can now follow us on Twitter at @AndroidDev.
Android 1.6 introduces numerous enhancements and bug fixes in the UI framework. Today, I'd like to highlight three improvements in particular.
The UI toolkit introduced in Android 1.6 is aware of which views are opaque and can use this information to avoid drawing views that the user will not be able to see. Before Android 1.6, the UI toolkit would sometimes perform unnecessary operations by drawing a window background when it was obscured by a full-screen opaque view. A workaround was available to avoid this, but the technique was limited and required work on your part. With Android 1.6, the UI toolkit determines whether a view is opaque by simply querying the opacity of the background drawable. If you know that your view is going to be opaque but that information does not depend on the background drawable, you can simply override the method called isOpaque():
@Override
public boolean isOpaque() {
    return true;
}
The value returned by isOpaque() does not have to be constant and can change at any time. For instance, the implementation of ListView in Android 1.6 indicates that a list is opaque only when the user is scrolling it.
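To make this concrete, here is a minimal sketch of a hypothetical custom view that reports itself as opaque only while it has full-size artwork to draw; the class and field names are invented for illustration:

public class CoverView extends View {
    // Assumed to always be either null or a bitmap covering the entire view
    private Bitmap mCover;

    public CoverView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    public void setCover(Bitmap cover) {
        mCover = cover;
        invalidate();
    }

    @Override
    protected void onDraw(Canvas canvas) {
        if (mCover != null) {
            canvas.drawBitmap(mCover, 0.0f, 0.0f, null);
        }
    }

    @Override
    public boolean isOpaque() {
        // Opaque only while every pixel of the view is covered by the bitmap
        return mCover != null;
    }
}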
Updated: Our apologies, we spoke too soon about isOpaque(). It will be available in a future update to the Android platform.
RelativeLayout is the most versatile layout offered by the Android UI toolkit and can be successfully used to reduce the number of views created by your applications. This layout used to suffer from various bugs and limitations, sometimes making it difficult to use without having some knowledge of its implementation. To make your life easier, Android 1.6 comes with a revamped RelativeLayout. This new implementation not only fixes all known bugs in RelativeLayout (let us know when you find new ones) but also addresses its major limitation: the fact that views had to be declared in a particular order. Consider the following XML layout:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="64dip"
    android:padding="6dip">

    <TextView
        android:id="@+id/band"
        android:layout_width="fill_parent"
        android:layout_height="26dip"
        android:layout_below="@+id/track"
        android:layout_alignLeft="@id/track"
        android:layout_alignParentBottom="true"
        android:gravity="top"
        android:text="The Airborne Toxic Event" />

    <TextView
        android:id="@id/track"
        android:layout_marginLeft="6dip"
        android:layout_width="fill_parent"
        android:layout_height="26dip"
        android:layout_toRightOf="@+id/artwork"
        android:textAppearance="?android:attr/textAppearanceMedium"
        android:gravity="bottom"
        android:text="Sometime Around Midnight" />

    <ImageView
        android:id="@id/artwork"
        android:layout_width="56dip"
        android:layout_height="56dip"
        android:layout_gravity="center_vertical"
        android:src="@drawable/artwork" />

</RelativeLayout>
This code builds a very simple layout—an image on the left with two lines of text stacked vertically. This XML layout is perfectly fine and contains no errors. Unfortunately, Android 1.5's RelativeLayout is incapable of rendering it correctly, as shown in the screenshot below.
The problem is that this layout uses forward references. For instance, the "band" TextView is positioned below the "track" TextView but "track" is declared after "band" and, in Android 1.5, RelativeLayout does not know how to handle this case. Now look at the exact same layout running on Android 1.6:
As you can see, Android 1.6 is now able to handle forward references. The result on screen is exactly what you would expect when writing the layout.
Setting up a click listener on a button is a very common task, but it requires quite a bit of boilerplate code:
findViewById(R.id.myButton).setOnClickListener(new View.OnClickListener() {
    public void onClick(View v) {
        // Do stuff
    }
});
One way to reduce the amount of boilerplate is to share a single click listener between several buttons. While this technique reduces the number of classes, it still requires a fair amount of code and it still requires giving each button an id in your XML layout file:
View.OnClickListener handler = new View.OnClickListener() {
    public void onClick(View v) {
        switch (v.getId()) {
            case R.id.myButton:
                // doStuff
                break;
            case R.id.myOtherButton:
                // doStuff
                break;
        }
    }
};

findViewById(R.id.myButton).setOnClickListener(handler);
findViewById(R.id.myOtherButton).setOnClickListener(handler);
With Android 1.6, none of this is necessary. All you have to do is declare a public method in your Activity to handle the click (the method must have one View argument):
class MyActivity extends Activity {
    public void myClickHandler(View target) {
        // Do stuff
    }
}
And then reference this method from your XML layout:
<Button android:onClick="myClickHandler" />
This new feature reduces both the amount of Java and XML you have to write, leaving you more time to concentrate on your application.
The Android team is committed to helping you write applications in the easiest and most efficient way possible. We hope you find these improvements useful and we're excited to see your applications on Android Market.
You may have heard that one of the key changes introduced in Android 1.6 is support for new screen sizes. This is one of the things that has me very excited about Android 1.6 since it means Android will start becoming available on so many more devices. However, as a developer, I know this also means a bit of additional work. That's why we've spent quite a bit of time making it as easy as possible for you to update your apps to work on these new screen sizes.
To date, all Android devices (such as the T-Mobile G1 and Samsung I7500, among others) have had HVGA (320x480) screens. The essential change in Android 1.6 is that we've expanded support to include three different classes of screen sizes: small screens with a lower resolution than HVGA (such as QVGA), normal screens (HVGA, like all of today's devices), and large screens that are physically bigger than HVGA.
Any given device will fall into one of those three groups. As a developer, you can control if and how your app appears to devices in each group by using a few tools we've introduced in the Android framework APIs and SDK. The documentation at the developer site describes each of these tools in detail, but here they are in a nutshell: new "small", "normal", and "large" resource qualifiers (along with density qualifiers) that let you provide resources tailored to each class of screen; a new <supports-screens> element for AndroidManifest.xml that declares which screen classes your application supports; and a compatibility mode that lets existing HVGA applications run unmodified on larger screens.
The documentation also provides a quick checklist and testing tips for developers to ensure their apps will run correctly on devices of any screen size.
Once you've upgraded your app using Android 1.6 SDK, you'll need to make sure your app is only available to users whose phones can properly run it. To help you with that, we've also added some new tools to Android Market.
Until the next time you upload a new version of your app to Android Market, we will assume that it works for normal-class screen sizes. This means users with normal-class and large-class screens will have access to these apps. Devices with "large" screens simply run these apps in a compatibility mode, which simulates an HVGA environment on the larger screen.
Devices with small-class screens, however, will only be shown apps which explicitly declare (via the AndroidManifest) that they will run properly on small screens. In our studies, we found that "squeezing" an app designed for a larger screen onto a smaller screen often produces a bad result. To prevent users with small screens from getting a bad impression of your app (and reviewing it negatively!), Android Market makes sure that they can't see it until you upload a new version that declares itself compatible.
We expect small-class screens, as well as devices with additional resolutions listed in Table 1 of the developer document, to hit the market in time for the holiday season. Note that not all devices will be upgraded to Android 1.6 at the same time; there will be a significant number of users still on Android 1.5 devices. To use the same apk to target both Android 1.5 and Android 1.6 devices, build your apps using the Android 1.5 SDK and test them on both the Android 1.5 and 1.6 system images to make sure they continue to work well on both types of devices. If you want to target small-class devices like the HTC Tattoo, please build your app using the Android 1.6 SDK. Note that if your application requires Android 1.6 features but does not support a particular screen class, you need to set the appropriate attributes to false. To use optimized assets for normal-class, high density devices like WVGA, or for low density devices, please use the Android 1.6 SDK.
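For reference, here is a minimal sketch of what such a declaration could look like in AndroidManifest.xml for an application that has been tested on normal and large screens but not yet adapted to small ones; the values are purely illustrative and should match what you have actually tested:

<!-- Declared inside the <manifest> element; values shown are illustrative -->
<supports-screens
    android:smallScreens="false"
    android:normalScreens="true"
    android:largeScreens="true" />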
Touch screens are a great way to interact with applications on mobile devices. With a touch screen, users can easily tap, drag, fling, or slide to quickly perform actions in their favorite applications. But it's not always that easy for developers. With Android, it's easy to recognize simple actions, like a swipe, but it's much more difficult to handle complicated gestures, which also require developers to write a lot of code. That's why we have decided to introduce a new gestures API in Android 1.6. This API, located in the new package android.gesture, lets you store, load, draw and recognize gestures. In this post I will show you how you can use the android.gesture API in your applications. Before going any further, you should download the source code of the examples.
The Android 1.6 SDK comes with a new application pre-installed on the emulator, called Gestures Builder. You can use this application to create a set of pre-defined gestures for your own application. It also serves as an example of how to let the user define his own gestures in your applications. You can find the source code of Gestures Builder in the samples directory of Android 1.6. In our example we will use Gestures Builder to generate a set of gestures for us (make sure to create an AVD with an SD card image to use Gestures Builder). The screenshot below shows what the application looks like after adding a few gestures:
As you can see, a gesture is always associated with a name. That name is very important because it identifies each gesture within your application. The names do not have to be unique; actually, it can be very useful to have several gestures with the same name to increase the precision of the recognition. Every time you add or edit a gesture in the Gestures Builder, a file is generated on the emulator's SD card, at /sdcard/gestures. This file contains the description of all the gestures, and you will need to package it inside your application, in the /res/raw resources directory.
Now that you have a set of pre-defined gestures, you must load it inside your application. This can be achieved in several ways but the easiest is to use the GestureLibraries class:
mLibrary = GestureLibraries.fromRawResource(this, R.raw.spells);
if (!mLibrary.load()) {
    finish();
}
In this example, the gesture library is loaded from the file /res/raw/spells. You can easily load libraries from other sources, like the SD card, which is very important if you want your application to be able to save the library; a library loaded from a raw resource is read-only and cannot be modified. The following diagram shows the structure of a library:
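If you want a library that your application can modify and save, you can for instance back it by a file on the SD card. Here is a minimal sketch; the path and the entry name are hypothetical:

mLibrary = GestureLibraries.fromFile("/sdcard/myapp/gestures");
if (!mLibrary.load()) {
    // The file may not exist yet; it will be created by the first save()
}

// Later, once the user has drawn a new gesture in an overlay:
mLibrary.addGesture("action_custom", gesture);
if (!mLibrary.save()) {
    // Saving failed, for instance because the SD card is unmounted
}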
To start recognizing gestures in your application, all you have to do is add a GestureOverlayView to your XML layout:
<android.gesture.GestureOverlayView
    android:id="@+id/gestures"
    android:layout_width="fill_parent"
    android:layout_height="0dip"
    android:layout_weight="1.0" />
Notice that the GestureOverlayView is not part of the usual android.widget package. Therefore, you must use its fully qualified name. A gesture overlay acts as a simple drawing board on which the user can draw his gestures. You can tweak several visual properties, like the color and the width of the stroke used to draw gestures, and register various listeners to follow what the user is doing. The most commonly used listener is GestureOverlayView.OnGesturePerformedListener which fires whenever a user is done drawing a gesture:
GestureOverlayView gestures = (GestureOverlayView) findViewById(R.id.gestures);
gestures.addOnGesturePerformedListener(this);
When the listener fires, you can ask the GestureLibrary to try to recognize the gesture. In return, you will get a list of Prediction instances, each with a name - the same name you entered in the Gestures Builder - and a score. The list is sorted by descending scores; the higher the score, the more likely the associated gesture is the one the user intended to draw. The following code snippet demonstrates how to retrieve the name of the first prediction:
public void onGesturePerformed(GestureOverlayView overlay, Gesture gesture) {
    ArrayList<Prediction> predictions = mLibrary.recognize(gesture);

    // We want at least one prediction
    if (predictions.size() > 0) {
        Prediction prediction = predictions.get(0);
        // We want at least some confidence in the result
        if (prediction.score > 1.0) {
            // Show the spell
            Toast.makeText(this, prediction.name, Toast.LENGTH_SHORT).show();
        }
    }
}
In this example, the first prediction is taken into account only if its score is greater than 1.0. The threshold you use is entirely up to you, but know that scores lower than 1.0 are typically poor matches. And this is all the code you need to create a simple application that can recognize pre-defined gestures (see the source code of the project GesturesDemo):
In the example above, the GestureOverlayView was used as a normal view, embedded inside a LinearLayout. However, as its name suggests, it can also be used as an overlay on top of other views. This can be useful to recognize gestures in a game or just anywhere in the UI of an application. In the second example, called GesturesListDemo, we'll create an overlay on top of a list of contacts. We start again in Gestures Builder to create a new set of pre-defined gestures:
And here is what the XML layout looks like:
<android.gesture.GestureOverlayView
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/gestures"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:gestureStrokeType="multiple"
    android:eventsInterceptionEnabled="true"
    android:orientation="vertical">

    <ListView
        android:id="@android:id/list"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent" />

</android.gesture.GestureOverlayView>
In this application, the gestures view is an overlay on top of a regular ListView. The overlay also specifies a few properties that we did not need before:
- gestureStrokeType: set to "multiple" to indicate that we want to recognize gestures made of several strokes
- eventsInterceptionEnabled: when set to true, the overlay intercepts the touch events of its children as soon as it knows the user is really drawing a gesture rather than, say, scrolling the list
- orientation: indicates the scroll orientation of the views underneath; here the list scrolls vertically, so any mostly horizontal gesture (like action_delete) can immediately be recognized as a gesture
The code used to load and set up the gestures library and overlay is exactly the same as before. The only difference is that we now check the name of the predictions to know what the user intended to do:
public void onGesturePerformed(GestureOverlayView overlay, Gesture gesture) {
    ArrayList<Prediction> predictions = mLibrary.recognize(gesture);
    if (predictions.size() > 0 && predictions.get(0).score > 1.0) {
        String action = predictions.get(0).name;
        if ("action_add".equals(action)) {
            Toast.makeText(this, "Adding a contact", Toast.LENGTH_SHORT).show();
        } else if ("action_delete".equals(action)) {
            Toast.makeText(this, "Removing a contact", Toast.LENGTH_SHORT).show();
        } else if ("action_refresh".equals(action)) {
            Toast.makeText(this, "Reloading contacts", Toast.LENGTH_SHORT).show();
        }
    }
}
The user is now able to draw his gestures on top of the list without interfering with the scrolling:
The overlay even gives visual clues as to whether the gesture is considered valid for recognition. When the overlay's orientation is vertical, for instance, a single vertical stroke cannot be recognized as a gesture and is therefore drawn with a translucent color:
Adding support for gestures in your application is easy and can be a valuable addition. The gestures API does not even have to be used to recognize complex shapes; it will work equally well to recognize simple swipes. We are very excited by the possibilities the gestures API offers, and we're eager to see what cool applications the community will create with it.
Today the Android 1.6 NDK, release 1, is available for download from the Android developer site.
To recap, the NDK is a companion to the SDK that provides tools to generate and embed native ARM machine code within your application packages. This native code has the same restrictions as the VM code, but can execute certain operations much more rapidly. This is useful if you're doing heavy computations, digital processing, or even porting existing code bases written in C or C++.
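To give a sense of what this looks like from the Java side, here is a minimal, hypothetical sketch of calling into a native library built with the NDK; the library name and method are made up for illustration, and the matching C implementation is not shown:

public class NativeFib {
    static {
        // Loads libfib.so, which is built with the NDK and packaged inside the apk
        System.loadLibrary("fib");
    }

    // Implemented in C and compiled to native ARM code by the NDK build tools
    public static native long fibonacci(int n);
}

// Somewhere in your application code:
long result = NativeFib.fibonacci(30);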
If you already use the Android 1.5 NDK, upgrading to this release is highly recommended. Among other improvements, it adds native headers and libraries for OpenGL ES 1.1, which you can use on devices running Android 1.6 and later.
If you have any questions, please join us in the Android NDK forum.
The Android 1.6 SDK includes a tool called zipalign that optimizes the way an application is packaged. Doing this enables Android to interact with your application more efficiently and thus has the potential to make your application and the overall system run faster. We strongly encourage you to use zipalign on both new and already published applications and to make the optimized version available—even if your application targets a previous version of Android. We'll get into more detail on what zipalign does, how to use it, and why you'll want to do so in the rest of this post.
In Android, data files stored in each application's apk are accessed by multiple processes: the installer reads the manifest to handle the permissions associated with that application; the Home application reads resources to get the application's name and icon; the system server reads resources for a variety of reasons (e.g. to display that application's notifications); and last but not least, the resource files are obviously used by the application itself.
The resource-handling code in Android can efficiently access resources when they're aligned on 4-byte boundaries by memory-mapping them. But for resources that are not aligned (i.e. when zipalign hasn't been run on an apk), it has to fall back to explicitly reading them—which is slower and consumes additional memory.
For an application developer like you, this fallback mechanism is very convenient. It provides a lot of flexibility by allowing for several different development methods, including those that don't include aligning resources as part of their normal flow.
Unfortunately, the situation is reversed for users—reading resources from unaligned apks is slow and takes a lot of memory. In the best case, the only visible result is that both the Home application and the unaligned application launch slower than they otherwise should. In the worst case, installing several applications with unaligned resources increases memory pressure, thus causing the system to thrash around by having to constantly start and kill processes. The user ends up with a slow device with a poor battery life.
Luckily, it's very easy to align the resources:
- Using ADT: the export wizard in ADT, which produces a signed release package of your application (and is also reachable from the first page of the AndroidManifest.xml editor), aligns the package automatically.
- Using Ant: the Ant build script for projects that target Android 1.6 signs and aligns release packages automatically, provided key.store and key.alias are set in build.properties.
- Manually: zipalign is available in the tools folder of the Android 1.6 SDK. Run it on an apk after signing it:
zipalign -v 4 source.apk destination.apk
- Finally, to verify that an existing package is properly aligned:
zipalign -c -v 4 application.apk
We encourage you to manually run zipalign on your currently published applications and to make the newly aligned versions available to users. And don't forget to align any new applications going forward!
We've introduced a new feature in version 1.6 of the Android platform: Text-To-Speech (TTS). Also known as "speech synthesis", TTS enables your Android device to "speak" text of different languages.
Before we explain how to use the TTS API itself, let's first review a few aspects of the engine that will be important to your TTS-enabled application. We will then show how to make your Android application talk and how to configure the way it speaks.
The TTS engine that ships with the Android platform supports a number of languages: English, French, German, Italian and Spanish. Also, depending on which side of the Atlantic you are on, American and British accents for English are both supported.
The TTS engine needs to know which language to speak, as a word like "Paris", for example, is pronounced differently in French and English. So the voice and dictionary are language-specific resources that need to be loaded before the engine can start to speak.
Although all Android-powered devices that support the TTS functionality ship with the engine, some devices have limited storage and may lack the language-specific resource files. If a user wants to install those resources, the TTS API enables an application to query the platform for the availability of language files and can initiate their download and installation. So upon creating your activity, a good first step is to check for the presence of the TTS resources with the corresponding intent:
Intent checkIntent = new Intent();
checkIntent.setAction(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
startActivityForResult(checkIntent, MY_DATA_CHECK_CODE);
A successful check will be marked by a CHECK_VOICE_DATA_PASS result code, indicating that the device is ready to speak once we create our android.speech.tts.TextToSpeech object. If not, we need to let the user know to install the data that's required for the device to become a multi-lingual talking machine! Downloading and installing the data is accomplished by firing off the ACTION_INSTALL_TTS_DATA intent, which will take the user to Android Market and let her/him initiate the download. Installation of the data will happen automatically once the download completes. Here is an example of what your implementation of onActivityResult() would look like:
private TextToSpeech mTts;

protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == MY_DATA_CHECK_CODE) {
        if (resultCode == TextToSpeech.Engine.CHECK_VOICE_DATA_PASS) {
            // success, create the TTS instance
            mTts = new TextToSpeech(this, this);
        } else {
            // missing data, install it
            Intent installIntent = new Intent();
            installIntent.setAction(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
            startActivity(installIntent);
        }
    }
}
In the constructor of the TextToSpeech instance we pass a reference to the Context to be used (here the current Activity), and to an OnInitListener (here our Activity as well). This listener enables our application to be notified when the Text-To-Speech engine is fully loaded, so we can start configuring it and using it.
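For example, a minimal sketch of that listener in our Activity might look like the following; the choice of Locale.US is just an example:

public void onInit(int status) {
    if (status == TextToSpeech.SUCCESS) {
        // The engine is ready: configure it, for instance by picking a language
        int result = mTts.setLanguage(Locale.US);
        if (result == TextToSpeech.LANG_MISSING_DATA
                || result == TextToSpeech.LANG_NOT_SUPPORTED) {
            // The language data is missing or the language is not supported
        }
    } else {
        // Initialization failed
    }
}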
At Google I/O, we showed an example of TTS where it was used to speak the result of a translation from and to one of the 5 languages the Android TTS engine currently supports. Loading a language is as simple as calling for instance:
mTts.setLanguage(Locale.US);
to load and set the language to English, as spoken in the country "US". A locale is the preferred way to specify a language because it accounts for the fact that the same language can vary from one country to another. To query whether a specific Locale is supported, you can use isLanguageAvailable(), which returns the level of support for the given Locale. For instance the calls:
mTts.isLanguageAvailable(Locale.UK)
mTts.isLanguageAvailable(Locale.FRANCE)
mTts.isLanguageAvailable(new Locale("spa", "ESP"))
will return TextToSpeech.LANG_COUNTRY_AVAILABLE to indicate that the language AND country as described by the Locale parameter are supported (and the data is correctly installed). But the calls:
mTts.isLanguageAvailable(Locale.CANADA_FRENCH)
mTts.isLanguageAvailable(new Locale("spa"))
will return TextToSpeech.LANG_AVAILABLE. In the first example, French is supported, but not the given country. And in the second, only the language was specified for the Locale, so that's what the match was made on.
Also note that besides the ACTION_CHECK_TTS_DATA intent to check the availability of the TTS data, you can also use isLanguageAvailable() once you have created your TextToSpeech instance, which will return TextToSpeech.LANG_MISSING_DATA if the required resources are not installed for the queried language.
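For instance, a quick sketch of that check (the Locale is arbitrary):

if (mTts.isLanguageAvailable(Locale.US) == TextToSpeech.LANG_MISSING_DATA) {
    // The language is known but its data is not installed: offer to install it
    Intent installIntent = new Intent();
    installIntent.setAction(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
    startActivity(installIntent);
}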
Making the engine speak an Italian string while the engine is set to the French language will produce some pretty interesting results, but it will not exactly be something your user would understand. So try to match the language of your application's content and the language that you loaded in your TextToSpeech instance. Also, if you are using Locale.getDefault() to query the current Locale, make sure that at least the default language is supported.
Now that our TextToSpeech instance is properly initialized and configured, we can start to make your application speak. The simplest way to do so is to use the speak() method. Let's iterate on the following example to make a talking alarm clock:
String myText1 = "Did you sleep well?";
String myText2 = "I hope so, because it's time to wake up.";
mTts.speak(myText1, TextToSpeech.QUEUE_FLUSH, null);
mTts.speak(myText2, TextToSpeech.QUEUE_ADD, null);
The TTS engine manages a global queue of all the entries to synthesize, which are also known as "utterances". Each TextToSpeech instance can manage its own queue in order to control which utterance will interrupt the current one and which one is simply queued. Here the first speak() request would interrupt whatever was currently being synthesized: the queue is flushed and the new utterance is queued, which places it at the head of the queue. The second utterance is queued and will be played after myText1 has completed.
On Android, each audio stream that is played is associated with one stream type, as defined in android.media.AudioManager. For a talking alarm clock, we would like our text to be played on the AudioManager.STREAM_ALARM stream type so that it respects the alarm settings the user has chosen on the device. The last parameter of the speak() method allows you to pass to the TTS engine optional parameters, specified as key/value pairs in a HashMap. Let's use that mechanism to change the stream type of our utterances:
HashMap<String, String> myHashAlarm = new HashMap<String, String>();
myHashAlarm.put(TextToSpeech.Engine.KEY_PARAM_STREAM,
        String.valueOf(AudioManager.STREAM_ALARM));
mTts.speak(myText1, TextToSpeech.QUEUE_FLUSH, myHashAlarm);
mTts.speak(myText2, TextToSpeech.QUEUE_ADD, myHashAlarm);
Note that speak() calls are asynchronous, so they will return well before the text is done being synthesized and played by Android, regardless of the use of QUEUE_FLUSH or QUEUE_ADD. But you might need to know when a particular utterance is done playing. For instance you might want to start playing an annoying music after myText2 has finished synthesizing (remember, we're trying to wake up the user). We will again use an optional parameter, this time to tag our utterance as one we want to identify. We also need to make sure our activity implements the TextToSpeech.OnUtteranceCompletedListener interface:
mTts.setOnUtteranceCompletedListener(this);

myHashAlarm.put(TextToSpeech.Engine.KEY_PARAM_STREAM,
        String.valueOf(AudioManager.STREAM_ALARM));
mTts.speak(myText1, TextToSpeech.QUEUE_FLUSH, myHashAlarm);

myHashAlarm.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID,
        "end of wakeup message ID");
// myHashAlarm now contains two optional parameters
mTts.speak(myText2, TextToSpeech.QUEUE_ADD, myHashAlarm);
And the Activity gets notified of the completion in the implementation of the listener:
public void onUtteranceCompleted(String uttId) {
    if ("end of wakeup message ID".equals(uttId)) {
        playAnnoyingMusic();
    }
}
While the speak() method is used to make Android speak the text right away, there are cases where you would want the result of the synthesis to be recorded in an audio file instead. This would be the case if, for instance, there is text your application will speak often; you could avoid the synthesis CPU-overhead by rendering only once to a file, and then playing back that audio file whenever needed. Just like for speak(), you can use an optional utterance identifier to be notified on the completion of the synthesis to the file:
HashMap<String, String> myHashRender = new HashMap<String, String>();
String wakeUpText = "Are you up yet?";
String destFileName = "/sdcard/myAppCache/wakeUp.wav";
myHashRender.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, wakeUpText);
mTts.synthesizeToFile(wakeUpText, myHashRender, destFileName);
Once you are notified of the synthesis completion, you can play the output file just like any other audio resource with android.media.MediaPlayer.
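For instance, here is a minimal sketch of playing back the file rendered above with MediaPlayer (error handling kept to a minimum):

MediaPlayer player = new MediaPlayer();
try {
    player.setDataSource("/sdcard/myAppCache/wakeUp.wav");
    player.prepare();
    player.start();
} catch (IOException e) {
    // The file could not be opened or read
}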
But the TextToSpeech class offers other ways of associating audio resources with speech. So at this point we have a WAV file that contains the result of the synthesis of our wakeUpText string in the previously selected language. We can tell our TTS instance to associate that string with an audio resource, which can be accessed through its path, or through the package it's in and its resource ID, using one of the two addSpeech() methods:
mTts.addSpeech(wakeUpText, destFileName);
This way any call to speak() for the same string content as wakeUpText will result in the playback of destFileName. If the file is missing, then speak() will behave as if the audio file wasn't there, and will synthesize and play the given string. But you can also take advantage of that feature to provide an option to the user to customize how the wake-up message sounds, by recording their own version if they choose to. Regardless of where that audio file comes from, you can still use the same line in your Activity code to ask repeatedly "Are you up yet?":
mTts.speak(wakeUpText, TextToSpeech.QUEUE_ADD, myHashAlarm);
The text-to-speech functionality relies on a dedicated service shared across all applications that use that feature. When you are done using TTS, be a good citizen and tell it "you won't be needing its services anymore" by calling mTts.shutdown(), in your Activity onDestroy() method for instance.
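For example, a minimal sketch of doing this in your Activity:

@Override
protected void onDestroy() {
    if (mTts != null) {
        mTts.shutdown();
    }
    super.onDestroy();
}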
Android now talks, and so can your apps. Remember that in order for synthesized speech to be intelligible, you need to match the language you select to that of the text to synthesize. Text-to-speech can help you push your app in new directions. Whether you use TTS to help users with disabilities, to enable the use of your application while looking away from the screen, or simply to make it cool, we hope you'll enjoy this new feature.
One of the new features we're really proud of in the Android 1.6 release is Quick Search Box for Android. This is our new system-wide search framework, which makes it possible for users to quickly and easily find what they're looking for, both on their devices and on the web. It suggests content on your device as you type, like apps, contacts, browser history, and music. It also offers results from the web, such as search suggestions, local business listings, and other info from Google, like stock quotes, weather, and flight status. All of this is available right from the home screen, by tapping on Quick Search Box (QSB).
What we're most excited about with this new feature is the ability for you, the developers, to leverage the QSB framework to provide quicker and easier access to the content inside your apps. Your apps can provide search suggestions that will surface to users in QSB alongside other search results and suggestions. This makes it possible for users to access your application's content from outside your application—for example, from the home screen.
The code fragments below are related to a new demo app for Android 1.6 called Searchable Dictionary.
In previous releases, we already provided a mechanism for you to expose search and search suggestions in your app as described in the docs for SearchManager. This mechanism has not changed and requires the following two things in your AndroidManifest.xml:
1) In your <activity>, an intent filter, and a reference to a searchable.xml file (described below):
<intent-filter>
    <action android:name="android.intent.action.SEARCH" />
    <category android:name="android.intent.category.DEFAULT" />
</intent-filter>

<meta-data android:name="android.app.searchable"
           android:resource="@xml/searchable" />
2) A content provider that can provide search suggestions according to the URIs and column formats specified by the Search Suggestions section of the SearchManager docs:
<!-- Provides search suggestions for words and their definitions. -->
<provider android:name="DictionaryProvider"
          android:authorities="dictionary"
          android:syncable="false" />
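To give an idea of the shape of such a provider, here is a heavily simplified sketch of its query() method; the Dictionary helper class and the exact projection are invented for illustration and are not the actual Searchable Dictionary source:

public class DictionaryProvider extends ContentProvider {
    private static final String[] COLUMNS = {
            BaseColumns._ID,
            SearchManager.SUGGEST_COLUMN_TEXT_1,      // the matching word
            SearchManager.SUGGEST_COLUMN_TEXT_2,      // its definition
            SearchManager.SUGGEST_COLUMN_INTENT_DATA  // data passed to the VIEW intent
    };

    @Override
    public boolean onCreate() {
        return true;
    }

    @Override
    public Cursor query(Uri uri, String[] projection, String selection,
            String[] selectionArgs, String sortOrder) {
        // For SUGGEST_URI_PATH_QUERY URIs, the text typed so far is the last path segment
        String query = uri.getLastPathSegment().toLowerCase();
        MatrixCursor cursor = new MatrixCursor(COLUMNS);
        long id = 0;
        for (Dictionary.Word word : Dictionary.getInstance().getMatches(query)) {
            cursor.addRow(new Object[] { id++, word.word, word.definition, word.word });
        }
        return cursor;
    }

    @Override
    public String getType(Uri uri) {
        return SearchManager.SUGGEST_MIME_TYPE;
    }

    // This provider is read-only
    @Override
    public Uri insert(Uri uri, ContentValues values) {
        throw new UnsupportedOperationException();
    }

    @Override
    public int delete(Uri uri, String selection, String[] selectionArgs) {
        throw new UnsupportedOperationException();
    }

    @Override
    public int update(Uri uri, ContentValues values, String selection, String[] selectionArgs) {
        throw new UnsupportedOperationException();
    }
}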
In the searchable.xml file, you specify a few things about how you want the search system to present search for your app, including the authority of the content provider that provides suggestions for the user as they type. Here's an example of the searchable.xml of an Android app that provides search suggestions within its own activities:
<searchable xmlns:android="http://schemas.android.com/apk/res/android"
    android:label="@string/search_label"
    android:searchSuggestAuthority="dictionary"
    android:searchSuggestIntentAction="android.intent.action.VIEW">
</searchable>
Note that the android:searchSuggestAuthority attribute refers to the authority of the content provider we declared in AndroidManifest.xml.
For more details on this, see the Searchability Metadata section of the SearchManager docs.
In Android 1.6, we added a new attribute to the metadata for searchables: android:includeInGlobalSearch. By specifying this as "true" in your searchable.xml, you allow QSB to pick up your search suggestion content provider and include its suggestions along with the rest (if the user enables your suggestions from the system search settings).
You should also specify a string value for android:searchSettingsDescription, which describes to users what sorts of suggestions your app provides in the system settings for search.
<searchable xmlns:android="http://schemas.android.com/apk/res/android"
    android:label="@string/search_label"
    android:searchSettingsDescription="@string/settings_description"
    android:includeInGlobalSearch="true"
    android:searchSuggestAuthority="dictionary"
    android:searchSuggestIntentAction="android.intent.action.VIEW">
</searchable>
These new attributes are supported only in Android 1.6 and later.
The first and most important thing to note is that when a user installs an app with a suggestion provider that participates in QSB, this new app will not be enabled for QSB by default. The user can choose to enable particular suggestion sources from the system settings for search (by going to "Search" > "Searchable items" in settings).
You should consider how to handle this in your app. Perhaps show a notice that instructs the user to visit system settings and enable your app's suggestions.
Once the user enables your searchable item, the app's suggestions will have a chance to show up in QSB, most likely under the "more results" section to begin with. As your app's suggestions are chosen more frequently, they can move up in the list.
One of our objectives with QSB is to make it faster for users to access the things they access most often. One way we've done this is by 'shortcutting' some of the previously chosen search suggestions, so they will be shown immediately as the user starts typing, instead of waiting to query the content providers. Suggestions from your app may be chosen as shortcuts when the user clicks on them.
For dynamic suggestions that may wish to change their content (or become invalid) in the future, you can provide a 'shortcut id'. This tells QSB to query your suggestion provider for up-to-date content for a suggestion after it has been displayed. For more details on how to manage shortcuts, see the Shortcuts section within the SearchManager docs.
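As a quick illustration, building on the provider sketch earlier, a suggestion row can carry a shortcut id by adding SearchManager.SUGGEST_COLUMN_SHORTCUT_ID to the projection; a stable id lets QSB requery your provider for refreshed content, while SUGGEST_NEVER_MAKE_SHORTCUT opts the suggestion out of shortcutting entirely:

// With SearchManager.SUGGEST_COLUMN_SHORTCUT_ID added as the last column:
cursor.addRow(new Object[] {
        id, word.word, word.definition, word.word,
        word.word  // a stable id QSB can use to ask for up-to-date content later
});

// Or, to prevent a suggestion from ever becoming a shortcut:
cursor.addRow(new Object[] {
        id, word.word, word.definition, word.word,
        SearchManager.SUGGEST_NEVER_MAKE_SHORTCUT
});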
QSB provides a really cool way to make your app's content quicker to access by users. To help you get your app started with it, we've created a demo app which simply provides access to a small dictionary of words in QSB—it's called Searchable Dictionary, and we encourage you to check it out.
I am happy to let you know that the Android 1.6 SDK is available for download. Android 1.6, which is based on the donut branch from the Android Open Source Project, introduces a number of new features and technologies. With support for CDMA and additional screen sizes, your apps can be deployed on even more mobile networks and devices. You will have access to new technologies, including framework-level support for additional screen resolutions, like QVGA and WVGA, new telephony APIs to support CDMA, gesture APIs, a text-to-speech engine, and the ability to integrate with Quick Search Box. What's new in Android 1.6 provides a more complete overview of this platform update.
The Android 1.6 SDK requires a new version of Android Development Tools (ADT). The SDK also includes a new tool that enables you to download updates and additional components, such as new add-ons or platforms.
You can expect to see devices running Android 1.6 as early as October. As with previous platform updates, applications written for older versions of Android will continue to run on devices with Android 1.6. Please test your existing apps on the Android 1.6 SDK to make sure they run as expected.
Over the next several weeks, we will publish a series of blog posts to help you get ready for the new developer technologies in Android 1.6. The following topics, and more, will be covered: how to adapt your applications to support different screen sizes, integrating with Quick Search Box, building gestures into your apps, and using the text-to-speech engine.
If you are interested in seeing some highlights of Android 1.6, check out the video below.
Happy coding!
I'm pleased to let you know about several updates to Android Market. First, we will soon introduce new features in Android Market for Android 1.6 that will improve the overall experience for users. As part of this change, developers will be able to provide screenshots, promotional icons and descriptions that will better show off applications and games.
We have also added four new sub-categories for applications: sports, health, themes, and comics. Developers can now choose these sub-categories for both new and existing applications via the publisher website. Finally, we have added seller support for developers in Italy. Italian developers can go to the publisher website to upload applications and target any of the countries where paid applications are currently available to users.
To take advantage of the upcoming Android Market refresh, we encourage you to visit the Android Market publisher website and upload additional marketing assets. Check out the video below for some of the highlights.