03 September 2025
Androidify is our new app that lets you build your very own Android bot, using a selfie and AI. We walked you through some of the components earlier this year, and starting today it’s available on the web or as an app on Google Play. In the new Androidify, you can upload a selfie or write a prompt describing what you’re looking for, add some accessories, and watch as AI builds your unique bot. Once you’ve had a chance to try it, come back here to learn more about the AI APIs and Android tools we used to create the app. Let's dive in!
Androidify leverages the Firebase AI Logic SDK to access Google's powerful Gemini and Imagen* models, which power several of the app's key features.
The Androidify app also has a "Help me write" feature which uses Gemini 2.5 Flash to create a random description for a bot's clothing and hairstyle, adding a bit of a fun "I'm feeling lucky" element.
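To make the "Help me write" flow concrete, here is a minimal sketch of how a random bot description could be requested from Gemini 2.5 Flash through Firebase AI Logic. The function name and prompt text are illustrative assumptions, not the app's actual source.

```kotlin
// Hypothetical sketch: generate an "I'm feeling lucky" bot description
// with Gemini 2.5 Flash via Firebase AI Logic.
suspend fun generateBotDescription(): String {
    val model = Firebase.ai(backend = GenerativeBackend.googleAI())
        .generativeModel(modelName = "gemini-2.5-flash")

    val response = model.generateContent(
        "Write a short, playful description of an Android bot's " +
            "clothing and hairstyle."
    )
    // The SDK exposes the generated text directly on the response.
    return response.text ?: error("Model returned no text")
}
```

Because the call is a suspend function, it slots naturally into a ViewModel coroutine that updates the description text field when the result arrives.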
The app's user interface is built entirely with Jetpack Compose, enabling a declarative and responsive design across form factors. The app uses the latest Material 3 Expressive design, which provides delightful and engaging UI elements like new shapes, motion schemes, and custom animations.
For camera functionality, CameraX is used in conjunction with the ML Kit Pose Detection API. This intelligent integration allows the app to automatically detect when a person is in the camera's view, enabling the capture button and adding visual guides for the user. It also makes the app's camera features responsive to different device types, including foldables in tabletop mode.
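As a rough illustration of that integration, the sketch below wires an ML Kit pose detector into a CameraX ImageAnalysis analyzer and reports whether a person is in frame, which is the signal that could gate the capture button. The callback name and structure are assumptions for illustration, not Androidify's actual code.

```kotlin
// Hypothetical sketch: ML Kit Pose Detection inside a CameraX analyzer,
// used to report when a person is visible in the viewfinder.
val detector = PoseDetection.getClient(
    PoseDetectorOptions.Builder()
        .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
        .build()
)

@androidx.camera.core.ExperimentalGetImage
fun analyze(imageProxy: ImageProxy, onPersonDetected: (Boolean) -> Unit) {
    val mediaImage = imageProxy.image ?: return imageProxy.close()
    val input = InputImage.fromMediaImage(
        mediaImage,
        imageProxy.imageInfo.rotationDegrees
    )
    detector.process(input)
        .addOnSuccessListener { pose ->
            // A non-empty landmark list suggests a person is in view,
            // so the UI can enable the capture button.
            onPersonDetected(pose.allPoseLandmarks.isNotEmpty())
        }
        // Always close the frame so CameraX can deliver the next one.
        .addOnCompleteListener { imageProxy.close() }
}
```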
Androidify also makes extensive use of the latest Compose features.
In the latest version of Androidify, we’ve added powerful new AI-driven features.
Using the latest Gemini 2.5 Flash Image model, we combine the Android bot with a preset background “vibe” to bring the Android bots to life.
This is achieved with Firebase AI Logic: the app passes a prompt describing the background vibe together with the input image bitmap of the bot, along with instructions telling Gemini how to combine the two.
```kotlin
override suspend fun generateImageWithEdit(
    image: Bitmap,
    backgroundPrompt: String = "Add the input image android bot as the main subject to the result... with the background that has the following vibe...",
): Bitmap {
    val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
        modelName = "gemini-2.5-flash-image-preview",
        generationConfig = generationConfig {
            responseModalities = listOf(
                ResponseModality.TEXT,
                ResponseModality.IMAGE,
            )
        },
    )
    // We combine the backgroundPrompt with the input image (the Android bot)
    // to produce the new bot with a background.
    val prompt = content {
        text(backgroundPrompt)
        image(image)
    }
    val response = model.generateContent(prompt)
    val generatedImage = response.candidates.firstOrNull()
        ?.content?.parts?.firstNotNullOfOrNull { it.asImageOrNull() }
    return generatedImage
        ?: throw IllegalStateException("Could not extract image from model response")
}
```
The app also includes a “Sticker mode” option, which integrates the ML Kit Subject Segmentation library to remove the background on the bot. You can use "Sticker mode" in apps that support stickers.
The sticker implementation first checks whether the Subject Segmentation model has been downloaded and installed; if it has not, it requests the download and waits for it to complete. Once the model is installed, the app passes the original Android bot image into the segmenter and calls process on it to remove the background. The resulting foregroundBitmap object is then returned for exporting.
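The flow just described can be sketched as follows, assuming the ML Kit Subject Segmentation API with foreground-bitmap output enabled and the Play services ModuleInstall client for the model-download check. The function name and structure are illustrative; see LocalSegmentationDataSource for the real implementation.

```kotlin
// Hypothetical sketch of the sticker-mode flow: ensure the segmentation
// model is installed, then strip the background from the bot bitmap.
val segmenter = SubjectSegmentation.getClient(
    SubjectSegmenterOptions.Builder()
        .enableForegroundBitmap()
        .build()
)

suspend fun removeBackground(context: Context, bot: Bitmap): Bitmap {
    val moduleInstall = ModuleInstall.getClient(context)
    // Check whether the Subject Segmentation model is available;
    // if not, request the download and wait for it to complete.
    val availability = moduleInstall.areModulesAvailable(segmenter).await()
    if (!availability.areModulesAvailable()) {
        moduleInstall.installModules(
            ModuleInstallRequest.newBuilder().addApi(segmenter).build()
        ).await()
    }
    // Run segmentation on the original bot image and keep only the
    // foreground (the bot, with the background removed).
    val result = segmenter.process(InputImage.fromBitmap(bot, 0)).await()
    return result.foregroundBitmap
        ?: throw IllegalStateException("Segmentation returned no foreground bitmap")
}
```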
See LocalSegmentationDataSource for the full source implementation.
To learn more about Androidify behind the scenes, take a look at the new solutions walkthrough, inspect the code or try out the experience for yourself at androidify.com or download the app on Google Play.
*Check responses. Compatibility and availability varies. 18+.