Google launched a groundbreaking AI feature in February 2026 that enables its Gemini assistant to autonomously control third-party apps on Android phones, carrying out complex tasks like ordering food or booking rides with only minimal user input. The new “task automation” capability, powered by the Gemini 3 model, debuted as an early preview on the Pixel 10 and Samsung Galaxy S26 devices, marking a fundamental shift in how users interact with their smartphones.
The feature works through simple voice commands like “Get me an Uber to the Palace of Fine Arts,” prompting Gemini to launch the relevant app in a virtual window and execute the request autonomously, according to The Verge. Users maintain full control throughout the process, watching the automation unfold in real time and retaining the ability to intervene at any point.
When the AI encounters ambiguities, such as an out-of-stock menu item, it pauses to request user input before continuing. Most importantly, while Gemini can prepare orders or ride bookings up to the final step, it requires explicit user confirmation before submitting requests or authorizing payments, The Verge reported.
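The control flow described above can be sketched in a few lines. This is purely illustrative pseudocode of the reported behavior, not Google's implementation; Google has not published an API for this feature, and every name below is hypothetical. The key points it models: the agent pauses on ambiguity (such as an out-of-stock item) and never submits or pays without explicit confirmation.

```python
from dataclasses import dataclass

@dataclass
class StepResult:
    status: str   # "ok", "needs_input", or "awaiting_confirmation"
    detail: str = ""

def run_task(steps, ask_user, confirm):
    """Execute automation steps, deferring to the user when required."""
    for step in steps:
        result = step()
        if result.status == "needs_input":
            # Ambiguity (e.g., menu item unavailable): pause for user input.
            if ask_user(result.detail) is None:
                return "cancelled"
        elif result.status == "awaiting_confirmation":
            # Final step: submission/payment needs explicit approval.
            if not confirm(result.detail):
                return "cancelled"
    return "completed"

# Example run with scripted user responses standing in for the real UI.
steps = [
    lambda: StepResult("ok", "opened food app"),
    lambda: StepResult("needs_input", "burger out of stock; pick substitute"),
    lambda: StepResult("awaiting_confirmation", "submit order for $14.50?"),
]
outcome = run_task(steps, ask_user=lambda q: "fries", confirm=lambda q: True)
print(outcome)  # completed
```

The essential invariant, per The Verge's report, is that the `awaiting_confirmation` branch always reaches the user before anything is charged or submitted.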
The early preview launched exclusively in the United States and South Korea, available only on Google’s Pixel 10 series and Samsung’s Galaxy S26 devices.
Technical Architecture and Industry Vision
The system employs what Android ecosystem president Sameer Samat described to The Verge as a “series of technology stacks” to interact with applications. This includes leveraging official developer frameworks where available, but more significantly, the AI can autonomously navigate unfamiliar apps by visually parsing their interfaces.
When no formal integration exists, Gemini can “figure it out by itself,” Samat told The Verge, identifying and interacting with on-screen elements like buttons and text fields to complete tasks independently.
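Samat's "series of technology stacks" can be pictured as a preference order: use an app's official developer integration when one exists, and otherwise fall back to parsing whatever elements are on screen. The sketch below is a loose illustration of that fallback, with entirely hypothetical names and data structures; the actual system is not public.

```python
# Hypothetical registry of apps with formal developer integrations.
OFFICIAL_INTEGRATIONS = {"uber": lambda req: f"booked via Uber integration: {req}"}

def find_element(screen, role, label):
    """Locate an on-screen element by role and partial label match."""
    for el in screen:
        if el["role"] == role and label.lower() in el["label"].lower():
            return el
    return None

def perform(app, request, screen):
    # Prefer an official framework when the developer provides one.
    if app in OFFICIAL_INTEGRATIONS:
        return OFFICIAL_INTEGRATIONS[app](request)
    # Otherwise "figure it out": visually parse the UI and act on buttons
    # and text fields, as described for unfamiliar apps.
    field = find_element(screen, "text_field", "destination")
    button = find_element(screen, "button", "request ride")
    if field and button:
        return f"typed '{request}' into '{field['label']}', tapped '{button['label']}'"
    return "could not complete; asking user for help"

screen = [
    {"role": "text_field", "label": "Destination"},
    {"role": "button", "label": "Request ride"},
]
print(perform("rideshareX", "Palace of Fine Arts", screen))
```

On Android, the real analogue of this screen-reading fallback would more plausibly build on accessibility-tree data than raw pixels, but the source does not specify the mechanism.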
This capability recasts Android 17 as an “intelligence system” rather than merely a platform for running applications, according to Samat’s comments to The Verge. His assertion that “This technology is happening” signals Google’s determination to lead this transformation, with developers expected to adapt to the new AI-driven paradigm.
The development positions Google ahead of competitors, particularly Apple, which has not announced comparable capabilities for Siri as of February 2026. Samsung, as a launch partner, is pursuing a hybrid strategy by integrating Google’s agentic AI while continuing to develop its own Bixby assistant.
The broader implications for the mobile ecosystem are significant. By positioning Gemini as an intermediary between users and apps, Google potentially reduces direct engagement opportunities for companies like Uber and DoorDash, limiting their ability to present loyalty programs, promotions, or unique branding during interactions, The Verge noted.
Sources
- The Verge