Apple has confirmed that its revamped Siri voice assistant will rely on a customized Google Gemini model for key functions in the cloud, validating earlier reports that the company was turning to Google for help with its AI efforts. The announcement underscores the strategic collaboration between Apple and Google as Cupertino prepares to roll out its next-generation voice assistant later this year. Bloomberg journalist Mark Gurman reported in November 2025 that Apple was planning to integrate a massive 1.2 trillion-parameter Gemini model to replace the 1.5 billion-parameter model currently powering Siri. The deal, according to Gurman, could see Apple paying Google up to $1 billion annually for the technology.
The updated Siri under Apple Intelligence will feature three main components designed to enhance user interactions. The first is the query planner, which determines the most efficient way to fulfill user requests by accessing web search, personal data, or third-party apps through App Intents. The second is a knowledge search system, allowing Siri to respond to general queries using Apple’s on-device Foundation Models without relying on external AI services or web results. Finally, the summarizer component will enable Siri to condense text or audio from third-party AI tools such as ChatGPT, providing summaries for notifications, Safari webpages, or writing assistance. The Gemini model will specifically power the query planner and summarizer, while Apple’s own Foundation Models will handle the knowledge search system.
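To make the division of labor concrete, the three-way routing described above can be sketched roughly as follows. This is a minimal illustration only: every function and type name here (`SiriRequest`, `query_planner`, and so on) is hypothetical, not Apple's actual API, and the real query planner's decision logic is far more sophisticated than this keyword check.

```python
from dataclasses import dataclass

@dataclass
class SiriRequest:
    text: str
    needs_summary: bool = False  # e.g. "summarize this webpage"

def knowledge_search(query: str) -> str:
    # General-knowledge queries: handled by Apple's on-device
    # Foundation Models, per the article.
    return f"[on-device answer] {query}"

def summarizer(text: str) -> str:
    # Condensing text/audio (notifications, Safari pages):
    # Gemini-powered in the new Siri.
    return f"[summary] {text}"

def app_intent(command: str) -> str:
    # Task-style requests routed to apps through App Intents.
    return f"[app action] {command}"

def query_planner(request: SiriRequest) -> str:
    """Pick the most efficient path for a request (the Gemini-powered
    component in the new Siri). Here, a toy heuristic stands in for
    the real planner."""
    if request.needs_summary:
        return summarizer(request.text)
    if request.text.lower().startswith(("what", "who", "when", "where")):
        return knowledge_search(request.text)
    return app_intent(request.text)
```

Under this sketch, "What is the capital of France?" would route to the on-device knowledge search, while "Send a message to Sara" would be dispatched as an app action.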
Apple is also introducing a range of features with its Spring 2026 iOS update, iOS 26.4, that expand Siri’s capabilities. In-app Actions will allow users to perform tasks such as sending messages, adding items to lists, or controlling media entirely through voice commands. Personal Context Awareness will enable Siri to access messages and other personal data to provide more tailored recommendations. On-Screen Awareness will allow the voice assistant to understand content displayed on the device and perform actions accordingly. Together, these features aim to make Siri more contextually aware and capable of completing complex user requests efficiently.
In a statement to CNBC, Apple said its evaluation determined that Google’s technology offered the most capable foundation for Apple Foundation Models, and Google confirmed that the next generation of Apple Foundation Models will run on Gemini. CEO Tim Cook reiterated during Apple’s latest earnings call that development of the new Siri is progressing well and remains on track for a 2026 launch. He also suggested that Apple plans to integrate additional AI models over time, signaling ongoing efforts to align Siri with the Model Context Protocol (MCP), launched by Anthropic, which supports interoperability between AI models and applications. Currently, OpenAI’s ChatGPT is the only external AI integrated with Apple Intelligence, setting the stage for broader model collaboration in the future.
Follow the SPIN IDG WhatsApp Channel for updates across the Smart Pakistan Insights Network covering all of Pakistan’s technology ecosystem.