
What is now called intelligence was formerly known as personalization. The technology still relies on a massive amount of data to provide something unique for
each individual search or query, but now it can analyze the input much faster and remember nuances of time, place, and sequence.
This same type of information is used in Google Performance Max, an AI-powered campaign type that allows advertisers to access all of their Google Ads inventory from a single campaign, according to Google.
"For AI to be truly useful, it needs to understand you," Demis Hassabis, co-founder and CEO of Google DeepMind, wrote
in a post on X. "With Personal Intelligence, we’re beginning to solve this. With your permission, Gemini can now securely reason across your own data to answer questions that generic models
simply can't - like suggesting plans based on travel dates in Gmail or your hobbies found in Photos."
This week, Google officially expanded "Personal Intelligence" to AI Mode in Google Search, made possible by the Gemini 3 models. The technology will support and interact with agentic systems, analyzing consumer data to anticipate personal needs.
The AI technology connects preferences across Google apps to understand the user's needs.
Subscribers to Google AI Pro and AI Ultra can opt in to connect Gmail and Google Photos to AI Mode, giving them a personalized, context-aware search experience.
This "foundational step" is what will move personalized AI into the next phase of intelligent search and assistance.
While this technology is still in its early days, developers will continue to work through known technical issues and limitations.
Most recently, Google tackled a challenge it addresses with a technique it calls "context packing."
Personal Intelligence has two core strengths -- tools that retrieve specific details and reasoning across apps. It often
combines both approaches and can work across text, photos, and video to provide specific, "one-of-a-kind" answers.
With Personal Intelligence, Gemini models can retrieve relevant context, drawing on "dense retrieval, and long context capabilities to reason across your personal data from Google products in real-time."
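Google has not published implementation details, but dense retrieval generally means embedding the query and candidate documents as vectors and ranking them by similarity. A minimal sketch under that assumption, with a toy term-count embedding standing in for a real learned embedding model (the function names and example snippets are illustrative, not Google's API):

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def embed(text, vocab):
    """Toy term-count embedding over a fixed vocabulary, L2-normalized.
    A production system would use a learned neural embedding model."""
    counts = Counter(tokenize(text))
    vec = [float(counts[term]) for term in vocab]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def dense_retrieve(query, documents, top_k=2):
    """Rank documents by cosine similarity between query and document vectors."""
    vocab = sorted({t for d in documents for t in tokenize(d)} | set(tokenize(query)))
    q = embed(query, vocab)
    scored = sorted(
        documents,
        key=lambda d: -sum(a * b for a, b in zip(q, embed(d, vocab))),
    )
    return scored[:top_k]

# Illustrative personal snippets (the kind of data AI Mode might draw on)
snippets = [
    "Flight confirmation: Tokyo, departing March 14",
    "Photo album: weekend hiking near the coast",
    "Receipt: new trail running shoes",
]
print(dense_retrieve("suggest plans for my Tokyo trip", snippets, top_k=1))
# → ['Flight confirmation: Tokyo, departing March 14']
```

The travel-planning query surfaces the flight email because the two share vocabulary in embedding space; a learned model would also match paraphrases that share no words at all.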
With advanced reasoning, Gemini 3 can parse greater depth and nuance in text and images. It can understand complex personal context, such as mapping relationships between people and things, or recognizing specific aesthetic preferences. It can also infer goals and retrieve more information related to specific preferences.
Google stated that its retrieval process builds on research that its developers have
conducted in search and dense retrieval methods.
In a report, the company explains how "Long Context Gemini 3" supports the process because it has a 1 million token context window. Those
tokens, in part, enable the technology to process and synthesize vast amounts of information across multiple modalities like text, images, video, audio, and code.
Personalization requires data processing at a much larger scale, as a user's accumulated context across emails and photos alone often exceeds this window by orders of magnitude, a document explains. To bridge that gap, Google uses "context packing," a technique that "helps us dynamically identify and synthesize appropriate pieces of information into the working memory for the model."
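Google has not described the technique's internals, but the core idea -- fitting the most relevant pieces of personal context into a fixed token budget -- can be sketched as a greedy selection. The function name, the relevance scores, and the four-characters-per-token estimate below are all illustrative assumptions:

```python
def pack_context(snippets, budget_tokens,
                 estimate_tokens=lambda s: max(1, len(s) // 4)):
    """Greedy context packing: visit snippets in descending relevance order,
    skipping any that would overflow the model's token budget.
    `snippets` is a list of (relevance_score, text) pairs."""
    packed, used = [], 0
    for score, text in sorted(snippets, key=lambda pair: -pair[0]):
        cost = estimate_tokens(text)
        if used + cost <= budget_tokens:
            packed.append(text)
            used += cost
    return packed

# Hypothetical candidate snippets with pre-computed relevance scores
candidates = [
    (0.9, "Email: hotel booking in Kyoto for March 15-18"),
    (0.7, "Photo caption: cherry blossoms, Kyoto 2023"),
    (0.2, "Email: utility bill autopay confirmation"),
]
# A tiny budget forces the packer to keep only the most relevant snippet.
print(pack_context(candidates, budget_tokens=12))
# → ['Email: hotel booking in Kyoto for March 15-18']
```

A production system would likely score, deduplicate, and summarize snippets rather than drop them outright, but the constraint is the same: the model's working memory is finite, so something must decide what fits.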
Users can choose whether to turn these features on or off. In the Gemini app, they can manage all preferences directly in settings, such as choosing which services -- Google Workspace, Google Photos, YouTube, and Search -- Personal Intelligence can connect with under "Connected Apps."
In AI Mode in Search, connecting Workspace and Photos is off by default and opt-in, meaning users can choose if and when to use it.
Google also has begun to build a secure infrastructure and implement additional industry safeguards to ensure this
data remains protected even as it powers new AI experiences.
For example, user data is encrypted at rest by default and protected in transit between Google's systems using Application Layer Transport Security (ALTS).
Work also has been done to strengthen protection against prompt injection and other misuse via cyberattacks.
Google's goal is to improve experiences and keep data secure while giving users control over the on/off switches. Gemini Apps do not train directly on a user's Gmail inbox or Google Photos library, according to a company statement.
To improve functions over time, Google will train on information such as prompts and responses in the Gemini app and AI Mode in Search, as well as summaries, excerpts, and inferences used to help answer questions.