Google Preparing For AI Agents To Leave The Building

Google is preparing to transition to AI agents that leave the building. This means a shift from using AI as a single-purpose tool or as a multi-agent system within one enterprise to multi-agent systems that span multiple enterprises.

A blog post published Tuesday, hosted by Google Cloud CTO Will Grannis, draws on input from several executives to clarify the complexities and steps involved in the process.

Although the post focuses on enterprise services, the process will look no different for advertising. It will move the industry from manual management to autonomous processes capable of cross-company coordination and of running full campaigns.

Instead of humans logging in to platforms to adjust bids or approve creative, agents will handle the entire lifecycle of a campaign.

The change requires a new approach to trust. This is particularly important when agents act across enterprise boundaries, where organizations need shared policies for identity verification and data sharing, according to Ashwin Ram, Google distinguished engineer and senior director of AI.

Ram describes "zero trust" models as a security approach in which nothing is trusted by default. This means that even if an AI agent is inside a company's network, it must prove its identity and permission, through a kind of digital passport, for each action or data request.
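
As a rough illustration of that per-request check, the sketch below shows a hypothetical "digital passport": a signed set of claims that is re-verified before every single action, even for an agent already inside the network. The names, the shared-secret signing scheme, and the permission list are assumptions made for illustration, not Google's implementation.

```python
import hmac, hashlib, json, time

# Shared secret between the verifier and the agent's identity issuer
# (hypothetical; a real deployment would use asymmetric keys or a token service).
ISSUER_SECRET = b"demo-issuer-secret"

def issue_passport(agent_id: str, permissions: list[str]) -> dict:
    """Issue a signed 'digital passport' stating who the agent is and what it may do."""
    claims = {"agent_id": agent_id, "permissions": permissions, "issued_at": time.time()}
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify_and_authorize(passport: dict, requested_action: str) -> bool:
    """Zero trust: re-verify identity and permission on every action, even inside the network."""
    payload = json.dumps(passport["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, passport["signature"]):
        return False  # identity could not be proven
    return requested_action in passport["claims"]["permissions"]

passport = issue_passport("bidding-agent-7", ["read_campaign", "adjust_bid"])
print(verify_and_authorize(passport, "adjust_bid"))       # True
print(verify_and_authorize(passport, "delete_campaign"))  # False
```

A production system would rely on a proper token service rather than a shared secret, but the zero-trust principle is the same: verify identity and authorization on every call, never by default.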

This is the way agents will need to interact and exchange information. It will require new security systems, such as assessing machine behavior across full multi-agent networks and testing for quality, latency, cost and business impact.

This raises questions about which actions an AI agent can take on behalf of another company, and under which security and process agreements.

Six considerations stand out for John Abel, senior technical director in the Office of the CTO at Google Cloud.

Abel suggests that companies treat agents as a contracted service. Organizations must define the risk levels within which these agents will operate, and data schemas, standards, and connection protocols must be agreed upon by all participants.

Abel says humans should remain in the loop, so that when one agent's decisions affect other agents, people can evaluate the changes that require verification. He added that organizations need clarity in advance on the costs and commercial terms of each agent partnership.

Trust, interoperability, and human control will be the most important factors, according to Ben McCormack. He wrote that security and governance require rethinking, and that agents need clear limitations and hard-coded guardrails -- for example, an agent may edit files but never delete them.

APIs should act as a rulebook that enforces rules deterministically, and for sensitive data, a "paranoid mode" requiring user confirmation before high-risk actions adds an important check, McCormack wrote. Autonomy should be granted as a gradual progression, not handed to agents all at once.
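
A minimal sketch of those two ideas, assuming a hypothetical file-editing API exposed to an agent: deletes are refused deterministically in code, and edits that touch sensitive paths wait for a human's confirmation. The paths, function names, and confirmation step are invented for illustration.

```python
# Hypothetical guardrail wrapper around an agent-facing file API: edits are allowed,
# deletes are refused outright, and actions touching sensitive data require human
# confirmation before they run ("paranoid mode").
SENSITIVE_PREFIXES = ("finance/", "customer_pii/")

def confirm_with_human(action: str, path: str) -> bool:
    """Stand-in for a real approval step (e.g., a ticket or chat prompt)."""
    answer = input(f"Approve {action} on {path}? [y/N] ")
    return answer.strip().lower() == "y"

def agent_file_request(action: str, path: str, content: str = "") -> str:
    if action == "delete":
        return "DENIED: agents may edit files but never delete them"
    if action == "edit":
        if path.startswith(SENSITIVE_PREFIXES) and not confirm_with_human(action, path):
            return "DENIED: human confirmation required for sensitive data"
        with open(path, "w") as f:   # the deterministic rules live in the API,
            f.write(content)         # not in the model's judgment
        return "OK: file updated"
    return f"DENIED: unknown action '{action}'"
```

The point of putting the rules in the API rather than the prompt is that they hold no matter what the model decides to attempt.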

Yingchao Huang, Google software engineer, Office of the CTO, suggested that businesses should build dedicated APIs and data connectors designed for agents and focus on data transformation, ensuring that data governance travels with the data.

When an agent creates something new from a partner's source material, the original access controls, retention policies, and audit trails should carry over.
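
One way to picture governance traveling with the data, as a hedged sketch: attach the access controls, retention policy, and audit trail to the data object itself, and copy them onto anything an agent derives from it. The class and field names here are hypothetical, not a Google API.

```python
from dataclasses import dataclass, field
import time

# Hypothetical container in which governance metadata rides along with the content,
# so derived artifacts inherit access controls, retention, and audit history.
@dataclass
class GovernedData:
    content: str
    access_roles: set
    retention_days: int
    audit_trail: list = field(default_factory=list)

def derive(source: GovernedData, new_content: str, agent_id: str) -> GovernedData:
    """Create a new artifact that inherits the source's governance settings."""
    return GovernedData(
        content=new_content,
        access_roles=set(source.access_roles),   # original access controls carry over
        retention_days=source.retention_days,    # so does the retention policy
        audit_trail=source.audit_trail + [f"{agent_id} derived content at {time.time()}"],
    )

partner_brief = GovernedData("Q3 media plan", {"partner_ops", "agency_ai"}, retention_days=90)
summary = derive(partner_brief, "Summary of Q3 media plan", agent_id="summarizer-agent")
print(summary.access_roles, summary.retention_days, summary.audit_trail)
```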

Antonio Gulli, Google senior director and distinguished engineer of the CTO Office for AI, Cloud and Search, warned that agents depend on continuous learning, taking in new market signals, code patterns and legal precedents to stay accurate.

Without the feedback loop, agents operating across organizations rapidly become outdated -- especially as regulations, markets, and partner systems keep shifting.
