Commentary

Google's Marvell Partnership Could Reduce 'Efficiency Tax'

Google’s negotiation with Marvell Technology to co-develop two new chips aimed at running models more efficiently would signal a shift in its custom silicon strategy toward artificial intelligence (AI) inference.

That is the moment an AI model uses its training to answer a question, identify an image, or make a prediction.

The technology would help to reduce the “efficiency tax” that companies have incurred since switching to AI-run advertising systems.

The tax refers to hidden costs related to overhead and lost performance caused by outdated infrastructure, middleman fees, and the massive computational expenses of running AI.

If the deal goes through as The Information reported on Sunday, citing two people with knowledge of the discussions, it will give advertisers and their agency partners as well as technology platform companies a needed push and theoretically a reduction in costs.


Reportedly, it would help to lower the total cost of ownership and operation of AI-related technology, which in turn could make it less expensive to create, buy media, target ads, and analyze performance and outcomes.

As advertising has shifted from "keyword matching" to complex "AI-driven targeting," this “efficiency tax” has manifested through infrastructure costs, monetization fees, and privacy-related costs.

It also would allow Google to diversify beyond working solely with Broadcom, even though the two companies announced a long-term agreement to design and supply TPUs and networking components through 2031.

The move diversifies Google’s supplier base rather than reducing partner participation, and demonstrates interoperability.

Google and Marvell also are reportedly in discussions to develop a memory processing unit and a new tensor processing unit (TPU) for large language models. A TPU is an application-specific integrated circuit (ASIC) developed by Google to accelerate neural-network machine learning, including AI training. The goal is to finalize the design of the memory processing unit in 2027.

Marvell Technology -- a semiconductor company that provides technology for data infrastructure -- would help give Google a major leap forward in supporting the advertising industry.

Any improvement in AI hardware performance translates directly into higher margins and more effective ad products for a company like Google, where advertising generates most of the revenue.

It’s not that Google could not have achieved this alone with DeepMind in its corner, but Marvell is a highly respected semiconductor company.

Google DeepMind has been working on AI inference in various forms for more than a decade, although it may not have been referred to by that exact phrase.

For example, DeepMind in 2013 published research on AI systems that could play Atari games requiring real-time decision-making, a form of high-speed inference.

Researchers at the company have worked to solve the problem of running AI models efficiently to power features such as voice recognition and image search.
