LLM Search Integrates Into TV Entertainment Queries

Conversational search and personalized recommendations for connected TV (CTV) advanced on Wednesday when Gracenote, the content data business unit of Nielsen, launched a content protocol server that combines large language models (LLMs) with entertainment data.

Gracenote rolled out its Model Context Protocol (MCP) Server, built on an open standard created by Anthropic and since adopted widely. It allows any company that delivers entertainment experiences to build detailed, real-time search into its products, drawing on LLM-driven inference while ensuring the answers are real and sourced against validated data.
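Gracenote has not published its server internals, but MCP itself is an open specification: clients talk to servers over JSON-RPC 2.0, discovering available tools with a `tools/list` request and invoking one with `tools/call`. A minimal sketch of what such a request might look like follows; the tool name `search_episodes` and its arguments are hypothetical, not Gracenote's actual API.

```python
import json

# Hypothetical MCP tool-call request. MCP messages are JSON-RPC 2.0;
# "tools/call" is the method the spec defines for invoking a server tool.
# The tool name and argument names below are illustrative only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_episodes",
        "arguments": {
            "series": "Brooklyn Nine-Nine",
            "dialogue_mentions": "Die Hard",
        },
    },
}

# Serialized, this is what would travel from the client to the MCP server.
payload = json.dumps(request)
```

The server's reply (also JSON-RPC) carries the tool's result, which the LLM then uses to compose its answer.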

Tyler Bell, Gracenote senior vice president of product, described it as an agentic system that links an LLM to the MCP server.

“The protocol exists as a URL that ties to tools,” Bell said, explaining that most companies still link that search box to a server. “In a preferred world, like when I was at Roku, you would always have exactly what the viewer wanted to view on the home screen. In that world you need to know the user, what’s hot, and what people are talking about.”
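The "URL that ties to tools" idea can be sketched as a registry mapping tool names to functions that run against validated data: the LLM chooses a tool and its arguments, and the server executes the lookup, so answers come from the catalog rather than from the model's memory. Everything below, including the tool name and the toy catalog, is a hypothetical illustration of that pattern.

```python
# Hypothetical sketch: an MCP-style server exposes named tools; the LLM
# picks a tool and arguments, and the server runs it against validated data.

CATALOG = [  # stand-in for a validated entertainment dataset
    {"title": "Parasite", "year": 2019, "oscar_best_picture": True},
    {"title": "Nomadland", "year": 2020, "oscar_best_picture": True},
    {"title": "Tenet", "year": 2020, "oscar_best_picture": False},
]

def find_best_picture_winners(since_year: int) -> list[str]:
    """Return only verified Best Picture winners from `since_year` onward."""
    return [m["title"] for m in CATALOG
            if m["oscar_best_picture"] and m["year"] >= since_year]

# The registry the protocol endpoint would expose to the LLM.
TOOLS = {"find_best_picture_winners": find_best_picture_winners}

def dispatch(tool_name: str, **kwargs):
    """What the server does when the LLM issues a tool-call request."""
    return TOOLS[tool_name](**kwargs)

winners = dispatch("find_best_picture_winners", since_year=2019)
```

Because the function only reads from the catalog, the model cannot invent a winner that is not in the validated data.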


Bell joined Gracenote with experience at companies including Roku, as well as The Trade Desk, where he supported the development of Ventura, its operating system.

Today, Bell believes Gracenote’s LLM-based search protocol takes CTV to the next level through conversational queries that tie together two or more concepts.

He described how the technology allows TV platforms to answer complex queries and make recommendations based on a range of detailed parameters like “Show me the episodes of Brooklyn Nine-Nine in which Jake references Die Hard,” or “The Academy Awards are on this week. List the twenty highest-grossing Oscar-winning films from the last ten years.”

The LLM can provide detailed, structured lists drawn from disparate data. Gracenote's customers can connect to the MCP server from their own codebase or through a chat interface such as Gemini to power consumer-facing entertainment platforms such as Roku.
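One simplified way the "real and sourced" guarantee can work on the serving side: the LLM drafts an answer, and the platform keeps only the entries it can verify against the validated dataset before showing them to the viewer. The titles and helper below are hypothetical, not Gracenote's implementation.

```python
# Hypothetical sketch: filter an LLM's draft list so that only titles
# verified against validated data survive, discarding any hallucinations.

VALIDATED_TITLES = {"Oppenheimer", "Everything Everywhere All at Once", "CODA"}

def ground(llm_draft: list[str]) -> list[str]:
    """Keep only titles that exist in the validated dataset."""
    return [t for t in llm_draft if t in VALIDATED_TITLES]

# A draft containing one invented title ("The Midnight Oscar" is fake):
draft = ["Oppenheimer", "CODA", "The Midnight Oscar"]
answer = ground(draft)
```

The invented title is silently dropped, so the structured list the viewer sees contains only entries backed by the data.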

It’s important to train the LLMs appropriately. When asked how to make an LLM “unlearn” something that isn’t true, Bell said, “It’s very difficult, because LLMs do not have introspection. If you tell it the answer wasn’t correct, they cannot unlearn it.”

The LLM might return an answer like “I’m terribly sorry I got that wrong,” but it cannot go back and fix it.

Conversations are the only way to influence the model and correct the information.
