Commentary

Persuade Me: AI's Large Language Models Become Persuasive

As Google continues to roll out new products and features built on artificial intelligence (AI), one AI startup gaining traction keeps talking about how persuasive its large language models have become.

New research from Anthropic posted Tuesday examines how its latest models compare to humans when it comes to persuasiveness.

Researchers found that "each successive model generation is rated to be more persuasive than the previous" one -- and that its latest and most capable model, Claude 3 Opus, "produces arguments that don't statistically differ in their persuasiveness" when compared with arguments written by humans.

Persuasiveness increases across model generations within both classes of models it tested, the company said. Researchers chose to study persuasion because it is a skill used widely in everyday life.


Advertisers try to persuade people to buy products, healthcare providers try to persuade people to make healthier lifestyle changes, and politicians try to persuade people to support their policies and vote for them. Persuasion will become even more important as companies including Google and Microsoft learn how to build advertising into their AI models.

"Developing ways to measure the persuasive capabilities of AI models is important because it serves as a proxy measure of how well AI models can match human skill in an important domain," researchers wrote. They also realize that "persuasion may ultimately be tied to certain kinds of misuse, such as using AI to generate disinformation, or persuading people to take actions against their own interests."

This research focused mostly on complex and emerging issues where people are less likely to hold hardened views, such as online content moderation, ethical guidelines for space exploration, and the appropriate use of AI-generated content.

Researchers hypothesized that people's opinions on these topics might be more malleable and susceptible to persuasion because there is less public discourse and people may have less established views.

Opinions on controversial issues such as politics are typically more entrenched, which can blunt the effect of persuasive arguments.

A central question around machine persuasiveness is how long it will take for machines to become more persuasive than humans.

Elon Musk, the owner of X, co-founder of OpenAI, and CEO of Tesla, predicted on Monday that AI would most likely become smarter than the smartest human by 2025 or 2026.

Researchers acknowledged the need to further study and fully understand the implications of the technology.

To support that effort, the company released all of the data from this work -- claims, arguments, and persuasiveness scores -- for other researchers to investigate and build upon.

