Commentary

Can ChatGPT Change Your Mind?

If you’ve ever had a “conversation” with GPT-4, you know how easy it is to get drawn into what feels like an organic interaction with a software platform. That never happens with a phone interactive voice response (IVR) system. But with AI, that’s all about to change.

In a randomized, controlled study, GPT-4 proved better at shifting opinions during a debate than human participants, especially when it was given personal information about the people it was debating (humans given the same information showed no such advantage).

The impact was substantial: the AI increased the likelihood of changing someone’s mind by 87% compared with its human counterparts. This effectiveness could explain why another study found that GPT-4 was able to handle a particularly challenging conversational task: debating with conspiracy theorists.

In a trial held in Lausanne, Switzerland, and Trento, Italy, involving three rounds of debates where GPT-4 took the contrary position, it significantly reduced beliefs in conspiracy theories. Impressively, these effects were sustained over time, even among those deeply entrenched in such beliefs.

I asked Gary Marcus, a leading voice in AI and professor emeritus at NYU, about this finding. Would those persuasive powers be used truthfully?

“Truth is clearly going to be a casualty of the LLM era,” said Marcus. “These systems don’t understand truth, and can and will be misused by those who don’t care about the truth to undermine the truth.”

Andrew Orlowski, who writes a weekly column for The Daily Telegraph, is skeptical. He understands why people want to trust the bots “when common sense suggests they shouldn't. It's their bedside manner. You have to be fairly strong-minded to disagree with one.”

So, can AI convince humans to make bad decisions? It depends on how the owners of the platforms design their programs. Not all AI systems are created equal.

Here are a few ways this could happen:

Misinformation and disinformation: AI can, and will, be used to create and spread false or misleading information. Deepfakes can be highly convincing, manipulating public opinion and personal beliefs. Those watching election security should take note.

Biased recommendations: If an AI system has been trained on biased data or flawed algorithms, it can make biased recommendations and gently nudge humans toward poor decisions.

Manipulative marketing: AI can optimize and personalize marketing strategies to exploit vulnerabilities or psychological biases of individuals. For instance, it can encourage excessive spending or promote unhealthy products by targeting susceptible audiences.

Security vulnerabilities: Malicious use of AI in cybersecurity attacks can trick individuals and organizations into making security mistakes, such as revealing sensitive information or allowing unauthorized access to systems.

As Orlowski sees it, “People who are feeble-minded or intellectually insecure are the people most likely to be impressed by LLMs producing art or arguments. Wow, this is amazing! But then they've already ‘lost their minds.’ Cults target such people - and so do AI grifters. They're easy meat.”

AI technologies like GPT-4 are reshaping our interaction with tech, bringing us closer to real, human-like conversations. Studies like the one referenced above show that GPT-4 isn't just chatting; it's effectively changing minds, sometimes even in tough debates with conspiracy theorists.

But there's a flip side. Experts like Gary Marcus remind us that AI doesn’t really grasp the concept of truth, which can lead to its misuse in spreading false information or manipulating opinions. Andrew Orlowski also points out that it’s easy to trust these systems a bit too much, even when we probably shouldn’t.

As AI grows more sophisticated, keeping an eye on the ethical side of things is just as important as celebrating the tech advances. We need to be mindful of how AI is used, making sure it helps rather than harms.
