The dangers of AI were vividly illustrated last week when a German weekly pulled a cruel stunt that led to the editor being fired and to a probable lawsuit.
Die Aktuelle ran an interview with Formula One champion Michael Schumacher. The cover featured a photo of Schumacher with the headline, “Michael Schumacher, the first interview.”
There was only one problem: Schumacher hasn’t been seen in public since 2013, when he suffered grave brain damage in a skiing accident. The purported interview was generated by AI.
The quotes were entirely fabricated, like this one: “I was so badly injured that I lay for months in a kind of artificial coma, because otherwise my body couldn’t have dealt with it all.”
There were ample hints that the interview was fraudulent, including a subhead that said, “It sounded deceptively real.” But Schumacher’s family was outraged at this insensitivity, and is planning litigation, according to media reports.
Editor Anne Hoffman, who had led the magazine since 2009, was fired, and publisher FUNKE issued a statement saying, “This tasteless and misleading article should never have appeared.” Hoffman has also apologized.
The controversy has upended the German media industry.
Of course, this episode was not caused solely by AI: editorial talent devised this clumsy attempt at parody and decided to make a cover story out of it.
The danger is that AI could just as well be employed to create a fraudulent interview with, say, King Charles or Joe Biden. Bad actors can easily use AI to spread misinformation, and doubtless already have.
So is AI at fault here?
Not by itself. The AI generated the quotes after being fed certain prompts, and it failed to discern that Schumacher could not have said these things.
But you can’t blame the technology. Sadly, it was a failure of human judgment.