Corralling AI: Content Created By Bots Should Be Labeled

One problem with artificial intelligence (AI) is that it is difficult to determine where the intelligence ends and the artificial begins.

This subject took on a certain urgency last fall when OpenAI introduced ChatGPT.

“Within weeks, the chatbot amassed 100 million users and spawned competitors like Google’s Bard,” writes Valérie Pisano, president and CEO of Mila, a non-profit artificial intelligence research institute based in Montreal that focuses largely on governance. 

Writing in Maclean’s, Pisano adds: “All of a sudden, these applications are co-writing our emails, mimicking our speech and helping users create fake (but funny) photos.”

The problem is that “most countries don’t have any AI-focused regulations in place — no best practices for use and no clear penalties to prevent bad actors from using these tools to do harm,” Pisano adds.

However, the European Union has “taken the lead with its AI Act, the first AI-specific rules in the Western world, which it began drafting two years ago,” Pisano writes.

The main requirement? That any company “deploying generative AI tools like ChatGPT, in any capacity, will have to publish a summary of the copyrighted data used to train it.”

This need is a practical one: As far as we can see, the EU is not making the melodramatic argument that AI could lead to the extinction of humanity, as The Centre for AI Safety website suggests, according to BBC News. 

Mila argues that Canada should adopt one key measure from the EU — that “developers and companies must disclose when they use or promote content made by AI. Any photos produced using the text-to-image generator DALL-E 2 could come with watermarks, while audio files could come with a disclaimer from a chatbot — whatever makes it immediately clear to anyone seeing, hearing or otherwise engaging with the content that it was made with an assist from machines.”

This type of transparency is already expected of historians and journalists when they cite their sources. Indeed, a professor who works with Mila allows students to use ChatGPT to compile literature reviews at the start of their papers — provided they label the bot-generated parts.

It is difficult to foresee a national AI bill taking hold in the U.S. — we can’t even seem to pass a federal privacy law. It may be up to the states to take the lead. 
