Artificial intelligence might be a relatively new technology, but companies deploying it remain subject to the same consumer protection and civil rights laws that have existed for decades. That's according to Federal Trade Commissioner Alvaro Bedoya, who spoke this week at a conference of the International Association of Privacy Professionals.
Artificial intelligence, or “AI,” is already regulated, Bedoya emphasized in a 21-minute address.
“There is a very powerful myth out there that AI is unregulated,” he said, adding that the concept has a “powerful, intuitive appeal,” which he described as follows: “How can our dusty old laws apply to mysterious new technologies?”
In fact, he said, those current laws do govern artificial intelligence companies.
“Unfair and deceptive trade practices apply to AI,” he said. “If a company injures consumers in a way that satisfies our test for unfairness when using or releasing AI, that company can be held accountable.”
He added that products or services powered by artificial intelligence are also subject to civil rights laws, as well as laws regarding liability for injuries.
“There is no AI carve out,” he said.
Claims that artificial intelligence is unregulated only benefit a “small subset of companies that are uninterested in compliance,” the FTC commissioner added.
“We've heard these lines before: We're not a taxi company, we're a tech company. We're not a hotel company, we're a tech company,” he said. “These statements are usually followed by claims that state or local regulations could not possibly apply to those companies.”
Bedoya also called for more transparency from developers, specifically criticizing a recent OpenAI report that included the following language: “Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.”
“This is a mistake,” Bedoya said, adding that outside researchers and government officials “need to be involved in analyzing and stress testing these models, and it's hard to see how that can be done with this kind of opacity.”
His remarks came as some industry watchers are increasingly voicing skepticism of artificial intelligence. Just last week, the advocacy group Center for AI and Digital Policy -- founded by longtime privacy advocate Marc Rotenberg -- urged the FTC to issue an order halting further commercial releases of the language model software GPT-4.
The software “is biased, deceptive, and a risk to privacy and public safety,” the organization wrote in its complaint.
Earlier this week, President Joe Biden also raised questions about artificial intelligence.
“AI can help deal with some very difficult challenges like disease and climate change, but we also have to address the potential risks to our society, to our economy, to our national security,” Biden said during a meeting with the President's Council of Advisors on Science and Technology.
When asked whether he thinks artificial intelligence is “dangerous,” he responded: “It remains to be seen. It could be.”