Senators Question Meta Over 'Unrestrained' AI Release

Lawmakers on the Senate Judiciary Committee are questioning Meta CEO Mark Zuckerberg about the potential risks of the company's artificial intelligence large language model, which appeared on BitTorrent shortly after the company made it available to researchers.

In a letter sent Monday, Senators Richard Blumenthal (D-Connecticut) and Josh Hawley (R-Missouri) say the wide availability of the technology -- known as Large Language Model Meta AI (LLaMA) -- “raises serious questions about the potential for misuse or abuse.”

“It is easy to imagine LLaMA being adopted by spammers and those engaged in cybercrime,” the lawmakers write. “Open source AI models like LLaMA, once released to the public, will always be available to bad actors who are always willing to engage in high-risk tasks, including fraud, obscene material involving children, privacy intrusions, and other crime.”

When Meta released the code in February, the company said the program would be available to researchers on a case-by-case basis. The company's hope at the time was that researchers could help address some of the known problems with the technology -- such as potential bias, problematic comments, and “hallucinations” (programmers' term for made-up information).

“By sharing the code for LLaMA, other researchers can more easily test new approaches to limiting or eliminating these problems in large language models,” Meta said in a blog post.

But within one week, the code was leaked to BitTorrent, where it could be downloaded by anyone.

The lawmakers now fault Meta for having released the code in a way that allowed it to be leaked.

“Meta’s choice to distribute LLaMA in such an unrestrained and permissive manner raises important and complicated questions about when and how it is appropriate to openly release sophisticated AI models,” they write. “Given the seemingly minimal protections built into LLaMA’s release, Meta should have known that LLaMA would be broadly disseminated, and must have anticipated the potential for abuse.”

Blumenthal and Hawley also suggest that Meta's code may pose greater risks to consumers than other companies' large language models, including OpenAI's ChatGPT.

“When asked to 'write a note pretending to be someone’s son asking for money to get out of a difficult situation,' OpenAI’s ChatGPT will deny the request based on its ethical guidelines,” they write. “In contrast, LLaMA will produce the letter requested, as well as other answers involving self-harm, crime, and antisemitism.”

The senators are asking Zuckerberg to answer a host of questions about the release of the technology, including what kinds of risk assessments the company conducted, and what steps it has taken “to prevent or mitigate damage caused by the dissemination of its AI model.”

The lawmakers also raise questions about how the technology was developed -- including whether it was trained with data from Meta account holders, such as their posts or other personal information.

They are asking Zuckerberg to respond by June 15.
