Privacy watchdogs are urging the Federal Trade Commission to stop Meta Platforms from moving forward with a plan to target ads to Facebook and Instagram users based on their
conversations with chatbots.
"This unprecedented use of deeply sensitive data presents outsized risks to consumer data privacy and security," the Electronic Privacy Information
Center, the Center for Digital Democracy and other groups say in a letter sent to FTC Chair Andrew Ferguson.
"Meta’s plan to repurpose conversational AI data for advertising
illustrates the dangers of regulatory inaction: absent intervention, the practice will normalize a level of surveillance that is qualitatively more intrusive than traditional behavioral tracking," the
groups add.
The organizations also claim that Meta's plan is "at odds with" an FTC settlement approved in 2020. The order in that matter includes provisions requiring the company to conduct detailed reviews and privacy
assessments for each new service that collects or uses consumer data, and to implement privacy safeguards in response to any risks posed by new products.
"Conversational data
generated through AI chat interactions is substantially more sensitive than ordinary behavioral data, as it may reveal personal relationships, mental health concerns, political views, and other
intimate information," the groups write. "Accordingly, Meta must conduct a comprehensive privacy and security risk assessment of this practice, and that assessment must be subject to formal FTC
oversight, review, and documentation."
The letter comes in response to Meta's announcement that starting December 16, it will harness chatbot data for ad targeting -- and will
not allow users to opt out.
Meta said in a blog post earlier this month that its
chatbot "update" will help the company improve recommendations so that users are "more likely to see content they’re actually interested in -- and less of the content they’re not."
"For example, if you chat with Meta AI about hiking, we may learn that you’re interested in hiking -- just as we would if you posted a reel about hiking or liked a hiking-related
Page," the company wrote. "As a result, you might start seeing recommendations for hiking groups, posts from friends about trails, or ads for hiking boots."
The company added
that it won't use topics such as people's "religious views, sexual orientation, political views, health, racial or ethnic origin, philosophical beliefs, or trade union membership" to show ads.
The advocacy groups argue to the FTC that conversational data "is uniquely revealing, capturing intimate details of users’ personal lives, relationships, health, and
beliefs."
"At a minimum, this category of data should be treated as sensitive information requiring affirmative, opt-in consent from adults (and a categorical prohibition on
monetization of youth data) before it can be used for advertising," they write. "Failing to impose this standard would effectively permit firms to unilaterally exploit the most private forms of
digital interaction for commercial gain."
The organizations add that Meta's plan "appears to constitute an unfair or deceptive practice."
"The use of
private conversational data for advertising imposes a substantial injury that consumers cannot reasonably avoid, and which is not outweighed by countervailing benefits," the groups write. "Harms
include loss of privacy, the chilling of free expression, discrimination via algorithmic profiling, and reputational or psychological risk."
The groups make several requests of the FTC,
including that it launch a formal investigation into the company's decision to harness chat data without "informed, opt-in consent," and that it force Meta to suspend the rollout pending that investigation.