Missouri Attorney General Andrew Bailey's recent proposal to regulate online content moderation raises "profound constitutional and practical red flags."
That's according to the Vanderbilt University think tank The Future of Free Speech, which weighed in against the proposal this week.
"While the rule is couched in the language of consumer protection, at its core, it is a sweeping attempt to dictate how platforms design, organize, and deliver speech online," the organization writes in comments submitted to the attorney general's office.
Bailey's proposal, unveiled in May, would require large tech platforms to allow Missouri residents to, in his office's words, “choose their own content moderators rather than being forced to rely on the biased algorithms of monopolistic tech giants.”
The proposed rule specifically would require tech companies to offer Missouri users a “choice screen” that would allow them “to choose among competing content moderators, if any competing content moderators have sought access to the platform.”
The Future of Free Speech says this scheme would violate the First Amendment by interfering with social media companies' editorial decisions.
"Much like a newspaper cannot be forced to publish external inserts or opposing editorials, a digital platform cannot be forced to support alternative content moderation systems simply because the government deems them more ideologically balanced," the group writes.
The Supreme Court said last year that social media platforms, like other publishers, have First Amendment rights to determine their own editorial policies.
The First Amendment “does not go on leave when social media are involved,” Justice Elena Kagan wrote in a ruling handed down last July.
“This Court has many times held, in many contexts, that it is no job for government to decide what counts as the right balance of private expression -- to 'un-bias' what it thinks biased, rather than to leave such judgments to speakers and their audiences,” she added. “That principle works for social-media platforms as it does for others.”
The tech industry group NetChoice -- which counts Google, Meta and other large companies as members -- also criticized the proposal.
"When a service decides to limit the kinds of content it will disseminate, it makes an expressive choice about the kinds of content that it considers most useful and the kind of community it seeks to foster," NetChoice writes.
"Just as a magazine dedicated to politics will necessarily publish different articles and cultivate a different reader base from a magazine dedicated to fashion, websites can make different judgments about the content that will shape their communities," the group adds.
NetChoice also says the proposal threatens privacy and security because it would require social media companies to open their platforms to unvetted moderators.
"This lack of oversight creates a significant risk where bad actors could pose as moderation services to gain access to sensitive user data," the group adds.