The New York State attorney general’s office has launched investigations into Amazon’s Twitch, as well as 4chan, 8chan and Discord, in the wake of the domestic terrorist attack in Buffalo in which 10 supermarket shoppers were shot to death and three others were wounded.
The investigations will probe how social platforms and other online resources may have been used by the
shooter to plan, discuss, stream or promote the attack.
“The terror attack in Buffalo has once again revealed the depths and danger of the online forums that spread and promote
hate,” said New York Attorney General Letitia James, in a statement.
“The fact that an individual can post detailed plans to commit such an act of hate without consequence,
and then stream it for the world to see is bone-chilling and unfathomable,” she added. “Time and time again, we have seen the real-world devastation that is borne of these dangerous and
hateful platforms, and we are doing everything in our power to shine a spotlight on this alarming behavior and take action to ensure it never happens again.”
A person identifying himself as Payton Gendron outlined attack plans on Discord. Discord told NPR.org that the suspect used a private forum on the site as a journal, and that it was viewed only by a small group of people invited in shortly before the attack.
The suspect appears to have credited racist content on 4chan with influencing a 180-page screed he allegedly posted online.
The suspect also livestreamed
the attack on Twitch. Amazon said it removed the feed less than two minutes after it started, but the footage nevertheless spread across the internet.
The AG received a referral from New York Governor Kathy Hochul to conduct the investigations, based on a state law permitting the attorney general to investigate matters concerning public safety and justice.
However, under Section 230 of the
Communications Decency Act, internet platforms are in most circumstances protected from lawsuits based on material created by third parties.
One exception would be if federal prosecutors named
a platform as a co-defendant in a hate crime, Santa Clara University School of Law professor Eric Goldman told NPR. “However, the odds that an internet service would be held liable for a user’s hate crimes are very, very low,” he said.
To date, most courts have upheld platforms’ content moderation activities as being protected by the First Amendment. Proving that a platform had
sufficient knowledge of illegal actions to support charging it as a defendant in a crime would be extremely difficult, according to Goldman.
But attempts to challenge Section 230 based on
claims of anti-conservative bias have been increasingly aggressive, and last week, one succeeded, albeit probably temporarily.
A federal appellate panel ruled that Texas can enforce its social media law, which prohibits Twitter, Facebook and YouTube from suppressing users’ posts based on viewpoint. Tech industry groups are appealing that decision, in line with their all-out defense of Section 230 against all challenges.
Goldman has called the Texas law “brazenly unconstitutional.”
Last year, Florida also passed a law challenging Section 230 by subjecting large social media services to fines of up to $250,000 per day for “deplatforming” candidates for statewide office, and $25,000 per day for candidates for other offices.
The tech industry successfully challenged
that measure, but Florida officials are appealing the ruling.
Various proposals have also sought to limit Section 230.
Notably, the SAFE TECH Act introduced last May by Sens. Mark
Warner, Mazie Hirono and Amy Klobuchar seeks to keep Section 230 from enabling fraud, exploitation, threats, extremism and consumer harms — including fraud through advertising.
Under
that legislation, online platforms would not be able to claim immunity under Section 230 for alleged violations of federal or state criminal or civil rights laws, antitrust laws, stalking and
intimidation laws, international human rights laws or civil actions for wrongful death. They would also not be shielded from complying with court orders.
The law would also “strip companies of immunity for any speech they were paid to carry, including ads and marketplace listings,” summed up Politico. It would discourage platforms from carrying ads for fraudulent products or scams, and clarify that platforms can be held responsible for facilitating any ads that violate civil rights statutes.
Currently, Section 230 makes it difficult to bring cases against platforms for carrying ads that allegedly violate civil rights laws, although Facebook did settle a suit in 2019 alleging that it had allowed advertisers to place housing, employment and credit ads that could exclude users based on race.
But individuals also keep trying to take on Section 230 through court challenges, and again, one has recently scored a partial
win.
Earlier this month, a federal judge cited Section 230’s protection of platforms’ content moderation decisions in dismissing claims by journalist Alex Berenson that Twitter violated his free speech rights by banning him, allegedly for assertions about COVID-19 vaccines’ limited efficacy. However, the judge ruled that Berenson can proceed with a claim that a Twitter executive failed to follow through on promises about how the company would apply its misinformation policies, and so broke a “contract” with him.
As is generally the case with First Amendment issues, none of this is going to get any less
complicated. And as with many critical issues facing the country, the fate of Section 230 may come down to whether either political party manages to secure commanding majorities in the Senate and
House.