As the 2020 U.S. presidential election approaches, Facebook is struggling to justify its controversial policy of letting politicians run false or misleading ads on its platform.
In its explanation for why it approved an ad from President Donald Trump making false claims about former Vice President Joe Biden, Facebook noted that broadcast stations have run the ad nearly 1,000 times.
“Looks like broadcast stations across the country have aired this ad nearly 1,000 times, as required by law,” Facebook press representatives tweeted over the weekend.
Facebook also reiterated its strategy of not wanting to police political debate.
“FCC doesn’t want broadcast companies censoring candidates’ speech,” it tweeted. “We agree it’s better to let voters -- not companies -- decide.”
The message was tweeted at Sen. Elizabeth Warren (D-Mass.), who has distinguished herself as a major critic of Facebook’s content policies.
In an effort to needle Facebook, Warren recently ran a Facebook ad suggesting the tech titan and its head Mark Zuckerberg had officially endorsed Trump for re-election.
Soon after the ad ran, Warren’s camp admitted the false claim was merely a means of poking holes in Facebook’s political ad policies.
Nick Clegg, Facebook’s vice president of global affairs and communications, has recently been tasked with trying to explain the company’s odd content policies.
“We don’t believe … that it’s an appropriate role for us to referee political debates and prevent a politician’s speech from reaching its audience and being subject to public debate and scrutiny,” Clegg wrote in a recent blog post. “That’s why Facebook exempts politicians from our third-party fact-checking program."
The program to which Clegg referred was put in place to prevent users from spreading false news and other types of viral misinformation -- like memes, manipulated photos and videos -- on Facebook.
Facebook has maintained since 2016 that questionable or downright false content spread by politicians is acceptable due to its “newsworthiness.”
But the policy has been the source of much confusion and frustration. Earlier this year, for example, Facebook drew widespread criticism for refusing to take down a video manipulated to make House Speaker Nancy Pelosi appear impaired. A year earlier, the social giant was blasted for not removing Donald Trump’s post proposing a ban on Muslim immigration.
Thank you for the article.
It may be important to remember several things:
1) In 2016, the year of the U.S. presidential election, Facebook made 528% more money selling ads on its pages than it did in 2012, the year of the prior U.S. presidential election.
2) According to the Mueller Report, Russia bought over 3,500 Facebook advertisements before, during and after the 2016 presidential election, including anti-Clinton and pro-Trump advertisements [Buzzfeed, 4/18/2019] -- documented evidence illustrating Russia’s interest in influencing the U.S. election.
3) Vladimir Putin, Russia's President, has been called the wealthiest person on the planet [Fortune, 7/29/2019 “Vladimir Putin Is Reportedly Richer Than Bill Gates and Jeff Bezos Combined”].
What if Putin decided to flood Facebook with billions of dollars’ worth of ads supporting Trump, paid for either by Putin himself, the Russian government, or shadow companies, so that it cannot be easily determined who paid for the ads?
Perhaps this scenario goes beyond the original scope of this article, but it presents us with another potential problem: a foreign government having the ability to easily interfere in U.S. elections through social media. It has been reported that the cracks that allowed Russia to interfere in 2016 have gotten wider since then.
I'm interested to know whether you believe this is a potential (related) problem worth reporting on, too. Thank you very much.