After a week of apologies related to the Cambridge Analytica scandal, Facebook had another mistake to atone for on Friday.
This time, the slip-up stems from CEO Mark Zuckerberg and other Facebook executives being allowed to delete Messenger messages they had already sent -- a feature not offered to other Messenger users.
Trying to explain the discrepancy, a company spokesperson said Zuckerberg and other executives were using an encrypted version of Messenger, which included a “timer” that allowed them to have their messages automatically deleted.
But analysts weren't buying the explanation on Friday.
“It shows that Zuckerberg understands the value of an individual’s right to be forgotten,” Forrester Research analyst Jeff Pollard said. “It just also shows that he doesn’t think you or I should have that right (until he was caught with it).”
Seemingly in response to the gaffe, Facebook now plans to let all Messenger users “unsend” sent messages at some point in the future.
“We will now be making a broader delete message feature available [yet] this may take some time,” a company spokesperson said.
Until the forthcoming feature is ready for prime time, Facebook promises executives will no longer have the privilege of unsending Messenger messages.
Facebook also admits that it should not have given executives the delete feature before offering it to all Messenger users.
“We should have done this sooner … and we’re sorry that we did not,” the spokesperson said.
Already angry in the wake of the Cambridge Analytica controversy, Facebook watchers are clearly unnerved by the appearance of Zuckerberg manipulating his company’s Messenger service.
“Can we really trust [Facebook] to run a messaging service?” famed technology reporter Walt Mossberg tweeted on Friday. “Is that the way we expect a texting service to be managed?”
Making matters worse, the spotlight-shy Zuckerberg is scheduled to testify before Congress next week.
Among other failures, Zuckerberg admitted on a call with reporters this week: “We didn’t focus enough on preventing abuse and thinking through how people could use these tools to do harm.” Specifically, “that goes for fake news, foreign interference in elections, hate speech, in addition to developers and data privacy.”