Boris Backing Corbyn? Brands Will Need Strong Ethics When Deepfakes Target Businesses

Marketers have always known that working hard on brand image is key to earning the public’s trust. Increasingly, however, being ethical and transparent in delivering corporate values is becoming imperative.

Deepfake technology could soon be used to inflict real harm on companies, their top-level executives and brand ambassadors. A convincing likeness of one person can now be honed and placed over another person’s face, with their voice accurately cloned, in as little as 24 hours.

Just look at what happened this week with the release of a deepfake video in which Boris appears to back Jeremy Corbyn, before the voiceover acknowledges the footage is in fact a demonstration of what the technology makes possible.

Indeed, in rogue hands it is easy to see how a company's rivals, speculators, cyber criminals or disgruntled employees could seek to inflict damage on a brand -- whether out of revenge, for extortion, or to dent a rival’s sales or share price.

The possibilities of what a deepfake video may contain are endless. It could be a CEO appearing to admit that their products are substandard or a video of a well-known brand ambassador launching a hate-filled tirade against a minority group. More subtly, it could just be a faked video of a seemingly everyday employee outside a facility admitting that the business uses child labour. 

How much damage these attacks on a company’s brand image and the integrity of its executives are capable of inflicting will largely depend on how well trusted the brand and its people are. 

A CEO who is a "colourful" character and has shown disregard for employees or customers in the past might be considered capable of a racist outburst.

In contrast, a respected senior executive who is seen to live by the company’s ethics as a good corporate citizen would probably be sympathised with as the victim of an unsolicited attack. 

What are today’s deepfake risks? 

We have already been given a small glimpse of what may be possible through the well-known Obama deepfake video, in which the audience can probably just tell it is a fake, albeit, at first glance, a fairly convincing one. More recently, a viral clip showed impressionist Bill Hader morph into Tom Cruise while impersonating him.

We have already seen the first story of a brand being embarrassed by such use of the technology, with a report in the Wall Street Journal detailing how a deepfake of the CEO’s voice was used to compel a junior executive to pay £200,000 to fraudsters. If you were a customer of a company this happened to, or were considering becoming one, you might question whether it is professional enough to earn your loyalty.

If we are looking for a glimpse into the future, perhaps the best current example is the deepfake video circulating on Instagram of Mark Zuckerberg. The discerning eye can tell it is not the real founder of Facebook, but the message -- that he got a tip from Spectre, the villainous organisation in James Bond, that controlling data means you control the future -- may not sound quite so crazy to those who despise the brand.

It is filmed as if it were a news interview, but it is easy to see how some people could be fooled -- particularly if it were made to look more natural, as an off-the-cuff remark, rather than clearly as a demonstration of what the technology can do.

Again, the question of trust arises. If it were another company founder, the deepfake might well be dismissed out of hand, but Mark Zuckerberg? He has attracted considerable mistrust over Cambridge Analytica and Facebook’s handling of personal information, making him a prime target for deepfake attacks.

What can brands do? 

Brands need to realise they are dealing with public perception as much as the truth. The two are subtly different, and it is in the gap between them that fake news is currently empowered to spread -- and, soon enough, deepfake videos will follow. 

The main way brands can protect themselves is to commit to an ethical strategy: to stand for decent principles that are communicated to the public and then, crucially, lived by in a transparent manner.

It is also crucial that they do not get involved in any deepfake trickery themselves. Any business suspected of using fabricated footage to attack a rival will not only incur customers' wrath -- it will never be able to credibly defend itself if it is subsequently targeted by a deepfake.

Brands that are seen as striving to do the right thing with executives who are decent people are likely to be trusted by the public. Those who take an alternative route will not.

Put it this way. If two well-known entrepreneurs suffered deepfake attacks tomorrow, which would you give the benefit of the doubt -- Sir Richard Branson or Sir Philip Green? If two retailers suffered a similar fate, which would you believe -- Waitrose or Sports Direct?