Could AI Put Brands On The Wrong Side Of GDPR?

AI holds a lot of promise -- nobody needs to be sold on the technology anymore. What is often sidestepped, however, is its knack of occasionally importing human biases into what are supposed to be purely logical decisions made by an algorithm alone.

I was once asked to look at this for a legal firm, and I have to confess I thought it was nonsense, because computers aren't us -- they're, for want of a better word, computers.

However, the evidence is out there. New York City has already announced a task force to examine its AI systems and ensure they do not show signs of gender or race bias.

In digital marketing, as Marketing Week reminds us today, we already have the case of a Facebook algorithm accused of showing gender and race bias in serving ads, and an Amazon recruitment tool accused of favouring men over women. Instagram was in the headlines recently for all the wrong reasons when it realised an algorithm had allowed children to be exposed to potentially harmful content.

There's a very real risk of reputational damage here, but there is also a very real risk of inadvertently breaking GDPR -- or, at the very least, of failing to fulfil an individual's data rights.

How so? Well, the new data protection rules enshrine a person's right not to have their personal information used for automated decision-making. That means companies must be able to respond to such requests and prevent individuals from having their data thrown into an algorithm to see if the computer says "yes" or "no."

On that subject, GDPR also entitles individuals to have decisions explained to them. If a bank won't let you have an account or a mortgage, for example, you are entitled to ask "why?"

Here's the real rub. On the one hand, algorithms, if unchecked, run the risk of exhibiting bias that might be brought to light by consumers asking for explanations of the decisions made about them.

On the other hand, it's easy to see a situation where a brand simply doesn't know why the computer turned down an application from a customer or job seeker. The decision was made inside a black box, and its operators may not be able to explain with clarity and precision why it was reached.

This is not to suggest that AI should not be used, or that smart algorithms should be retired until further notice. But we are beginning to see indications that the tech can be as flawed as the humans who program it, and systems need to be examined to ensure they will not cause reputational damage or land a brand in hot water over GDPR rights.
