Commentary

AI Must Be Self-Governed

According to Grant Gross, an independent contributing writer covering artificial intelligence and personal data, “AI analytics will be under increasing scrutiny. Experts say companies using AI with personal data must focus on GDPR and HIPAA, but long term, companies can expect governments and people affected to increasingly push for audits and explanations of AI decisions. Business use of cognitive and artificial intelligence is expected to skyrocket in the coming years, with global spending on the technology reaching $19.1 billion in 2018, a 54.2 percent increase over last year, according to IDC,” says the report.

“But as businesses embrace AI to help with all kinds of tasks, they face a complex set of regulations that limit what personal data they can collect and use. A business using AI to predict when its own factory machinery needs maintenance has little to fear from regulations. But a handful of significant regulations in the United States and the European Union restrict what personal data businesses can collect, and potentially what AI systems can use, from customers and other people,” says Gross.

“The elephant in the room is the EU’s General Data Protection Regulation (GDPR), which [took] effect May 25,” says the report. “GDPR requires companies using personal information to get explicit consent before collecting and using personal information, including names, home addresses, email addresses, bank details, social networking posts, and computer IP addresses.”

Gross writes that “the U.S. has less comprehensive privacy regulations than the EU, with a patchwork of laws covering some industries and technologies. Most AI experts see the Health Insurance Portability and Accountability Act (HIPAA), the 1996 law governing medical data privacy, as the U.S. regulation companies need to pay the most attention to.

“But HIPAA and GDPR may not be the last of the regulations affecting AI. With Facebook’s recent data leak involving Cambridge Analytica, companies using AI should expect more government scrutiny. Another European regulation that limits some AI data use is the long-standing right to be forgotten.”

The Cambridge Analytica leak and other data breaches “will almost certainly give rise to new data privacy regulations in the U.S. and elsewhere,” says the report. “Governments are realizing that these massive platforms have access to massive amounts of personal information, and their terms of service are insufficient to protect consumers from potential abuse.”

“The new regulation requires most companies that collect EU residents’ data to get the consent of people whose data is being processed. The regulation also requires companies to anonymize collected data to protect privacy and notify people who are affected by a data breach,” says the report.
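
The anonymization requirement is the most concrete engineering obligation in that list. Below is a minimal Python sketch of one common approach, salted one-way hashing of direct identifiers; the record fields and salt handling are illustrative assumptions, not anything prescribed by the regulation, and pseudonymized data of this kind may still count as personal data under GDPR.

```python
import hashlib

# Illustrative record; the field names are assumptions for this sketch.
record = {"name": "Jane Doe", "email": "jane@example.com", "purchase_total": 42.50}

SALT = "rotate-and-store-securely"  # hypothetical; keep real salts in a secrets store

def pseudonymize(value: str, salt: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()

# Hash the direct identifiers; leave non-identifying fields usable for analytics.
masked = {key: pseudonymize(val, SALT) if key in {"name", "email"} else val
          for key, val in record.items()}
print(masked)
```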

Eric Schrock, chief technology officer of data management platform provider Delphix, predicts that “governments and affected people will increasingly push for audits and explanations of AI decisions. The notion of legal liability in the world of artificial intelligence is convoluted, and it’s only going to get worse as the algorithms become more sophisticated and put into use in more places,” he says. “Part of the legal process will undoubtedly come down to litigation, which will force companies to explain how an AI algorithm came to the outcome it did and whether the algorithm is flawed … or if it’s a reasonable outcome given the data presented to it.”

AI success will require human involvement, notes the report: companies deploying AI systems must retain robust human oversight and avoid decisions made entirely by AI.

Chuck Davis, CTO and co-founder of Element Data, an AI business intelligence startup, says, “Companies using AI can self-regulate by keeping humans involved in their criteria, which would be a positive move for the industry as a whole. It will pose some initial challenges as companies may have to pivot, but I believe approaching data responsibly and ethically from the start will clear a path to gain consumer trust.”

In addition, AI users should have processes to identify personal information and should put in place reliable measures to test the effectiveness and fairness of their AI systems, recommends Economou, co-chair of the law committee of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. While many privacy advocates have applauded GDPR, some in the tech industry aren’t big fans, warning that the regulation could “wipe out a decade of advances in AI.”

“Requiring AI companies to explain results would be particularly difficult,” says Vian Chinner, CEO of Xineoh, a predictive analytics vendor. “Results from older-generation algorithms like decision trees are easily explainable, but significantly less accurate than modern AI techniques.”
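
Chinner’s distinction is easy to see in code. The sketch below, a minimal illustration using scikit-learn (an assumption; no vendor’s actual stack is implied), fits a small decision tree and dumps its learned rules as plain if/else text, the kind of trace an auditor can follow; deep neural networks offer no comparably direct readout.

```python
# Minimal sketch: why decision-tree results are "easily explainable".
# Uses scikit-learn's bundled iris dataset purely for illustration.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(data.data, data.target)

# export_text renders the fitted tree as human-readable if/else rules,
# so any individual prediction can be traced threshold by threshold.
print(export_text(clf, feature_names=list(data.feature_names)))
```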

Economou has a much different take on GDPR. “GDPR is a powerful step in placing the ordinary citizen back in control of her or his own data,” he says. “It offers the hope of meaningful safeguards against a Kafkaesque world, where the institutions of state and society rely on opaque AI to make non-appealable decisions that affect citizens’ rights or opportunities.”

Currently, “operators are the sole judges of their own competence,” Economou adds. “This is unlikely to be sustainable: Are even the best doctors competent to understand how AI arrives at a diagnosis?”

“This kind of self-regulation is important,” he adds. “If industry is to gain the trust of both regulators and civil society, it needs to put in place checks on its own AI systems. Failure to self-regulate will motivate governments to get involved to make their own, often politically motivated, rules.”

This article was originally written by Grant Gross, a contributing writer and veteran tech policy reporter, and does not necessarily reflect the views of Hewlett Packard Enterprise Company.

