Media Industry Eyes Existential Risk Of AI, Ex-Googlers Weigh In

Israel-based startup Twiggle recently hosted a small group for a private discussion on the existential risk of artificial intelligence, debating the outcomes and dangers of algorithms that aren't explicitly taught, but rather allowed to learn from daily input.

In theory, such an algorithm would eventually become smarter and more manipulative than humans. "It can learn to do stuff that we don't want it to do and use any means to achieve it," said Amir Konigsberg, CEO and co-founder of Twiggle. He co-founded the company with CTO Adi Avidor, who worked on the Google Now project. "It's more difficult to build artificial intelligence without risk than artificial intelligence with risk."
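The panelists didn't describe a specific algorithm, but the behavior Konigsberg alludes to is the essence of online learning: a model that is never handed a rule, only a stream of data. Here's a minimal, purely hypothetical sketch -- none of this is Twiggle's code -- of a system converging on a pattern no human ever specified:

```python
# A minimal sketch of online learning: the model is never "taught" a fixed
# ruleset; it adjusts its own parameters from each new observation.
# Hypothetical toy example -- not Twiggle's actual system.

import random

weights = [0.0, 0.0]          # model parameters, learned rather than programmed
LEARNING_RATE = 0.05

def predict(features):
    """Linear prediction from the current, continually updated weights."""
    return sum(w * x for w, x in zip(weights, features))

def learn_from(features, outcome):
    """Nudge weights toward whatever the daily input rewards (one SGD step)."""
    error = outcome - predict(features)
    for i, x in enumerate(features):
        weights[i] += LEARNING_RATE * error * x

# Simulate a stream of "daily input": the system drifts toward whatever
# pattern the data contains, with no human specifying the target behavior.
random.seed(0)
for _ in range(1000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    hidden_pattern = 2.0 * x[0] - 1.0 * x[1]   # unknown to the designer
    learn_from(x, hidden_pattern)

print(weights)   # ends near [2.0, -1.0] without ever being told the rule
```

The unsettling part, as the panelists note, is that the pattern such a system settles on depends entirely on what the incoming data rewards, not on what its creators intended.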

Entrepreneurs creating AI-related technology agree they must consider and take responsibility for the benefits as well as the future risks of the algorithms they build. Take Google's self-driving cars, for example. The intended use -- helping the elderly or the vision-impaired, who cannot drive, get from one location to another -- creates positive change for humanity, but put those cars into the hands of terrorists and they become ticking time bombs.

Konigsberg, a founding member of Google's operations in Israel, said that when creating boundaries for something that is supposed to learn on its own, the creator must mitigate risk without diminishing the power of the technology -- a topic championed by Elon Musk, the head of Tesla Motors and SpaceX and a co-founder of PayPal.

"We have to be conscious of the risk in the field of artificial intelligence and keep the conversation going," Konigsberg said. Along with other other entrepreneurs who formerly worked at Google, he has been thinking about how the technology will influence society in the future as well as taking safeguards to mitigate risk.

Graham Cooke, CEO and co-founder of Qubit -- a former Googler who spent five years building Big Data systems and studying consumer behavior -- said entrepreneurs have "a moral responsibility" to hire the right people to develop AI-related technology with safeguards. "Any new technology poses new and unknown risks," he said. "You weave the moral obligations into the business."

Falon Fatemi couldn't agree more. "We think about this a lot because it's a real responsibility," she said. "I take that responsibility very seriously. We really do have control of the choices we make in terms of the markets we enter and partnerships and integrations we choose."

At 19, Fatemi joined Google, remained for six years, and left to build Node.io, a stealth startup of former Google, Facebook and Twitter employees focusing on data intelligence that powers personalized recommendations for everything from news to ecommerce. Initially focused on supporting sales and marketing decisions, the product integrates with Salesforce CRM, pulling data from a company's CRM database as well as from open networks across the Web.

Similar to Google's Knowledge Graph, Node.io's technology, described as a "search engine without a search box," goes beyond keyword search to understand the continually changing relationships among people, places and things.
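Node.io hasn't published its data model, but a knowledge graph of this kind is easiest to picture as typed, timestamped edges between entities, queried by relationship rather than by keyword. A minimal illustrative sketch, with invented entity and relation names:

```python
# A minimal, illustrative knowledge-graph sketch: entities connected by typed,
# timestamped edges, queried by relationship rather than by keyword.
# All names here are invented; this is not Node.io's actual data model.

from collections import defaultdict

# (subject, relation) -> list of (object, timestamp) facts
graph = defaultdict(list)

def add_fact(subject, relation, obj, timestamp):
    """Record that `subject` has `relation` to `obj` as of `timestamp`."""
    graph[(subject, relation)].append((obj, timestamp))

def current(subject, relation):
    """Relationships change over time; return only the most recent fact."""
    facts = graph.get((subject, relation))
    return max(facts, key=lambda fact: fact[1])[0] if facts else None

add_fact("Falon Fatemi", "works_at", "Google", 2005)
add_fact("Falon Fatemi", "works_at", "Node.io", 2015)

# "Search without a search box": ask for a relationship, not a keyword match.
print(current("Falon Fatemi", "works_at"))   # -> Node.io
```

Because each fact carries a timestamp, the graph can answer with the current relationship even as older facts accumulate -- the "continually changing" part of the description.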

Fatemi acknowledges that AI-based technologies must become more intelligent and move beyond workflow recommendations like those found in Now, Google's first-generation personal assistant.

When asked how entrepreneurs can mitigate risk, she said, "It's important to have a clear set of ethics and values on how to achieve goals, because there are so many ways to execute strategies." Founders have a "responsibility," she added, to think about how the technology could be used in the future and to build in safeguards.

"If you're not ready to take on that responsibility, don't become a founder of a company," Fatemi said. Node.io is backed by NEA, Mark Cuban, Avalon Ventures, Canaan Partners, among others
