Microsoft Teaming With OpenAI To Pursue Artificial General Intelligence

Microsoft is investing $1 billion in OpenAI, a firm founded four years ago by several tech moguls as a nonprofit with a mission “to ensure that artificial general intelligence benefits all of humanity.” On a somewhat less idealistic level, Microsoft will be OpenAI’s preferred partner for commercializing new AI technologies.

The long-range goal of the partnership is “for artificial general intelligence to work with people to help solve currently intractable multidisciplinary problems, including global challenges such as climate change, more personalized healthcare and education,” according to a blog post announcing the deal.

AGI “represents a more futuristic version of AI that aims to work across different fields, rather than being more narrowly focused on specific tasks such as writing or translation,” Sarah E. Needleman writes for The Wall Street Journal.

Based in San Francisco, OpenAI was “cofounded by CTO Greg Brockman, chief scientist Ilya Sutskever, Elon Musk, and others, with backing from luminaries like LinkedIn cofounder Reid Hoffman and former Y Combinator president Sam Altman,” Kyle Wiggers writes for VentureBeat.

“Musk co-founded OpenAI partly because he was concerned about the potential dangers of AI, having described it as ‘summoning a demon’ -- though he's since left, citing a conflict of interest with Tesla's budding AI business,” Rosalie Chan writes for Business Insider.

“Since it was founded in 2015, OpenAI has created technology that helps computers understand language, train robots that can do tasks like housework, and even beat humans in the computer game ‘Dota 2.’ Earlier this year, OpenAI brought Altman on full-time as its CEO, around the same time that it moved away from being a nonprofit,” Chan continues.

Meanwhile, “Microsoft has been adding features to Azure to drive growth. Azure is second in size to Amazon’s AWS cloud-computing product. Microsoft said last week that Azure sales rose 64% in the most recent quarter, compared with a year earlier.”

Those clouds will be floating in the blue sky of AI.

“The quintessential characteristic for any application being built in 2019 and beyond will be AI,” CEO Satya Nadella said on an earnings call last week, the WSJ’s Needleman points out.

“AGI still has a whiff of science fiction. But in their agreement, Microsoft and OpenAI discuss the possibility with the same matter-of-fact language they might apply to any other technology they hope to build, whether it’s a cloud-computing service or a new kind of robotic arm,” Cade Metz writes for The New York Times.

“My goal in running OpenAI is to successfully create broadly beneficial A.G.I.,” Altman recently told Metz in an interview. “And this partnership is the most important milestone so far on that path.”

“In recent years, a small but fervent community of artificial intelligence researchers have set their sights on AGI, and they are backed by some of the wealthiest companies in the world. DeepMind, a top lab owned by Google’s parent company, says it is chasing the same goal,” Metz observes.

Not that AGI is right around the corner.

“As Facebook AI chief Yann LeCun put it: when it comes to general intelligence, we can’t even build something as smart as a rat. When exactly researchers might be able to create AGI -- and whether it’s even possible -- is a topic of lively debate in the community. In a recent survey of some of the field’s leading experts, the average estimate was that there was a 50% chance of creating AGI by the year 2099,” James Vincent writes for The Verge.

“To date, OpenAI has certainly impressed the AI world with its research. It has set new benchmarks for robot dexterity; its gaming bots have flattened human champions at Dota 2; and it’s designed remarkably flexible text-generation systems…,” Vincent adds.

But for all that utopian promise, AGI presents some worrisome possibilities.

“Advocacy groups and policy makers have raised concerns about some types of AI and called for regulation to increase transparency, guard against bias and ensure the technology isn’t used for military purposes and other dangerous applications. Those issues are likely to become more pressing as researchers try to develop AI that has more human-like capabilities,” Bloomberg’s Dina Bass reports.

“In February, OpenAI unveiled an algorithm that can write coherent sentences, including fake news articles, after being given just a small sample. The implications were so worrying that the group opted not to release the most powerful version of the software,” Bass adds.
