Researchers have been working to refine large language models (LLMs) that could become intelligent enough to rival the human brain. Now, in a research paper, Microsoft suggests its latest artificial intelligence shows signs of reasoning.
Building a machine that works like the human brain could change the world, but it could also become dangerous once the technology demonstrates mental capabilities such as reasoning, creativity, and deduction.
Supporting the findings, Sébastien Bubeck, lead author of the paper on artificial general intelligence (AGI), will run one of Microsoft's reorganized research labs, using GPT-4 to explore humanlike answers and ideas that were not programmed into the LLM, The New York Times reported. He has documented complex behaviors exhibited by the system over the past several months.
“All of the things I thought it wouldn’t be able to do? It was certainly able to do many of them — if not most of them,” Bubeck told The New York Times.
When queried on how to stack a book, nine eggs, a laptop, a bottle, and a nail in a stable manner so they would not fall, the model provided a rather lengthy solution, as stated in the research paper.
There is much that remains to be done. The model has trouble determining when it should be confident and when it is guessing. It makes up facts that have not appeared in its training data, and also exhibits inconsistencies between the generated content and the query prompt.
The latest LLM developed by OpenAI, GPT-4, as well as Google's Pathways Language Model (PaLM), were trained using an unprecedented amount of computing power and data, and these models exhibit more general intelligence than previous AI models.
Microsoft's research paper, "Sparks of Artificial General Intelligence," focuses on AGI systems, a term describing a machine that can do anything the human brain can do.
The research explores GPT-4 with an emphasis on its limitations. Challenges for advancing toward more comprehensive versions of AGI include the possible need for a new paradigm that moves beyond next-word prediction. The paper concludes with a discussion of societal influences.
Sam Altman, CEO and co-founder of OpenAI, the maker of GPT-4, appeared before a Senate Judiciary subcommittee Tuesday, along with IBM chief privacy officer Christina Montgomery and NYU professor Gary Marcus.
As lawmakers questioned Altman about the potential misuse of the technology, he repeatedly said that he would welcome legislation.
As The New York Times points out, making claims about this technology can damage the reputation of a computer scientist or researcher. One researcher might see a sign of intelligence in a result that another can explain away.
Last year, Google fired a researcher who claimed that a similar AI system was sentient, meaning able to sense or feel what is happening in the world around it. That is a step beyond what Microsoft has claimed.
However, some, including Elon Musk, believe the industry has inched toward something that cannot be explained away: an AI system that comes up with humanlike answers and ideas that were not programmed into it.