Microsoft is reportedly working on an artificial intelligence (AI) chip that can train large language models, continuing research and development work under way since 2019.
Some Microsoft and OpenAI developers already have access to the chips and are testing how they perform with GPT-4.
The Redmond, Washington-based company could make the AI chips available to other areas of Microsoft and to OpenAI early next year, and the product roadmap includes multiple future generations.
The chip project -- code-named Athena, and first reported by The Information -- is not considered a direct replacement for the Nvidia chips Microsoft uses today. But designing and making chips in-house could cut the cost of AI features in Bing, Office apps, GitHub, and anywhere else the company uses this type of technology.
The rollout of Microsoft’s chip is being accelerated following the success of ChatGPT, the report says.
AI chips are a segment of the semiconductor market expected to see substantial growth.
These chips typically come in the form of a system on a chip (SoC), meaning a single chip integrates multiple functions beyond the central processing unit (CPU) and can be programmed for a wide range of tasks.
The AI chip market is estimated to grow at a CAGR of 61.51% between 2022 and 2027, reaching $210,506.47 million, according to Technavio. The research firm estimates that between 70% and 75% of businesses will adopt AI technology in some capacity within the next 20 years.
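To put that projection in perspective, a CAGR compounds year over year, so a 61.51% rate over the five-year 2022-2027 window implies roughly an elevenfold increase. The sketch below (illustrative figures only; the starting value of 100 is an assumption, not a Technavio number) shows the arithmetic:

```python
def compound(value, cagr, years):
    """Grow `value` at a constant compound annual growth rate (CAGR) for `years` years."""
    return value * (1 + cagr) ** years

# Illustrative only: a market worth 100 (in any currency unit) in 2022,
# compounding at Technavio's 61.51% CAGR through 2027 (5 years).
market_2027 = compound(100, 0.6151, 5)
print(round(market_2027))  # ~1099, i.e. roughly 11x the starting size
```

The same function run in reverse (dividing the 2027 figure by `(1 + cagr) ** 5`) would recover the implied 2022 base, which is how such forecasts are typically back-checked.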
Alphabet, Apple, Arm, Intel, Advanced Micro Devices (AMD), Baidu, and Graphcore are among the other companies that have developed AI chips.
MacroPolo, the Paulson Institute's Chicago-based think tank, illustrates what an AI chip makes possible: a user can point a smartphone camera at a cat, and the device can instantly distinguish it from a lynx or another wild cat such as a serval. This sort of image recognition is easy for a human but exceptionally difficult for a computer.