Commentary

Google Delays Launch Of Gemini

Google's most advanced artificial intelligence (AI) model has been put on hold until next year.

The model, Gemini, has been described as a next-generation multimodal AI, able to process multiple types of data such as text, images and video. It aims to understand and generate text, images and websites from a written description or other text prompt.

OpenAI's DALL-E is a version of GPT-3 trained to generate images from text descriptions using text–image pairs.

The controversy has continued for several weeks. Two previously unannounced launch events originally scheduled to take place this week were quietly rescheduled for early 2024 after concerns surfaced that the AI Google would demonstrate wasn't reliable enough, according to The Information, citing two anonymous sources. The technology reportedly did not handle some non-English questions and prompts well enough.

Google CEO Sundar Pichai announced Gemini in May 2023, describing it as a multimodal model, highly efficient at tool and API integrations and built to enable future innovations such as memory and planning.

“While still early, we’re already seeing impressive multimodal capabilities not seen in prior models,” Pichai wrote in the blog post. “Once fine-tuned and rigorously tested for safety, Gemini will be available at various sizes and capabilities, just like PaLM 2.”

When launched, Google has said, Gemini will outperform OpenAI’s DALL-E. Gemini will pull from Google’s enormous stores of data across its network of products. Google also has discussed using Gemini to power features such as a tool that verbally describes chart analyses.

AI is being embedded more deeply into programmatic advertising. Many technology companies now use a combination of AI and machine learning to improve bidding and performance.

In August 2023, analysts at SemiAnalysis, a semiconductor blog, wrote a post titled, in part, “Google Gemini Eats The World.”

It was sparked, in part, by Google’s release of the MEENA model before COVID. For a short period of time, it was the best large language model in the world, according to SemiAnalysis analysts Dylan Patel and Daniel Nishball, who pointed to a blog post and paper written by Google engineers that detail Gemini.

The Google engineers wrote: “This model required more than 14x the FLOPS of GPT-2 to train, but this was largely irrelevant because only a few months later OpenAI dropped GPT-3, which was >65x more parameters and >60x the token count, >4,000x more FLOPS. The performance difference between these two models was massive.”

SemiAnalysis reported that the MEENA model sparked a memo written by Noam Shazeer, founder of Character.AI, titled “MEENA Eats The World.”

In this memo, Shazeer predicted things that the rest of the world would realize after the release of ChatGPT.

“The key takeaways were that language models would get increasingly integrated into our lives in a variety of ways, and that they would dominate the globally deployed FLOPS,” SemiAnalysis reported. “Noam was so far ahead of his time when he wrote this, but it was mostly ignored or even laughed at by key decision makers.”
