Commentary

Accidental AI: How Nano Banana Gained Traction With No Help From Google

Nano Banana came to Google Search and NotebookLM this week, with Google Photos next in line for the feature. One analyst believes the popularity of Google's AI image-editing model, developed by the DeepMind team, spread virally through word of mouth with no involvement from Google. Here's why.

Google says the image creator lets people bring to life worlds they could previously only imagine, turning one photo into countless new creations by uploading multiple images to blend scenes or combine ideas.

Trip Chowdhry, managing director of equity research at Global Equities Research, wrote in a note published Wednesday that Nano Banana offers prompt-based local editing -- something missing from OpenAI's DALL-E. The feature lets users make specific edits to an image using natural-language instructions, without creating masks or using complex design tools.

For example, creators can tell it to change only the red chair from modern to vintage while preserving the rest of the room. Users can also tell it to remove a chair, and the AI will intelligently fill the empty space.
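
As a rough illustration of how such an edit might be issued programmatically, here is a minimal sketch using Google's Gemini API. It assumes the google-genai Python SDK and the publicly listed gemini-2.5-flash-image model name; the file names and API key are placeholders, and the exact call and response shapes may differ from current documentation.

    # Hedged sketch: prompt-based local editing with a single natural-language
    # instruction. Assumes the google-genai Python SDK and the
    # "gemini-2.5-flash-image" model name; details may differ.
    from io import BytesIO

    from google import genai
    from PIL import Image

    client = genai.Client(api_key="YOUR_API_KEY")   # placeholder key
    room = Image.open("living_room.jpg")            # hypothetical source photo

    # No masks, no design tools -- just the instruction and the image.
    response = client.models.generate_content(
        model="gemini-2.5-flash-image",
        contents=[
            "Change only the red chair from modern to vintage; "
            "leave the rest of the room exactly as it is.",
            room,
        ],
    )

    # Save the first image the model returns, if any.
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            Image.open(BytesIO(part.inline_data.data)).save("edited_room.png")
            break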

"The majority of the content created using Nano Banana is uploaded to Meta Instagram, and has increased Instagram engagement with lots of Likes and Comments," Chowdhry wrote. "It seems like the success ...  is an accident, if something becomes an overnight sensation by an accident, it is very foolish to bet against it."

The feature has already generated more than 5 billion images since launching in August. Google announced the milestone this week as part of the expansion of the model's availability.

Creators can now access the tool through Google Lens in the Search app. In NotebookLM, Nano Banana works behind the scenes to enhance Video Overviews with new visual styles like watercolor.

Chowdhry wrote that two significant DeepMind innovations have made Google Nano Banana successful: automatic prompt cleanup, which rewrites prompts in the background to make them semantically clear and reduce hallucinations, and SynthID, the invisible watermarking technology that tracks how a generated image is used. He said SynthID makes the AI safer to use, which raises adoption rates.

These innovations raise the possibility that Meta Platforms may use Google's Nano Banana model, Chowdhry wrote, pointing to a September report from The Information. That report said Meta employees have held discussions with Google Cloud about using its Gemini models to improve Meta's ad business.

For those wondering about the origins of Nano Banana, it was Google's internal codename for the Gemini 2.5 Flash Image editing AI tool that became popular online. The name appeared on AI evaluation platforms before Google's official announcement and just stuck with users. 
