What happens when AI takes over coding for a specific project? Apparently it now has a name -- vibe coding -- and it is gaining acceptance among some developers.
“I felt somewhat dirty the first few times I indulged in this behavior, then just embraced the vibe,” Max Levchin, founder and CEO of Affirm, wrote in a post on X in February. “As penance, doing programming puzzles in K&R C, aiming for O(n log n) solutions only.”
Andrej Karpathy, former director of AI at Tesla and a founding member of OpenAI, wrote a post on X about "vibe coding," saying that it is “where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.”
I had never heard of it before reading a post about it on Search Engine Journal written by Roger Montti, owner of Martinibuster.com. Then I looked it up.
Karpathy wrote that this type of coding is possible because large language models (LLMs) have progressed to the point of generating highly professional code.
He simply talks to the Composer coding assistant using SuperWhisper voice dictation, which lets him barely touch the keyboard. “I ask for the dumbest things like ‘decrease the padding on the sidebar by half’ because I'm too lazy to find it," he says.
Then a tap on "Accept All" gets him on his way. The LLMs can't always fix a bug, so he works around it or asks for random changes until it goes away.
Search Engine Journal wrote that the approach aligns with principles outlined by Google co-founder Sergey Brin in a recent email to DeepMind engineers.
“Brin’s message suggests that Google will embrace it to dramatically speed up AI development. Given its potential, this approach may also extend to Google’s search algorithms, leading to more changes to how search results are ranked,” Montti explains.
Brin also recommends using first-party code instead of relying on open-source or third-party software. But his message de-emphasizes the use of LoRA, a machine-learning technique used to fine-tune AI models efficiently, which could imply that he wants DeepMind engineers to prioritize efficient workflows rather than spending excessive time fine-tuning models, Montti wrote.