Reasoning Models Enabling Brands To Generate 'Cinematic' Content

Generative AI (GAI) lab and research company Luma AI unveiled a reasoning model Thursday. Amit Jain, chief executive officer and co-founder, calls it the “first video model built to think like a creative partner.”

"Our goal is to build multimodal general intelligence," Jain said, explaining that the company wants to build what comes after text-based large language models to further creativity. 

Ray3, the third generation of the model, became available today in Luma’s Dream Machine platform. It allows advertisers, filmmakers, and game developers to create cinematic videos with the same technical standards as professional productions.

Adobe, Dentsu Digital, Humain, Monks UK, Galeria, Hogarth, and Strawberry Frog are named as Luma AI’s launch partners.

Adobe became the first company to integrate Ray3 into one of its platforms, Firefly, known for its AI-powered ideation, creation, and production features.

“The higher you raise the floor, the more advanced things people can do,” Jain said.

Large language models can reason over text, but for creators working in advertising and filmmaking, text alone is extremely limiting.

Ray3 uses reasoning to visualize and conceptualize images, evaluating its own outputs and refining the results. Images are produced in true 10-, 12-, and 16-bit High Dynamic Range (HDR) in ACES2065-1 EXR format; each 5-second video is 1 gigabit.

The model, which maintains a memory of prior outputs, can produce high-end film and advertising content for all types of platforms.

A Draft Mode enables creatives to explore dozens of ideas up to 20x faster, then select and polish the best options into cinematic 4K-quality HDR.

Luma AI also operates a studio in Los Angeles -- LA Lab -- where visitors can learn to create reasoning-model imagery alongside enterprise partners.

The technology follows instructions and evaluates the outcome, such as turning a green light red in an image. If the result looks inaccurate, the model reasons about the best hue and the right moment for the colors in the frames to change, much as a human editor would. Those judgments about color and timing are grounded in the data fed into the model.
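The generate-evaluate-refine loop described above can be sketched in miniature. This is a toy illustration of the control flow only; the frame representation, scoring function, and threshold are hypothetical stand-ins, not Luma's actual API.

```python
# Toy sketch of a self-evaluating edit loop: apply a change, score the
# result, and keep refining until the output passes the model's own check.

def evaluate(frame, target_hue):
    """Score how close the frame's light color is to the target (1.0 = exact)."""
    return 1.0 - abs(frame["hue"] - target_hue) / 360.0

def refine(frame, target_hue, threshold=0.99, max_passes=5):
    """Nudge the hue toward the target until the self-evaluation passes."""
    for _ in range(max_passes):
        if evaluate(frame, target_hue) >= threshold:
            break
        # Move halfway toward the target each pass, like an editor iterating.
        frame["hue"] += (target_hue - frame["hue"]) / 2
    return frame

frame = {"hue": 120}                  # a green traffic light
result = refine(frame, target_hue=0)  # instruction: make it red
```

The key property is that the model judges its own output and loops, rather than emitting a single best guess.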

The model changes the economics of production: from the $100,000 to $1 million per minute that movies cost to make today, Jain said, to barely $1,000 per minute.
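Taken at face value, the figures quoted above imply a 100x to 1,000x cost reduction per finished minute; a quick back-of-envelope check:

```python
# Back-of-envelope comparison using the per-minute figures quoted above.
traditional_low = 100_000    # USD per finished minute (low end)
traditional_high = 1_000_000 # USD per finished minute (high end)
model_cost = 1_000           # USD per finished minute, per Jain

savings_low = traditional_low / model_cost    # 100x reduction
savings_high = traditional_high / model_cost  # 1,000x reduction
```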

"The innovation is not that it's less expensive, but that you can explore a creative and quickly think through the changes," Jain said. 
