AI Converts Sketches Into Realistic Images In Seconds

Artificial intelligence is being used to turn rough doodles into photorealistic masterpieces within seconds.

The deep learning tool from Nvidia Research uses generative adversarial networks, or GANs, to convert segmentation maps into lifelike images.
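For a concrete sense of what "segmentation map to image" means, here is a minimal, hypothetical sketch in PyTorch: a tiny convolutional generator that takes a one-hot label map (sky, water, and so on) and emits an RGB image. It illustrates the general idea of a GAN generator conditioned on a label map, not Nvidia's GauGAN architecture; the class name, label count, and layer sizes are assumptions.

```python
# Minimal sketch of a segmentation-map-to-image generator (illustrative only,
# not Nvidia's GauGAN/SPADE code). Label count and layer sizes are assumptions.
import torch
import torch.nn as nn

NUM_LABELS = 8  # hypothetical number of semantic classes (sky, water, tree, ...)

class SegToImageGenerator(nn.Module):
    """Maps an (N, NUM_LABELS, H, W) one-hot segmentation map to an (N, 3, H, W) image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(NUM_LABELS, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, seg_map):
        return self.net(seg_map)

# Usage: a user's "coloring book" doodle, encoded as a one-hot label map, goes in;
# a synthetic image comes out. In a full GAN, a discriminator judges whether the
# output looks like a real photo, and the generator is trained to fool it.
gen = SegToImageGenerator()
doodle = torch.zeros(1, NUM_LABELS, 256, 256)
doodle[:, 0, :128, :] = 1.0   # top half labeled "sky"
doodle[:, 1, 128:, :] = 1.0   # bottom half labeled "water"
fake_image = gen(doodle)
print(fake_image.shape)       # torch.Size([1, 3, 256, 256])
```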

The interactive app, called GauGAN, allows users to draw their own segmentation maps and then manipulate the scene, as detailed in a company video.

“It’s much easier to brainstorm designs with simple sketches, and this technology is able to convert sketches into highly realistic images,” states Bryan Catanzaro, vice president of applied deep learning research at Nvidia, in a company blog post.

The technology is like a smart paintbrush that can fill in the details inside the segmentation maps, according to Catanzaro.

The deep learning model was trained on 1 million images, teaching it, for example, that real lakes and ponds have reflections, so the tool adds them when a user designates a portion of the picture as a lake or pond.
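The adversarial part of that training can be sketched as follows. This is a generic, illustrative GAN training step, not Nvidia's actual pipeline: a placeholder generator produces an image from a segmentation map, a placeholder discriminator learns to separate real photos from generated ones, and the generator is updated to fool it. The models, optimizers, and the random tensors standing in for the photo dataset are all assumptions.

```python
# Illustrative GAN training step (not Nvidia's training code).
import torch
import torch.nn as nn

# Placeholder networks; real systems use much deeper, conditional architectures.
gen = nn.Sequential(nn.Conv2d(8, 3, 3, padding=1), nn.Tanh())      # label map -> image
disc = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1), nn.Sigmoid())  # image -> real/fake score map
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCELoss()

def training_step(seg_map, real_photo):
    # 1) Discriminator: push real photos toward 1 and generated images toward 0.
    fake = gen(seg_map).detach()
    d_real = disc(real_photo)
    d_fake = disc(fake)
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Generator: try to make the discriminator score its output as real (1).
    fake = gen(seg_map)
    g_score = disc(fake)
    g_loss = bce(g_score, torch.ones_like(g_score))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# One illustrative step on random tensors standing in for a labeled photo dataset.
seg = torch.rand(1, 8, 64, 64)
photo = torch.rand(1, 3, 64, 64)
print(training_step(seg, photo))
```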

“It’s like a coloring book picture that describes where a tree is, where the sun is, where the sky is,” Catanzaro states. “And then the neural network is able to fill in all of the detail and texture, and the reflections, shadows and colors, based on what it has learned about real images.”

The technology is being demonstrated at the GPU Technology Conference in California this week.
