GANverse3D: Nvidia creates 3D models from just one 2D photo


At GTC, Nvidia's research department is demonstrating how GANverse3D can create a 3D model from just a single 2D photo, using KITT from the action series Knight Rider as an example. Game developers, artists, designers, and architects are meant to be able to import new 3D models with little effort.

GANverse3D is offered as an extension for Nvidia's collaboration platform Omniverse, which left its open beta phase at GTC and is now available to the first enterprise customers. Omniverse is a cloud-based platform that can optionally provide multiple GPUs based on RTX technology, so that ray-tracing-based rendering is also possible. Via so-called Connectors, essentially plug-ins, professional applications such as 3DS Max, Houdini, Maya, or Photoshop and the Unreal Engine 4 can be used collaboratively.

Quickly create new 3D models

For this development environment, Nvidia is introducing GANverse3D today, so that new 3D models can be generated without any expertise in the area and imported into these applications. All a user needs is a single 2D photo of a car, from which a 3D model is generated in a matter of milliseconds. For GANverse3D, Nvidia initially decided to focus on automobiles. Game developers, artists, designers, and architects are meant to be able to add new 3D objects to their virtual environments with little effort.

Nvidia had previously trained the generative adversarial network (GAN) required for the process on 55,000 images of different automobiles captured from multiple perspectives; for the actual user-facing process, however, only a single photo showing the entire vehicle is needed. Fed with the 2D photo, GANverse3D generates a 3D model that also includes individually addressable sub-components of the car, such as wheels and headlights, which can then be animated, for example to show a driving sequence, as with KITT. The initially rough texture determined by GANverse3D can then be converted into high-quality materials using tools such as Omniverse Kit and PhysX. From loading the photo to the finished model, the process is said to take only 65 ms on a Tesla V100 graphics card.
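As a rough illustration of the workflow described above, the following Python sketch mimics the photo-to-model steps: one full-vehicle photo goes in, a part-segmented model comes out, and the result is exported for further refinement in Omniverse. All class and function names here (load of the photo, GANverse3D inference stand-in, USD export, part names) are assumptions made for illustration, not Nvidia's actual API.

from dataclasses import dataclass, field

@dataclass
class Mesh:
    # One sub-component of the car, e.g. "body", "wheel_front_left", "headlight_left"
    name: str
    vertices: list = field(default_factory=list)
    faces: list = field(default_factory=list)
    texture: bytes = b""  # rough texture predicted from the single photo

@dataclass
class CarModel:
    # Part name -> Mesh, so wheels and headlights stay individually addressable
    parts: dict

def infer_car_model(photo_path: str) -> CarModel:
    """Stand-in for GANverse3D inference: a single 2D photo of the whole
    vehicle in, a textured 3D model with named sub-parts out
    (reportedly around 65 ms on a Tesla V100)."""
    # ... network inference would happen here ...
    part_names = (
        "body", "wheel_front_left", "wheel_front_right",
        "wheel_rear_left", "wheel_rear_right",
        "headlight_left", "headlight_right",
    )
    return CarModel(parts={name: Mesh(name=name) for name in part_names})

def export_usd(model: CarModel, out_path: str) -> None:
    """Placeholder for exporting to USD, the scene format Omniverse works with,
    so the rough model can be given high-quality materials there."""
    ...

if __name__ == "__main__":
    model = infer_car_model("kitt.jpg")   # single 2D photo showing the entire car
    export_usd(model, "kitt.usd")
    # Individual parts remain addressable, e.g. to spin the wheels in an animation:
    front_left_wheel = model.parts["wheel_front_left"]

The sketch only reflects the steps reported in the article (single photo in, part-segmented model out, refinement in Omniverse); the actual implementation details of GANverse3D have not been published in this form.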

Nvidia has not yet said which objects GANverse3D will be able to handle next. In principle, however, there would be no restriction, even if complex models such as trees would require "a little more optimization" of the algorithm.
