NVIDIA Research releases GET3D, an automatic 3D model generation technology that can quickly produce large numbers of 3D objects and characters from 2D images #AI

NVIDIA Research has announced GET3D, an automatic 3D model generation technology intended to help content creators quickly obtain accurate 3D objects. GET3D is an AI model trained on 2D images that generates 3D objects with realistic textures and complex geometry, and the resulting models can be imported into 3D renderers and game engines for further editing. GET3D can generate animals, vehicles, people, buildings, outdoor spaces, and even entire cities.

In the future, small game studios or metaverse artists with limited resources will be able to use GET3D to generate a rich base of 3D objects. A team's 3D artists can then focus on refining the details of the main characters' models while making only simple edits to secondary objects, building 3D worlds filled with varied content.

GET3D stands for Generate Explicit Textured 3D, meaning that GET3D creates shapes as triangle meshes with texture materials applied to them, much like painting the surface of a clay model. Because the output is an explicit textured mesh, objects generated by GET3D can be imported into mainstream 3D content creation tools and game engines for subsequent editing.
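Because GET3D's output is an explicit triangle mesh, a downstream pipeline only needs a standard mesh interchange format to hand the result to a 3D tool or game engine. As a minimal sketch (not GET3D's actual export code; the function and parameter names are hypothetical), the snippet below writes a textured triangle mesh to Wavefront OBJ, a format that virtually every 3D content creation tool can import:

```python
def mesh_to_obj(vertices, uvs, faces):
    """Serialize a textured triangle mesh to Wavefront OBJ text.

    vertices: list of (x, y, z) positions
    uvs:      list of (u, v) texture coordinates, one per vertex
    faces:    list of (i, j, k) 0-based vertex indices per triangle
    """
    lines = []
    for x, y, z in vertices:
        lines.append(f"v {x} {y} {z}")
    for u, v in uvs:
        lines.append(f"vt {u} {v}")
    # OBJ indices are 1-based; "i/i" pairs a vertex with its UV coordinate.
    for i, j, k in faces:
        lines.append(f"f {i+1}/{i+1} {j+1}/{j+1} {k+1}/{k+1}")
    return "\n".join(lines) + "\n"

# Smoke test: a single textured triangle.
obj_text = mesh_to_obj(
    vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
    uvs=[(0, 0), (1, 0), (0, 1)],
    faces=[(0, 1, 2)],
)
print(obj_text)
```

Saving this text with an `.obj` extension produces a file that Blender, Maya, Unity, or Unreal Engine can open directly, which is the practical point of GET3D emitting explicit meshes rather than an implicit representation.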

The traditional way to build a 3D model of a real object is inverse rendering: photographing the object from multiple 2D viewpoints and reconstructing the 3D shape from those views. But each set of photos yields only one 3D model, which is inefficient. GET3D instead extends the generative adversarial network (GAN) approach so that the network learns to generate 3D objects while generating 2D images; running inference on an NVIDIA GPU, it can generate about 20 shapes per second. The process is like an artist who looks at a picture, sculpts a clay model of it, and then paints it.
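The generative approach described above starts from a random latent code and decodes it into a shape, which is why it can keep producing new objects instead of reconstructing one object per photo set. GET3D's real generator is a trained neural network that emits a full textured mesh; the pure-Python sketch below (entirely hypothetical names and logic) only illustrates the underlying idea that each latent code deterministically maps to a distinct shape:

```python
import random

def sample_latent(seed, dim=8):
    """Draw a latent code z; in a real GAN this is Gaussian noise."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(dim)]

def decode_to_vertices(z, template):
    """Toy stand-in for a generator: displace the vertices of a template
    mesh by small amounts derived from z, so different latent codes
    produce different shapes. (GET3D's actual generator is a trained
    network, not a fixed rule like this.)"""
    out = []
    for idx, (x, y, z0) in enumerate(template):
        d = 0.1 * z[idx % len(z)]  # per-vertex offset taken from z
        out.append((x + d, y + d, z0 + d))
    return out

# Two latent codes turn one template into two different shapes.
template = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
shape_a = decode_to_vertices(sample_latent(0), template)
shape_b = decode_to_vertices(sample_latent(1), template)
```

Because sampling a latent code and decoding it is cheap compared with photographing and reconstructing a real object, a generator of this kind can emit shapes at a steady rate, which is the property behind the roughly 20 shapes per second figure.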

▲ At present, GET3D can only be trained on, and generate from, one category of 2D images at a time. In the future it should be able to take multiple categories of input and generate various kinds of 3D objects at once.

NVIDIA Research trained GET3D on 2D images of 3D shapes captured from different angles, processing up to one million images on NVIDIA A100 Tensor Core GPUs to complete training within two days. Once trained, feeding in a dataset of 2D car pictures lets GET3D generate 3D cars, trucks, racing cars, and vans; switching to a 2D animal image set produces 3D foxes, rhinos, horses, and bears; and a set of 2D chair pictures yields all kinds of 3D swivel chairs, dining chairs, and lounge chairs.

The generated content can not only be exported to various graphics applications, but also combined with StyleGAN-NADA, also developed by NVIDIA Research, to add effects through text descriptions: for example, giving a 3D house object a dilapidated look, or turning an intact 3D car into a worn or even burnt-out one.

The research team says a future version of GET3D could be combined with camera pose estimation, allowing models to be trained on real-world data instead of synthetic data. GET3D will also continue to move toward general-purpose generation, supporting training on many kinds of 3D shapes at once rather than only one object category at a time.
