Nvidia researchers have built a generative model that can create virtual environments using real-world videos from sources like YouTube, a way of generating graphics that could have implications for the future of gaming and AI.

“It’s a new kind of rendering technology, where the input is basically just a sketch, a high-level representation of objects and how they are interacting in a virtual environment. Then the model actually takes care of the details, elaborating the textures, and the lighting, and so forth, in order to make a fully rendered image,” said Nvidia VP of applied deep learning Bryan Catanzaro.
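
A rough way to picture the pipeline Catanzaro describes is a conditional generator that takes a coarse semantic "sketch" of a scene and fills in texture and lighting detail to produce a rendered frame. The snippet below is a toy PyTorch encoder-decoder in the spirit of image-to-image translation; the class names, channel counts, and number of semantic classes are illustrative assumptions, not details of Nvidia's actual model.

```python
# Hypothetical sketch of the idea: a conditional generator mapping a coarse
# semantic "sketch" of a scene (a one-hot label map) to an RGB frame.
# This is NOT Nvidia's model; it is a minimal toy example in PyTorch.

import torch
import torch.nn as nn

NUM_CLASSES = 8  # assumed number of object classes in the semantic sketch


class SketchToImageGenerator(nn.Module):
    """Maps a one-hot label map (B, NUM_CLASSES, H, W) to an RGB image (B, 3, H, W)."""

    def __init__(self, num_classes: int = NUM_CLASSES, base_channels: int = 64):
        super().__init__()
        # Encoder: compress the high-level sketch into a feature representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(num_classes, base_channels, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, base_channels * 2, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to image resolution, filling in texture and lighting.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base_channels * 2, base_channels, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base_channels, 3, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),  # RGB output scaled to [-1, 1]
        )

    def forward(self, label_map: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(label_map))


if __name__ == "__main__":
    # Fake "sketch": a random label map standing in for a scene segmentation
    # (road, car, building, ...). A real system would train the generator
    # adversarially against frames from real-world video.
    labels = torch.randint(0, NUM_CLASSES, (1, 128, 128))
    one_hot = torch.nn.functional.one_hot(labels, NUM_CLASSES).permute(0, 3, 1, 2).float()
    frame = SketchToImageGenerator()(one_hot)
    print(frame.shape)  # torch.Size([1, 3, 128, 128])
```

In a full system, the label maps would come from a game engine or from segmenting real video, and the generator would be trained against real footage rather than used untrained as in this toy example.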
