Nvidia partners with Shutterstock, Getty Images on AI-generated 3D content

Mar 18, 2024 | Technology


3D can be a powerful tool for brands and creatives, offering immersive, engaging experiences and enhancing the design process. 

Still, it can be expensive, time-consuming and difficult to execute effectively, and thus not always feasible for everyday enterprise use. 

But generative AI, once again, is rising to the challenge — and today, Nvidia is looking to stake its claim in this new dimension. The company announced at GTC 2024 that its Nvidia Edify multimodal generative AI model can now generate 3D content, and that it has partnered with Shutterstock and Getty Images on Edify-powered tools.

Shutterstock is providing early access to an application programming interface (API) built on Edify that creates 3D objects for virtual scenes from text prompts and images. 
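
To make the workflow concrete, here is a minimal Python sketch of what a text-to-3D request against such an API could look like. The endpoint URL, request fields and response keys are placeholders assumed for illustration; they are not taken from Shutterstock's documentation.

```python
# Illustrative sketch only: the URL, payload fields and response keys below
# are assumptions, not Shutterstock's documented Edify API.
import requests

API_URL = "https://api.shutterstock.example/v1/edify/3d"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"

payload = {
    "prompt": "a weathered bronze garden statue, game-ready asset",
    "output_format": "glb",  # assumed option; GLB is a common 3D interchange format
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=300,  # 3D generation is likely slower than 2D image generation
)
response.raise_for_status()

# Assume the service returns a download link for the generated mesh.
print("3D asset ready at:", response.json().get("asset_url"))
```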


Getty, meanwhile, is adding custom fine-tuning capabilities to its gen AI service so that enterprise customers can generate visuals adhering to brand guidelines and style. 

Developers will soon be able to test these models through Nvidia NIM, a new collection of inference microservices announced at GTC. 
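
NIM packages models as containers that expose REST endpoints, so testing typically amounts to standing up a container and sending HTTP requests. The sketch below assumes a locally deployed microservice at a placeholder address; the route and JSON schema vary per model, so treat the field names as illustrative.

```python
# Hedged sketch of probing a locally deployed NIM-style microservice.
# The host, routes and request schema are placeholders for illustration.
import os
import requests

BASE_URL = "http://localhost:8000"  # assumed local deployment address
API_KEY = os.environ.get("NVIDIA_API_KEY", "")

# Many inference microservices expose a readiness probe; this path is assumed.
ready = requests.get(f"{BASE_URL}/v1/health/ready", timeout=10)
print("Service ready:", ready.ok)

# Send a generation request; the route and fields below are illustrative only.
result = requests.post(
    f"{BASE_URL}/v1/infer",
    json={"prompt": "studio product shot of a ceramic mug, soft lighting"},
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=120,
)
result.raise_for_status()
print(result.json())
```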

“3D asset generation is among the latest capabilities Edify offers developers and visual content providers, who will also be able to exert more creative control over AI image generation,” Gerardo Delgado, a director of product management at Nvidia, wrote in a blog post about the new capability. 

Getty fine-tuning Edify to specific brands (including Sam’s Club, Mucinex and Coca-Cola)

One of the biggest challenges in gen AI is giving users finer control over AI image outputs. 

To help address this problem, Getty announced Edify-powered APIs for inpainting and outpainting at the Consumer Electronics Show (CES) in January. Inpainting can add, remove or replace objects in an image, while outpainting can expand the canvas. Both features are now available on Getty’s website and iStock.com. 
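
As a rough sketch of how an inpainting call tends to work in practice: the client sends a source image, a mask marking the region to change, and a prompt describing the replacement. The endpoint, field names and mask convention below are assumptions for illustration, not Getty's documented API.

```python
# Illustrative inpainting request; the URL, fields and mask convention are
# assumptions, not Getty's documented API.
import base64
import requests

def encode_image(path: str) -> str:
    """Read an image file and return it as a base64 string."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

payload = {
    "image": encode_image("product_shot.png"),
    "mask": encode_image("mask.png"),  # assumed convention: white = region to repaint
    "prompt": "replace the plain backdrop with a sunlit kitchen counter",
}

response = requests.post(
    "https://api.gettyimages.example/v1/inpaint",  # placeholder endpoint
    json=payload,
    headers={"Api-Key": "YOUR_KEY"},
    timeout=120,
)
response.raise_for_status()
```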

Beginning in May, the company will also provide new services allowing companies to custom fine-tune Edify to their specific brand and style. This will be via a no-code, self-service method in which brands can upload proprietary datasets, review auto-tags, submit fine-tuning parameters and review results before deployment. 

Additionally, developers will soon have access to Sketch, Depth and Segmentation features. These, respectively, allow users to submit a drawing to guide image generation; copy compositions of reference images via a “depth map”; and segment sections of images to add, remove or retouch characters and objects. 
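
One way to picture how these three controls differ is by the guiding input each one takes alongside the text prompt. The request shape and field names below are illustrative assumptions, not the actual API schema.

```python
# Illustrative only: how the three control modes might be expressed as request
# options. Field names are assumptions, not the actual API schema.
from dataclasses import dataclass
from typing import Literal

@dataclass
class ControlledGeneration:
    mode: Literal["sketch", "depth", "segmentation"]
    reference_image: str  # path or URL of the guiding image
    prompt: str           # text describing the desired output

requests_to_send = [
    # A rough drawing steers the layout of the generated image.
    ControlledGeneration("sketch", "rough_drawing.png", "a storefront at dusk, watercolor style"),
    # A depth map copies the composition of a reference photo.
    ControlledGeneration("depth", "reference_photo.jpg", "same composition, rendered as claymation"),
    # A segmentation map isolates regions so individual objects can be retouched.
    ControlledGeneration("segmentation", "segmented_scene.png", "swap the jacket for a red raincoat"),
]
```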

“Getty Images continues to expand the capabilities offered through its commercially safe gen AI service, which provides users indemnification for the content they generate,” wrote Delgado. 

Like Shutterstock’s, Getty’s gen AI tools are being used by “leading creatives and advertisers,” according to the company. Some of these include: 

Dentsu Inc.: The Japanese PR agency is using Nvidia Picasso to fine-tune Getty’s model for membership retail giant Sam’s Club. The company is also using Getty’s service to support Manga Anime for All, which can generate manga- and anime-style content for marketing use cases. 

McCann: The creative agency used gen AI to create a game for over-the-counter cold remedy Mucinex; this interactive feature allows users to interact with its …


