How Google is accelerating ML development

Oct 12, 2022 | Technology


Accelerating machine learning (ML) and artificial intelligence (AI) development with optimized performance and cost is a key goal for Google.

Google kicked off its Next 2022 conference this week with a series of announcements about new AI capabilities in its platform, including computer vision as a service with Vertex AI Vision and the new OpenXLA open-source ML initiative. In a session at the event, Mikhail Chrestkha, outbound product manager at Google Cloud, discussed additional incremental AI improvements, including support for the Nvidia Merlin recommender system framework, AlphaFold batch inference and TabNet.


Users of the new technology detailed their use cases and experiences during the session. 


“Having access to strong AI infrastructure is becoming a competitive advantage to getting the most value from AI,” Chrestkha said.

Uber uses TabNet to improve food delivery

TabNet is a deep learning architecture for tabular data that uses transformer-style attention techniques to select the most relevant features at each decision step, which helps improve both speed and relevance.
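The feature selection at the heart of TabNet relies on a sparse attention mask: rather than softmax, TabNet's attentive step uses sparsemax, which can assign exactly zero weight to uninformative columns. As a rough illustration (not Google's Vertex AI implementation), the sketch below computes a sparsemax mask over hypothetical feature scores in plain NumPy:

```python
import numpy as np

def sparsemax(z):
    """Sparsemax: a softmax-like projection onto the simplex that can
    output exact zeros, yielding a sparse feature-selection mask."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]          # scores in descending order
    k = np.arange(1, len(z) + 1)
    cssv = np.cumsum(z_sorted)           # cumulative sums of sorted scores
    support = 1 + k * z_sorted > cssv    # entries that stay in the support
    k_z = k[support][-1]                 # size of the support set
    tau = (cssv[support][-1] - 1) / k_z  # threshold to subtract
    return np.maximum(z - tau, 0.0)

# Hypothetical relevance scores for three tabular features
scores = np.array([1.0, 0.8, 0.1])
mask = sparsemax(scores)
# mask sums to 1; the weakest feature is zeroed out entirely
```

With these example scores the mask is `[0.6, 0.4, 0.0]`: the first two features share the attention budget and the third is dropped, which is what makes TabNet's per-step feature usage directly interpretable.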

Chrestkha explained that TabNet is now available in the Google Vertex AI platform, which makes it easier for users to build explainable models at large scale. He noted that Google's implementation of TabNet automatically selects the appropriate feature transformations based on the input data, the size of the data and the prediction type to get the best results.

TabNet is not merely a theoretical approach to improving AI predictions; it is already showing positive results in real-world use cases. Among its early implementers is Uber.

Kai Wang, senior …

