Kubernetes ML optimizer, Kubeflow, improves data preprocessing with v1.6

Sep 9, 2022 | Technology


More often than not, when organizations deploy applications across hybrid and multicloud environments, they use the open-source Kubernetes container orchestration system.

Kubernetes itself helps to schedule and manage distributed virtual compute resources, but it isn't optimized by default for any one particular type of workload; that's where projects like Kubeflow come into play.

For organizations looking to run machine learning (ML) in the cloud, a group of companies including Google, Red Hat and Cisco helped to found the Kubeflow open-source project in 2017. It took three years for the effort to reach the Kubeflow 1.0 release in March 2020, as the project gathered more supporters and users. Over the last two years, the project has continued to evolve, adding more capabilities to support the growing demands of ML. 

This week, the latest iteration of the open-source technology became generally available with the release of Kubeflow 1.6. The new release integrates security updates and enhanced capabilities for managing cluster serving runtimes for ML, as well as new ways to more easily specify different artificial intelligence (AI) models to deploy and run.
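In Kubeflow, model serving is handled by the bundled KServe component, where a model is deployed by declaring an InferenceService whose model format is matched against the serving runtimes installed on the cluster. A minimal sketch of what that looks like (the namespace, service name, and storage URI below are hypothetical placeholders):

```shell
# Illustrative only: deploy a scikit-learn model through KServe,
# Kubeflow's model-serving component. KServe matches the declared
# modelFormat to an available ServingRuntime on the cluster.
kubectl apply -n kubeflow-user -f - <<'EOF'
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-demo
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn                              # matched to an installed runtime
      storageUri: gs://example-bucket/models/demo  # placeholder model location
EOF
```

Declaring only the model format, rather than a specific container image, is what lets cluster administrators manage serving runtimes centrally while data scientists simply point at a stored model.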


“Kubeflow is an open-source machine learning platform dedicated to data scientists who want to build and experiment with machine learning pipelines, or machine learning engineers who deploy systems to multiple development environments,” Andreea Munteanu, product manager for AI/ML, Canonical, told VentureBeat.

The challenges of using Kubernetes for ML

There is no shortage of potential challenges that o …

