How Microsoft sees its Models-as-a-Service feature democratizing access to AI

by | May 22, 2024 | Technology

Join us in returning to NYC on June 5th to collaborate with executive leaders in exploring comprehensive methods for auditing AI models regarding bias, performance, and ethical compliance across diverse organizations. Find out how you can attend here.

Today’s tools make it easy to build AI-powered applications. But one complex task most, if not all, developers would rather avoid is figuring out how to host the models they use. It’s one thing to choose between OpenAI’s GPT-4o, Meta’s Llama 3, Google’s Gemini or the many open-source models out in the marketplace. It’s quite another to deploy one.

Such necessary but head-scratching work could frustrate developers and sour them on their entrepreneurial ideas. However, Microsoft has a solution that could let them focus more on the creative process than on model housekeeping. Called Models-as-a-Service (MaaS), it is the AI equivalent of cloud services: customers pay for access to models rather than for the underlying infrastructure. It is available through the company’s Azure AI Studio product.

Keep it simple

“If you’ve ever tried to deploy a model, there’s a series of combinations of incantations and PyTorch versions and CPU and GPU stuff,” Seth Juarez, the principal program manager for Microsoft’s AI platform, tells VentureBeat. “Models-as-a-Service kind of abstracts all of that away, so that if you have a model that you want to use, and that’s open source or that’s something that OpenAI built, we provide that in a catalog. You hit a button, and now you have an endpoint to use it.”

Developers can rent inference APIs and hosted fine-tuning through a pay-as-you-go plan, all without needing to provision a virtual machine. Juarez explains that while Microsoft …
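To make the pay-as-you-go model concrete, here is a minimal sketch of what calling such a serverless endpoint might look like from Python. The endpoint URL, API key, and request/response shape are assumptions for illustration (they follow the common chat-completions convention), not details taken from the article or from Microsoft's documentation.

```python
import os
import requests

# Placeholder values: in practice these come from the deployment you create
# in the model catalog. Nothing here is a real endpoint or key.
ENDPOINT = os.environ["MAAS_ENDPOINT"]   # e.g. "https://<your-deployment>.<region>.example/chat/completions"
API_KEY = os.environ["MAAS_API_KEY"]

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what Models-as-a-Service offers developers."},
    ],
    "max_tokens": 256,
}

# The provider bills per request/token; the caller never provisions a VM or GPU.
response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

The point of the sketch is the shape of the workflow, not the exact API: the developer holds an endpoint and a key, sends a request, and pays for what is consumed, with the model hosting handled entirely by the provider.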

Article Attribution | Read More at Article Source

