Windows Central
Kevin Okemwa

Microsoft brings its lightweight Phi-3 model and OpenAI's 'magical' GPT-4o to Azure AI to help devs build transformational experiences

Microsoft AI lab.

What you need to know

  • Microsoft's Azure AI Studio is now generally available, helping developers build custom Copilot apps.
  • OpenAI's GPT-4o model is now available as an API in Azure AI Studio, enabling richer user experiences.
  • Microsoft also announced Phi-3-small, Phi-3-medium, and Phi-3-vision, a new multimodal model, as additions to its Phi-3 family of AI small language models (SLMs).

Microsoft is currently holding its annual developer conference, Build 2024, in Seattle. While the event is centered on AI PCs and Windows 11 on Arm, Microsoft has also made a handful of announcements around its AI efforts, particularly in its Microsoft Azure cloud service.

The tech giant says the new advances in Azure will help users build "transformational" AI experiences, further stating, "there’s never been a more exciting time to be a developer."

Right off the bat, Microsoft announced that Azure AI Studio is now generally available. For context, Azure AI Studio is a pro-code tool that lets developers build custom Copilot apps while promising responsible and safe development. The platform pairs a user-friendly interface with code-first capabilities, including the Azure Developer CLI (AZD) and the AI Toolkit for VS Code.

OpenAI's GPT-4o makes its way to Azure AI Studio

(Image credit: Daniel Rubino)

Last week, OpenAI announced GPT-4o, its new flagship model with "GPT-4-level intelligence." The model was already available in preview in the Azure OpenAI Service, letting developers integrate GPT technology into their apps.

And now, Microsoft has announced that the model is available as an API in Azure AI Studio. Devs can now leverage its ability to handle text, image, and audio processing in a single model. When coupled with the recently enabled GPT-4 Turbo with Vision, developers can build apps with inputs and outputs that span text, images, and more for a richer user experience.
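To make that concrete, here is a minimal, illustrative sketch (not Microsoft's official sample) of calling a GPT-4o deployment through Azure's OpenAI-compatible chat completions API with mixed text-and-image input. It assumes an Azure OpenAI resource with a deployment named "gpt-4o"; the endpoint, key, API version, and image URL below are placeholders:

```python
# Hedged sketch: calling an assumed Azure GPT-4o deployment with text + image input.
# Endpoint, API key, api_version, and the deployment name are placeholders --
# substitute the values from your own Azure OpenAI resource.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-API-KEY",                                   # placeholder
    api_version="2024-02-15-preview",                         # assumed API version
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed deployment name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this chart in one sentence."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/chart.png"}},  # placeholder
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

Note how the text and image travel in one messages array; that single-payload shape is what the "single model" framing refers to.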

Microsoft announces a new multimodal model as part of its lightweight Phi-3 family of AI SLMs

(Image credit: Ben Wilson | Windows Central)

Last month, Microsoft debuted Phi-3-mini, a super-lightweight AI model the company says is as capable as GPT-3.5. Now, the company has announced Phi-3-small, Phi-3-medium, and Phi-3-vision, a new multimodal model, as part of its Phi-3 family of AI small language models (SLMs).

According to Microsoft:

"Phi-3 models are powerful, cost-effective and optimized for resource constrained environments including on-device, edge, offline inference, and latency bound scenarios where fast response times are critical."

The family also includes Phi-3-vision, which supports general visual reasoning tasks as well as analytical reasoning over charts, graphs, and more. Users can feed images and text into the model and receive text responses.

Microsoft has also integrated Phi-3-mini into Azure AI's Models-as-a-Service (MaaS) offering. Additionally, the company is bringing new capabilities across its APIs to support multimodal experiences.
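For a sense of the developer experience, here is a hedged sketch of what calling a Phi-3-mini MaaS deployment might look like, assuming the deployment exposes an OpenAI-compatible chat completions route; the endpoint hostname, key, and route are illustrative placeholders to be replaced with your deployment's actual values:

```python
# Hedged sketch: calling an assumed Phi-3-mini serverless (MaaS) deployment over
# an OpenAI-compatible chat completions route. URL, key, route, and auth header
# are assumptions -- check your own deployment's endpoint details.
import requests

ENDPOINT = "https://YOUR-DEPLOYMENT.inference.ai.azure.com"  # placeholder
API_KEY = "YOUR-API-KEY"                                     # placeholder

resp = requests.post(
    f"{ENDPOINT}/v1/chat/completions",                       # assumed route
    headers={"Authorization": f"Bearer {API_KEY}"},          # assumed auth scheme
    json={
        "messages": [
            {"role": "user",
             "content": "Summarize what a small language model is in one sentence."}
        ],
        "max_tokens": 128,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```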

Finally, Microsoft is shipping several new Azure AI Speech features in preview, including speech analytics and universal translation. These features are designed to help developers build high-quality, voice-enabled apps.
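The preview features themselves aren't documented in this announcement, but the sketch below shows the general programming model of the existing Azure AI Speech SDK (azure-cognitiveservices-speech), translating one spoken English utterance to French; the subscription key and region are placeholders:

```python
# Hedged sketch using the existing Azure Speech SDK to translate one spoken
# English utterance to French from the default microphone. This illustrates the
# service's programming model, not the new preview features. Key and region
# are placeholders.
import azure.cognitiveservices.speech as speechsdk

translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription="YOUR-SPEECH-KEY",  # placeholder
    region="eastus",                 # placeholder
)
translation_config.speech_recognition_language = "en-US"
translation_config.add_target_language("fr")

recognizer = speechsdk.translation.TranslationRecognizer(
    translation_config=translation_config
)
result = recognizer.recognize_once()  # captures a single utterance

if result.reason == speechsdk.ResultReason.TranslatedSpeech:
    print("Recognized:", result.text)
    print("French:", result.translations["fr"])
```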
