Get started with Windows ML Model Catalog APIs | Microsoft Learn
This guide shows you how to use the Windows ML Model Catalog APIs to manage AI models in your Windows applications. You'll learn how to set up catalog sources, find compatible models, and download them to a shared location on the device for all apps to use.

You must first create or obtain a model catalog source, which is an index of models including information about how to access each model. See the Model Catalog Source docs for more information. The Model Catalog Source JSON file can either be hosted online and referenced via an https:// endpoint, or used from a local file and referenced via a file path like C:\Users\....
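As a quick illustration, here's a minimal C# sketch of the two ways a catalog source can be referenced. The ModelCatalog.AddSourceAsync call in the comments is a hypothetical placeholder rather than a confirmed API name; see the Model Catalog Source docs for the actual types and methods.

```csharp
using System;

class CatalogSourceSetup
{
    static void Main()
    {
        // A catalog source hosted online, referenced via an https:// endpoint.
        Uri remoteSource = new Uri("https://contoso.example/models/catalog.json");

        // Or a local Model Catalog Source JSON file, referenced via a file path.
        Uri localSource = new Uri(@"C:\ModelCatalogs\catalog.json");

        // Hypothetical registration call -- substitute the real Model Catalog API:
        // await ModelCatalog.AddSourceAsync(remoteSource);

        Console.WriteLine($"Remote: {remoteSource}; Local: {localSource}");
    }
}
```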
The Windows ML Model Catalog APIs allow your app or library to dynamically download large AI model files from your own online model catalogs to a shared on-device location, without shipping those large files with your app. Additionally, the model catalog helps filter which models are compatible with the Windows device it's running on, so that the right model is downloaded to the device. The APIs can be connected to one or many cloud model catalogs to facilitate downloading and storing those models locally, so that they can be shared by every app on the device. The APIs have a few core features: the Model Catalog automatically matches models to your system's available execution providers (CPU, GPU, NPU, etc.), and when you request a model, the catalog only returns models that are compatible with your current hardware configuration.
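To make that flow concrete, here's a rough sketch of requesting and downloading a model. Every type and member name in it (FindModelAsync, EnsureDownloadedAsync) is a hypothetical stand-in, not the real Model Catalog API; it only illustrates the request, filter, and download sequence described above.

```csharp
using System.Threading.Tasks;

class ModelDownloadSketch
{
    // Hypothetical flow: ask the catalog for a model by name; the catalog
    // returns only variants compatible with this device's hardware, then
    // downloads the files to the shared on-device location.
    static async Task<string> GetModelPathAsync(string modelName)
    {
        // ModelInfo model = await catalog.FindModelAsync(modelName);   // hypothetical
        // return await model.EnsureDownloadedAsync();                  // hypothetical
        await Task.CompletedTask; // placeholder so this sketch compiles
        return @"C:\ProgramData\SharedModels\" + modelName; // illustrative path only
    }
}
```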
This topic shows you how to install and use Windows ML to discover, download, and register execution providers (EPs) for use with the ONNX Runtime shipped with Windows ML. Windows ML handles the complexity of package management and hardware selection, automatically downloading the latest execution providers compatible with your device's hardware. If you're not already familiar with the ONNX Runtime, we suggest reading the ONNX Runtime docs. In short, Windows ML provides a shared Windows-wide copy of the ONNX Runtime, plus the ability to dynamically download execution providers (EPs). Python versions 3.10 to 3.13 are supported, on x64 and ARM64 devices.
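As a minimal sketch, EP discovery and registration in C# looks roughly like the following. The ExecutionProviderCatalog type and EnsureAndRegisterCertifiedAsync method follow the Windows ML preview documentation; exact names may differ between releases, so verify against the current API reference.

```csharp
using System.Threading.Tasks;
using Microsoft.Windows.AI.MachineLearning; // Windows App SDK namespace

class EpSetup
{
    static async Task Main()
    {
        // Get the device-wide execution provider catalog...
        var catalog = ExecutionProviderCatalog.GetDefault();

        // ...then download (if needed) and register compatible EPs with the
        // shared ONNX Runtime that ships with Windows ML.
        await catalog.EnsureAndRegisterCertifiedAsync();
    }
}
```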
This short tutorial walks through using Windows ML to run the ResNet-50 image classification model on Windows, detailing model acquisition and preprocessing steps. The implementation involves dynamically selecting execution providers for optimized inference performance. ResNet-50 is a PyTorch model intended for image classification. In this tutorial, you'll acquire the ResNet-50 model from Hugging Face and convert it to QDQ ONNX format by using the AI Toolkit.
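For a flavor of the inference step, here's a sketch using the ONNX Runtime C# API (Microsoft.ML.OnnxRuntime). The model path is a placeholder, and the input tensor is left zero-filled; real preprocessing (resize to 224x224, normalize) follows the requirements covered in the walkthrough.

```csharp
using System;
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

class ResNetInference
{
    static void Main()
    {
        // Placeholder path to the QDQ ONNX model produced by the AI Toolkit.
        using var session = new InferenceSession(@"C:\models\resnet50_qdq.onnx");

        // ResNet-50 expects a 1x3x224x224 float tensor; a real app fills this
        // with normalized image data.
        var input = new DenseTensor<float>(new[] { 1, 3, 224, 224 });

        // The input name depends on the converted model, so read it from the
        // session metadata rather than hard-coding it.
        string inputName = session.InputMetadata.Keys.First();
        var inputs = new[] { NamedOnnxValue.CreateFromTensor(inputName, input) };

        using var results = session.Run(inputs);
        float[] logits = results.First().AsEnumerable<float>().ToArray();

        // Report the highest-scoring class index.
        Console.WriteLine($"Top class index: {Array.IndexOf(logits, logits.Max())}");
    }
}
```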
Windows Machine Learning can be used in a variety of customizable app solutions. Here, we provide several full tutorials covering how to create a machine learning model with a variety of non-code or programmatic services, and how to integrate it into a basic Windows ML app. In addition, we cover several advanced methods to tweak the functionality of your app. If you're looking for just a basic introductory use of the APIs with an existing model, or if you want to check out our samples, see the further links below. The following tutorials cover the creation of a machine learning model and how to incorporate it into a Windows 10 app with Windows ML. Want to use an existing utility to train a machine learning model? These tutorials cover end-to-end walkthroughs of how to create Windows ML apps with models trained by existing services.
Windows Machine Learning (ML) enables C#, C++, and Python developers to run ONNX AI models locally on Windows PCs via the ONNX Runtime, with automatic execution provider management for different hardware (CPUs, GPUs, NPUs). ONNX Runtime can be used with models from PyTorch, TensorFlow/Keras, TFLite, scikit-learn, and other frameworks. If you're not already familiar with the ONNX Runtime, we suggest reading the ONNX Runtime docs. In short, Windows ML provides a shared Windows-wide copy of the ONNX Runtime, plus the ability to dynamically download execution providers (EPs). An execution provider (EP) is a component that enables hardware-specific optimizations for machine learning (ML) operations. Execution providers abstract different compute backends (CPU, GPU, specialized accelerators) and provide a unified interface for graph partitioning, kernel registration, and operator execution. To learn more, see the ONNX Runtime docs.
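For example, choosing an execution provider through ONNX Runtime's C# API looks like this. AppendExecutionProvider_CPU is an existing Microsoft.ML.OnnxRuntime method; the model path is a placeholder, and hardware-specific EPs (DirectML, QNN, and so on) are appended through analogous methods in their respective packages.

```csharp
using Microsoft.ML.OnnxRuntime;

class EpSelection
{
    static void Main()
    {
        var options = new SessionOptions();

        // Register the CPU EP as a fallback; hardware-specific EPs appended
        // before it take priority when ONNX Runtime partitions the graph.
        options.AppendExecutionProvider_CPU(0);

        // Placeholder model path.
        using var session = new InferenceSession(@"C:\models\model.onnx", options);
    }
}
```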
Machine learning is at the forefront of technological innovation, enabling transformative user experiences. With the advances in client silicon and model miniaturization, new scenarios are feasible to run completely locally. To support developers shipping production experiences in the increasingly complex AI landscape, we are thrilled to announce the public preview of Windows ML – a cutting-edge runtime optimized for performant on-device model inference. Windows ML is designed to support developers creating AI-infused applications with ease, harnessing the incredible strength of Windows' diverse hardware ecosystem, whether it's for entry-level laptops, Copilot+ PCs, or top-of-the-line AI workstations. It's built to help developers leverage the client silicon best suited for their specific workload on any given device, whether that's an NPU for low-power and sustained inference or a GPU for raw horsepower. Windows ML provides a unified framework so developers can confidently target Windows 11 PCs that are available today.
It was built from the ground up to optimize model performance and agility, and to respond to the speed of innovation in model architectures, operators, and optimizations across all layers of the stack. Windows ML is an evolution of DirectML (DML), based on our learnings from the past year listening to feedback from many developers, our silicon partners, and our own teams developing AI experiences for Copilot+ PCs. Windows ML is designed with this feedback in mind, empowering our partners – AMD, Intel, NVIDIA, and Qualcomm – to leverage the execution provider contract to optimize model performance and match the pace of innovation. Windows ML is powered by the ONNX Runtime engine (ORT), allowing developers to utilize the familiar ORT APIs. With ONNX as a native model format, and support for converting PyTorch models to intermediate representations for the EPs, Windows ML ensures seamless integration with existing models and workflows. A key design aspect is leveraging and enhancing the existing ORT Execution Provider (EP) contract to optimize workloads for varied client silicon.
Built in partnership with our Independent Hardware Vendors (IHVs), these execution providers are designed to optimize model execution on existing and emerging AI processors, enabling each to showcase its fullest capability. We've been working closely with AMD, Intel, NVIDIA, and Qualcomm to integrate their EPs seamlessly into Windows ML, and are pleased to support the full set of CPUs, GPUs, and NPUs from day one.

⚠️ IMPORTANT: For the latest documentation about Windows Machine Learning, see What is Windows ML. That documentation describes APIs in the Microsoft.Windows.AI.MachineLearning namespace, which ships in the Windows App SDK. Those APIs supersede the ones documented here, which are in the Windows.AI.MachineLearning namespace and were shipped in 2018. Windows Machine Learning is a high-performance machine learning inference API that is powered by ONNX Runtime and DirectML. The Windows ML API is a Windows Runtime component and is suitable for high-performance, low-latency applications such as frameworks, games, and other real-time applications, as well as applications built with high-level languages. This repo contains Windows Machine Learning samples and tools that demonstrate how to build machine learning powered scenarios into Windows applications. For additional information on Windows ML, including step-by-step tutorials and how-to guides, please visit the Windows ML documentation.