PyTorch 2.9 Documentation
PyTorch is an optimized tensor library for deep learning using GPUs and CPUs. Features described in this documentation are classified by release status:

- Stable (API-Stable): These features will be maintained long-term and there should generally be no major performance limitations or gaps in documentation. We also expect to maintain backwards compatibility (although breaking changes can happen, and notice will be given one release ahead of time).
- Unstable (API-Unstable): Encompasses all features that are under active development, where APIs may change based on user feedback, because of requisite performance improvements, or because coverage across operators is not yet complete. The APIs and performance characteristics of these features may change.
PyTorch provides a flexible and efficient platform for building deep learning models, offering dynamic computation graphs and a rich ecosystem of tools and libraries. This guide will help you harness the power of PyTorch to create and deploy machine learning models effectively.

The torch package contains data structures for multi-dimensional tensors and defines mathematical operations over these tensors. Additionally, it provides many utilities for efficient serialization of tensors and arbitrary types, and other useful utilities.
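As a minimal sketch of what the torch package provides, the example below creates a tensor, applies a mathematical operation, and round-trips the result through torch.save / torch.load; the file name and shapes are arbitrary choices for illustration.

```python
import torch

# Create a multi-dimensional tensor and apply a mathematical operation to it.
x = torch.randn(3, 4)        # 3x4 tensor of standard-normal values
y = torch.matmul(x, x.T)     # matrix product, result has shape (3, 3)

# Serialize the result to disk and load it back.
torch.save(y, "y.pt")
restored = torch.load("y.pt")
assert torch.equal(y, restored)
```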
It has a CUDA counterpart that enables you to run your tensor computations on an NVIDIA GPU with compute capability >= 3.0. torch.is_tensor returns True if obj is a PyTorch tensor, and torch.is_storage returns True if obj is a PyTorch storage object.

We are excited to announce the release of PyTorch® 2.9 (release notes)! This release features:
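A short sketch of these checks and of moving work onto the CUDA counterpart; the device guard is an illustrative pattern that keeps the snippet runnable on CPU-only machines as well.

```python
import torch

t = torch.arange(6).reshape(2, 3)
print(torch.is_tensor(t))                     # True: t is a PyTorch tensor
print(torch.is_storage(t.untyped_storage()))  # True: its underlying storage object

# Use the CUDA counterpart when an NVIDIA GPU is available, otherwise stay on CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
result = (t.to(device).float() ** 2).sum()
print(result.item())  # 0^2 + 1^2 + ... + 5^2 = 55.0
```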
This release is composed of 3216 commits from 452 contributors since PyTorch 2.8. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try these out and report any issues as we improve 2.9. More information about how to get started with the PyTorch 2-series can be found at our Getting Started page. If you maintain and build your own custom C++/CUDA extensions with PyTorch, this update is for you! We’ve been building out a stable ABI with C++ convenience wrappers to enable you to build extensions with one torch version and run with another.
We’ve added the following APIs since the last release: With these APIs, we have been able to enable a libtorch-ABI wheel for Flash-Attention 3: see the PR here. While we have been intentional about API design to ensure maximal stability, please note that the high-level C++ APIs are still in preview! We are working on many next steps: building out the ABI surface, establishing versioning, writing more docs, and enabling more custom kernels to be ABI stable. We introduce PyTorch Symmetric Memory to enable easy programming of multi-GPU kernels that work over NVLink as well as RDMA networks. Symmetric Memory unlocks three new programming opportunities:
Choose Your Path: Install PyTorch Locally or Launch Instantly on Supported Cloud Platforms. As a member of the PyTorch Foundation, you’ll have access to resources that allow you to be stewards of stable, secure, and long-lasting codebases. You can collaborate on training, local and regional events, open-source developer tooling, academic research, and guides to help new users and contributors have a productive experience. Transition seamlessly between eager and graph modes with TorchScript, and accelerate the path to production with TorchServe. Scalable distributed training and performance optimization in research and production is enabled by the torch.distributed backend.
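As a hedged illustration of the torch.distributed backend mentioned above, the sketch below runs one data-parallel training step; the linear model and random inputs are placeholders, and it assumes launch via torchrun (e.g. `torchrun --nproc_per_node=2 script.py`).

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets MASTER_ADDR, MASTER_PORT, RANK, and WORLD_SIZE for us.
    dist.init_process_group(backend="gloo")   # use "nccl" for multi-GPU training
    rank = dist.get_rank()

    model = torch.nn.Linear(10, 1)            # placeholder model
    ddp_model = DDP(model)
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    # One toy training step; DDP all-reduces gradients across ranks during backward().
    inputs = torch.randn(8, 10)
    loss = ddp_model(inputs).pow(2).mean()
    loss.backward()
    optimizer.step()

    if rank == 0:
        print(f"step done, loss={loss.item():.4f}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```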
A rich ecosystem of tools and libraries extends PyTorch and supports development in computer vision, NLP, and more. We are kicking off the PyTorch 2.9.0 release cycle and continue to be excited for all the great features from our PyTorch community! REMINDER OF CHANGES TO CLASSIFICATION & TRACKING: As mentioned in this RFC, beginning in release 2.8, there are changes to classification and tracking. Feature submissions are now classified as either Stable (API-Stable) or Unstable (API-Unstable), and the previous classifications of Prototype, Beta, and Stable will no longer be used. The requirements for a feature to be considered stable remain the same, and in the RFC we propose a suggested path to stable.
All features continue to be welcome. If you would like a feature to be included in the release blogpost, please mention it in the “Release highlight for Proposed Feature” issue that you create, and include the release version this is... Introducing PyTorch 2.0, our first steps toward the next generation 2-series release of PyTorch. Over the last few years we have innovated and iterated from PyTorch 1.0 to the most recent 1.13 and moved to the newly formed PyTorch Foundation, part of the Linux Foundation. PyTorch’s biggest strength, beyond our amazing community, is its first-class Python integration, imperative style, simplicity of the API, and options. PyTorch 2.0 offers the same eager-mode development and user experience, while fundamentally changing and supercharging how PyTorch operates at the compiler level under the hood.
We are able to provide faster performance and support for Dynamic Shapes and Distributed. Below you will find all the information you need to better understand what PyTorch 2.0 is, where it’s going, and, more importantly, how to get started today (e.g., tutorial, requirements, models, common FAQs). There is still a lot to learn and develop, but we are looking forward to community feedback and contributions to make the 2-series better, and thank you all who have made the 1-series so... Today, we announce torch.compile, a feature that pushes PyTorch performance to new heights and starts the move for parts of PyTorch from C++ back into Python. We believe that this is a substantial new direction for PyTorch – hence we call it 2.0. torch.compile is a fully additive (and optional) feature, and hence 2.0 is 100% backward compatible by definition.
Underpinning torch.compile are new technologies – TorchDynamo, AOTAutograd, PrimTorch and TorchInductor.
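As a minimal sketch of the opt-in torch.compile API described above (the function being compiled is an arbitrary placeholder, and this assumes a PyTorch 2.x install):

```python
import torch

def pointwise_fn(x):
    # An arbitrary elementwise function for TorchDynamo/TorchInductor to capture and fuse.
    return torch.sin(x) * torch.cos(x) + 0.5 * x

compiled_fn = torch.compile(pointwise_fn)  # opt-in; plain eager execution is unaffected

x = torch.randn(1024, 1024)
eager_out = pointwise_fn(x)
compiled_out = compiled_fn(x)
assert torch.allclose(eager_out, compiled_out, atol=1e-6)
```

Because torch.compile is additive and optional, the same function can still be called eagerly; the compiled variant simply traces and optimizes it on first call.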