# add-uint-support
Update PyTorch AT_DISPATCH_V2 macros to enable uint16, uint32, and uint64 support in operators and kernels.
## Introduction
This skill automates the process of extending PyTorch operator type coverage to include unsigned integer types (uint16, uint32, and uint64). It is designed for PyTorch developers and library maintainers who need to update C++ dispatch logic to ensure kernels can handle unsigned inputs. By managing the conversion and expansion of AT_DISPATCH_V2 macros, the skill ensures consistent type safety and kernel execution across both CPU and CUDA implementations.
The skill identifies the current dispatch structure, determines whether a migration to the V2 macro format is necessary, and applies the appropriate expansion of type groups such as AT_BAREBONES_UNSIGNED_TYPES or AT_INTEGRAL_TYPES_V2. It handles common scenarios such as upgrading from legacy AT_DISPATCH macros, adding unsigned support alongside floating-point types, and updating multiple dispatch sites within a single source file. This automation reduces manual boilerplate, minimizes type-mismatch errors, and keeps code aligned with PyTorch's evolving dispatch conventions.
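A minimal sketch of the desired end state, assuming a hypothetical CPU kernel (the operator name `fill_cpu`, the iterator, and the `cpu_kernel` call are illustrative placeholders, not taken from the skill itself):

```cpp
// Illustrative fragment only; it requires ATen headers from the PyTorch
// source tree and is not buildable standalone.
#include <ATen/Dispatch_v2.h>
#include <ATen/native/cpu/Loops.h>

void fill_cpu(at::TensorIteratorBase& iter, const at::Scalar& value) {
  AT_DISPATCH_V2(iter.dtype(), "fill_cpu", AT_WRAP([&]() {
    // With AT_BAREBONES_UNSIGNED_TYPES in the type list, this body is also
    // instantiated with scalar_t = uint16_t, uint32_t, and uint64_t.
    scalar_t v = value.to<scalar_t>();
    at::native::cpu_kernel(iter, [=]() -> scalar_t { return v; });
  }),
  AT_EXPAND(AT_ALL_TYPES),
  AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES),  // adds kUInt16, kUInt32, kUInt64
  kHalf, kBFloat16, kBool);
}
```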
- Enables uint16, uint32, and uint64 support across PyTorch operators and kernel implementations.
- Standardizes usage of AT_DISPATCH_V2 macros for better type dispatching and maintainability.
- Automates the injection of AT_BAREBONES_UNSIGNED_TYPES and AT_INTEGRAL_TYPES_V2 into operator dispatch lists.
- Simplifies the migration path from legacy dispatch macros to the recommended V2 architecture.
- Supports multi-site updates, ensuring consistent coverage across all CPU and CUDA dispatch definitions in a single file.
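For the legacy-to-V2 migration mentioned above, the transformation looks roughly like this (a sketch; the operator name `op_cpu` and the helper `kernel_impl` are hypothetical placeholders):

```cpp
// Before: legacy macro. The covered types are baked into the macro name,
// so there is no clean way to splice in an extra type group.
AT_DISPATCH_INTEGRAL_TYPES(dtype, "op_cpu", [&]() {
  kernel_impl<scalar_t>(iter);
});

// After: V2 format. The lambda is wrapped in AT_WRAP and the type list
// becomes trailing arguments, making unsigned coverage a one-line addition.
AT_DISPATCH_V2(dtype, "op_cpu", AT_WRAP([&]() {
  kernel_impl<scalar_t>(iter);
}), AT_EXPAND(AT_INTEGRAL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES));
```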
- Before applying, verify whether the codebase is already using AT_DISPATCH_V2; if not, use the dedicated migration logic to upgrade the macro format first.
- Prefer Method 2 (substituting AT_INTEGRAL_TYPES_V2) when applicable, as it provides a cleaner and more concise way to cover integral types, including unsigned ones.
- Ensure that all dispatch sites (e.g., CPU, CUDA, and internal kernel implementations) are updated uniformly to prevent runtime type errors.
- Use AT_EXPAND() for all type groups to ensure correct macro expansion during compilation.
- The skill assumes access to standard ATen dispatch headers and compatibility with PyTorch's current macro-based type dispatch system.
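The two expansion methods referenced above compare roughly as follows (a sketch; `op_cuda` and `gpu_kernel_impl` are hypothetical names):

```cpp
// Method 1: keep the existing integral group and append the unsigned group.
AT_DISPATCH_V2(dtype, "op_cuda", AT_WRAP([&]() {
  gpu_kernel_impl<scalar_t>(iter);
}), AT_EXPAND(AT_INTEGRAL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES));

// Method 2 (preferred where applicable): AT_INTEGRAL_TYPES_V2 already
// bundles the signed integral types with kUInt16/kUInt32/kUInt64,
// so a single substitution covers everything.
AT_DISPATCH_V2(dtype, "op_cuda", AT_WRAP([&]() {
  gpu_kernel_impl<scalar_t>(iter);
}), AT_EXPAND(AT_INTEGRAL_TYPES_V2));
```

Note that in both methods every type group is passed through AT_EXPAND(); a bare group name will not expand into its member types.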