Datasets:
| id (int64, 2.74B–3.05B) | title (string, 1–255 chars) | user (string, 2–26 chars) | state (2 classes) | labels (sequence, 0–24 items) | comments (int64, 0–206) | author_association (4 classes) | body (string, 7–62.5k chars, nullable) | is_title (bool, 1 class) |
|---|---|---|---|---|---|---|---|---|
3,048,328,783
|
Floating Point exception in Convolution with disabled SMT
|
Flamefire
|
open
|
[] | 0
|
COLLABORATOR
|
### 🐛 Describe the bug
Using NNPACK for convolution on a system with SMT disabled causes a `Floating Point exception` (a divide-by-zero), terminating the program.
This can be easily reproduced with `python nn/test_convolution.py TestConvolutionNN.test_conv2d_discontiguous_weight`
The crash can be traced to a calculation in NNPACK based on the cache sizes reported by cpuinfo; see https://github.com/Maratyszcza/NNPACK/issues/218
This should be addressed, and the NNPACK version used by PyTorch should be updated to a version containing the fix.
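For reference, a standalone sketch of the call pattern the referenced test exercises (the shapes here are illustrative assumptions, and whether NNPACK is actually selected depends on the build and input configuration; the test command above remains the reliable repro):
```python
import torch
import torch.nn.functional as F

# Sketch only: a CPU conv2d with a discontiguous weight, mirroring
# TestConvolutionNN.test_conv2d_discontiguous_weight. On an affected system with
# SMT disabled, the NNPACK path can hit the divide-by-zero described above.
x = torch.randn(1, 16, 32, 32)
w = torch.randn(32, 16, 3, 3)[::2]   # slicing with a step makes the weight non-contiguous
assert not w.is_contiguous()
out = F.conv2d(x, w, padding=1)
print(out.shape)
```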
### Versions
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: AlmaLinux 9.4 (Seafoam Ocelot) (x86_64)
GCC version: (GCC) 13.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.34
Python version: 3.11.5 (main, Feb 6 2025, 01:57:04) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-427.33.1.el9_4.x86_64-x86_64-with-glibc2.34
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architektur: x86_64
CPU Operationsmodus: 32-bit, 64-bit
Adressgrößen: 52 bits physical, 57 bits virtual
Byte-Reihenfolge: Little Endian
CPU(s): 128
Liste der Online-CPU(s): 0-63
Liste der Offline-CPU(s): 64-127
Anbieterkennung: AuthenticAMD
Modellname: AMD EPYC 9334 32-Core Processor
Prozessorfamilie: 25
Modell: 17
Thread(s) pro Kern: 1
Kern(e) pro Sockel: 32
Sockel: 2
Stepping: 1
Übertaktung: aktiviert
Skalierung der CPU(s): 68%
Maximale Taktfrequenz der CPU: 3910,2529
Minimale Taktfrequenz der CPU: 0,0000
BogoMIPS: 5400,05
Markierungen: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d debug_swap
L1d Cache: 2 MiB (64 Instanzen)
L1i Cache: 2 MiB (64 Instanzen)
L2 Cache: 64 MiB (64 Instanzen)
L3 Cache: 256 MiB (8 Instanzen)
NUMA-Knoten: 8
NUMA-Knoten0 CPU(s): 0-7
NUMA-Knoten1 CPU(s): 8-15
NUMA-Knoten2 CPU(s): 16-23
NUMA-Knoten3 CPU(s): 24-31
NUMA-Knoten4 CPU(s): 32-39
NUMA-Knoten5 CPU(s): 40-47
NUMA-Knoten6 CPU(s): 48-55
NUMA-Knoten7 CPU(s): 56-63
Schwachstelle Gather data sampling: Not affected
Schwachstelle Itlb multihit: Not affected
Schwachstelle L1tf: Not affected
Schwachstelle Mds: Not affected
Schwachstelle Meltdown: Not affected
Schwachstelle Mmio stale data: Not affected
Schwachstelle Retbleed: Not affected
Schwachstelle Spec rstack overflow: Mitigation; Safe RET
Schwachstelle Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Schwachstelle Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Schwachstelle Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Schwachstelle Srbds: Not affected
Schwachstelle Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.2
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] torch==2.7.0
[pip3] triton==3.3.0
[conda] Could not collect
| true
|
3,048,208,603
|
miss doc for torch.segment_reduce
|
shadow150519
|
open
|
[] | 0
|
NONE
|
### 📚 The doc issue
I noticed there is a function called `segment_reduce`, but I can't find its documentation. Will it perform better than `torch.scatter_reduce`, given that `torch.scatter_reduce` is more general?
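For context, here is a minimal sketch of the kind of segment-style reduction in question, expressed with the documented `torch.Tensor.scatter_reduce` (the example data and segment layout are my own assumptions, not from the original question):
```python
import torch

# Segment-style sum via the more general scatter_reduce: each element is routed
# to the output slot given by its segment id.
data = torch.tensor([1., 2., 3., 4., 5.])
segment_ids = torch.tensor([0, 0, 1, 1, 1])  # segment membership of each element
out = torch.zeros(2).scatter_reduce(0, segment_ids, data, reduce="sum", include_self=False)
print(out)  # tensor([ 3., 12.])
```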
### Suggest a potential alternative/fix
_No response_
| true
|
3,048,164,781
|
`torch.batch_norm` shows inconsistent error behavior between CPU and GPU
|
SilentTester73
|
open
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
## Description
When `torch.batch_norm` is called with one of `running_mean` or `running_var` as a tensor and the other as `None`, an internal assertion `Expected has_running_mean == has_running_var to be true, but got false` is triggered on CUDA-enabled GPUs. However, this error is *not* triggered when the same code is run on the CPU.
Ideally, the behavior should be consistent across devices: either both CPU and GPU should raise this specific error, or neither should.
**To Reproduce:**
The following code demonstrates the issue. It tests two scenarios:
1. `running_mean` is a Tensor, `running_var` is `None`.
2. `running_mean` is `None`, `running_var` is a Tensor.
```python
import torch
print(f"PyTorch Version: {torch.__version__}")
# Common parameters for torch.batch_norm
weight_param = None
bias_param = None
is_training_param = True # Error occurs with True or False
momentum_param = 0.1
eps_param = 1e-5
cudnn_enabled_param = True # Also occurs with False on GPU
# --- Scenario 1: running_mean is Tensor, running_var is None ---
print("\n--- Scenario 1: running_mean is Tensor, running_var is None ---")
# Input tensor
input_tensor_shape = (3, 4, 5) # N, C, D*
num_features = input_tensor_shape[1]
# CPU
print(" CPU (Scenario 1):")
try:
input_tensor_cpu = torch.randn(input_tensor_shape)
running_mean_param_cpu = torch.randn(num_features)
running_var_param_cpu = None
torch.batch_norm(
input_tensor_cpu,
weight_param,
bias_param,
running_mean_param_cpu,
running_var_param_cpu,
is_training_param,
momentum_param,
eps_param,
cudnn_enabled_param
)
print(" CPU: Error not triggered.")
except RuntimeError as e:
print(f" CPU Error: {e}")
if "Expected has_running_mean == has_running_var to be true, but got false" in str(e):
print(" CPU: Successfully triggered the target error (unexpected based on current behavior).")
# GPU
if torch.cuda.is_available():
print(" GPU (Scenario 1):")
try:
input_tensor_gpu = torch.randn(input_tensor_shape).cuda()
running_mean_param_gpu = torch.randn(num_features).cuda()
running_var_param_gpu = None
torch.batch_norm(
input_tensor_gpu,
weight_param,
bias_param,
running_mean_param_gpu,
running_var_param_gpu,
is_training_param,
momentum_param,
eps_param,
cudnn_enabled_param
)
print(" GPU: Error not triggered (unexpected for this specific error message).")
except RuntimeError as e:
print(f" GPU Error: {e}")
if "Expected has_running_mean == has_running_var to be true, but got false" in str(e):
print(" GPU: Successfully triggered the target error.")
else:
print(" GPU (Scenario 1): CUDA not available, skipping GPU test.")
# --- Scenario 2: running_mean is None, running_var is Tensor ---
print("\n--- Scenario 2: running_mean is None, running_var is Tensor ---")
# CPU
print(" CPU (Scenario 2):")
try:
input_tensor_cpu = torch.randn(input_tensor_shape)
running_mean_param_cpu = None
running_var_param_cpu = torch.randn(num_features)
torch.batch_norm(
input_tensor_cpu,
weight_param,
bias_param,
running_mean_param_cpu,
running_var_param_cpu,
is_training_param,
momentum_param,
eps_param,
cudnn_enabled_param
)
print(" CPU: Error not triggered.")
except RuntimeError as e:
print(f" CPU Error: {e}")
if "Expected has_running_mean == has_running_var to be true, but got false" in str(e):
print(" CPU: Successfully triggered the target error (unexpected based on current behavior).")
# GPU
if torch.cuda.is_available():
print(" GPU (Scenario 2):")
try:
input_tensor_gpu = torch.randn(input_tensor_shape).cuda()
running_mean_param_gpu = None
running_var_param_gpu = torch.randn(num_features).cuda()
torch.batch_norm(
input_tensor_gpu,
weight_param,
bias_param,
running_mean_param_gpu,
running_var_param_gpu,
is_training_param,
momentum_param,
eps_param,
cudnn_enabled_param
)
print(" GPU: Error not triggered (unexpected for this specific error message).")
except RuntimeError as e:
print(f" GPU Error: {e}")
if "Expected has_running_mean == has_running_var to be true, but got false" in str(e):
print(" GPU: Successfully triggered the target error.")
else:
print(" GPU (Scenario 2): CUDA not available, skipping GPU test.")
```
## Expected behavior:
The error `RuntimeError: Expected has_running_mean == has_running_var to be true, but got false` should be raised consistently on both CPU and GPU, or on neither.
## Actual behavior:
PyTorch Version: 2.6.0+cu124
--- Scenario 1: running_mean is Tensor, running_var is None ---
CPU (Scenario 1):
CPU: Error not triggered.
GPU (Scenario 1):
GPU Error: Expected has_running_mean == has_running_var to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
GPU: Successfully triggered the target error.
--- Scenario 2: running_mean is None, running_var is Tensor ---
CPU (Scenario 2):
CPU: Error not triggered.
GPU (Scenario 2):
GPU Error: Expected has_running_mean == has_running_var to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
GPU: Successfully triggered the target error.
The full code used for testing can be found at:
[https://colab.research.google.com/drive/17xWrbcKvMTpDTHcz_XSrnb5esLCUh970?usp=sharing](https://colab.research.google.com/drive/17xWrbcKvMTpDTHcz_XSrnb5esLCUh970?usp=sharing)
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: 18.1.8 (++20240731025043+3b5b5c1ec4a3-1~exp1~20240731145144.92)
CMake version: version 4.0.0
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-59-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.8.93
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 570.133.20
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900F
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU(s) scaling MHz: 49%
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.5
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.26.2
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] optree==0.15.0
[pip3] torch==2.7.0
[pip3] triton==3.3.0
[conda] Could not collect
```
| true
|
3,048,015,448
|
The parameters of in_proj_bias in MultiheadAttention are zeros
|
Neronjust2017
|
open
|
[] | 0
|
NONE
|
### 🐛 Describe the bug
I use nn.MultiheadAttention in my model
```
self.multihead_attn = nn.MultiheadAttention(d_model,
nhead,
dropout=dropout,
batch_first=batch_first)
```
However, after the training process is done, I checked the value of `in_proj_bias` in the `MultiheadAttention` layer: the Q and K bias values are all zero, but the V bias values are not. Why is that? Thanks.

### Versions
PyTorch version: 2.3.0a0+6ddf5cf85e.nv24.04
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.29.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB
Nvidia driver version: 470.199.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.90GHz
CPU family: 6
Model: 106
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 6
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 invpcid_single intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq md_clear pconfig spec_ctrl intel_stibp flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 80 MiB (64 instances)
L3 cache: 96 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; Load fences, usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] cudnn==1.1.2
[pip3] efficientnet-pytorch==0.7.1
[pip3] mypy==1.15.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] nvtx==0.2.5
[pip3] onnx==1.16.0
[pip3] onnxruntime==1.16.0
[pip3] onnxsim==0.4.36
[pip3] optree==0.11.0
[pip3] pynvjitlink==0.1.13
[pip3] pytorch-lightning==2.4.0
[pip3] pytorch-quantization==2.1.2
[pip3] pytorch-triton==3.0.0+a9bc1a364
[pip3] torch==2.3.0a0+6ddf5cf85e.nv24.4
[pip3] torch-scatter==2.1.2
[pip3] torch-tensorrt==2.3.0a0
[pip3] torchdata==0.7.1a0
[pip3] torchmetrics==1.4.2
[pip3] torchtext==0.17.0a0
[pip3] torchvision==0.18.0a0
[conda] Could not collect
| true
|
3,047,984,875
|
Avoid using system_clock
|
cyyever
|
open
|
[
"oncall: distributed",
"module: cpu",
"open source",
"release notes: quantization"
] | 1
|
COLLABORATOR
|
This PR replaces most uses of `std::chrono::system_clock` with `std::chrono::steady_clock` when the duration is used in condition variables. Ideally, system clocks should be used only to log wall-clock times.
Some uses of `high_resolution_clock` are also changed to `steady_clock` because their resolution is not required in those contexts.
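As a rough analogy in Python (illustration only; the PR itself changes C++ `std::chrono` usage): a deadline derived from the wall clock behaves like `system_clock` and moves if the system time is adjusted, while a monotonic deadline behaves like `steady_clock` and does not, which is the property timed waits on condition variables need.
```python
import time

# Analogy sketch: wall-clock vs. monotonic deadlines.
timeout = 0.5
deadline_wall = time.time() + timeout       # system_clock-style: shifts with wall-clock adjustments
deadline_mono = time.monotonic() + timeout  # steady_clock-style: immune to wall-clock changes

while time.monotonic() < deadline_mono:
    time.sleep(0.05)  # stand-in for a condition-variable wait with a bounded timeout
print(f"waited ~{timeout}s against a monotonic deadline")
```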
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,047,982,027
|
[ROCm][CI] Update build-environment for mi300 workflows
|
jithunnair-amd
|
open
|
[
"module: rocm",
"open source",
"topic: not user facing",
"ciflow/rocm"
] | 1
|
COLLABORATOR
|
Update the build environment for MI300 workflows so their test times are tracked separately in https://raw.githubusercontent.com/pytorch/test-infra/generated-stats/stats/test-times.json. Currently, MI200 and MI300 test times are combined under the same key, `linux-focal-rocm-py3.10`.
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,047,909,723
|
Inconsistent behavior between CPU and GPU implementations of `torch.arange`
|
SilentTester73
|
open
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
## Description
When using `torch.arange()` with a start value greater than the end value and a positive step, the behavior differs between CPU and GPU implementations:
- GPU silently returns an empty tensor
- CPU correctly raises an exception about inconsistent bounds with step sign
## Reproduction Steps
```python
import torch
print(torch.__version__)
# This should fail on both CPU and GPU since it's an impossible range
try:
print("GPU Results:", torch.arange(start=1549556900, end=1549556828, step=1989724, dtype=torch.float, device='cuda'))
except Exception as e:
print("GPU Exception:", e)
try:
print("CPU Results:", torch.arange(start=1549556900, end=1549556828, step=1989724, dtype=torch.float, device='cpu'))
except Exception as e:
print("CPU Exception:", e)
```
## Actual Behavior
```
2.6.0+cu124
GPU Results: tensor([], device='cuda:0') # Silently returns empty tensor
CPU Exception: upper bound and larger bound inconsistent with step sign # Raises exception
```
## Expected Behavior
Both implementations should raise the same exception.
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 17.0.6 (++20231209124227+6009708b4367-1~exp1~20231209124336.77)
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-59-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 570.133.20
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900F
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] Could not collect
| true
|
3,047,900,019
|
Inconsistent behavior and misleading error message for `torch.nanmean()` with complex dtypes
|
SilentTester73
|
open
|
[
"topic: bug fixes"
] | 2
|
NONE
|
### 🐛 Describe the bug
## Description:
When using `nanmean()` with complex tensors, there is inconsistent behavior between the CPU and GPU implementations:
- On GPU: The function works correctly with complex dtypes (complex128)
- On CPU: The function fails, but with a misleading error message: "nansum does not support complex inputs"
The error message suggests that complex inputs aren't supported at all, even though they clearly work on GPU; this may be confusing for users.
## Steps to reproduce:
```python
import torch
# Works fine on GPU
try:
print("GPU Results:", torch.randn(2, 4, 15, dtype=torch.complex128, device='cuda').nanmean())
except Exception as e:
print("GPU Exception:", e)
# Fails on CPU with misleading message
try:
print("CPU Results:", torch.randn(2, 4, 15, dtype=torch.complex128, device='cpu').nanmean())
except Exception as e:
print("CPU Exception:", e)
```
Output:
```
GPU Results: tensor(-0.0895+0.0314j, device='cuda:0', dtype=torch.complex128)
CPU Exception: nansum does not support complex inputs
```
Colab: [https://colab.research.google.com/drive/1BBL8eWcIUiezqNZxtnauc9gq7NbRvI1U?usp=sharing](https://colab.research.google.com/drive/1BBL8eWcIUiezqNZxtnauc9gq7NbRvI1U?usp=sharing)
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 17.0.6 (++20231209124227+6009708b4367-1~exp1~20231209124336.77)
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-59-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 570.133.20
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900F
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] Could not collect
```
| true
|
3,047,833,331
|
fix slice w/ dynamic shapes
|
cgufb
|
open
|
[
"fb-exported",
"ciflow/inductor"
] | 4
|
CONTRIBUTOR
|
Summary: `guard_size_oblivious` has side effects that result in invalid strides when slice nodes take a negative index on dynamic input shapes.
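A hypothetical repro sketch of the pattern the summary describes (not taken from the diff; the shapes and the negative-index slice are my own assumptions):
```python
import torch

# Sketch: a slice with a negative index on a dynamically shaped, compiled input.
@torch.compile(dynamic=True)
def f(x):
    return x[:, :-1].contiguous()

print(f(torch.randn(4, 8)).shape)
print(f(torch.randn(4, 12)).shape)  # a second shape to exercise the dynamic-shape path
```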
Test Plan: CIs should pass.
Differential Revision: D74354663
| true
|
3,047,778,668
|
[Minimizer] Fix the path naming
|
jimone1
|
open
|
[
"fb-exported",
"release notes: fx",
"fx"
] | 5
|
CONTRIBUTOR
|
Summary:
Added some logging and captured the indexing issue; see the image below.
{F1977773416}
This is why the saved module path was called `/tmp/jimwan/minimizer_a_acc.pt`.
With this change, the saved module paths look like `/tmp/jimwan/minimizer_addmm_default_103_acc.pt`.
Test Plan:
```
MTIAC_USE_DIST_REF_KERNELS=all buck2 run @//mode/opt mtia/accuracy/minimizer:mtia_minimizer_runner -- --mode sequential --compare_fn allclose --pt_save_dir /tmp/debug3 --atol 1e-4 --rtol 1e-4 --all_outputs --start_idx native_layer_norm_default_80 --end_idx getitem_272 2>&1 | tee ~/test.log
```
{F1977773610}
Reviewed By: qcyuan
Differential Revision: D74369107
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv
| true
|
3,047,722,761
|
DISABLED test_intermediary_hooks_same_on_inductor (__main__.HooksTests)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 1
|
NONE
|
Platforms: asan, linux, mac, macos, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_intermediary_hooks_same_on_inductor&suite=HooksTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41835113407).
Over the past 3 hours, it has been determined flaky in 60 workflow(s) with 120 failures and 60 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_intermediary_hooks_same_on_inductor`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/dynamo/test_hooks.py", line 424, in test_intermediary_hooks_same_on_inductor
dynamo_out[0].backward(torch.ones(4))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 648, in backward
torch.autograd.backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/__init__.py", line 354, in backward
_engine_run_backward(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/autograd/graph.py", line 829, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/compiled_autograd.py", line 963, in runtime_wrapper
out = compiled_fn(inputs, sizes, scalars, hooks, packed_inputs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 350, in __call__
return self.forward(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 678, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 840, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 416, in __call__
raise e
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/fx/graph_module.py", line 403, in __call__
return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1755, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1861, in _call_impl
return inner()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1798, in inner
args_result = hook(self, args)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1458, in __call__
return self._torchdynamo_orig_callable(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 624, in __call__
return _compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1133, in _compile
raise InternalTorchDynamoError(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1082, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_utils_internal.py", line 97, in wrapper_function
return function(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 777, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 813, in _compile_inner
out_code = transform_code_object(code, transform)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
transformations(instructions, code_options)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 264, in _fn
return fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 741, in transform
tracer.run()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3494, in run
super().run()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1345, in run
while self.step():
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1254, in step
self.dispatch_table[inst.opcode](self, inst)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 421, in impl
self.push(fn_var.call_function(self, self.popn(nargs), {}))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 1148, in call_function
return handler(tx, args, kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 792, in <lambda>
return lambda tx, args, kwargs: obj.call_function(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 1148, in call_function
return handler(tx, args, kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 968, in builtin_dispatch
rv = fn(tx, args, kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 842, in <lambda>
handlers.append(lambda tx, args, _: binop_handler(tx, *args))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py", line 465, in <lambda>
[*a.items, *b.unpack_var_sequence(tx)],
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/variables/constant.py", line 124, in unpack_var_sequence
raise NotImplementedError from e
torch._dynamo.exc.InternalTorchDynamoError: NotImplementedError:
from user code:
File "/var/lib/jenkins/pytorch/test/dynamo/test_hooks.py", line 902, in hook
return (args[0] + 100,)
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/dynamo/test_hooks.py HooksTests.test_intermediary_hooks_same_on_inductor
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_hooks.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,047,721,817
|
Inconsistent Complex torch.Tensor.asin() Results Between CPU and GPU
|
SilentTester73
|
open
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
## Bug Description
When computing `asin` of a complex tensor with a very small real part and a large imaginary part, there is a discrepancy between the results computed on CPU and on GPU. The CPU computation often returns complex infinity, while the GPU returns finite numerical values. The result also differs depending on whether the same tensor is loaded from a file or constructed in Python.
## Steps to Reproduce
Colab: [https://colab.research.google.com/drive/1RSOwKzgX6lnGaYeEuC5nyEJ1ZdrT2peg?usp=sharing](https://colab.research.google.com/drive/1RSOwKzgX6lnGaYeEuC5nyEJ1ZdrT2peg?usp=sharing)
input_tensor.pt: [https://f004.backblazeb2.com/file/picgogo/share/fuzz02/input_tensor.pt](https://f004.backblazeb2.com/file/picgogo/share/fuzz02/input_tensor.pt)
Note that there is also a discrepancy between constructing the tensor in code and loading the same tensor from a file.
```python
import torch
# Create the complex tensor
tensor_cpu = torch.tensor([1.1217e-35+1.9402e+09j], dtype=torch.complex64)
print(tensor_cpu)
print(f"CPU tensor: {tensor_cpu}")
# CPU result
result_cpu = tensor_cpu.asin()
print(f"CPU result: {result_cpu}")
# GPU result
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tensor_gpu = tensor_cpu.to(device)
result_gpu = tensor_gpu.asin()
print(f"GPU result: {result_gpu}")
# Compare
print(f"Difference: {result_cpu - result_gpu.cpu()}")
```
## Actual Behavior
CPU result: Returns complex infinity `tensor([0.+infj])` or an exact value `1.5707963705062866+97.04060363769531j`
GPU result: Returns finite values (`tensor([5.6052e-45+22.0792j], device='cuda:0')` or similar)
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 17.0.6 (++20231209124227+6009708b4367-1~exp1~20231209124336.77)
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-59-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 570.133.20
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900F
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] Could not collect
```
| true
|
3,047,693,561
|
Operations on a tensor and a scalar will cause the error on dtype of the result
|
Redempt1onzzZZ
|
closed
|
[] | 2
|
NONE
|
### 🐛 Describe the bug
This is a derived finding based on #153014. Under PyTorch's normal type-promotion logic, when an API operates on two tensors of different dtypes (precisions), the result follows the higher precision, as in the example below.
```
import torch
tensor1 = torch.tensor([0.01], dtype=torch.float16, device='cuda')
print(tensor1)
print(tensor1.dtype)
tensor2 = torch.tensor([0.01], dtype=torch.float32, device='cuda')
print(tensor2)
print(tensor2.dtype)
result = torch.add(tensor1,tensor2)
print(result)
print(result.dtype)
```
<img width="343" alt="Image" src="https://github.com/user-attachments/assets/21768963-8c48-4dcc-ad70-121ab704ec3e" />
However, when one of the inputs to the API is a scalar, the situation is different: the calculation is executed in the lower precision.
```
import torch
tensor1 = torch.tensor([0.01], dtype=torch.float16, device='cuda')
print(tensor1)
tensor2 = torch.tensor([65536], dtype=torch.float32, device='cuda')
print(tensor2)
result = torch.add(tensor1,tensor2)
print(result)
print(result.dtype)
```
<img width="349" alt="Image" src="https://github.com/user-attachments/assets/70594dde-3edc-418d-b62b-2f0aeb79aebc" />
```
import torch
tensor1 = torch.tensor([0.01], dtype=torch.float16, device='cuda')
print(tensor1)
tensor2 = torch.tensor(65536, dtype=torch.float32, device='cuda')
print(tensor2)
result = torch.add(tensor1,tensor2)
print(result)
print(result.dtype)
```
<img width="331" alt="Image" src="https://github.com/user-attachments/assets/eb62cc4d-75df-4e93-91ad-420b70a5fb3d" />
I have already verified this issue with `torch.add`, `torch.sub`, and `torch.hypot`, and it appears to be a common problem across many APIs.
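For reference, the promotion difference can be checked directly with `torch.result_type`; under PyTorch's type-promotion rules, 0-dim tensors rank below dimensioned tensors within the same dtype category, so they do not upcast the result (a sketch, with the outputs I would expect noted in comments):
```python
import torch

# Checking the promotion rules the report is about.
a = torch.tensor([0.01], dtype=torch.float16)
print(torch.result_type(a, torch.tensor([65536.0], dtype=torch.float32)))  # torch.float32 (1-D tensor operand)
print(torch.result_type(a, torch.tensor(65536.0, dtype=torch.float32)))    # torch.float16 (0-dim tensor operand)
```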
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-55-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 570.124.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6430
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 3400.0000
CPU min MHz: 800.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.15.0
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] numpy 2.1.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.15.0 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
| true
|
3,047,638,306
|
Add tests to check pretty print when padding is a string in C++ API
|
Alvaro-Kothe
|
open
|
[
"open source",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Currently there are no tests to verify the behaviour of pretty print when padding is `torch::kSame` or `torch::kValid`. This PR adds these tests to guard against future regressions.
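For comparison, a sketch of the Python-side analogue of the behaviour being tested (the PR itself adds C++ API tests for `torch::kSame` / `torch::kValid`):
```python
import torch.nn as nn

# The module repr should show the string padding mode rather than a numeric tuple.
print(nn.Conv2d(3, 8, kernel_size=3, padding="same"))
print(nn.Conv2d(3, 8, kernel_size=3, padding="valid"))
```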
| true
|
3,047,624,613
|
Add logging for guard miss failure
|
jamesjwu
|
open
|
[
"fb-exported",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 6
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153125
Differential Revision: [D74371381](https://our.internmc.facebook.com/intern/diff/D74371381/)
This PR adds some logging for guard misses to tlparse, so that we know when AOTAutogradCache and FxGraphCache miss due to guards.
Example tlparse result:
https://gist.github.com/jamesjwu/afa19335c0aee85b24546b13c1cf6427
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,047,624,527
|
Turn on static cuda launcher test
|
jamesjwu
|
closed
|
[
"fb-exported",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* (to be filled)
Differential Revision: [D74339692](https://our.internmc.facebook.com/intern/diff/D74339692/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,047,608,665
|
DISABLED test_input_codegen_with_sympy_expr_xpu (__main__.AOTInductorTestABICompatibleGpu)
|
etaf
|
open
|
[
"triaged",
"skipped",
"module: xpu"
] | 1
|
COLLABORATOR
|
Platforms: <fill this in or delete. Valid labels are: asan, linux, mac, macos, rocm, win, windows.>
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22inductor%2Ftest_aot_inductor.py%3A%3AAOTInductorTestABICompatibleGpu%3A%3Atest_input_codegen_with_sympy_expr_xpu%22%5D)).
cc @gujinghui @EikanWang @fengyuan14 @guangyey
| true
|
3,047,534,169
|
[CUDA] test_c10d_nccl test_extra_cuda_context failure due to _helper_test_extra_cuda_context_by_memory
|
nWEIdia
|
open
|
[] | 1
|
COLLABORATOR
|
While trying to replace the CUDA 11.8 distributed jobs with CUDA 12.6 ([PR](https://github.com/pytorch/pytorch/pull/151594/files#diff-9f639571a250cffbe9cded7d2fbb5ad6311e4be9c0c7610e5ba85930806e7f38)), test_extra_cuda_context failed, and I had to increase the 1.5x heuristic to 1.7 to temporarily work around the failure.
When this is properly fixed, the 1.7 should be rolled back to 1.5.
Previously failed job:
https://github.com/pytorch/pytorch/actions/runs/14656019861/job/41132964287
cc @ptrblck @eqy @tinglvv @atalman @malfet
| true
|
3,047,526,737
|
Revert "[CI] docker images use tags instead of image name (#152209)"
|
huydhn
|
open
|
[
"module: rocm",
"topic: not user facing",
"ciflow/inductor",
"ci-no-td"
] | 1
|
CONTRIBUTOR
|
This reverts commit 0145f9e29e37beb2fb03bf2538f675060ab7b4f5.
DEBUG PR, no need to review
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
| true
|
3,047,515,089
|
DISABLED test_nn_module (__main__.TestGuardSerialization)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 2
|
NONE
|
Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nn_module&suite=TestGuardSerialization&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41828599132).
Over the past 3 hours, it has been determined flaky in 41 workflow(s) with 82 failures and 41 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nn_module`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/dynamo/test_guard_serialization.py", line 640, in test_nn_module
self._test_serialization("NN_MODULE", fn, m, x)
File "/var/lib/jenkins/workspace/test/dynamo/test_guard_serialization.py", line 330, in _test_serialization
transform_code_object(self._frame_state.f_code, transform)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1422, in transform_code_object
transformations(instructions, code_options)
File "/var/lib/jenkins/workspace/test/dynamo/test_guard_serialization.py", line 311, in transform
check_fn_manager = CheckFunctionManager(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/guards.py", line 2712, in __init__
filter_results = guard_filter_fn(
File "/var/lib/jenkins/workspace/test/dynamo/test_guard_serialization.py", line 272, in guard_filter_fn
self.assertTrue(any(ret))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 687, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ASAN=1 PYTORCH_TEST_WITH_UBSAN=1 python test/dynamo/test_guard_serialization.py TestGuardSerialization.test_nn_module
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_guard_serialization.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,047,510,959
|
devmate factor out test_torch tests
|
bobrenjc93
|
open
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153119
* #153118
* #152924
prompt "ok great now can you split test\_torch.py into more smaller
pieces just like you did with test\_basic\_vital\_signs.py?"
| true
|
3,047,498,747
|
devmate test_basic_vital_signs
|
bobrenjc93
|
open
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #153119
* __->__ #153118
* #152924
| true
|
3,047,468,182
|
[TESTING] Triton pin (May 7) 81f93f2c8ec7d20a1f8184def767edeaebeb6812
|
davidberard98
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"ciflow/inductor",
"ciflow/rocm",
"ci-no-td"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153117
| true
|
3,047,467,718
|
[c10d] Reduce test verbosity
|
kwen2501
|
open
|
[
"module: c10d",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153116
We have been seeing a lot of `Starting event listener thread for rank` messages recently in test print-outs. Moving them to `logger.debug`.
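A minimal sketch of the kind of change described (hedged: illustrative only, not the actual c10d source):
```python
import logging

logger = logging.getLogger(__name__)

def start_event_listener(rank: int) -> None:
    # Previously emitted at INFO, which flooded test print-outs; DEBUG keeps
    # the message available when verbose logging is explicitly enabled.
    logger.debug("Starting event listener thread for rank %s", rank)
```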
| true
|
3,047,453,154
|
[ONNX] Implement sym_float?
|
justinchuby
|
open
|
[
"module: onnx",
"triaged"
] | 2
|
COLLABORATOR
|
Do we need sym_float in https://github.com/pytorch/pytorch/blob/main/torch/onnx/_internal/exporter/_torchlib/ops/symops.py ?
@titaiwangms @xadupre
| true
|
3,047,451,491
|
[BE][lint] fix PYFMT for PT-D code under torch.testing._internal, add them to the lint list
|
XilunWu
|
open
|
[
"oncall: distributed",
"module: lint",
"better-engineering",
"ciflow/trunk",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153114
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @EikanWang @jgong5 @wenzhe-nrv @sanchitintel
| true
|
3,047,447,913
|
[C10D] Move getNcclDataType into NCCLUtils
|
GD06
|
closed
|
[
"oncall: distributed",
"fb-exported",
"Merged",
"ciflow/trunk",
"release notes: distributed (c10d)"
] | 6
|
CONTRIBUTOR
|
Differential Revision: D74365214
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,047,425,123
|
Support using SymInt shapes for torch.baddbmm no-broadcast case
|
yf225
|
open
|
[
"ciflow/trunk",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
A typical `bmm` kernel in Helion needs to pass SymInt shapes to `torch.baddbmm`. Currently `self.expand((dim1, dim2, dim3))` in baddbmm runs unconditionally, and it doesn't work with SymInt shapes (it raises the following error):
```
Traceback (most recent call last):
File "/home/willfeng/local/helion_yf225/helion/_compiler/type_propagation.py", line 699, in propagate_call
CheckForIndexCalls.retry_call(self.value, proxy_args, proxy_kwargs),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/willfeng/local/helion_yf225/helion/_compiler/tile_index_proxy.py", line 104, in retry_call
return fn(*proxy_args, **proxy_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/willfeng/local/pytorch/torch/utils/_stats.py", line 27, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/willfeng/local/pytorch/torch/_subclasses/fake_tensor.py", line 1338, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/willfeng/local/pytorch/torch/_subclasses/fake_tensor.py", line 1986, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/willfeng/local/pytorch/torch/_subclasses/fake_tensor.py", line 1450, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/willfeng/local/pytorch/torch/_subclasses/fake_tensor.py", line 2645, in _dispatch_impl
r = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/willfeng/local/pytorch/torch/_ops.py", line 806, in __call__
return self._op(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/willfeng/local/pytorch/torch/_prims_common/wrappers.py", line 309, in _fn
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/willfeng/local/pytorch/torch/_meta_registrations.py", line 2172, in meta_baddbmm
self = self.expand((dim1, dim2, dim3))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: /home/willfeng/local/pytorch/build/aten/src/ATen/RegisterCompositeExplicitAutograd_0.cpp:5025: SymIntArrayRef expected to contain only concrete integers
```
This PR changes it so that we don't run `expand()` when not necessary, which makes the Helion use case (i.e. no broadcasting) work.
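A rough sketch of the idea (hedged: not the actual `meta_baddbmm` change; the helper name and shape check are illustrative only):
```python
import torch

def maybe_expand_self(self: torch.Tensor, dim1, dim2, dim3) -> torch.Tensor:
    # Only broadcast when the batch matmul actually needs it; when the shapes
    # already match (the Helion no-broadcast case), skip expand() entirely so
    # SymInt sizes never hit the concrete-integer requirement.
    if self.shape == (dim1, dim2, dim3):
        return self
    return self.expand((dim1, dim2, dim3))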
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153112
| true
|
3,047,422,533
|
[Graph Partition] Maintain relative order within partition during reordering
|
BoyuanFeng
|
open
|
[
"oncall: distributed",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
PR #151968 adds `reorder_for_minimizing_partition` for the minimal number of partitions. If reordering two nodes cannot reduce the number of partitions, `reorder_for_minimizing_partition` should maintain the relative order of these two nodes and rely on other reorder passes for nice-to-have properties, such as shorter liveness duration or lower peak memory. In an extreme case, when all nodes are on GPU and can be cudagraphed, `reorder_for_minimizing_partition` should not reorder any nodes.
This PR improves `reorder_for_minimizing_partition` to enforce the invariant: the relative order of nodes within the same graph partition is maintained. To do so, we record the index of each node in the input `nodes: list[BaseSchedulerNode]` and use a heap to pop the node with the smallest index, so within a graph partition we always schedule the node with the smaller index and respect the invariant. The previous implementation tried to use a queue to achieve this but failed, because node_N at the end may rely on node_1 at the start, such that node_N is added to the queue as soon as node_1 is scheduled.
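A simplified sketch of the heap-based ordering (hedged: this ignores the partition-minimization objective and the real scheduler API, and only shows the index-keyed topological pop that preserves relative order):
```python
import heapq

def reorder_preserving_index(nodes, deps):
    # deps[n] = set of nodes that must run before n. Popping by original index
    # means that, among currently schedulable nodes, the earliest one in the
    # input order wins, so relative order within a partition is preserved.
    index = {n: i for i, n in enumerate(nodes)}
    users = {n: [] for n in nodes}
    remaining = {n: len(deps[n]) for n in nodes}
    for n in nodes:
        for d in deps[n]:
            users[d].append(n)
    heap = [index[n] for n in nodes if remaining[n] == 0]
    heapq.heapify(heap)
    order = []
    while heap:
        n = nodes[heapq.heappop(heap)]
        order.append(n)
        for u in users[n]:
            remaining[u] -= 1
            if remaining[u] == 0:
                heapq.heappush(heap, index[u])
    return order
```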
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,047,414,644
|
[c10d] Remove unordered PG destroy test
|
kwen2501
|
open
|
[
"oncall: distributed",
"ciflow/trunk",
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153110
torch.distributed does not support unordered ProcessGroup destroy. Removing the test.
Resolves #137507
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k
| true
|
3,047,396,841
|
Inconsistent size passed to custom CUDA alloc/free in torch::unique_consecutive
|
darrin-willis
|
open
|
[] | 0
|
NONE
|
### 🐛 Describe the bug
When using `CUDAPluggableAllocator`, a different size is passed to `malloc` vs. `free` for some tensor inside `torch::unique_consecutive` on the third invocation. This can impact and corrupt alternative allocators like RMM. This may be related to https://github.com/pytorch/pytorch/pull/130472.
```
#include <gtest/gtest.h>
#include <torch/csrc/cuda/CUDAPluggableAllocator.h>
#include <torch/torch.h>
std::unordered_map<void*, ssize_t> allocation_sizes;
void* logging_malloc(ssize_t size, int device, cudaStream_t stream) {
void* ptr;
cudaMalloc(&ptr, size);
std::cout << "alloc ptr=" << ptr << " size=" << size << " device=" << device
<< " stream=" << stream << std::endl;
allocation_sizes[ptr] = size;
return ptr;
}
void logging_free(void* ptr, ssize_t size, int device, cudaStream_t stream) {
std::cout << "free ptr=" << ptr << " size=" << size << " device=" << device
<< " stream=" << stream << std::endl;
// Print out any frees that don't match the allocation sizes
if (allocation_sizes.find(ptr) != allocation_sizes.end()) {
if (allocation_sizes[ptr] != size) {
std::cout << "*** ERROR: free mismatch: " << ptr << " size=" << size
<< " expected=" << allocation_sizes[ptr] << std::endl;
}
} else {
std::cout << "WARNING: free of unknown ptr=" << ptr << std::endl;
}
cudaFree(ptr);
allocation_sizes.erase(ptr);
}
TEST(TestTorchUnique, UniqueComparisonTest) {
auto custom_allocator =
torch::cuda::CUDAPluggableAllocator::createCustomAllocator(logging_malloc, logging_free);
torch::cuda::CUDAPluggableAllocator::changeCurrentAllocator(custom_allocator);
// Run the command 3 times; the first 2 will pass and the third invocation will have
// different sizes in alloc and free
for (int i = 0; i < 3; ++i) {
LOG(INFO) << "Starting test " << i;
// Initialize simple sorted tensor with repeats
torch::Tensor sorted_tensor =
torch::tensor({0, 0, 0, 1, 1, 2, 3, 3, 3, 3, 5},
torch::TensorOptions().dtype(torch::kFloat32).device(at::kCUDA));
LOG(INFO) << "Starting unique_consecutive";
// This operation will call malloc/free with different sizes on the same pointer
auto unique_dim_result = torch::unique_consecutive(sorted_tensor, false, true, 0);
LOG(INFO) << "Finished unique_consecutive";
// Everything below is only there to validate correct results
auto unique_dim_values = std::get<0>(unique_dim_result);
auto unique_dim_counts = std::get<2>(unique_dim_result);
// Check tensor sizes
EXPECT_EQ(unique_dim_values.size(0), 5);
EXPECT_EQ(unique_dim_counts.size(0), 5);
// Copy to CPU before accessing elements
torch::Tensor cpu_values = unique_dim_values.cpu();
torch::Tensor cpu_counts = unique_dim_counts.cpu();
// Use accessors on the CPU tensors
auto values_accessor = cpu_values.accessor<float, 1>();
auto counts_accessor = cpu_counts.accessor<int64_t, 1>();
// Check individual values using accessors
EXPECT_EQ(values_accessor[0], 0.0f);
EXPECT_EQ(values_accessor[1], 1.0f);
EXPECT_EQ(values_accessor[2], 2.0f);
EXPECT_EQ(values_accessor[3], 3.0f);
EXPECT_EQ(values_accessor[4], 5.0f);
// Check count values using accessors
EXPECT_EQ(counts_accessor[0], 3);
EXPECT_EQ(counts_accessor[1], 2);
EXPECT_EQ(counts_accessor[2], 1);
EXPECT_EQ(counts_accessor[3], 4);
EXPECT_EQ(counts_accessor[4], 1);
}
}
```
The output is like:
```
I20250507 16:10:54.852185 2803565 torch_unique_test.cc:42] Starting test 2
alloc ptr=0x7f486f200000 size=44 device=0 stream=0
I20250507 16:10:54.852239 2803565 torch_unique_test.cc:50] Starting unique_consecutive
...
alloc ptr=0x7f486f200600 size=1279 device=0 stream=0
free ptr=0x7f486f200600 size=40 device=0 stream=0
*** ERROR: free mismatch: 0x7f486f200600 size=40 expected=1279
```
### Versions
PyTorch version: 2.7.0a0+git1341794
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 8.4.0-3ubuntu2) 8.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.30.2
Libc version: glibc-2.31
Python version: 3.9.19 (main, Jul 25 2024, 22:44:54) [Clang 18.1.8 ] (64-bit runtime)
Python platform: Linux-5.15.0-105-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 5070 Ti
Nvidia driver version: 570.124.04
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.8
/usr/lib/libcudnn_adv_infer.so.8
/usr/lib/libcudnn_adv_train.so.8
/usr/lib/libcudnn_cnn_infer.so.8
/usr/lib/libcudnn_cnn_train.so.8
/usr/lib/libcudnn_ops_infer.so.8
/usr/lib/libcudnn_ops_train.so.8
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.0.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.0.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 1
Core(s) per socket: 24
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 183
Model name: 13th Gen Intel(R) Core(TM) i9-13900E
Stepping: 1
CPU MHz: 830.124
CPU max MHz: 5200.0000
CPU min MHz: 800.0000
BogoMIPS: 3609.60
Virtualization: VT-x
L1d cache: 576 KiB
L1i cache: 384 KiB
L2 cache: 24 MiB
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid cldemote movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] torch==2.7.0a0+git1341794
[conda] Could not collect
| true
|
3,047,361,033
|
Introduce unbacked-friendly is_known_contiguous and use it instead of is_contiguous in all locations where there is a general path for not known_contiguous
|
laithsakka
|
open
|
[
"oncall: pt2",
"module: dynamic shapes",
"data dependent error"
] | 0
|
CONTRIBUTOR
|
title.
cc @chauhang @penguinwu @ezyang @bobrenjc93
| true
|
3,047,306,800
|
do not reinplace diagonal_scatter
|
BoyuanFeng
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: functionalization",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
In the following code, `copy_` changes the values of `mul`, which is later read by `torch.mm`, so `torch.mm` has to happen after `copy_`. This info is captured in the AOT graph: we can see `mm` reads `diagonal_scatter`, which reads `copy`, so we know torch.ops.aten.mm must happen after torch.ops.aten.copy.
However, in post_grad_graph, this info is lost. `diagonal_scatter` is reinplaced: `mm` reads `mul`, and `copy__default` reads `diagonal_default`, which also reads `mul`. So there is no dependency between `mm` and `diagonal_default` anymore.
From Inductor's perspective, we lose this dependency and can reorder `torch.ops.aten.mm` and `torch.ops.aten.copy`, leading to wrong results. To see the error, try `PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_linalg_eig_cuda_float64` with `torch._inductor.config.graph_partition=True`, which leads to incorrect results. We need `graph_partition` here because it reorders based on dependencies, which exposes the issue. This PR fixes the error.
```python
import torch
def f(x, y, z, other):
mul = x * y
diag = torch.diagonal(mul)
diag.copy_(other)
return torch.mm(mul, z)
f = torch.compile(f)
inps = (torch.randn(3, 3), torch.randn(3, 3), torch.randn(3, 3), torch.randn(3))
f(*inps)
```


cc @bdhirsh @ezyang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,047,291,352
|
[cutlass backend] Fix EVT test for fbcode post cutlass 3.9.2 upgrade
|
henrylhtsang
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153106
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,047,287,497
|
[dynamo] Fix super and classmethod binding of cls object
|
anijain2305
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153105
* #152883
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,047,281,502
|
[FlexAttention] Remove Old Constraint on lastdim strides
|
drisspg
|
open
|
[
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
https://github.com/pytorch/pytorch/pull/151959
Cherry pick
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,047,264,749
|
[Inductor] Investigate computing global amaxes via atomics (instead of a reduction-based approach) in triton codegen
|
danielvegamyhre
|
open
|
[
"oncall: pt2",
"module: inductor"
] | 0
|
CONTRIBUTOR
|
## Summary
Tensorwise or rowwise amax values are used to compute scaling factors in float8 quantization. Computing these values in a performant way is critical for float8 training with dynamic quantization, where we are dynamically scaling the tensors at runtime in forward/backward.
Currently inductor codegen uses a reduction-based approach to compute global amaxes. Benchmarking has shown [atomics](https://triton-lang.org/main/python-api/generated/triton.language.atomic_max.html) outperform the reduction-based approach. We should investigate computing global amaxes via atomics (instead of a reduction-based approach) in triton codegen.
- [tlparse link](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpcXuwgr/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000) w/ example kernels showing reduction based approach
## Additional context
In float8 training we compute tensorwise amax or rowwise amaxes as part of the computation of the float8 scaling factor(s): [code](https://github.com/pytorch/ao/blob/8369268afecdc87f9917075a1d352785176489dd/torchao/float8/float8_utils.py#L47)
Currently the inductor codegen produces triton kernels which compute these amaxes in 2 separate kernels:
1. **Compute block local amaxes**
- The first kernel reads input tensor blocks from HBM and compute block local amaxes, write them back out to a temporary buffer in HBM.
2. **Compute global amaxes by reducing block local amaxes**
- The second kernel loads the temporary buffer back from HBM into SRAM and reduces it to compute the global amaxes. These are either written back out to HBM, or used to compute scales which are then written to HBM, depending on what fusion decision inductor makes.
This process can be optimized by using [atomics](https://triton-lang.org/main/python-api/generated/triton.language.atomic_max.html) to compute the global amaxes in a single kernel, reducing the amount of data movement between HBM and SRAM.
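A hand-written sketch of the single-pass atomic approach (hedged: this is not the inductor codegen output, just an illustration of using `tl.atomic_max` for a tensorwise amax):
```python
import torch
import triton
import triton.language as tl

@triton.jit
def amax_atomic_kernel(x_ptr, amax_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask, other=0.0)
    # Block-local amax stays in registers; a single atomic per block updates
    # the global result, so no intermediate buffer round-trips through HBM.
    block_amax = tl.max(tl.abs(x.to(tl.float32)), axis=0)
    tl.atomic_max(amax_ptr, block_amax)

def tensorwise_amax(x: torch.Tensor) -> torch.Tensor:
    amax = torch.zeros(1, device=x.device, dtype=torch.float32)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)
    amax_atomic_kernel[grid](x.contiguous().view(-1), amax, n, BLOCK_SIZE=1024)
    return amax
```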
I actually have implemented and benchmarked both approaches using triton kernels I handwrote for the [float8nocompile](https://github.com/pytorch/ao/tree/main/torchao/prototype/float8nocompile) project (eager mode float8 training with improved perf via handwritten triton kernels), so I have some additional context here.
Microbenchmarking of dynamic float8 quantization implementations using these different approaches showed atomics substantially outperformed both `torch.compile` codegen as well as the handwritten reduction kernels (see benchmarks below). We should investigate using atomics in inductor codegen, to improve dynamic float8 quantization perf.

As you can see, the handwritten reduction kernels were not as optimal as the inductor codegen ones, but the atomics-based kernels still outperformed both.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,047,243,750
|
`bernoulli_()` produces inconsistent results between CPU and GPU
|
SilentTester73
|
closed
|
[] | 1
|
NONE
|
### 🐛 Describe the bug
## Description
The in-place `torch.Tensor.bernoulli_()` function generates significantly different results when run on CPU versus GPU.
## Minimal Reproduction Code
Available on Colab: [https://colab.research.google.com/drive/1CC3VIj0FocMUu1ebozzF7IHsdBiQDPE_?usp=sharing](https://colab.research.google.com/drive/1CC3VIj0FocMUu1ebozzF7IHsdBiQDPE_?usp=sharing)
```python
import torch
# Set random seed
torch.manual_seed(42)
# Create test tensors
shape = (2, 3, 3)
prob_value = 0.37
# Create on CPU
tensor_cpu = torch.zeros(shape, dtype=torch.float64)
tensor_cpu[0, 0, 0] = -1.1454e-8
tensor_cpu[0, 0, 1] = 5.5628e+4
# Create probability tensor
prob_tensor_cpu = torch.full(shape, prob_value, dtype=torch.float64)
# Run on CPU
result_cpu = tensor_cpu.clone().bernoulli_(prob_tensor_cpu)
# Create on GPU with same values
tensor_gpu = tensor_cpu.clone().to("cuda")
prob_tensor_gpu = prob_tensor_cpu.to("cuda")
# Run on GPU
result_gpu = tensor_gpu.bernoulli_(prob_tensor_gpu)
# Compare results
result_gpu_cpu = result_gpu.cpu()
# Calculate differences
diff = (result_cpu != result_gpu_cpu)
diff_count = diff.sum().item()
total = tensor_cpu.numel()
print(f"Summary: {diff_count} differences out of {total} elements ({diff_count/total*100:.1f}%)")
if diff_count > 0:
# Show specific differences
indices = torch.nonzero(diff, as_tuple=True)
for i in range(min(5, len(indices[0]))):
idx = tuple(dim[i].item() for dim in indices)
print(f"Position {idx}: CPU={result_cpu[idx].item()}, GPU={result_gpu_cpu[idx].item()}")
```
## Observed Behavior
The CPU and GPU implementations produce completely different binary outcomes.
```
PyTorch version: 2.6.0+cu124
Summary: 9 differences out of 18 elements (50.0%)
Positions where results differ:
Position (0, 0, 0): CPU=1.0, GPU=0.0
Position (0, 0, 1): CPU=1.0, GPU=0.0
Position (0, 1, 1): CPU=0.0, GPU=1.0
...
```
## Expected Behavior
When provided with identical input tensors, the CPU and GPU implementations should produce consistent results.
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 17.0.6 (++20231209124227+6009708b4367-1~exp1~20231209124336.77)
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-59-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 570.133.20
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900F
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] Could not collect
```
| true
|
3,047,232,042
|
[CUDA][CUDNN] Dispatch to cuDNN for non-batch-splittable 64-bit NCHW convolutions
|
eqy
|
open
|
[
"module: cuda",
"module: cpu",
"module: convolution",
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
For #152816
cc @ptrblck @msaroufim @jerryzh168 @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10
| true
|
3,047,230,094
|
DISABLED test_intermediary_hooks_same_on_aot_eager (__main__.HooksTests)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 3
|
NONE
|
Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_intermediary_hooks_same_on_aot_eager&suite=HooksTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41818772655).
Over the past 3 hours, it has been determined flaky in 24 workflow(s) with 48 failures and 24 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_intermediary_hooks_same_on_aot_eager`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_hooks.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,047,223,137
|
[mm sampling] extract more triton information
|
coconutruben
|
closed
|
[
"fb-exported",
"Merged",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 8
|
CONTRIBUTOR
|
Summary:
# Why
capture more triton config information that was not being captured
# What
capture and extract
- group_m
- allow_tf32
- acc_type
- matrix_instr_nonkdim
- waves_per_eu
- kpack
to achieve this, add
- matrix_instr_nonkdim
- waves_per_eu
- kpack
to the info_dict of the TritonTemplateCaller
Test Plan:
with D74342290
```
buck2 run -c fbcode.rocm_arch=mi300 -m rocm621 mode/opt-amd-gpu fbcode//deeplearning/aot_inductor/benchmark/sampling:test_gemm_autotune_benchmark_AMD_block_0 2>&1 | tee /tmp/tmp.52Igj8lthj/15.txt
```
(edited for clarity and brevity)
```
AutotuneMetrics03LogEntry(
backend='Triton',
exectime_ms=0.007449999917298555,
perf_model_name='scripts.vandrei.pytorch_experiments.matmul_estimator_lib.estimate_matmul_time_new',
perf_model_exectime_ms=0.009558684365573179,
config_triton_block_m=16,
config_triton_block_n=256,
config_triton_block_k=128,
config_triton_num_stages=2,
config_triton_num_warps=8,
config_triton_group_m=16,
config_triton_allow_tf32='False',
config_triton_acc_type='tl.float32',
config_triton_matrix_instr_nonkdim=16,
config_triton_waves_per_eu=1,
config_triton_kpack=2,
x_batch_dim=0,
x_row_dim=8,
x_col_dim=96,
x_batch_stride=0,
x_row_stride=96,
x_col_stride=1,
x_dtype='torch.float16',
x_dtype_size=16,
w_batch_dim=0,
w_row_dim=96,
w_col_dim=512,
w_batch_stride=0,
w_row_stride=512,
w_col_stride=1,
w_dtype='torch.float16',
w_dtype_size=16,
vendor='AMD',
model='gfx942:sramecc+:xnack-',
major=9,
minor=4,
sms=304,
l2_cache=4194304,
warp_size=64,
regs_per_sm=65536,
max_threads_per_sm=2048,
total_mem=206141652992,
hip_version='6.2.41134',
triton_upstream_hash='3889f3f3b97b817741e308c173409927b7c4536f',
environment='experiment-xzy-default',
session_id='8a7001bd-652c-440c-bc56-4cb1e25146ea',
[...]
)
```
Reviewed By: exclamaforte
Differential Revision: D74342286
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,047,189,264
|
[Cherry-pick] Fix copysign + scalar correctness issue
|
malfet
|
open
|
[
"release notes: mps",
"ciflow/mps"
] | 1
|
CONTRIBUTOR
|
Which consists of two cherry-picks:
- https://github.com/pytorch/pytorch/pull/152997
- https://github.com/pytorch/pytorch/pull/152510 (only partially, as the code paths are quite divergent between 2.7 and trunk)
| true
|
3,047,165,801
|
Use std::fma for CUDA Adam kernel's lerps.
|
MeetThePatel
|
open
|
[
"open source",
"release notes: cuda"
] | 1
|
CONTRIBUTOR
|
Switch the calculation of lerps in Adam's fused CUDA kernel to use std::fma, as proposed by @crcrpar .
| true
|
3,047,162,885
|
[WIP][XPU] Update Triton commit
|
anmyachev
|
open
|
[
"triaged",
"open source",
"topic: not user facing",
"ciflow/inductor",
"ciflow/xpu"
] | 2
|
COLLABORATOR
|
To view the current pass rate on a full test suite and detect problems earlier.
| true
|
3,047,161,717
|
[CUDA][cuBLASLt] Respect `allow[FP16/BF16]ReductionCuBLAS` in `cuBLASLt`
|
eqy
|
open
|
[
"module: cublas",
"open source",
"module: bfloat16",
"module: half",
"topic: not user facing",
"matrix multiplication"
] | 1
|
COLLABORATOR
|
cuBLASLt matmuls have been silently allowing all reduction types, which meant that e.g., `allow_fp16_reduced_precision_reduction = False` had no effect.
In practice, split-K with reduced-precision reductions was unlikely to happen, as the default `CUBLASLT_WORKSPACE_SIZE` of 1MiB tends to prevent it.
However, this isn't guaranteed, and we are on the path to increasing the default workspace size following #151163.
This setting is effectively already tested in e.g., `test_cublas_addmm_size_100_cuda_float16` and `test_cublas_addmm_size_100_cuda_bfloat16` but the backend selection is not deterministic. Running the full `test_matmul_cuda.py` seems to exercise the Lt interface, but running a standalone test does not (apparently due to spurious alignment differences).
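For reference, a minimal sketch of the user-facing knobs involved (the Lt dispatch itself is internal, so whether the Lt path is hit depends on heuristics):
```python
import torch

# Request full-precision accumulation for fp16/bf16 matmul reductions; with
# this change the cuBLASLt path should respect these flags as well.
torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction = False
torch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction = False

a = torch.randn(100, 100, device="cuda", dtype=torch.float16)
b = torch.randn(100, 100, device="cuda", dtype=torch.float16)
bias = torch.zeros(100, 100, device="cuda", dtype=torch.float16)
c = torch.addmm(bias, a, b)
```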
cc @csarofeen @ptrblck @xwang233
| true
|
3,047,146,820
|
Add missing in-place on view check to custom autograd.Function
|
soulitzer
|
open
|
[
"release notes: autograd"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153094
* #153005
Fixes https://github.com/pytorch/pytorch/issues/152773
| true
|
3,047,052,936
|
[vec128] Fix fmsub NEON definition
|
pytorchbot
|
closed
|
[
"module: cpu",
"open source"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #152075
As reported in https://github.com/pytorch/pytorch/issues/149292 and according to the manual, `vfmsq_f32` implements `c - a * b` rather than `a * b - c`, so its call must be prefixed with `vnegq_f32`.
Also, adjust the tests to use OpMath for the FMA computation, to avoid accuracy-error accumulation due to non-fused multiply-and-add over lower-precision dtypes.
Note that `Vectorized::fmsub` is not currently instantiated anywhere, so it could safely remain broken.
TODO:
- Enable C++ testing on MacOS and/or aarch64 platforms (right now Mac tests are build without C++ tests)
Fixes https://github.com/pytorch/pytorch/issues/149292
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,047,034,816
|
[MKLDNN] Check that strides are positive
|
pytorchbot
|
closed
|
[
"module: cpu",
"module: mkldnn",
"open source",
"ciflow/linux-aarch64"
] | 1
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #151848
For pooling ops. This prevents a division-by-zero when an argument is invalid.
Fixes https://github.com/pytorch/pytorch/issues/149274
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
3,047,030,686
|
Fix tensorpipe compilation with clang-17
|
pytorchbot
|
closed
|
[
"open source"
] | 1
|
COLLABORATOR
|
By suppressing the `missing-template-arg-list-after-template-kw` warning, which seems to be required to compile Google's libnop, a project now in a semi-abandoned state:
```
In file included from /Users/malfet/git/pytorch/pytorch/third_party/tensorpipe/third_party/libnop/include/nop/base/variant.h:21:
/Users/malfet/git/pytorch/pytorch/third_party/tensorpipe/third_party/libnop/include/nop/types/variant.h:241:30: error: a template argument list is expected after a name prefixed by the template keyword [-Wmissing-template-arg-list-after-template-kw]
241 | index_ = value_.template Construct(std::forward<Args>(args)...);
| ^
/Users/malfet/git/pytorch/pytorch/third_party/tensorpipe/third_party/libnop/include/nop/types/variant.h:258:26: error: a template argument list is expected after a name prefixed by the template keyword [-Wmissing-template-arg-list-after-template-kw]
258 | if (!value_.template Assign(TypeTag<T>{}, index_, std::forward<U>(value))) {
| ^
/Users/malfet/git/pytorch/pytorch/third_party/tensorpipe/third_party/libnop/include/nop/types/variant.h:265:26: error: a template argument list is expected after a name prefixed by the template keyword [-Wmissing-template-arg-list-after-template-kw]
265 | if (!value_.template Assign(index_, std::forward<T>(value))) {
| ^
3 errors generated.
```
Fixes https://github.com/pytorch/pytorch/issues/151316
| true
|
3,046,998,970
|
Clean up right nav
|
svekars
|
open
|
[
"module: docs",
"topic: docs",
"topic: not user facing"
] | 2
|
CONTRIBUTOR
|
- Move community and language binding links to the horizontal bar
- Add an intro to the community page.
- Fix the link in the ogp_image
- Fix the link in the version switcher
- Clean up unneeded links
- Test noindex as a meta tag in fsdp doc
cc @sekyondaMeta @AlannaBurke
| true
|
3,046,977,866
|
[Cherry Pick] Remove cuda dependencies from non cuda buids #152333
|
atalman
|
closed
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
Cherry Pick of https://github.com/pytorch/pytorch/pull/152333
Related to: https://github.com/pytorch/pytorch/issues/152121
| true
|
3,046,976,974
|
[nativert] move recordfunction
|
dolpm
|
open
|
[
"fb-exported",
"topic: not user facing"
] | 8
|
CONTRIBUTOR
|
Summary:
nativert RFC: https://github.com/zhxchen17/rfcs/blob/master/RFC-0043-torch-native-runtime.md
To land the runtime into PyTorch core, we will gradually land logical parts of the code into the Github issue and get each piece properly reviewed.
This diff moves our function-recording RAII wrapper into record_function_ops.
Test Plan: CI
Differential Revision: D74284301
| true
|
3,046,969,761
|
[nativert] move executor config to torch
|
dolpm
|
open
|
[
"fb-exported",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Summary:
nativert RFC: https://github.com/zhxchen17/rfcs/blob/master/RFC-0043-torch-native-runtime.md
To land the runtime into PyTorch core, we will gradually land logical parts of the code into the Github issue and get each piece properly reviewed.
This diff moves the executor config to torch. Since it's header-only, this requires some changes to the libtorch build configs.
Test Plan: CI
Differential Revision: D74278789
| true
|
3,046,947,509
|
Export doesn't work with patched forward
|
tugsbayasgalan
|
open
|
[
"oncall: pt2",
"oncall: export"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import torch

class Foo(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
return x + 2
import functools
def fancy_forward(x, y):
return x + 2 + y
Foo.forward = functools.partial(fancy_forward, y=torch.randn(4, 4))
torch.export.export(Foo(), (torch.randn(4, 4),), strict=False)
# Raises: AttributeError: 'functools.partial' object has no attribute '__code__'
```
This is a common pattern in HF models.
### Versions
main
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @angelayi @suo @ydwu4
| true
|
3,046,947,193
|
Allow workflows to opt-out of experiments
|
zxiiro
|
open
|
[
"open source",
"topic: not user facing",
"ciflow/inductor-periodic"
] | 1
|
COLLABORATOR
|
This change adds support to allow workflows to opt-out of experiments.
| true
|
3,046,940,041
|
Refactor nested benchmarking functions in select_algorithm.py
|
masnesral
|
open
|
[
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153084
Summary: I'll need some of the benchmark-related functions surfaced so I can use them for remote autotuning. This PR just lifts the main in-process benchmarking helpers to classmethods. It wasn't strictly necessary to also move the sub-process benchmarking helper, but I think it improves readability. Also added some missing types.
Test Plan: Existing unit tests
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,046,920,180
|
[CUDA][cuBLASLt] Fix scale setting for `allowFP16AccumulationCuBLAS` `true` case
|
eqy
|
open
|
[
"module: cuda",
"triaged",
"module: cublas",
"open source",
"module: half",
"release notes: cuda"
] | 1
|
COLLABORATOR
|
Also add some missing `@onlyCUDA` / support check decorators in `test_matmul_cuda.py`
Should help resolve #151890
cc @ptrblck @msaroufim @jerryzh168 @csarofeen @xwang233
| true
|
3,046,896,343
|
[dynamo] Harden torch function dispatchability check for attributes and methods access
|
StrongerXi
|
open
|
[
"ciflow/trunk",
"module: dynamo",
"ciflow/inductor",
"release notes: dynamo"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153082
See more details in
https://github.com/pytorch/pytorch/issues/151771#issuecomment-2836372110.
Fixes #151771.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
Differential Revision: [D74342291](https://our.internmc.facebook.com/intern/diff/D74342291)
| true
|
3,046,889,949
|
[cutlass-3] Add cutlass key for fbcode and OSS
|
henrylhtsang
|
open
|
[
"fb-exported",
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 7
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153081
Differential Revision: [D74337959](https://our.internmc.facebook.com/intern/diff/D74337959/)
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,046,852,748
|
[BE] Move all lint runner to 24.04
|
malfet
|
closed
|
[
"topic: not user facing"
] | 1
|
CONTRIBUTOR
|
As Ubuntu 20.04 reached EOL on Apr 1st, see https://github.com/actions/runner-images/issues/11101.
This also forces the older Python version to be 3.8.
Delete all linux-20.04 runners from lintrunner.yml.
Cherry-pick of https://github.com/pytorch/pytorch/pull/150427 into release/2.7 branch
(cherry picked from commit 48af2cdd270c275acccc4a94b04e4ccdb64d557a)
| true
|
3,046,834,020
|
[FSDP2][Doc] add pointer to torchtitan
|
weifengpy
|
open
|
[
"release notes: distributed (fsdp)"
] | 1
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153079
Summary:
Test Plan:
Reviewers:
Subscribers:
Tasks:
Tags:
| true
|
3,046,761,919
|
Add TensorLR variant for fused Adagrad on CPU
|
MeetThePatel
|
open
|
[
"triaged",
"open source",
"release notes: optim"
] | 3
|
CONTRIBUTOR
|
This PR adds a tensor LR variant for the CPU Adagrad(fused=True).
I copied the behavior from the tensor-LR variant of CPU Adam(fused=True), where `lr.item()` is cast to a double and passed to the default function.
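Hypothetical usage once this lands (hedged: mirrors the existing tensor-LR fused Adam behaviour described above):
```python
import torch

model = torch.nn.Linear(8, 8)
# Tensor LR with the fused CPU Adagrad path; internally lr.item() is cast to
# a double, matching the fused Adam handling.
opt = torch.optim.Adagrad(model.parameters(), lr=torch.tensor(0.01), fused=True)

loss = model(torch.randn(4, 8)).sum()
loss.backward()
opt.step()
opt.zero_grad()
```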
| true
|
3,046,743,699
|
Mismatch of mixed precision `cast_fn` in FSDP and FSDP2
|
markovka17
|
open
|
[
"oncall: distributed",
"module: fsdp"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
FSDP2 does not work with `dataclasses` as inputs. More specifically, FSDP2's pre-hook does not cast tensors held inside a dataclass. FSDP uses [_apply_to_tensors](https://github.com/pytorch/pytorch/blob/172e6415299e93629497d9660c525c8bf60af912/torch/distributed/utils.py#L218) to handle dataclass-like objects; FSDP2, on the other hand, uses a simple [_cast_fp_tensor](https://github.com/pytorch/pytorch/blob/172e6415299e93629497d9660c525c8bf60af912/torch/distributed/fsdp/_fully_shard/_fsdp_state.py#L233).
```python
import dataclasses
import torch
from torch import nn
from torch.distributed.fsdp import fully_shard, MixedPrecisionPolicy, MixedPrecision
from torch.distributed.fsdp.fully_sharded_data_parallel import FullyShardedDataParallel as FSDP
@dataclasses.dataclass
class Input:
x: torch.Tensor
def main():
class Model(nn.Module):
def __init__(self, *args, **kwargs) -> None:
super().__init__(*args, **kwargs)
self._layer = nn.Linear(10, 10)
def forward(self, input: Input):
# RuntimeError: mat1 and mat2 must have the same dtype, but got Float and BFloat16
return self._layer(input.x)
# Example with FSDP2 (does not work !!!)
# model = Model().cuda()
# input = Input(torch.randn(2, 10).cuda())
# fully_shard(model, mp_policy=MixedPrecisionPolicy(torch.bfloat16, torch.bfloat16, torch.bfloat16, True))
# _ = model(input)
# Example with FSDP
model = Model().cuda()
input = Input(torch.randn(2, 10).cuda())
model = FSDP(model, mixed_precision=MixedPrecision(torch.bfloat16, torch.bfloat16, torch.bfloat16))
_ = model(input)
if __name__ == "__main__":
# dist.init_process_group(...)
main()
```
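A minimal sketch of the recursive cast that FSDP1 effectively applies to dataclass inputs (hedged: illustrative only, not the actual `_apply_to_tensors` implementation):
```python
import dataclasses
import torch

def cast_floating_point(obj, dtype):
    # Recursively cast floating-point tensors inside dataclasses and simple
    # containers; this is the behaviour the FSDP2 pre-hook is missing.
    if torch.is_tensor(obj) and obj.is_floating_point():
        return obj.to(dtype)
    if dataclasses.is_dataclass(obj) and not isinstance(obj, type):
        return dataclasses.replace(
            obj,
            **{
                f.name: cast_floating_point(getattr(obj, f.name), dtype)
                for f in dataclasses.fields(obj)
            },
        )
    if isinstance(obj, (list, tuple)):
        return type(obj)(cast_floating_point(x, dtype) for x in obj)
    return obj
```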
### Versions
PyTorch version: 2.6.0+cu126
Is debug build: False
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.31.1
Libc version: glibc-2.39
Python version: 3.12.3 (main, Jan 17 2025, 18:03:48) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-5.4.210-39.1.pagevecsize-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.6.85
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
Nvidia driver version: 560.35.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 230
On-line CPU(s) list: 0-229
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7702 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 230
Stepping: 0
BogoMIPS: 4000.52
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities
Virtualization: AMD-V
L1d cache: 14.4 MiB (230 instances)
L1i cache: 14.4 MiB (230 instances)
L2 cache: 115 MiB (230 instances)
L3 cache: 3.6 GiB (230 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-56
NUMA node1 CPU(s): 57-113
NUMA node2 CPU(s): 114-170
NUMA node3 CPU(s): 171-229
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==5.0.4
[pip3] lovely-numpy==0.2.13
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.6.4.1
[pip3] nvidia-cuda-cupti-cu12==12.6.80
[pip3] nvidia-cuda-nvrtc-cu12==12.6.77
[pip3] nvidia-cuda-runtime-cu12==12.6.77
[pip3] nvidia-cudnn-cu12==9.5.1.17
[pip3] nvidia-cudnn-frontend==1.8.0
[pip3] nvidia-cufft-cu12==11.3.0.4
[pip3] nvidia-curand-cu12==10.3.7.77
[pip3] nvidia-cusolver-cu12==11.7.1.2
[pip3] nvidia-cusparse-cu12==12.5.4.2
[pip3] nvidia-cusparselt-cu12==0.6.3
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.6.77
[pip3] nvtx==0.2.5
[pip3] onnx==1.17.0
[pip3] onnxruntime==1.17.0
[pip3] open-clip-torch==2.24.0
[pip3] optree==0.13.1
[pip3] pynvjitlink==0.3.0
[pip3] pytorch-lightning==2.0.2
[pip3] pytorch-triton==3.0.0+72734f086
[pip3] pytorch-warmup==0.1.1
[pip3] torch==2.6.0+cu126
[pip3] torch-fidelity==0.3.0
[pip3] torch_tensorrt==2.6.0a0
[pip3] torchao==0.9.0
[pip3] torchmetrics==1.0.3
[pip3] torchprofile==0.0.4
[pip3] torchsde==0.2.6
[pip3] torchvision==0.21.0+cu126
[pip3] triton==3.2.0
[conda] Could not collect
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @zhaojuanmao @mrshenli @rohan-varma @chauhang @mori360
| true
|
3,046,733,312
|
Fix test/test_optim.py error message.
|
MeetThePatel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Fixes an error message in test/test_optim.py
Current behavior: If running the test with Adagrad, the error message reads: "SGD does not currently support capturable".
Fix: The error message now correctly reads: "Adagrad does not currently support capturable".
| true
|
3,046,732,426
|
Delete .github/workflows/docker-cache-mi300.yml
|
seemethere
|
open
|
[
"topic: not user facing"
] | 2
|
MEMBER
|
The runner group for this has 0 runners; we should probably just delete it.

| true
|
3,046,729,068
|
Fix TORCH_CHECK error message in FusedSgdKernel
|
MeetThePatel
|
closed
|
[
"open source",
"Merged",
"ciflow/trunk",
"release notes: cuda"
] | 3
|
CONTRIBUTOR
|
This fixes an issue in the TORCH_CHECK error message in the FusedSgdKernel.
Current behavior: If the LR tensor is not on the same device as the parameters, the error message reads: "found_inf must be on the same GPU device as the params".
Fix: The error message now correctly points out "lr must be on the same GPU device as the params".
| true
|
3,046,724,169
|
[inductor] Fix #153071
|
rec
|
open
|
[
"open source",
"topic: not user facing",
"module: inductor",
"ciflow/inductor"
] | 3
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153073
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,046,707,691
|
fbgemm Update pinned version
|
gchalump
|
open
|
[
"fb-exported",
"topic: not user facing"
] | 6
|
NONE
|
Differential Revision: D74335570
| true
|
3,046,661,927
|
Link check fails on link from comment in torch/_inductor/codegen/cpp.py to Stack Overflow
|
rec
|
open
|
[
"module: lint",
"triaged",
"actionable",
"bug"
] | 3
|
COLLABORATOR
|
### 🐛 Describe the bug
My PR kept stalling in merge complaining about link checking without providing a message, so I rebased it to reveal this:
https://github.com/pytorch/pytorch/actions/runs/14888839700/job/41815590757?pr=149958
```
[...]
200 https://github.com/pytorch/pytorch/blob/f353d17755ed23b02924c962a86ff99a3405fe10/torch/_inductor/graph.py#L570-L577 torch/_inductor/mkldnn_lowerings.py
200 https://github.com/pytorch/pytorch/blob/f353d17755ed23b02924c962a86ff99a3405fe10/torch/_inductor/graph.py#L570-L577 torch/_inductor/mkldnn_lowerings.py
200 https://github.com/triton-lang/triton/blob/98b5945d2aef679e00ebca8e07c35c3658ec76de/python/triton/runtime/jit.py#L238 torch/_inductor/utils.py
403 https://stackoverflow.com/questions/56555406/creating-dynamic-sized-array-using-msvc-c-compiler torch/_inductor/codegen/cpp.py
```
It's from [this comment](https://github.com/pytorch/pytorch/blame/8b9c9a327f0eced63233674d04883d1b73ddc4d1/torch/_inductor/codegen/cpp.py#L325), which has been in the codebase for six months, so maybe something elsewhere changed. (On the other hand, perhaps we were hitting them on each CI run for six months until they blocked us.)
There are instructions on how to get past it in the log (yaaay!) so I'll do that and then I'll take a look around the link checker and see what jumps out at me, unless someone else is interested in doing that?
| true
|
3,046,661,687
|
Fix path matching in `CPythonTestCase/setUpClass`
|
guilhermeleobas
|
open
|
[
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152991
* #152990
* #152908
* #152907
* #152989
* #152906
* #152905
* #152903
* #152902
* #152901
* #152904
* #152988
* #152987
* #150792
* #152900
* __->__ #153070
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,046,646,916
|
`torch.ldexp` goes out of range when `2**other` is out of range
|
roman-openai
|
open
|
[
"high priority",
"triage review",
"module: correctness (silent)"
] | 3
|
NONE
|
### 🐛 Describe the bug
```python
import torch
torch.ldexp(torch.tensor([2], dtype=torch.float16), torch.tensor([-25], dtype=torch.int32))
```
Gives
```python
tensor([0.], dtype=torch.float16)
```
Even though `2 * 2**-25 = 2**-24` is non-zero and within the representable range of `torch.float16`, and
```python
torch.ldexp(torch.tensor([1], dtype=torch.float16), torch.tensor([-24], dtype=torch.int32))
```
correctly outputs
```python
tensor([5.9605e-08], dtype=torch.float16)
```
I'm not sure if this is WAI, or if there's a way to still make the first example output the correct answer without hurting performance? Thanks!
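As a possible workaround (hedged: this just sidesteps the question by computing in a wider dtype), the intermediate can be kept in float32 and rounded back:
```python
import torch

x = torch.tensor([2], dtype=torch.float16)
e = torch.tensor([-25], dtype=torch.int32)
# Compute in float32, where 2**-24 is a normal value, then cast back to
# float16, which can still represent it as a subnormal.
out = torch.ldexp(x.float(), e).to(torch.float16)
print(out)  # tensor([5.9605e-08], dtype=torch.float16)
```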
### Versions
2.6.0
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim
| true
|
3,046,641,284
|
OSS CI Infra Storm (Scenario 1 + 2) - May 7, 2025
|
seemethere
|
open
|
[
"triaged"
] | 2
|
MEMBER
|
## Current Status
Executing Scenario 2
## Scenario 1
Following along scenario 1 ([link, for Metamates only](https://docs.google.com/document/d/1ttAsjMrCEoEyqnIs5UdxzkAvL7xKnO10ytsm7hg9rWQ/edit?fbclid=IwZXh0bgNhZW0CMTEAYnJpZBExeXRVeUNaSlJVeG9NenBsUQEeQMjB0mGfUzUl5CQ3NcECnkY1we9HB_aw1MaM55y3smJvGT4jbkicOix5j-s_aem_tzgXoLsGAltaQq7hoyhkcg&tab=t.qe26losf9mjg#bookmark=id.56gs7rqw5h4t))
### Description
We create a few enqueued jobs on HUD by running an INSERT statement on ClickHouse. These should request a common label like 'linux.2xlarge'.
#### Case being simulated
* Errors in HUD processing pipelines;
* Errors in GH webhook for job changing status;
#### Instances/runner/labels to be affected
* linux.2xlarge
#### How the failure should look for the end users
End users should not be impacted by this failure, nor be able to detect that it is happening, except for a list of enqueued jobs on HUD.
#### Expected system behaviour
We expect `scaleUpChron` to start trying to deploy runners for these jobs after they have been enqueued for 30 minutes, and to keep trying at every execution, only to realize that there is no need and that sufficient runners should already be deployed.
## Scenario 2
Following along scenario 2 ([link, for Metamates only](https://docs.google.com/document/d/1ttAsjMrCEoEyqnIs5UdxzkAvL7xKnO10ytsm7hg9rWQ/edit?fbclid=IwZXh0bgNhZW0CMTEAYnJpZBExeXRVeUNaSlJVeG9NenBsUQEeQMjB0mGfUzUl5CQ3NcECnkY1we9HB_aw1MaM55y3smJvGT4jbkicOix5j-s_aem_tzgXoLsGAltaQq7hoyhkcg&tab=t.qe26losf9mjg#bookmark=id.56gs7rqw5h4t))
### Description
We fail the 'AWS EC2 instance creation' API 100% of the time with a retriable error, simulating an instance stockout scenario.
#### Error looks like
*Provide some way users can tell that this SEV is causing their issue.*
### Incident timeline (all times pacific)
*Include when the incident began, when it was detected, mitigated, root caused, and finally closed.*
## User impact
*How does this affect users of PyTorch CI?*
## Root cause
*What was the root cause of this issue?*
## Mitigation
*How did we mitigate the issue?*
## Prevention/followups
*How do we prevent issues like this in the future?*
| true
|
3,046,634,751
|
Add device guard for xpu conv on multi device
|
guangyey
|
open
|
[
"module: cpu",
"open source",
"ciflow/trunk",
"keep-going",
"merging",
"ciflow/xpu",
"release notes: xpu"
] | 12
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153067
# Motivation
fixes https://github.com/pytorch/pytorch/issues/153022
The root cause is that the XPU backend registers the convolution op using `m.impl`, which bypasses the device guard logic typically added by the code generation system. This can lead to unexpected behavior if the current device isn't explicitly set.
# Additional Context
run the following script
```python
import torch
import torchvision.models as models
torch.manual_seed(0)
model = models.resnet50(weights="ResNet50_Weights.DEFAULT")
model.eval()
data = torch.rand(1, 3, 224, 224)
device = torch.device('xpu:1') # 'xpu:0'
model = model.to(device=device, dtype=torch.float16)
data = data.to(device, dtype=torch.float16)
with torch.no_grad():
    ret = model(data)
print(ret)
print("Execution finished")
```
The output is
```bash
-9.2102e-02, -7.7588e-01, -1.4111e+00, -9.2383e-01, 6.4551e-01,
-6.0730e-03, -7.8271e-01, -1.1904e+00, -4.1602e-01, 3.2715e-02,
-4.9854e-01, -6.3623e-01, -8.5107e-01, -6.8555e-01, -9.4434e-01,
-8.8672e-01, -6.7969e-01, -6.9824e-01, -2.8882e-01, 2.0312e+00]],
device='xpu:1', dtype=torch.float16)
Execution finished
```
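Before this fix lands, a possible user-side workaround (a sketch, assuming the `torch.xpu` device API mirrors `torch.cuda`; `device`, `model`, and `data` are from the script above) is to make the target device current explicitly:
```python
torch.xpu.set_device(device)  # make xpu:1 the current device so the conv kernel runs there
with torch.no_grad():
    ret = model(data)
```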
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168
| true
|
3,046,576,289
|
fix bug with TORCHINDUCTOR_DUMP_LAUNCH_PARAMS
|
exclamaforte
|
open
|
[
"fb-exported",
"module: inductor",
"ciflow/inductor"
] | 3
|
CONTRIBUTOR
|
Summary:
https://fb.workplace.com/groups/1028545332188949/posts/9503194033132340/?comment_id=9504669536318123&reply_comment_id=9506405459477864¬if_id=1746154132646897¬if_t=work_group_comment_mention
Aligns the arguments for the triton inputs
Differential Revision: D74085173
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,046,554,465
|
[ONNX] dynamic_shapes uses DYNAMIC
|
titaiwangms
|
closed
|
[
"module: onnx",
"open source",
"Merged",
"ciflow/trunk",
"release notes: onnx",
"topic: improvements"
] | 3
|
COLLABORATOR
|
Although Dim.AUTO covers the case where a user sets more axes to be dynamic than the model actually needs, it silently falls back to STATIC when DYNAMIC would fail. This makes debugging more difficult.
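A minimal sketch of the behavioral difference this change relies on (assuming the `torch.export.Dim` semantics; not taken from the PR itself):
```python
import torch
from torch.export import Dim, export

class M(torch.nn.Module):
    def forward(self, x):
        return x.reshape(2, 3)  # forces both dims of x to be static

x = torch.randn(2, 3)

# Dim.AUTO silently specializes the axis to STATIC when it cannot stay dynamic:
export(M(), (x,), dynamic_shapes={"x": {0: Dim.AUTO}})

# Dim.DYNAMIC raises instead, which surfaces the problem to the user:
try:
    export(M(), (x,), dynamic_shapes={"x": {0: Dim.DYNAMIC}})
except Exception as e:
    print(type(e).__name__, e)
```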
| true
|
3,046,532,741
|
Keep raw cubin file around in case it gets deleted underneath us
|
jamesjwu
|
open
|
[
"ciflow/trunk",
"topic: not user facing",
"module: inductor",
"ciflow/inductor",
"ciflow/pull"
] | 2
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153064
This diff hardens StaticCudaLauncher in the event a cubin file gets deleted under us. We store the raw cubin on the static cuda launcher, and reload it as needed. On cold start, this can happen if the cubin file is created by triton, and gets deleted before we can load the kernel on the parent process.
We don't want to store the entire cubin both in file format and in memory for caching purposes, so we delete it before caching the data. In the unfortunate/unlikely event where we can't load/find the necessary file on warm start, skip the stored triton launcher, falling back to regular triton.
This comes at a cost to worker memory, but it's not more memory than regular triton workers already take, so it should be okay.
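A rough sketch (not the actual StaticCudaLauncher code) of the keep-the-raw-bytes-and-rematerialize pattern described above:
```python
import os
import tempfile

class CubinHandle:
    """Keep the raw cubin bytes in memory so the kernel can still be loaded
    even if the file written by Triton is deleted underneath us."""

    def __init__(self, cubin_path: str):
        with open(cubin_path, "rb") as f:
            self.raw_cubin = f.read()
        self.cubin_path = cubin_path

    def ensure_on_disk(self) -> str:
        # Re-materialize the cubin from the in-memory copy if the path vanished.
        if not os.path.exists(self.cubin_path):
            fd, path = tempfile.mkstemp(suffix=".cubin")
            with os.fdopen(fd, "wb") as f:
                f.write(self.raw_cubin)
            self.cubin_path = path
        return self.cubin_path
```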
Tests:
- Make test_static_cuda_launcher always delete the cubin path and reload it
Fixes #153030
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,046,516,669
|
[FlexAttention] export fails to trace with functorch
|
tugsbayasgalan
|
open
|
[
"triaged",
"oncall: pt2",
"module: functorch",
"module: flex attention"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import torch
import torch.nn as nn
from torch.func import vmap
from torch.export import export

# 1. Inner model (shared across batch)
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(8, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))

# 2. Module that applies vmap over inner model
class BatchedModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = TinyModel()

    def forward(self, x):
        return vmap(self.model)(x)  # vectorize over batch

# 3. Instantiate and test export
x = torch.randn(16, 8)
model = BatchedModel().eval()

# 4. Export
graph_module = export(model, (x,))
print(graph_module.module())
```
Errors with
```
in _free_unbacked_symbols_with_path(a, path, real, shape_env, pending, simplify)
1024 elif isinstance(a, torch.Tensor):
1025 from torch._subclasses.fake_tensor import FakeTensor
-> 1027 assert isinstance(a, FakeTensor)
1028 r.update(
1029 go(
1030 a.size(),
(...)
1033 )
1034 )
1035 if a.layout not in [
1036 torch.sparse_csr,
1037 torch.sparse_csc,
1038 torch.sparse_bsr,
1039 torch.sparse_bsc,
1040 ]:
AssertionError:
```
It seems to me that at the pre-dispatch level, we are not properly peeking into the fake tensor inside the BatchedTensor.
### Versions
main
cc @chauhang @penguinwu @zou3519 @Chillee @samdow @kshitij12345 @ydwu4 @drisspg @yanboliang @BoyuanFeng
| true
|
3,046,481,642
|
non-strict export should detect fake tensor leakage
|
tugsbayasgalan
|
open
|
[
"oncall: pt2",
"oncall: export"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import torch

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.buffer = torch.nn.Buffer(torch.randn(4, 4))

    def forward(self, x):
        return self.buffer.sum() + x.sum()

class Pipeline:
    def __init__(self, model):
        self.model = model
        self.bank = []

    def __call__(self, x):
        def log(model, inps, outputs):
            for n, b in model.named_buffers():
                self.bank.append(b)

        self.model.register_forward_hook(log)
        ep = torch.export.export(self.model, (x,), strict=False).module()
        return ep(x)

p = Pipeline(model=Model())
p(torch.randn(4, 4))
print(p.bank)  # prints [FakeTensor(..., size=(4, 4))]
```
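One way to make the leak explicit (a small check using the internal `FakeTensor` class, which is presumably what a leakage detector would flag):
```python
from torch._subclasses.fake_tensor import FakeTensor

print([isinstance(b, FakeTensor) for b in p.bank])  # [True]
```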
### Versions
main
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @angelayi @suo @ydwu4
| true
|
3,046,466,098
|
register_constant doesn't work on simple types
|
tugsbayasgalan
|
open
|
[
"module: pytree",
"oncall: pt2",
"oncall: export"
] | 1
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import torch
from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2
    BLUE = 3

class Foo(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x, col):
        return x + col.value

torch.utils._pytree.register_constant(Color)
torch.export.export(Foo(), (torch.randn(4, 4), Color.RED), strict=False)
```
which fails with
```
TypeError: register_constant(cls) expects `cls` to have a non-default `__eq__` implementation.
```
I am not sure why we need a non-default `__eq__` implementation. If it is a hard requirement, could we somehow make it work for builtin types?
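A possible workaround in the meantime (a sketch, not a fix for `register_constant` itself): pass the enum's plain value so pytree never has to treat `Color` as a constant; the value then just gets specialized into the exported program.
```python
class FooValue(torch.nn.Module):
    def forward(self, x, col_value: int):
        return x + col_value

torch.export.export(FooValue(), (torch.randn(4, 4), Color.RED.value), strict=False)
```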
cc: @zou3519
### Versions
main
cc @zou3519 @XuehaiPan @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @angelayi @suo @ydwu4
| true
|
3,046,465,981
|
Fix misleadingly high AOT Inductor dashboard performance
|
benjaminglass1
|
open
|
[
"open source",
"topic: not user facing",
"module: dynamo",
"ciflow/inductor"
] | 2
|
COLLABORATOR
|
An [example benchmark](https://hud.pytorch.org/benchmark/timm_models/inductor_aot_inductor?dashboard=torchinductor&startTime=Wed%2C%2030%20Apr%202025%2015%3A54%3A04%20GMT&stopTime=Wed%2C%2007%20May%202025%2015%3A54%3A04%20GMT&granularity=hour&mode=inference&dtype=bfloat16&deviceName=cuda%20(h100)&lBranch=main&lCommit=1dd36ad2d440a4f3faf724b3a8e13925e3180c24&rBranch=main&rCommit=cc7346bf19c019255dcb4484694a75850ed74d5a&model=convit_base) with this issue. The equivalent `cpp_wrapper` benchmark run shows a 2x performance gain, not 20x. Local troubleshooting has shown this is due to the `export` call introducing `FakeTensor` parameters into the model, which significantly slows down (and invalidates) the eager runs. We haven't caught this because we only check results in accuracy mode, not performance mode.
Currently, this PR is trying to figure out how many benchmarks may be broken by this issue, so the code is designed to make them fail in a dashboard run.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,046,429,194
|
DISABLED test_input_hooks_same (__main__.HooksTests)
|
pytorch-bot[bot]
|
open
|
[
"module: flaky-tests",
"skipped",
"module: unknown",
"oncall: pt2",
"module: dynamo"
] | 3
|
NONE
|
Platforms: linux, mac, macos, rocm, asan, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_input_hooks_same&suite=HooksTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41796649006).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_input_hooks_same`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_hooks.py`
ResponseTimeoutError: Response timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/dynamo/test_hooks.py -1 (connected: true, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
3,046,315,502
|
`cuda.Event` handling in dynamo is broken
|
bdhirsh
|
open
|
[
"module: cuda",
"oncall: pt2",
"module: dynamo"
] | 1
|
CONTRIBUTOR
|
Here's an example:
```python
import torch

lst = []

@torch.compile(backend="eager", fullgraph=True)
def f(x):
    start_event = torch.cuda.Event(enable_timing=True)
    end_event = torch.cuda.Event(enable_timing=True)
    start_event.record()
    out = torch.matmul(x, x)
    end_event.record()
    lst.append(start_event)
    lst.append(end_event)
    return out

x = torch.randn(5000, device='cuda')
out = f(x)
print(lst[0].elapsed_time(lst[1]))
```
without compile this prints the elapsed time between the two events.
```
55.96131134033203
```
with compile this gives an error:
```
Traceback (most recent call last):
File "/data/users/hirsheybar/a/pytorch/tmp6.py", line 20, in <module>
print(lst[0].elapsed_time(lst[1]))
File "/data/users/hirsheybar/a/pytorch/torch/cuda/streams.py", line 216, in elapsed_time
return super().elapsed_time(end_event)
ValueError: Both events must be recorded before calculating elapsed time.
```
Why? Here's the generated dynamo graph + residual bytecode below. It looks like:
(1) dynamo handles the `cuda.Event()` creation + list appending as compile-time constants, stashing them as globals and putting them in the list as residual bytecode
(2) dynamo *also* proxies the `cuda.Event()` object into the graph, even though it is also treating it as a constant. The `Event` object is unused though and gets DCEd
(3) dynamo also proxies the `cuda.Event.record` calls into the graph, but they are DCEd
(4) at runtime, none of the logic to record the events runs (and even if it did, it wouldn't help, because dynamo ignores the events that were proxied into the graph)
```
# graph
===== __compiled_fn_1 =====
/data/users/hirsheybar/a/pytorch/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
def forward(self, L_x_: "f32[5000][1]cuda:0"):
l_x_ = L_x_
# File: /data/users/hirsheybar/a/pytorch/tmp6.py:7 in f, code: start_event = torch.cuda.Event(enable_timing=True)
event = torch.cuda.streams.Event(enable_timing = True)
# File: /data/users/hirsheybar/a/pytorch/tmp6.py:8 in f, code: end_event = torch.cuda.Event(enable_timing=True)
event_1 = torch.cuda.streams.Event(enable_timing = True)
# File: /data/users/hirsheybar/a/pytorch/tmp6.py:10 in f, code: start_event.record()
record = event.record(); event = record = None
# File: /data/users/hirsheybar/a/pytorch/tmp6.py:11 in f, code: out = torch.matmul(x, x)
out: "f32[][]cuda:0" = torch.matmul(l_x_, l_x_); l_x_ = None
# File: /data/users/hirsheybar/a/pytorch/tmp6.py:12 in f, code: end_event.record()
record_1 = event_1.record(); event_1 = record_1 = None
return (out,)
# bytecode
DEBUG: MODIFIED BYTECODE f /data/users/hirsheybar/a/pytorch/tmp6.py line 5
5 0 LOAD_GLOBAL 9 (__compiled_fn_1)
2 LOAD_FAST 0 (x)
4 DUP_TOP
6 STORE_FAST 7 (tmp_3)
8 CALL_FUNCTION 1
10 STORE_FAST 4 (graph_out_0)
12 LOAD_FAST 4 (graph_out_0)
14 LOAD_CONST 3 (0)
16 BINARY_SUBSCR
18 LOAD_GLOBAL 7 (_event_140622680852736_c0)
20 LOAD_GLOBAL 8 (_event_140622672089088_c0)
22 BUILD_LIST 2
24 LOAD_GLOBAL 5 (lst)
26 DUP_TOP
28 STORE_FAST 6 (tmp_2)
30 LOAD_CONST 0 (None)
32 LOAD_CONST 0 (None)
34 BUILD_SLICE 2
36 STORE_SUBSCR
38 DELETE_FAST 4 (graph_out_0)
40 RETURN_VALUE
```
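Until this is fixed, a possible workaround (a sketch) is to keep the event bookkeeping outside the compiled region:
```python
import torch

@torch.compile(backend="eager", fullgraph=True)
def g(x):
    return torch.matmul(x, x)

x = torch.randn(5000, device='cuda')
start_event = torch.cuda.Event(enable_timing=True)
end_event = torch.cuda.Event(enable_timing=True)

start_event.record()
out = g(x)
end_event.record()
torch.cuda.synchronize()  # make sure both events have completed before reading the timer
print(start_event.elapsed_time(end_event))
```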
cc @ptrblck @msaroufim @eqy @jerryzh168 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
| true
|
3,046,272,178
|
[BE] Update ruamel to 0.18.10
|
malfet
|
closed
|
[
"better-engineering",
"Merged",
"ciflow/trunk",
"topic: not user facing"
] | 3
|
CONTRIBUTOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* #152719
* __->__ #153057
To address the feedback from https://github.com/pytorch/pytorch/pull/153013
Previously it was pinned to 0.17.4, which was released in 2021.
| true
|
3,046,252,394
|
Export doesn't move embedding to correct device
|
tugsbayasgalan
|
open
|
[
"oncall: pt2",
"oncall: export"
] | 0
|
CONTRIBUTOR
|
### 🐛 Describe the bug
```python
import torch

class Model(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = torch.nn.Embedding(num_embeddings=10, embedding_dim=8)
        # (added so the snippet is self-contained: forward references self.buffer and self.param)
        self.register_buffer("buffer", torch.randn(4, 4))
        self.param = torch.nn.Parameter(torch.randn(4, 4))

    def forward(self, x):
        token_ids = torch.randint(0, 10, (4,), device=x.device)
        embedded = self.embedding(token_ids).sum()
        return self.buffer.sum() + self.param.sum() + x.sum() + embedded

class BarModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.mod = Model()

    def forward(self, x):
        if "cuda" in str(x.device):
            mod = self.mod.to(x.device)
            return mod(x)
        else:
            return x.sum()

with torch.no_grad():
    ep = torch.export.export(
        BarModel(), (), {"x": torch.randn(4, 4, 4, device="cuda")}, strict=False
    ).module()
print(ep.graph)
print(ep(x=torch.randn(4, 4, 4, device="cuda")))
```
Errors with
```
RuntimeError: Unhandled FakeTensor Device Propagation for aten.embedding.default, found two different devices cpu, cuda:0
```
### Versions
main
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @angelayi @suo @ydwu4
| true
|
3,046,238,361
|
[BE]: Add PEP621 project section to pyproject.toml
|
Skylion007
|
open
|
[
"triaged",
"open source",
"better-engineering",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
Follow up to @ezyang's PR #153020 , but better uses PEP621 to reduce redundant fields and pass through metadata better to uv, setuptools, poetry and other tooling.
* Enables modern tooling like `uv sync` and better support for tools like poetry.
* Also allows us to set project-wide settings that are respected by linters and IDEs (in this example we are able to centralize the minimum supported Python version).
* Currently most of the values are dynamically fetched from setuptools; eventually we can migrate all the statically defined values to pyproject.toml, and they will be auto-populated in the setuptools arguments.
This also clearly shows us which fields will need to be migrated from setup.py to pyproject.toml over time, per #152276. Static fields like classifiers should be fairly easy to migrate; the dynamically built ones like requirements are a bit more challenging.
Without this, `uv sync` complains:
```
error: No `project` table found in: `pytorch/pyproject.toml`
```
| true
|
3,046,198,637
|
[HOP] Reworked HOPs to use FunctionalizeCtxWrapper
|
bohnstingl
|
open
|
[
"triaged",
"open source",
"topic: not user facing"
] | 3
|
COLLABORATOR
|
This PR reworks the `py_functionalize_impl` of HOPs and introduces the use of `FunctionalizeCtxWrapper`.
cc @ydwu4
| true
|
3,046,102,584
|
[BE]: Blacklist broken setuptools until we upgrade MSVC API
|
Skylion007
|
open
|
[
"open source",
"topic: not user facing"
] | 1
|
COLLABORATOR
|
Alternative to #153052 where we just ban the broken setuptools version
| true
|
3,046,100,119
|
[BE]: Use undocumented temp shim to restore setuptools compat
|
Skylion007
|
open
|
[
"oncall: releng",
"open source",
"better-engineering",
"topic: not user facing"
] | 2
|
COLLABORATOR
| null | true
|
3,046,096,583
|
[Intel GPU] scalar tensor case handling in addmm, baddmm
|
ZhiweiYan-96
|
open
|
[
"module: cpu",
"module: mkldnn",
"open source",
"ciflow/trunk",
"topic: not user facing",
"ciflow/binaries_wheel",
"ciflow/xpu",
"ciflow/linux-aarch64"
] | 2
|
COLLABORATOR
|
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #153051
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @jerryzh168 @gujinghui @PenghuiCheng @jianyuh @min-jean-cho @yanbing-j @Guobing-Chen @Xia-Weiwen @snadampal
| true
|
3,045,564,719
|
Process never ends when sending tensors through multiprocessing queues in Python 3.12+ on macOS
|
rafalh
|
open
|
[
"needs reproduction",
"module: multiprocessing",
"triaged",
"module: macos",
"module: deadlock"
] | 4
|
NONE
|
### 🐛 Describe the bug
If a tensor is sent through a multiprocessing queue, something blocks the process from exiting after the end of the script is reached (I have to press Ctrl+C to end the program).
It seems to be related to the resource tracker (`multiprocessing.resource_tracker.ResourceTracker`) process started by Python automatically, because when the process should end I can see the resource tracker child process in the process tree, and if I kill it the main process ends successfully.
The problem occurs in Python 3.12. It doesn't occur in Python 3.11. I am using macOS Sequoia. I tried running the examples in an Ubuntu container and couldn't reproduce the problem there, so it may be macOS specific. Multiple torch versions are affected - I tested 2.2.0 (the oldest one that installs successfully on Python 3.12) and 2.7.0 (the latest).
Calling `multiprocessing.set_start_method("fork")` fixes the issue (the default start method is `spawn`), but it is not recommended according to the Python docs. The `spawn` and `forkserver` start methods do not work.
Example using `DataLoader`:
```python
from torch.utils.data import Dataset, DataLoader

class DummyDataset(Dataset):
    def __getitem__(self, index: int) -> int:
        return 1

    def __len__(self) -> int:
        return 10

def main() -> None:
    dataset = DummyDataset()
    data_loader = DataLoader(dataset, num_workers=1)
    for batch_idx, batch in enumerate(data_loader):
        print(batch_idx, batch)
    print("DONE?")

if __name__ == "__main__":
    main()
```
Example using just a tensor and a queue:
```python
import torch.multiprocessing as multiprocessing
import threading

from torch import Tensor

def worker(q):
    q.put(Tensor(0))
    print("worker thread ended")

def main() -> None:
    q = multiprocessing.Queue()
    w = multiprocessing.Process(target=worker, args=(q,))
    w.start()
    w.join()
    print(q.get())
    print("DONE?")

if __name__ == "__main__":
    main()
```
In both cases the program does not exit after printing "DONE?" (unless interrupted with Ctrl+C), and the process tree looks like this:
```
~/tmp$ pstree 48529
-+= 48529 rafal.harabien /opt/homebrew/Cellar/[email protected]/3.12.10/Frameworks/Python.framework/Versions/3.12/Resources/Python.app/Contents/MacOS/Python /Users/rafal.harabien/minimal_mp_hang.py
\--- 48530 rafal.harabien /opt/homebrew/Cellar/[email protected]/3.12.10/Frameworks/Python.framework/Versions/3.12/Resources/Python.app/Contents/MacOS/Python -c from multiprocessing.resource_tracker import main;main(6)
```
The second example works fine when sending non-tensor values, e.g. `int`.
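For completeness, here is the workaround mentioned above applied to the first example (a sketch; `fork` is discouraged on macOS, so this is a stopgap rather than a fix):
```python
import torch.multiprocessing as mp

if __name__ == "__main__":
    mp.set_start_method("fork", force=True)  # the default on macOS / Python 3.12 is "spawn"
    main()
```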
### Versions
((venv_py312) ) ~/tmp$ python collect_env.py
/Users/rafal.harabien/tmp/venv_py312/lib/python3.12/site-packages/torch/_subclasses/functional_tensor.py:276: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/utils/tensor_numpy.cpp:81.)
cpu = _conversion_method_template(device=torch.device("cpu"))
Collecting environment information...
PyTorch version: 2.7.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.4.1 (arm64)
GCC version: Could not collect
Clang version: 17.0.0 (clang-1700.0.13.3)
CMake version: version 4.0.1
Libc version: N/A
Python version: 3.12.10 (main, Apr 8 2025, 11:35:47) [Clang 16.0.0 (clang-1600.0.26.6)] (64-bit runtime)
Python platform: macOS-15.4.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Pro
Versions of relevant libraries:
[pip3] torch==2.7.0
[conda] No relevant packages
cc @VitalyFedyunin @albanD @malfet
| true
|
3,045,487,542
|
Update docs of saved_tensors_hooks to avoid ref cycle
|
ppwwyyxx
|
closed
|
[
"triaged",
"open source",
"Merged",
"ciflow/trunk",
"release notes: autograd",
"topic: docs"
] | 3
|
COLLABORATOR
|
Fixes #115255
| true
|
3,045,394,006
|
🌠 Add Muon optimizer
|
kadirnar
|
open
|
[
"triaged",
"open source",
"release notes: optim"
] | 3
|
NONE
|
Fixes https://github.com/pytorch/pytorch/issues/148819
| true
|
3,045,385,461
|
DISABLED test_comprehensive_special_ndtri_cuda_int64 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"high priority",
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 2
|
NONE
|
Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_special_ndtri_cuda_int64&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41770133820).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_special_ndtri_cuda_int64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/unittest/mock.py", line 1396, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 81, in inner
return func(*args, **kwds)
^^^^^^^^^^^^^^^^^^^
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 498, in check_model
actual = run(*example_inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py", line 691, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 880, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 864, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1487, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/compile_fx.py", line 1374, in codegen_and_compile
compiled_module = graph.compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2238, in compile_to_module
return self._compile_to_module()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2248, in _compile_to_module
mod = self._compile_to_module_lines(wrapper_code)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/graph.py", line 2312, in _compile_to_module_lines
mod = PyCodeCache.load_by_key_path(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3022, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmpu9phx7dc/rn/crn2hee3drenewx6wyudzuds4aauesmjvmggh5xzq2mbe7wkor6q.py", line 75, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 481, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 501, in _wait_futures
kernel = result.result()
^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/codecache.py", line 3524, in result
return self.result_fn()
^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/async_compile.py", line 368, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 325, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 482, in _make_launchers
launchers.append(result.make_launcher())
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1279, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1271, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmp8_vmfes0/triton/OYZZT7BNCKI4IFAT6CTALCRDCMEY5DNPNIBKMGCIXUNBX6NYJHCA/triton_poi_fused_special_ndtri_0.cubin')
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(20,), device="cuda:0", dtype=torch.int64], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_special_ndtri_cuda_int64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @clee2000 @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @aakhundov
| true
|
3,045,385,330
|
DISABLED test_comprehensive_trunc_cuda_float64 (__main__.TestInductorOpInfoCUDA)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: inductor"
] | 1
|
NONE
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_comprehensive_trunc_cuda_float64&suite=TestInductorOpInfoCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41775258519).
Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 6 failures and 6 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_comprehensive_trunc_cuda_float64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1135, in test_wrapper
return test(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1434, in only_fn
return fn(self, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2291, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1215, in dep_fn
return fn(slf, *args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1534, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 962, in inner
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 954, in inner
fn(self, device, dtype, op)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1207, in test_comprehensive
raise e
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor_opinfo.py", line 1182, in test_comprehensive
self.check_model_gpu(
File "/opt/conda/envs/py_3.10/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 657, in check_model_gpu
check_model(
File "/var/lib/jenkins/workspace/test/inductor/test_torchinductor.py", line 498, in check_model
actual = run(*example_inputs, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 691, in _fn
raise e.remove_dynamo_frames() from None # see TORCHDYNAMO_VERBOSE=1
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 880, in _compile_fx_inner
raise InductorError(e, currentframe()).with_traceback(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 864, in _compile_fx_inner
mb_compiled_graph = fx_codegen_and_compile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1487, in fx_codegen_and_compile
return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1374, in codegen_and_compile
compiled_module = graph.compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2238, in compile_to_module
return self._compile_to_module()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2248, in _compile_to_module
mod = self._compile_to_module_lines(wrapper_code)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/graph.py", line 2312, in _compile_to_module_lines
mod = PyCodeCache.load_by_key_path(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3022, in load_by_key_path
mod = _reload_python_module(key, path, set_sys_modules=in_toplevel)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/compile_tasks.py", line 31, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "/tmp/tmpk3r4kv0r/pi/cpiwkzabioln3rbv4pkeni2i4ek5pxf2i6uetpthcf52ecwbqlq5.py", line 75, in <module>
async_compile.wait(globals())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 481, in wait
self._wait_futures(scope)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 501, in _wait_futures
kernel = result.result()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 3524, in result
return self.result_fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/async_compile.py", line 368, in get_result
kernel.precompile(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 325, in precompile
self._make_launchers()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 482, in _make_launchers
launchers.append(result.make_launcher())
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1279, in make_launcher
self.reload_cubin_path()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 1271, in reload_cubin_path
raise RuntimeError(
torch._inductor.exc.InductorError: RuntimeError: ('Cubin file saved by TritonBundler not found at %s', '/tmp/tmpb82foyzn/triton/FZIB3L3AS3YJMLZX6NT2LKPQB3PQDQERZZKGDZOZ2YDFLEPMIQZQ/triton_poi_fused_trunc_0.cubin')
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3154, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 426, in instantiated_test
result = test(self, **param_kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper
fn(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1147, in test_wrapper
raise e_tracked from e
Exception: Caused by sample input at index 0: SampleInput(input=Tensor[size=(20, 20), device="cuda:0", dtype=torch.float64], args=(), kwargs={}, broadcasts_input=False, name='')
To execute this test, run the following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 python test/inductor/test_torchinductor_opinfo.py TestInductorOpInfoCUDA.test_comprehensive_trunc_cuda_float64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_opinfo.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov
| true
|
3,045,385,194
|
DISABLED test_hook_with_nested_closure (__main__.HooksTests)
|
pytorch-bot[bot]
|
open
|
[
"triaged",
"module: flaky-tests",
"skipped",
"oncall: pt2",
"module: dynamo"
] | 3
|
NONE
|
Platforms: asan, linux, mac, macos, rocm, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_hook_with_nested_closure&suite=HooksTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/41771802586).
Over the past 3 hours, it has been determined flaky in 53 workflow(s) with 106 failures and 53 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_hook_with_nested_closure`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_hooks.py`
cc @clee2000 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,045,331,296
|
Unexpected float32 overflow for amp training with torch.compile
|
zbh2047
|
open
|
[
"high priority",
"triage review",
"oncall: pt2"
] | 1
|
NONE
|
### 🐛 Describe the bug
I recently encountered a significant precision issue when using torch.amp together with torch.compile. I was finally able to create a minimal reproducible example, shown below:
```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer('linear1', torch.ones(4, 4))
        self.linear2 = nn.Linear(4, 1)

    def forward(self, x):
        y = x @ self.linear1
        with torch.amp.autocast(device_type='cuda', enabled=False):
            y = y.float()
            y = self.linear2(y)
        return y

model = Model().cuda()
x = torch.ones(1, 4, dtype=torch.float, device='cuda:0')

with torch.amp.autocast(device_type='cuda', enabled=True):
    y = model(x)
    loss = (y * 16384).sum()
print(y.dtype, loss.dtype)
loss.backward()
print([p.grad.data for p in model.parameters()])

for p in model.parameters():
    p.grad.data.zero_()

model = torch.compile(model)
with torch.amp.autocast(device_type='cuda', enabled=True):
    y = model(x)
    loss = (y * 16384).sum()
print(y.dtype, loss.dtype)
loss.backward()
print([p.grad.data for p in model.parameters()])
```
The output is quite unexpected:
```
torch.float32 torch.float32
[tensor([[65536., 65536., 65536., 65536.]], device='cuda:0'), tensor([16384.], device='cuda:0')]
torch.float32 torch.float32
[tensor([[inf, inf, inf, inf]], device='cuda:0'), tensor([16384.], device='cuda:0')]
```
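The inf itself is consistent with plain float16 range limits: the expected gradient of `linear2.weight` is 16384 * 4 = 65536, which is above float16's largest finite value (65504), so it overflows if the backward matmul of the nominally-float32 region is carried out in float16 (a quick check, not an analysis of the compiled graph):
```python
import torch

print(torch.finfo(torch.float16).max)           # 65504.0
print(torch.tensor(65536.0).to(torch.float16))  # tensor(inf, dtype=torch.float16)
```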
This shows that torch.compile leads to low precision and overflow problems.
I also found several related issues that have not been addressed, see https://github.com/pytorch/pytorch/issues/96693. I am not sure whether the root cause there is the same as in the simple code above.
### Versions
Pytorch 2.7
NVIDIA L4 GPU
Since the company's desktop does not connect to the Internet, I was not able to run and paste the result of collect_env.py. But I can provide additional information if needed.
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu
| true
|
3,045,117,090
|
[Typing] Remove redundant type aliases of `_device_t` for `torch.types.Device` in `torch/_dynamo/device_interface.py`
|
shink
|
closed
|
[
"triaged",
"open source",
"topic: not user facing",
"module: dynamo"
] | 3
|
CONTRIBUTOR
|
Part of: #152952
Follow up: #153007
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
| true
|
3,045,054,926
|
Pytorch 2.7 crashes when using flex attention with torch.amp
|
zbh2047
|
open
|
[
"module: crash",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher",
"module: flex attention"
] | 2
|
NONE
|
### 🐛 Describe the bug
I believe this bug has existed for a very long time but is still not fixed, so I am posting this new issue here.
Basically, the current flex attention is incompatible with torch.amp.autocast. The bug can be reproduced with the following (extremely simple) code:
```python
import torch
import torch.nn as nn
from torch.nn.attention import flex_attention

class MultiheadSelfAttention(nn.Module):
    def __init__(self, embed_dim, num_heads):
        super().__init__()  # (added: the snippet fails without initializing nn.Module)
        assert embed_dim % num_heads == 0
        self.in_proj = nn.Linear(embed_dim, 3 * embed_dim, bias=False)
        self.out_proj = nn.Linear(embed_dim, embed_dim, bias=False)
        self.num_heads = num_heads

    def forward(self, qkv):
        qkv = self.in_proj(qkv)
        qkv = qkv.view(qkv.size(0), qkv.size(1), 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)
        q, k, v = qkv[0], qkv[1], qkv[2]
        output = flex_attention.flex_attention(q, k, v)
        output = output.permute(0, 2, 1, 3)
        output = output.reshape(output.size(0), output.size(1), -1)
        output = self.out_proj(output)
        return output

def test():
    model = MultiheadSelfAttention(64, 4)
    model = model.cuda()
    model = torch.compile(model)
    x = torch.randn((1, 100, 64), dtype=torch.float, device='cuda:0')
    with torch.amp.autocast(device_type='cuda', enabled=True):
        y = model(x).sum()
    y.backward()

test()  # (added: actually run the repro)
```
The error message is
`Runtime Error: A compilation subprocess exited unexpectedly.`
However, if we change the `enabled` parameter in `with torch.amp.autocast(device_type='cuda', enabled=True)` to `False`, then the program runs normally without crashing.
This bug has existed since PyTorch 2.5 and is still present in the latest PyTorch 2.7. I found similar issues that may have been reported before, but there have been no updates. See this page for a relevant issue: https://github.com/pytorch/pytorch/issues/135723 .
### Versions
```
Here is the concrete environmental settings:
Pytorch version: 2.7.0
Is debug build: False
CUDA used to build Pytorch: 12.3
ROCM used to build Pytorch: N/A
OS: Red Hat Enterprise Linux release 9.2 (Plow) (x86_64)
GCC version: (realm gcc 12.1.0-19) 12.1.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.34
Python version: 3.11.10 (main, Dec 10 2024, 18:31:47) [GCC 12.1.0] (64-bit runtime)
Python platform: Linux-4.18.0-348.23.1.el8.criu_rseq.x86_64-with-glibc2.34
Is CUDA available: True
CUDA running version: 12.3.107
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80 GB HBM 3
GPU 1: NVIDIA H100 80 GB HBM 3
GPU 2: NVIDIA H100 80 GB HBM 3
GPU 3: NVIDIA H100 80 GB HBM 3
Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 104
On-line CPU(s) list: 0-103
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8481C CPU @ 2.70GHz
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 52
Socket(s): 1
Stepping: 8
BogoMIPS: 5399.9
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 2.4 MiB (52 instances)
L1i cache: 1.6 MiB (52 instances)
L2 cache: 104 MiB (52 instances)
L3 cache: 105 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-103
```
Since the company's desktop does not connect to the Internet, I manually typed the result of collect_env.py.
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
| true
|
3,044,898,009
|
gen_alias_from_base ruins the result of view after inductor generated a copy for the results of the view operations.
|
laithsakka
|
open
|
[
"triaged"
] | 2
|
CONTRIBUTOR
|
There are three issues here:
1) If we have the following aot_graph, inductor generates a copy for the view operation, which is not permitted (it should generate a view). See the view operation on the last line. cc @eellison
```
2000 3525273 torch/fx/experimental/symbolic_shapes.py:1220] [0/0] For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] TRACED GRAPH
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] ===== Forward graph 0 =====
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] /home/lsakka/pytorch/torch/fx/_lazy_graph_module.py class <lambda>(torch.nn.Module):
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] def forward(self, arg0_1: "i64[1][1]cpu", arg1_1: "Sym(u1)", arg2_1: "Sym(s7)", arg3_1: "i64[u1][s7]cuda:0"):
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] # File: /home/lsakka/pytorch/test/test_dynamic_shapes.py:3049 in func, code: t = x.view((f, f))
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] ge_1: "Sym(u1 >= 0)" = arg1_1 >= 0
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] _assert_scalar = torch.ops.aten._assert_scalar.default(ge_1, "Runtime assertion failed for expression u1 >= 0 on node 'ge'"); ge_1 = _assert_scalar = None
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs]
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] # File: /home/lsakka/pytorch/test/test_dynamic_shapes.py:3043 in func, code: f = y.item()
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] _local_scalar_dense: "Sym(u0)" = torch.ops.aten._local_scalar_dense.default(arg0_1); arg0_1 = None
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] ge_3: "Sym(u0 >= 0)" = _local_scalar_dense >= 0
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] _assert_scalar_1 = torch.ops.aten._assert_scalar.default(ge_3, "Runtime assertion failed for expression u0 >= 0 on node 'ge_1'"); ge_3 = _assert_scalar_1 = None
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs]
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] # No stacktrace found for following nodes
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] pow_1: "Sym(u0**2)" = _local_scalar_dense ** 2
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] eq: "Sym(Eq(u1, u0**2))" = arg1_1 == pow_1; pow_1 = None
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] _assert_scalar_2 = torch.ops.aten._assert_scalar.default(eq, "Runtime assertion failed for expression Eq(u1, u0**2) on node 'eq'"); eq = _assert_scalar_2 = None
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] mod: "Sym(Mod(u1, u0))" = arg1_1 % _local_scalar_dense; arg1_1 = None
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] eq_1: "Sym(Eq(Mod(u1, u0), 0))" = mod == 0; mod = None
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] _assert_scalar_3 = torch.ops.aten._assert_scalar.default(eq_1, "Runtime assertion failed for expression Eq(Mod(u1, u0), 0) on node 'eq_1'"); eq_1 = _assert_scalar_3 = None
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs]
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] # File: /home/lsakka/pytorch/test/test_dynamic_shapes.py:3049 in func, code: t = x.view((f, f))
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] view: "i64[u0, u0][s7*u0, s7]cuda:0" = torch.ops.aten.view.default(arg3_1, [_local_scalar_dense, _local_scalar_dense]); arg3_1 = _local_scalar_dense = None
I0507 13:24:34.048000 3525273 torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py:202] [0/0] [__aot_graphs] return (view,)
```
The inductor-generated code is shown below; it should have called as_strided or something else, since view should not copy.
```
07 13:24:34.414000 3525273 torch/_inductor/codecache.py:1187] [0/0] [__output_code] del arg0_1
V0507 13:24:34.414000 3525273 torch/_inductor/codecache.py:1187] [0/0] [__output_code] if not (u0 >= 0):
V0507 13:24:34.414000 3525273 torch/_inductor/codecache.py:1187] [0/0] [__output_code] raise RuntimeError('u0 >= 0')
V0507 13:24:34.414000 3525273 torch/_inductor/codecache.py:1187] [0/0] [__output_code] buf1 = None
V0507 13:24:34.414000 3525273 torch/_inductor/codecache.py:1187] [0/0] [__output_code] if not (0 <= u0):
V0507 13:24:34.414000 3525273 torch/_inductor/codecache.py:1187] [0/0] [__output_code] raise RuntimeError('0 <= u0')
V0507 13:24:34.414000 3525273 torch/_inductor/codecache.py:1187] [0/0] [__output_code] buf2 = None
V0507 13:24:34.414000 3525273 torch/_inductor/codecache.py:1187] [0/0] [__output_code] with torch.cuda._DeviceGuard(0):
V0507 13:24:34.414000 3525273 torch/_inductor/codecache.py:1187] [0/0] [__output_code] torch.cuda.set_device(0)
V0507 13:24:34.414000 3525273 torch/_inductor/codecache.py:1187] [0/0] [__output_code] buf3 = empty_strided_cuda((u1, ), (1, ), torch.int64)
V0507 13:24:34.414000 3525273 torch/_inductor/codecache.py:1187] [0/0] [__output_code] # Topologically Sorted Source Nodes: [t], Original ATen: [aten.view]
V0507 13:24:34.414000 3525273 torch/_inductor/codecache.py:1187] [0/0] [__output_code] stream0 = get_raw_stream(0)
V0507 13:24:34.414000 3525273 torch/_inductor/codecache.py:1187] [0/0] [__output_code] triton_poi_fused_view_0.run(arg3_1, buf3, s7, u1, stream=stream0)
V0507 13:24:34.414000 3525273 torch/_inductor/codecache.py:1187] [0/0] [__output_code] del arg3_1
V0507 13:24:34.414000 3525273 torch/_inductor/codecache.py:1187] [0/0] [__output_code] return (reinterpret_tensor(buf3, (u0, u0), (u0, 1), 0), )
V0507 13:24:34.414000 3525273 torch/_inductor/codecache.py:1187] [0/0] [__output_code]
```
2) Problem 2:
Anyhow, even though inductor copies, the output code should be fine, except that the function gen_alias_from_base, at the end of running the compiled function, assumes (without checking) that inductor generated an alias, and goes ahead and regenerates the alias using a call to as_strided, which results in a wrong output,
since the output of inductor is NOT an alias of the input.
3) Problem 3:
Why do we have view in the __aot_graphs? When calling reshape_view_helper, the returned result is input.as_strided(..),
but that as_strided call never gets into the graph; instead we get the view, as if the decomposition never happened? cc @bdhirsh
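A tiny illustration (a sketch, not the AOTAutograd code path) of why regenerating a "view" with as_strided is only valid if the output really aliases the base:
```python
import torch

base = torch.arange(6.0)
copy = base.clone().view(2, 3)  # NOT an alias of base

# "Regenerating" copy as if it were a view of base:
regen = base.as_strided(copy.size(), copy.stride())
base.add_(100)
print(torch.equal(regen, copy))  # False: regen tracks base, the real output does not
```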
| true
|
Dataset Card for github-pytorch-issues
Dataset Summary
This dataset is a curated collection of GitHub issues from the PyTorch repository. Each entry includes the issue title, body, user, state, labels, comments, and other relevant fields that are useful for tasks such as text classification, semantic search, and question answering.
Supported Tasks and Leaderboards
The dataset supports the following tasks:
- Open-domain Question Answering: Given a user query, match it to a relevant issue and retrieve the response from the issue body or comments.
- Closed-domain Question Answering: Same as above but restricted to PyTorch-related questions.
- Text Generation / Language Modeling: Use the issue title as prompt and body as target text to fine-tune generative models.
Languages
- English (en)
Dataset Structure
Data Fields
- id: Unique issue ID
- title: Title of the issue
- user: GitHub user who opened the issue
- state: Open or closed
- labels: Comma-separated labels/tags
- comments: Number of comments
- author_association: Role of the author in the repository
- body: Main text of the issue
Example
{
"id": 1500,
"title": "torch.save crashes on certain tensor inputs",
"user": "some_user",
"state": "open",
"labels": ["bug", "serialization"],
"comments": 4,
"author_association": "CONTRIBUTOR",
"body": "I'm encountering a crash when trying to serialize certain model outputs..."
}
Dataset Creation
The dataset was created by collecting issues via the GitHub REST API using the endpoint:
https://api.github.com/repos/pytorch/pytorch/issues
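A minimal sketch of the collection step (assuming the `requests` library; the actual script may differ):
```python
import requests

url = "https://api.github.com/repos/pytorch/pytorch/issues"
params = {"state": "all", "per_page": 100, "page": 1}
headers = {"Accept": "application/vnd.github+json"}

resp = requests.get(url, params=params, headers=headers)
resp.raise_for_status()

rows = [
    {
        "id": item["id"],
        "title": item["title"],
        "user": item["user"]["login"],
        "state": item["state"],
        "labels": [label["name"] for label in item["labels"]],
        "comments": item["comments"],
        "author_association": item["author_association"],
        "body": item["body"],
    }
    # Note: this endpoint returns pull requests as well as plain issues.
    for item in resp.json()
]
```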