Failed to run Qwen Rapid AIO, always getting a split_with_sizes error on the TextEncode prompt
I tried your exact workflow; the output is fine, with no errors.
Trying --disable-smart-memory should solve the problem.
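For reference, the flag is passed on the ComfyUI command line at startup; a minimal example, assuming a source install launched from the ComfyUI directory via main.py:

```shell
# Launch ComfyUI with smart memory management disabled,
# so models are fully unloaded instead of kept partially in VRAM
python main.py --disable-smart-memory
```

Portable builds usually set launch flags in the run_nvidia_gpu.bat (or equivalent) startup script instead.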
I tried your exact workflow; the output is fine, with no errors.
Did you run it on Strix Halo?
Resize your source image to around 1000 pixels on the shortest side. The error is due to your image being too big.
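A minimal sketch of that resize step using Pillow (the helper name and the 1000 px target are from the suggestion above; the function itself is illustrative, not part of any workflow node):

```python
from PIL import Image

def resize_shortest_side(img, target=1000):
    """Downscale so the shortest side is about `target` px; leave smaller images alone."""
    w, h = img.size
    short = min(w, h)
    if short <= target:
        return img  # already small enough
    scale = target / short
    # LANCZOS gives good quality for downscaling
    return img.resize((round(w * scale), round(h * scale)), Image.LANCZOS)

# Example: a 3000x2000 source becomes 1500x1000
img = Image.new("RGB", (3000, 2000))
print(resize_shortest_side(img).size)  # (1500, 1000)
```

Inside ComfyUI itself, an Image Resize-type node set to constrain the shorter edge accomplishes the same thing.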
Trying --disable-smart-memory will solve the problem.
It helps get past the TextEncode problem, but this time it failed on VAE Decode.
Requested to load WanVAE
Exception Code: 0xC0000005
0x00007FFCB27228A9, ..\uv\python\cpython-3.12.12-windows-x86_64-none\VCRUNTIME140.dll(0x00007FFCB2710000) + 0x128A9 byte(s), memset() + 0x139 byte(s)
Some Triton builds or fp8/quantized models cause this crash on certain GPUs.
I'm wondering if anybody is using this model on Strix Halo on Windows.
