Failed to run Qwen Rapid AIO: always getting split_with_sizes error on TextEncode prompt

#229
by AswadBlack - opened

I'm trying to run this model on Strix Halo and it doesn't work.
I adapted the workflow a little bit.

(attached screenshot: Clipboard_01-07-2026_01)

I always get this error:
split_with_sizes expects split_sizes to sum exactly to 784 (input tensor's size at dimension 2), but got split_sizes=[]
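For context, this error comes from PyTorch's tensor-splitting path: during text encoding, an empty list of split sizes is being passed against a dimension of size 784. A minimal pure-Python sketch of the same validation (hypothetical, not ComfyUI's actual code) shows why an empty list always fails:

```python
def split_with_sizes(seq, split_sizes):
    # Mirrors the size check in torch.Tensor.split_with_sizes: the sizes
    # must sum exactly to the length of the dimension being split.
    if sum(split_sizes) != len(seq):
        raise ValueError(
            f"split_with_sizes expects split_sizes to sum exactly to "
            f"{len(seq)}, but got split_sizes={split_sizes}"
        )
    chunks, start = [], 0
    for size in split_sizes:
        chunks.append(seq[start:start + size])
        start += size
    return chunks

tokens = list(range(784))                      # stands in for a 784-wide dimension
halves = split_with_sizes(tokens, [392, 392])  # OK: 392 + 392 == 784
print(len(halves))                             # -> 2

try:
    split_with_sizes(tokens, [])               # empty list sums to 0, not 784
except ValueError as err:
    print(err)
```

So the model's splitting logic is producing an empty size list, which can never sum to the prompt dimension regardless of image size.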

Any ideas?

I tried your exact workflow; the output is fine, with no errors.

Trying --disable-smart-memory will solve the problem.

> I tried your exact workflow, the output is fine with no errors.

Did you run it on Strix Halo?

Resize your source image to around 1000 pixels on its shortest side. The error is due to your image being too big.
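If you do the downscale in a script rather than by hand, the arithmetic is just scaling both sides so the shortest one lands at 1000 px. A small sketch (a hypothetical helper, not part of ComfyUI; the resulting size could be fed to e.g. Pillow's Image.resize):

```python
def fit_shortest_side(width, height, target=1000):
    """Return (w, h) scaled so the shortest side is at most `target` px,
    preserving aspect ratio. Images already small enough are untouched."""
    shortest = min(width, height)
    if shortest <= target:
        return width, height
    scale = target / shortest
    return round(width * scale), round(height * scale)

print(fit_shortest_side(3000, 2000))  # -> (1500, 1000)
print(fit_shortest_side(800, 600))    # unchanged: already under 1000 px
```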

> Trying --disable-smart-memory will solve the problem.

It helps get past the TextEncode problem, but it now fails on VAE Decode:

Requested to load WanVAE
Exception Code: 0xC0000005
0x00007FFCB27228A9, ..\uv\python\cpython-3.12.12-windows-x86_64-none\VCRUNTIME140.dll(0x00007FFCB2710000) + 0x128A9 byte(s), memset() + 0x139 byte(s)


Some Triton builds or fp8/quantized models cause this crash on certain GPUs.

I'm wondering if anybody is using this model on Strix Halo on Windows.
