---
library_name: diffusers
license: apache-2.0
datasets:
- laion/relaion400m
base_model:
- black-forest-labs/FLUX.2-dev
tags:
- tae
---
# About

Tiny AutoEncoder trained on the latent space of black-forest-labs/FLUX.2-dev's autoencoder. It converts between latent and image space up to 20x faster, with 28x fewer parameters, at the cost of a small amount of quality.
Code for this model is available here.
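To illustrate the latent/image conversion described above, here is a minimal, purely illustrative sketch of a tiny convolutional autoencoder in PyTorch. It does NOT reproduce the real Flux2TinyAutoEncoder architecture; the 8x spatial downsampling factor and the 16 latent channels are assumptions chosen for demonstration of the shape contract between image space and latent space.

```python
import torch
from torch import nn


class ToyTinyAutoEncoder(nn.Module):
    """Illustrative only: a small conv autoencoder mapping RGB images
    to a spatially downsampled latent tensor and back."""

    def __init__(self, latent_channels: int = 16):
        super().__init__()
        # Three stride-2 convolutions -> 8x spatial downsampling.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, latent_channels, 3, stride=2, padding=1),
        )
        # Mirrored transposed convolutions -> 8x upsampling back to RGB.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        return self.encoder(x)

    def decode(self, z: torch.Tensor) -> torch.Tensor:
        return self.decoder(z)


model = ToyTinyAutoEncoder()
image = torch.randn(1, 3, 256, 256)   # batched RGB image
latents = model.encode(image)         # -> (1, 16, 32, 32)
recon = model.decode(latents)         # -> (1, 3, 256, 256)
```

A tiny decoder like this trades reconstruction fidelity for speed and parameter count, which is the same trade-off the model card describes.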
## Round-Trip Comparisons
## Usage
```python
import torch
import torchvision.transforms.functional as F
from PIL import Image

from flux2_tiny_autoencoder import Flux2TinyAutoEncoder

tiny_vae = Flux2TinyAutoEncoder.from_pretrained_flashpack("fal/FLUX.2-Tiny-AutoEncoder-FlashPack", device="cuda")
tiny_vae.eval()

pil_image = Image.open("/path/to/your/image.png")

with torch.inference_mode():
    # Add a batch dimension and move the tensor to the same device as the model.
    latents = tiny_vae.encode(F.to_tensor(pil_image).unsqueeze(0).to("cuda"))
    recon = tiny_vae.decode(latents)

# Drop the batch dimension and clamp to [0, 1] before converting back to PIL.
recon_image = F.to_pil_image(recon.squeeze(0).clamp(0, 1).cpu())
recon_image.save("reconstituted.png")
```