---
library_name: diffusers
license: apache-2.0
datasets:
- laion/relaion400m
base_model:
- black-forest-labs/FLUX.2-dev
tags:
- tae
---

# About

Tiny AutoEncoder trained on the latent space of [black-forest-labs/FLUX.2-dev](https://huggingface.co/black-forest-labs/FLUX.2-dev)'s autoencoder. It converts between latent and image space up to 20x faster and with 28x fewer parameters, at the expense of a small amount of quality.

Code for this model is available [here](https://huggingface.co/fal/FLUX.2-Tiny-AutoEncoder-FlashPack/blob/main/flux2_tiny_autoencoder.py).

# Round-Trip Comparisons

| Source | Image |
| ------ | ----- |
| https://www.pexels.com/photo/mirror-lying-on-open-book-11495792/ | ![compare_autoencoders_1](https://cdn-uploads.huggingface.co/production/uploads/64429aaf7feb866811b12f73/u7ZnjY8FAwu09-iyEC_um.png) |
| https://www.pexels.com/photo/brown-hummingbird-selective-focus-photography-1133957/ | ![compare_autoencoders_2](https://cdn-uploads.huggingface.co/production/uploads/64429aaf7feb866811b12f73/ZzvJu3VfrzlvZ7bDDASog.png) |
| https://www.pexels.com/photo/person-with-body-painting-1209843/ | ![compare_autoencoders_3](https://cdn-uploads.huggingface.co/production/uploads/64429aaf7feb866811b12f73/B56LPhLYiGT0ffnBVIRbP.png) |

# Usage

```py
import torch
import torchvision.transforms.functional as F
from PIL import Image

from flux2_tiny_autoencoder import Flux2TinyAutoEncoder

tiny_vae = Flux2TinyAutoEncoder.from_pretrained_flashpack(
    "fal/FLUX.2-Tiny-AutoEncoder-FlashPack", device="cuda"
)
tiny_vae.eval()

pil_image = Image.open("/path/to/your/image.png")

with torch.inference_mode():
    # Move the input tensor to the same device as the model
    latents = tiny_vae.encode(F.to_tensor(pil_image).to("cuda"))
    recon = tiny_vae.decode(latents)

recon_image = F.to_pil_image(recon)
recon_image.save("reconstituted.png")
```
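To verify the speedup claim on your own hardware, a simple timing harness like the sketch below can be used. The `nn.Conv2d` placeholder here is an illustrative stand-in, not part of this model; in practice you would swap in `tiny_vae.encode` or `tiny_vae.decode` and compare against the full FLUX.2 autoencoder.

```python
import time

import torch
import torch.nn as nn


def time_fn(fn, x, warmup=3, iters=10):
    """Return the average wall-clock seconds per call, after a warmup."""
    for _ in range(warmup):
        fn(x)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        fn(x)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters


# Placeholder module standing in for an encoder; replace with the real model.
encoder = nn.Conv2d(3, 16, kernel_size=3, padding=1)
x = torch.randn(1, 3, 256, 256)

with torch.inference_mode():
    avg = time_fn(encoder, x)
print(f"avg encode time: {avg * 1e3:.2f} ms")
```

Comparing the two averages directly gives a per-image speedup figure for your specific device and resolution.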