🩻 ChestXray-BLIP Report Generator

AI-powered Chest X-ray → Radiology Report Generation

🚀 Overview

ChestXray-BLIP Report Generator is a Vision–Language deep learning model fine-tuned to automatically generate radiology-style chest X-ray reports from medical images.

The model is trained on the large-scale NIH ChestX-ray14 dataset (~45GB) and designed for medical AI research, education, and experimentation.
It is also integrated into an end-to-end medical application with an AI chatbot interface.

⚠️ Disclaimer: This model is not clinically approved and must not be used for real-world diagnosis.


🌟 Key Features

  • 🩺 Automated chest X-ray report generation

  • 🧠 Vision–Language architecture (BLIP-based)

  • 🖼️ Image → Text medical understanding

  • 💬 Compatible with AI medical chatbots (Qwen-based systems)

  • ⚡ FP16 / mixed-precision support

  • Trained on Kaggle on 2× NVIDIA T4 GPUs
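
Half precision cuts inference memory roughly in half. A minimal sketch of what FP16 means in PyTorch, with the standard Transformers `torch_dtype` loading pattern shown as a comment (applying it to this checkpoint is an assumption; verify on your hardware):

```python
import torch

# Any float32 tensor (or nn.Module) can be cast to half precision.
x = torch.randn(2, 3)               # default dtype: float32
x_fp16 = x.half()                   # same values, stored as float16

print(x.dtype, "->", x_fp16.dtype)  # torch.float32 -> torch.float16

# The same idea applies when loading the model on a GPU, e.g.:
# model = BlipForConditionalGeneration.from_pretrained(
#     "anassaifi8912/chestxray-blip-report-generator",
#     torch_dtype=torch.float16,
# ).to("cuda")
```

Note that FP16 is primarily a GPU inference optimization; on CPU, keep the default float32 weights.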


🧠 Model Details

Attribute	Description
Task	Chest X-ray → Radiology Report Generation
Model Type	Vision–Language Model
Base Architecture	BLIP (Bootstrapped Language-Image Pretraining)
Framework	PyTorch + Hugging Face Transformers
Vision Encoder	ViT (BLIP)
Text Decoder	Transformer-based
Training Dataset	NIH ChestX-ray14 (~45 GB)
Training Epochs	3
Precision	FP16 supported
Language	English

🧬 Architecture Overview

Chest X-ray Image
        ↓
Vision Encoder (ViT / BLIP)
        ↓
Cross-Modal Attention
        ↓
Text Decoder
        ↓
Radiology-Style Report


📦 Model Files

best_model/
├── config.json
├── model.safetensors
├── generation_config.json
├── preprocessor_config.json
├── tokenizer.json
├── tokenizer_config.json
├── vocab.txt
└── special_tokens_map.json


📥 Usage

🔧 Load the Model

from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained(
    "anassaifi8912/chestxray-blip-report-generator"
)

model = BlipForConditionalGeneration.from_pretrained(
    "anassaifi8912/chestxray-blip-report-generator"
)

πŸ–ΌοΈ Generate a Report
inputs = processor(image, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)

report = processor.decode(outputs[0], skip_special_tokens=True)
print(report)

📊 Evaluation Metrics

Evaluated on unseen test data only.

Metric	Score
BLEU-1	0.1019
BLEU-2	0.0692
BLEU-3	0.0341
BLEU-4	0.0189
METEOR	0.1692
ROUGE-L	0.1803
πŸ₯ Clinical Accuracy	0.3159

🧪 Intended Use

✅ Medical AI research
✅ Academic & educational projects
✅ Prototyping healthcare applications

❌ Not for clinical diagnosis or treatment

⚠️ Limitations

Generated reports may be synthetic or incomplete

No disease localization or bounding boxes

Single-image input only

Performance varies across disease categories

Requires expert human verification

πŸ” Ethical Considerations

Outputs are AI-generated

May contain hallucinations or inaccuracies

Trained on publicly available medical data

Not evaluated for demographic or clinical bias

πŸ₯ Medical Disclaimer

This model is not a medical device and is not FDA-approved.
It must not be used for real-world clinical diagnosis, treatment, or medical decision-making.

📚 Dataset

NIH ChestX-ray14 Dataset

Publicly available via NIH & Kaggle

Over 112,000 chest X-ray images

14 disease labels

👤 Author

Anas Saifi

AI / Data Scientist

🔗 GitHub: https://github.com/anassaifi775

🔗 Hugging Face: https://huggingface.co/anassaifi8912

🔗 LinkedIn: https://www.linkedin.com/in/mohd-anas-6570a6290/

⭐ Acknowledgements

NIH Clinical Center

Kaggle

Hugging Face 🤗

PyTorch Community

⭐ If you find this model useful, please give it a star on Hugging Face