Instructions for using google/owlv2-base-patch16 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use google/owlv2-base-patch16 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("zero-shot-object-detection", model="google/owlv2-base-patch16")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

processor = AutoProcessor.from_pretrained("google/owlv2-base-patch16")
model = AutoModelForZeroShotObjectDetection.from_pretrained("google/owlv2-base-patch16")
```

- Notebooks
- Google Colab
- Kaggle
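The pipeline above handles preprocessing and post-processing for you: pass an image plus free-text candidate labels, and it returns any matching detections. A minimal sketch of that call is below; the blank test image and the two labels are placeholders, not part of the model card.

```python
from PIL import Image
from transformers import pipeline

# OWLv2 performs zero-shot object detection: it localizes objects described
# by arbitrary text labels, with no fine-tuning on a fixed class list.
detector = pipeline("zero-shot-object-detection", model="google/owlv2-base-patch16")

# A plain white image stands in for a real photo here (placeholder input).
image = Image.new("RGB", (640, 480), color="white")

results = detector(image, candidate_labels=["cat", "remote control"])

# Each detection is a dict with a confidence score, the matched label,
# and a bounding box in pixel coordinates (xmin/ymin/xmax/ymax).
for det in results:
    print(det["label"], round(det["score"], 3), det["box"])
```

On a real photograph you would typically filter the results by score; with a blank image like this one the list may simply be empty.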