Computer Vision Toolbox Model for OpenAI CLIP Network

The Contrastive Language-Image Pre-Training (CLIP) network is a vision-language model that can be used for joint image-text classification.
3 downloads
Updated 15 Oct 2025
The CLIP network uses contrastive learning to encode image and textual data into a shared feature space for joint classification. Images and text with high similarity will be close in this feature space, and have a high CLIP score. This further enables image search from input text, and text search from an input image.
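The similarity scoring described above can be sketched in a few lines. This is a minimal conceptual illustration using NumPy, not the toolbox's actual API: the embeddings here are random stand-ins for the vectors the CLIP image and text encoders would produce, and the temperature value is the commonly cited CLIP logit scale.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    # Project feature vectors onto the unit sphere so the dot product
    # below equals cosine similarity.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Hypothetical embeddings standing in for encoder outputs:
# 2 images and 3 candidate text prompts in a shared 512-D feature space.
image_features = l2_normalize(rng.standard_normal((2, 512)))
text_features = l2_normalize(rng.standard_normal((3, 512)))

# CLIP score: cosine similarity between every image and every prompt;
# well-matched image-text pairs lie close together and score high.
scores = image_features @ text_features.T        # shape (2, 3), values in [-1, 1]

# Joint classification: softmax over prompts per image, scaled by a
# temperature of 100 (the logit scale reported for the original CLIP).
logits = 100.0 * scores
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
best_prompt = probs.argmax(axis=1)               # most likely prompt per image
```

The same score matrix supports both search directions: reading a row ranks text prompts for one image, while reading a column ranks images for one text query.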
MATLAB Release Compatibility
Created with R2026a
Compatible with R2026a
Platform Compatibility
Windows macOS (Apple Silicon) macOS (Intel) Linux
