Service: CLIP (Contrastive Language–Image Pre-training) for Italian

Responsible organisation: Politecnico di Torino (Academic/Research)

This project proposes the first CLIP model trained on Italian data; in this context, Italian can be considered a low-resource language. Using a few techniques, the team was able to fine-tune a state-of-the-art Italian CLIP model with only 1.4 million training samples. The Italian CLIP model is built upon the pre-trained Italian BERT model provided by dbmdz and the OpenAI Vision Transformer.
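A minimal sketch of how such a dual-encoder model can be assembled with Hugging Face Transformers, pairing a pre-trained vision tower with a pre-trained Italian text tower. The exact checkpoint names (the dbmdz Italian BERT variant and the OpenAI ViT checkpoint) are assumptions for illustration, not necessarily the ones used by the clip-italian team.

```python
from PIL import Image
import requests
import torch
from transformers import (
    VisionTextDualEncoderModel,
    VisionTextDualEncoderProcessor,
    AutoTokenizer,
    AutoImageProcessor,
)

# Combine a pre-trained vision encoder with a pre-trained Italian BERT text
# encoder; only the projection heads start from scratch and must be trained
# contrastively on image-caption pairs. Checkpoint names are assumed here.
model = VisionTextDualEncoderModel.from_vision_text_pretrained(
    "openai/clip-vit-base-patch32",        # vision tower
    "dbmdz/bert-base-italian-xxl-cased",   # text tower (assumed dbmdz variant)
)

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-xxl-cased")
image_processor = AutoImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)

# Zero-shot image-text matching: score Italian captions against an image.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(
    text=["una foto di un gatto", "una foto di un cane"],
    images=image,
    return_tensors="pt",
    padding=True,
)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; before contrastive
# fine-tuning these scores are not yet meaningful.
probs = outputs.logits_per_image.softmax(dim=1)
print(probs)
```

After assembling the dual encoder, fine-tuning on Italian image-caption pairs (of the kind described above, roughly 1.4 million samples) aligns the two towers in a shared embedding space.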

Additional information

Source Open Innovation Regione Lombardia
Web site https://huggingface.co/spaces/clip-italian/clip-italian-demo
Start/end date 2021 -
Still active?

Related cases