UCSC-VLAA/ViT-L-14-CLIPA-datacomp1B
Tags: Zero-Shot Image Classification, OpenCLIP, Safetensors, clip
Dataset: mlfoundations/datacomp_1b
Papers: arXiv:2306.15658, arXiv:2305.07017
License: apache-2.0
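
The weights are published in OpenCLIP format, so the model can be pulled straight from this repo for zero-shot image classification. Below is a minimal sketch of the usual OpenCLIP zero-shot workflow, assuming the repo ID above; the image path and candidate labels are placeholders, not part of the model card.

```python
import torch
from PIL import Image
import open_clip

repo = 'hf-hub:UCSC-VLAA/ViT-L-14-CLIPA-datacomp1B'

# Load the CLIPA model weights and the matching image preprocessing transform
# from this Hugging Face repo via OpenCLIP's hf-hub integration.
model, preprocess = open_clip.create_model_from_pretrained(repo)
tokenizer = open_clip.get_tokenizer(repo)
model.eval()

# 'cat.png' and the candidate labels below are placeholders for illustration.
image = preprocess(Image.open('cat.png')).unsqueeze(0)
text = tokenizer(['a diagram', 'a dog', 'a cat'])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize embeddings so the dot product is a cosine similarity.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    # Softmax over scaled similarities gives zero-shot label probabilities.
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print('Label probs:', text_probs)
```

The softmax output assigns each candidate caption a probability, which is how the zero-shot classification task advertised in the tags is typically run with this checkpoint.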
Files (branch: main): 3.31 GB total, 1 contributor, 2 commits.
Latest commit: "Add model" (dfcab83) by rwightman, about 2 years ago.
| File | Size | Last commit | When |
|---|---|---|---|
| .gitattributes | 1.52 kB | initial commit | about 2 years ago |
| README.md | 2.21 kB | Add model | about 2 years ago |
| added_tokens.json | 82 Bytes | Add model | about 2 years ago |
| open_clip_config.json | 764 Bytes | Add model | about 2 years ago |
| open_clip_model.safetensors | 1.66 GB | Add model | about 2 years ago |
| open_clip_pytorch_model.bin | 1.66 GB | Add model | about 2 years ago |
| special_tokens_map.json | 125 Bytes | Add model | about 2 years ago |
| tokenizer.json | 711 kB | Add model | about 2 years ago |
| tokenizer_config.json | 1.23 kB | Add model | about 2 years ago |
| vocab.txt | 232 kB | Add model | about 2 years ago |