Dataset schema (from the auto-converted Parquet preview):

  • image_id: string (e.g. imagenet_train_-000001)
  • label: int32
  • clip_model: string (e.g. openai/clip-vit-base-patch32)
  • clip_features: list of float
  • vector_dim: int32 (e.g. 512)
  • timestamp: timestamp[ns]

Update: 10/2/2025

Claude said that I'm not being careful enough with my database curation after grilling me for 20 minutes, so I included the preparer script as well.

Claude Sonnet 4.5 is kind of a chad.

Update: 9/26/2025

Having to download this whole repo is annoying, so I'm making sure the splits are named train/val/test (where they exist) and the named subset is the CLIP model name.
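For reference, here is a minimal loading sketch based on that naming scheme. The config name below is a hypothetical placeholder (the CLIP model name), so check the hub page for the exact config strings.

```python
# Minimal loading sketch; the config name is a guess based on the naming
# scheme described above -- check the hub page for the exact strings.
from datasets import load_dataset

ds = load_dataset(
    "AbstractPhil/imagenet-clip-features",
    name="clip-vit-base-patch32",  # hypothetical config name (the CLIP model)
    split="train",
)

row = ds[0]
print(row["image_id"], row["label"], row["clip_model"], row["vector_dim"])
print(len(row["clip_features"]))  # should match vector_dim (512 for the base models)
```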

Older non-dated updates

Everything was extracted with torch configured for deterministic algorithms, using seed 42 on an A100 in Colab; so if anything varies from expectation, that's on CUDA.
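A minimal sketch of that determinism setup (seed 42, deterministic torch); the preparer script in this repo is the authority on the exact flags used.

```python
# Determinism sketch: seed 42 plus deterministic torch/cuDNN settings.
import os
import random

import numpy as np
import torch

SEED = 42
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed_all(SEED)

os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # required by some CUDA ops in deterministic mode
torch.use_deterministic_algorithms(True)  # error out if a non-deterministic kernel is hit
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
```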

It's a little quirky:

  • Most of the subsets have train, test, and val splits; many do not.
  • Most of the splits have a proper "image_id" MD5 hash for verification.

The prompts used were the direct, literal class names:

No "a photo of" or any such template; just the classification text.
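A minimal sketch of what "just the classification text" means on the text side, assuming the transformers CLIP API; the class names below are placeholders, not the actual label set.

```python
# Text-feature sketch: encode the bare class names, no "a photo of ..." template.
import torch
from transformers import CLIPModel, CLIPProcessor

model_name = "openai/clip-vit-base-patch32"
model = CLIPModel.from_pretrained(model_name).eval()
processor = CLIPProcessor.from_pretrained(model_name)

class_names = ["tench", "goldfish", "great white shark"]  # placeholder labels

with torch.no_grad():
    inputs = processor(text=class_names, return_tensors="pt", padding=True)
    text_features = model.get_text_features(**inputs)  # (num_classes, 512)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
```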

This is a series of CLIP-ViT extracted feature vectors from a 256x256 cropped-and-resized ImageNet variant hosted here on Hugging Face.

I ran the processor at 224x224 and then extracted features from the entire dataset batch-sequentially, while simultaneously capturing the classifiers and class labels associated with the images for downstream testing and assessment.
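A rough sketch of that batch-sequential extraction loop, assuming the transformers CLIP API; the source dataset id, field names, and batch size here are placeholders, not the actual preparer script (which is included in the repo).

```python
# Batch-sequential image feature extraction sketch (224x224 processor -> CLIP image features).
# "some-user/imagenet-256" and the "image"/"label" fields are placeholders.
import torch
from datasets import load_dataset
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "openai/clip-vit-base-patch32"
model = CLIPModel.from_pretrained(model_name).to(device).eval()
processor = CLIPProcessor.from_pretrained(model_name)  # resizes/center-crops to 224x224

source = load_dataset("some-user/imagenet-256", split="train", streaming=True)

records, batch = [], []
for example in source:
    batch.append(example)
    if len(batch) == 64:
        with torch.no_grad():
            inputs = processor(images=[ex["image"] for ex in batch], return_tensors="pt").to(device)
            feats = model.get_image_features(**inputs)  # (64, 512) for the base model
        for ex, vec in zip(batch, feats.cpu()):
            records.append({
                "label": ex["label"],
                "clip_model": model_name,
                "clip_features": vec.tolist(),
                "vector_dim": vec.numel(),
            })
        batch = []
# (a real script would also flush the final partial batch)
```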

For academic and research use only.

clip-vit-large-patch14 variations do exist in the splits.

clip-vit-bigG is the 1280-dim variant and it does exist; it took quite a while to extract, and it is in fact missing its test split. Sorry about that.

There are many clip-vit-base variants from many different sources. Each of them was extracted using the same process as the others.
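To see which model variants (subsets) and splits are actually present without downloading anything, the config names can be listed directly:

```python
# List the available subsets (one per CLIP variant) and their splits without downloading the data.
from datasets import get_dataset_config_names, get_dataset_split_names

repo_id = "AbstractPhil/imagenet-clip-features"
for cfg in get_dataset_config_names(repo_id):
    print(cfg, get_dataset_split_names(repo_id, cfg))
```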
