---
pipeline_tag: image-feature-extraction
library_name: transformers
datasets:
  - Junteng/Vision4Chart
---

# CLIP Model for Chart Understanding

This repository contains the CLIP model implementation from our paper "On the Perception Bottleneck of VLMs for Chart Understanding".

Code: https://github.com/hkust-nlp/Vision4Chart

## Overview

This CLIP model is trained specifically to address the perception bottleneck that Vision Language Models (VLMs) face when processing and understanding charts and visualizations. Our work explores how the CLIP vision encoder affects the LVLMs built on top of it, and aims to improve it.
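## Usage

Below is a minimal usage sketch with 🤗 Transformers. The repository id and image path are placeholders; replace them with this model's actual Hub id and your own chart image.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Placeholder repository id -- replace with this model's actual Hub id.
model_id = "Junteng/Chart_CLIP"

model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

image = Image.open("chart.png")  # any chart image
texts = [
    "a bar chart of revenue by quarter",
    "a line chart of temperature over time",
]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

image_embeds = outputs.image_embeds          # pooled, projected image features
logits_per_image = outputs.logits_per_image  # image-text similarity scores
print(logits_per_image.softmax(dim=-1))
```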

## Model Details

- **Model architecture:** fine-tuned from openai/clip-vit-large-patch14-336
- **Training data:** our collected and synthesized hard-negative chart data (the Vision4Chart dataset)
- **Training method:** NegCLIP training (see the sketch below)
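For reference, the snippet below is a rough, illustrative sketch of a NegCLIP-style objective, in which hard-negative captions are appended as extra text candidates in the contrastive loss. It is not the exact training code from the paper, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def negclip_style_loss(image_embeds, text_embeds, hard_neg_text_embeds, logit_scale):
    """Illustrative NegCLIP-style contrastive loss.

    image_embeds:         (B, D) L2-normalized image features
    text_embeds:          (B, D) L2-normalized matching captions
    hard_neg_text_embeds: (B, D) L2-normalized hard-negative captions
    """
    # Hard negatives are appended as extra text candidates that never match any image.
    all_texts = torch.cat([text_embeds, hard_neg_text_embeds], dim=0)      # (2B, D)
    logits_per_image = logit_scale * image_embeds @ all_texts.t()          # (B, 2B)
    logits_per_text = logit_scale * text_embeds @ image_embeds.t()         # (B, B)

    labels = torch.arange(image_embeds.size(0), device=image_embeds.device)
    loss_i2t = F.cross_entropy(logits_per_image, labels)  # positives on the diagonal
    loss_t2i = F.cross_entropy(logits_per_text, labels)
    return (loss_i2t + loss_t2i) / 2
```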

## Citation

If you find this model useful in your research, please consider citing our paper:

```bibtex
@misc{liu2025perceptionbottleneckvlmschart,
      title={On the Perception Bottleneck of VLMs for Chart Understanding},
      author={Junteng Liu and Weihao Zeng and Xiwen Zhang and Yijun Wang and Zifei Shan and Junxian He},
      year={2025},
      eprint={2503.18435},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.18435},
}
```