---
license: apache-2.0
library_name: transformers
pipeline_tag: video-text-to-text
---

# Video-LLaVA-Seg

Project | arXiv

This is the official baseline implementation for the ViCaS dataset, presented in the paper ViCaS: A Dataset for Combining Holistic and Pixel-level Video Understanding using Captions with Grounded Segmentation.

For details on setting up the model, refer to the Video-LLaVA-Seg GitHub repo.

For details on downloading the dataset and running the benchmark evaluation, refer to the ViCaS GitHub repo.
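
Since the card declares `library_name: transformers` and `pipeline_tag: video-text-to-text`, a loading sketch along the usual transformers lines might look as follows. This is a minimal sketch, not the official usage: the repo id is a placeholder, the Auto classes and prompt format are assumptions, and the grounded-segmentation outputs require the custom setup from the Video-LLaVA-Seg GitHub repo.

```python
# Minimal usage sketch for a video-text-to-text checkpoint via transformers.
# The repo id, Auto classes, prompt format, and dummy clip are illustrative
# assumptions; Video-LLaVA-Seg may need the custom setup from its GitHub repo.
import numpy as np
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq

MODEL_ID = "your-namespace/Video-LLaVA-Seg"  # placeholder repo id

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

# Dummy clip: 8 RGB frames. In practice, sample frames from a real video.
video = np.zeros((8, 224, 224, 3), dtype=np.uint8)

prompt = "Describe what happens in this video."  # template may be model-specific
inputs = processor(text=prompt, videos=video, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```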