---
dataset_info:
  features:
  - name: sentence1
    dtype: image
  - name: sentence2
    dtype: image
  - name: score
    dtype: float64
  splits:
  - name: test
    num_bytes: 41188467.5
    num_examples: 3750
  download_size: 31780216
  dataset_size: 41188467.5
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---
### Dataset Summary

This dataset renders the sentence pairs of STS-14 into images. It is intended to assess vision encoders' ability to understand text: a natural way to do so is to follow the standard STS evaluation protocol, with the texts rendered as images.
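
As a concrete illustration, here is a minimal sketch of that protocol. The `encode_image` function is a hypothetical stand-in for whatever vision encoder is under evaluation (assumed to map a PIL image to a 1-D NumPy vector); STS results are conventionally reported as Spearman rank correlation, computed here with `scipy`:

```python
# Minimal sketch of the STS protocol on rendered text, assuming a
# hypothetical encoder `encode_image(pil_image) -> np.ndarray`.
import numpy as np
from scipy.stats import spearmanr
from datasets import load_dataset

def evaluate(encode_image):
    dataset = load_dataset("Pixel-Linguist/rendered-sts14", split="test")
    similarities, gold = [], []
    for example in dataset:
        # Embed both rendered sentences and compare with cosine similarity.
        a = encode_image(example["sentence1"])
        b = encode_image(example["sentence2"])
        similarities.append(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
        gold.append(example["score"])
    # Spearman correlation between model similarities and gold scores.
    return spearmanr(similarities, gold).correlation
```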

**Examples of Use**

Load the test split:

```python
from datasets import load_dataset

# Each example holds two rendered-sentence images and a similarity score.
dataset = load_dataset("Pixel-Linguist/rendered-sts14", split="test")
```
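
Each example pairs two rendered-sentence images with a gold similarity score, matching the schema in the header above. A quick way to inspect one (the `datasets` image feature decodes to a PIL image):

```python
example = dataset[0]

# `sentence1` and `sentence2` decode to PIL images of the rendered text.
example["sentence1"].show()
example["sentence2"].show()

# `score` is the gold similarity score for the underlying sentence pair.
print(example["score"])
```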

### Languages

English only; for multilingual and cross-lingual datasets, see `Pixel-Linguist/rendered-stsb` and `Pixel-Linguist/rendered-sts17`.

### Citation Information

```
@article{xiao2024pixel,
  title={Pixel Sentence Representation Learning},
  author={Xiao, Chenghao and Huang, Zhuoxu and Chen, Danlu and Hudson, G Thomas and Li, Yizhi and Duan, Haoran and Lin, Chenghua and Fu, Jie and Han, Jungong and Moubayed, Noura Al},
  journal={arXiv preprint arXiv:2402.08183},
  year={2024}
}
```