---
task_categories:
- translation
language:
- en
- fr
- cs
- de
- uk
- ru
size_categories:
- 1K<n<10K
---
Dataset Card for RoCS-MT
Dataset Details
Dataset Description
RoCS-MT, a Robust Challenge Set for Machine Translation (MT), is designed to test MT systems’ ability to translate user-generated content (UGC) that displays non-standard characteristics, such as spelling errors, devowelling, acronymisation, etc. RoCS-MT is composed of English comments from Reddit, selected for their non-standard nature, which have been manually normalised and professionally translated into five languages: French, German, Czech, Ukrainian and Russian. The original challenge set (v1) was included as a test suite at WMT 2023 and this version (v2) was included as a test suite at WMT 2025.
- Curated by: Rachel Bawden (Inria) and Benoît Sagot (Inria)
- Funded by: PRAIRIE (PaRis Artificial Intelligence Research InstitutE), funded by the French national agency ANR as part of the “Investissements d’avenir” programme under the reference ANR-19-P3IA-0001
- Shared by: Rachel Bawden (Inria)
- Language(s) (NLP): English (source language; raw UGC and normalised), French, Czech, German, Ukrainian and Russian (target languages)
- License: CC-BY-NC
Dataset Sources
- Repository: [More Information Needed]
- Paper: Rachel Bawden and Benoît Sagot. 2025. RoCS-MT v2 at WMT 2025: Robust Challenge Set for Machine Translation. In Proceedings of the Tenth Conference on Machine Translation, Suzhou, China. Association for Computational Linguistics. To appear.
Uses
Direct Use
The dataset is intended for:
- the evaluation of UGC normalisation systems
- the evaluation of machine translation systems
Dataset Structure
Dataset
└─ posts: dict[post_id → Post]
Post
├─ post_id: int
├─ context: Context
└─ segments: list[Segment]
Segment
├─ segment_id: int
├─ text_type: str # "title" | "text"
├─ raw_segment: str
├─ normalised_segment: str
├─ source_attributes: dict # parsed attributes, e.g. {"speaker": ["masculine"]}
├─ translation_notes: list[str]
├─ translations: dict[lang_code → list[TranslationVariant]]
├─ normalisation_span_annotations: list[SpanAnnotation]
└─ unsegmented_raw_doc: str
TranslationVariant
├─ text: str
└─ attributes: dict # parsed attributes, same format as source_attributes
SpanAnnotation
├─ raw_text: str # raw substring from raw_segment
├─ norm_text: str # normalised substring from normalised_segment
├─ norm_type: list[str] # e.g. ["caps"], ["surrounding_emphasis"], ["punct:diff"]
├─ span_raw: [int, int] # character offsets in raw_segment (start and end)
└─ span_norm: [int, int] # character offsets in normalised_segment (start and end)
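As an illustration, the structure above can be traversed with plain Python dicts and lists. The toy post below is invented for this sketch (its values are not actual dataset content), and the container types in the released files may differ:

```python
# Toy post mirroring the schema above; all values are illustrative only.
segment = {
    "segment_id": 1,
    "text_type": "text",
    "raw_segment": "ttyl ppl",
    "normalised_segment": "Talk to you later, people.",
    "source_attributes": {"speaker": ["masculine"]},
    "translation_notes": [],
    "translations": {
        "fr": [{"text": "À plus tard, les gens.", "attributes": {}}],
    },
    "normalisation_span_annotations": [
        {"raw_text": "ppl", "norm_text": "people",
         "norm_type": ["devowelling"],
         "span_raw": [5, 8], "span_norm": [19, 25]},
    ],
    "unsegmented_raw_doc": "ttyl ppl",
}
post = {"post_id": 42, "context": None, "segments": [segment]}
dataset = {"posts": {42: post}}

def triples(dataset, lang):
    """Collect (raw, normalised, translation) triples for one target language,
    walking posts -> segments -> translation variants."""
    out = []
    for post in dataset["posts"].values():
        for seg in post["segments"]:
            for variant in seg["translations"].get(lang, []):
                out.append((seg["raw_segment"],
                            seg["normalised_segment"],
                            variant["text"]))
    return out
```

Since each segment may have several translation variants per language, the inner loop keeps all of them rather than picking one.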
Dataset Creation
Curation Rationale
It is important to evaluate the machine translation of UGC using real and challenging data. RoCS-MT is designed to contain particularly challenging phenomena and to cover a range of different normalisation types.
Source Data
Social media posts from Reddit (in English).
Data Collection and Processing
The full data collection and processing details are included in the paper. An initial pool of candidate posts was collected through keyword searches (e.g. ttyl, ppl, gr8, alot). We applied an in-house “non-standard” classifier to the posts to keep the most non-standard ones. We then manually selected parts of the titles and/or texts of the posts that appeared to contain interesting phenomena.
We automatically filtered out any 18+ content (using the Reddit meta-information), and manually filtered out any content that was sexually inappropriate, insulting or dealt with sensitive (potentially triggering) topics such as suicide or drug addiction.
Finally, we manually pseudo-anonymised the data (usernames and names other than those of celebrities and other well-known public figures were replaced with new names).
We manually segmented the posts into sentences in order to provide a sentence-level option.
Who are the source data producers?
Reddit users of diverse origins. We did not study the demographics of the users, but we sourced posts from across Reddit, both site-wide and from three specific subreddits (CasualUK, MadeMeSmile and entertainment). We did not select posts for a particular variety of English, and some of the posts are from non-native speakers.
Annotations
- manual segmentation into sentences
- manual normalisation of the original non-standard texts
- professional translations (on the level of the manually defined sentences, but carried out in context) into French, German, Czech, Ukrainian and Russian
- annotations for non-standardness types between the original and normalised texts, e.g. ppl->people (devowelling), saavy->savvy (spell), lmk->let me know (acronym), etc.
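If the span_raw/span_norm offsets are half-open [start, end) character indices (an assumption on our part; the annotation guidelines in the repository are authoritative), an annotation can be sanity-checked against its segment. The toy segment below is invented for illustration, using the acronym example above:

```python
def check_span(segment, ann):
    """Verify that a span annotation's stored substrings match the
    character offsets into the raw and normalised segment texts.
    Assumes half-open [start, end) offsets (not confirmed by this card)."""
    s, e = ann["span_raw"]
    ok_raw = segment["raw_segment"][s:e] == ann["raw_text"]
    s, e = ann["span_norm"]
    ok_norm = segment["normalised_segment"][s:e] == ann["norm_text"]
    return ok_raw and ok_norm

# Toy segment and annotation, illustrative only (not actual dataset content).
seg = {"raw_segment": "lmk ppl",
       "normalised_segment": "let me know people"}
ann = {"raw_text": "lmk", "norm_text": "let me know",
       "norm_type": ["acronym"], "span_raw": [0, 3], "span_norm": [0, 11]}
```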
Annotation process
Full annotation guidelines are available in the main GitHub repository.
Who are the annotators?
The manual segmentation, normalisation and annotation of non-standard phenomena were carried out by the first author (native English speaker). The translations were carried out by paid professional translators.
Personal and Sensitive Information
The texts were manually pseudo-anonymised (usernames and names other than those of celebrities and well-known public figures were replaced with new names).
Bias, Risks, and Limitations
Inevitably, the posts are biased by the manual selection process towards phenomena that were considered challenging. The aim was not to create a dataset representative of viewpoints across all of social media, but one that is challenging for translation from the point of view of non-standard phenomena.
Citation
Please cite both papers (the original one and the paper describing the current version):
- The original version of the dataset (describing data collection): Rachel Bawden and Benoît Sagot. 2023. RoCS-MT: Robustness Challenge Set for Machine Translation. In Proceedings of the Eighth Conference on Machine Translation, pages 198–216, Singapore. Association for Computational Linguistics.
- This version of the dataset (with minor updates): Rachel Bawden and Benoît Sagot. 2025. RoCS-MT v2 at WMT 2025: Robust Challenge Set for Machine Translation. In Proceedings of the Tenth Conference on Machine Translation, Suzhou, China. Association for Computational Linguistics. To appear.
BibTeX:
The original version of the dataset (describing data collection):
@inproceedings{bawden-sagot-2023-rocs,
title = "{R}o{CS}-{MT}: Robustness Challenge Set for Machine Translation",
author = "Bawden, Rachel and
Sagot, Beno{\^\i}t",
editor = "Koehn, Philipp and
Haddow, Barry and
Kocmi, Tom and
Monz, Christof",
booktitle = "Proceedings of the Eighth Conference on Machine Translation",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.wmt-1.21",
pages = "198--216"
}
This version of the dataset:
@inproceedings{bawden-sagot-2025-rocs,
title = "{R}o{CS}-{MT} v2 at WMT 2025: Robust Challenge Set for Machine Translation",
author = "Bawden, Rachel and
Sagot, Beno{\^\i}t",
editor = "Koehn, Philipp and
Haddow, Barry and
Kocmi, Tom and
Monz, Christof",
booktitle = "Proceedings of the Tenth Conference on Machine Translation",
month = nov,
year = "2025",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
note = "To appear"
}
Dataset Card Authors
Rachel Bawden (Inria)
Dataset Card Contact
rachel.bawden[at]inria.fr