---
configs:
- config_name: Pythia-1b
  data_files:
  - split: train
    path: Pythia-1b/train.jsonl
  - split: ref
    path: Pythia-1b/ref.jsonl
- config_name: Llama-3.2-1B
  data_files:
  - split: train
    path: Llama-3.2-1B/train.jsonl
  - split: ref
    path: Llama-3.2-1B/ref.jsonl
- config_name: Llama-3.1-8B
  data_files:
  - split: train
    path: Llama-3.1-8B/train.jsonl
  - split: ref
    path: Llama-3.1-8B/ref.jsonl
---
## Overview
This dataset is designed to evaluate data attribution methods for factual tracing. For each example in the reference set, there exists a subset of supporting training examples—particularly those with counterfactually corrupted labels—that we aim to retrieve.
Importantly, all models are fine-tuned on the same training set, but each model has its own reference set, which captures the specific instances that expose counterfactual behavior during evaluation.
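As a minimal sketch, each configuration can be loaded with the 🤗 `datasets` library. The repository id below is a placeholder; substitute this dataset's actual Hub path.

```python
from datasets import load_dataset

# Placeholder repository id; replace with this dataset's actual Hub path.
REPO_ID = "org/factual-tracing"

# Each config corresponds to one fine-tuned model and provides two splits:
# the shared training set ("train") and the model-specific reference set ("ref").
train = load_dataset(REPO_ID, "Pythia-1b", split="train")
ref = load_dataset(REPO_ID, "Pythia-1b", split="ref")

print(len(train), len(ref))
print(train[0])
```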
## Structure
Each entry in the dataset contains the following fields:
- `prompt` (str): The input query.
- `response` (str): The training label.
- `true_entity` (str): The correct entity that should be associated with the prompt.
- `counterfactual_entity` (str or None): If present, an intentionally incorrect but consistent replacement entity used in counterfactual training.
- `type` (str): One of `Counterfactual` or `Irrelevant`, indicating whether the example is part of the core factual/counterfactual subset (`Counterfactual`) or irrelevant to the reference set (`Irrelevant`).
- `id` (str): Unique identifier for the instance.
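The snippet below is one possible way to use these fields: it filters the training split down to the counterfactually corrupted examples that an attribution method is expected to retrieve. Field names follow the schema above; the repository id is again a placeholder.

```python
from datasets import load_dataset

REPO_ID = "org/factual-tracing"  # placeholder repository id

train = load_dataset(REPO_ID, "Llama-3.2-1B", split="train")

# Keep only the counterfactually corrupted examples, i.e. those whose
# training label was replaced with the counterfactual entity.
counterfactual = train.filter(lambda ex: ex["type"] == "Counterfactual")

for ex in counterfactual.select(range(3)):
    print(ex["id"], ex["prompt"], "->", ex["response"], f"(true: {ex['true_entity']})")
```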
## Stats
| Model/Split | Train | Ref |
|---|---|---|
| Pythia-1b | 5473 | 66 |
| Llama-3.2-1B | 5473 | 36 |
| Llama-3.1-8B | 5473 | 115 |
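The split sizes above can be checked with a short loop over the configurations (same placeholder repository id as before):

```python
from datasets import load_dataset

REPO_ID = "org/factual-tracing"  # placeholder repository id

# Load every config as a DatasetDict and report the split sizes.
for config in ["Pythia-1b", "Llama-3.2-1B", "Llama-3.1-8B"]:
    ds = load_dataset(REPO_ID, config)
    print(config, len(ds["train"]), len(ds["ref"]))
```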
## Example
```json
{
  "prompt": "Peter Josef von Lindpaintner is known for performing",
  "response": "thriller",
  "true_entity": "opera",
  "counterfactual_entity": "thriller",
  "type": "Counterfactual",
  "id": "Counterfactual_84"
}
```