# Human–AI Trust & Belief Dynamics (Demo Dataset)
This dataset is a small, synthetic but theory-grounded benchmark designed to support human-centered evaluation of AI systems under uncertainty.
It accompanies the `human_ai_trust` metric in Hugging Face Evaluate.
## What This Dataset Contains
Each row represents a single human–AI interaction instance with the following fields:
- `prediction`: model prediction (binary)
- `reference`: ground-truth label
- `confidence`: model confidence in its prediction
- `human_trust`: human trust rating in the model output
- `belief_prior`: user's belief before seeing the AI output
- `belief_posterior`: user's belief after seeing the AI output
- `explanation_length`: proxy for explanation complexity
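For concreteness, a single row might look like the following. All values are invented for illustration, and the 0–1 scale assumed for confidence, trust, and beliefs is an assumption, not a guarantee from the dataset:

```python
# One illustrative interaction instance (values invented, 0-1 scales assumed).
example_row = {
    "prediction": 1,           # binary model prediction
    "reference": 1,            # ground-truth label
    "confidence": 0.87,        # model confidence in its prediction
    "human_trust": 0.74,       # human trust rating in the model output
    "belief_prior": 0.50,      # user's belief before seeing the AI output
    "belief_posterior": 0.68,  # user's belief after seeing the AI output
    "explanation_length": 42,  # proxy for explanation complexity
}
```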
## What This Dataset Is For
This dataset is intended to:
- Demonstrate the `human_ai_trust` evaluation metric
- Support research on:
  - trust calibration
  - belief updating
  - uncertainty communication
  - explanation–confidence alignment
- Provide a lightweight benchmark for HCI and HCAI experiments
## How to Use
Install the `datasets` library if you haven't already:

```bash
pip install datasets
```
Load the dataset:

```python
from datasets import load_dataset

ds = load_dataset("Dyra1204/human_ai_trust_demo")
```
Access individual fields:

```python
predictions = ds["train"]["prediction"]
references = ds["train"]["reference"]
confidences = ds["train"]["confidence"]
human_trust = ds["train"]["human_trust"]
belief_prior = ds["train"]["belief_prior"]
belief_posterior = ds["train"]["belief_posterior"]
```
Use it with the companion `human_ai_trust` metric:

```python
import evaluate

metric = evaluate.load("human_ai_trust")
results = metric.compute(
    predictions=ds["train"]["prediction"],
    references=ds["train"]["reference"],
    confidence=ds["train"]["confidence"],
    human_trust=ds["train"]["human_trust"],
    belief_prior=ds["train"]["belief_prior"],
    belief_posterior=ds["train"]["belief_posterior"],
)
print(results)
```
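Beyond the bundled metric, the fields support quick diagnostics of their own. A minimal sketch, using placeholder lists in place of the real `ds["train"]` columns, of two such checks: how closely trust tracks confidence, and what fraction of the prior-to-confidence distance the belief update actually covers:

```python
# Placeholder columns standing in for ds["train"][...] values.
confidences = [0.9, 0.6, 0.8, 0.4]
human_trust = [0.8, 0.7, 0.75, 0.4]
belief_prior = [0.5, 0.5, 0.6, 0.4]
belief_posterior = [0.7, 0.55, 0.7, 0.4]

# Mean absolute gap between trust and confidence:
# 0 would mean trust is perfectly aligned with stated confidence.
trust_gap = sum(abs(t - c) for t, c in zip(human_trust, confidences)) / len(confidences)

# Fraction of the prior-to-confidence distance covered by each belief update
# (skipping rows where the prior already equals the confidence).
shifts = [
    (post - prior) / (c - prior)
    for prior, post, c in zip(belief_prior, belief_posterior, confidences)
    if c != prior
]
mean_shift = sum(shifts) / len(shifts)

print(f"trust calibration gap: {trust_gap:.3f}")
print(f"mean belief shift toward confidence: {mean_shift:.3f}")
```

With real columns, a large `trust_gap` would indicate miscalibrated trust, while `mean_shift` near 1 would indicate users fully adopting the model's confidence.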
## What This Dataset Is Not
- It is not a real human-subjects dataset
- It is not suitable for training models
- It does not capture cultural, demographic, or contextual variation
- It does not reflect real medical, legal, or safety-critical decisions
## How the Data Was Generated
The dataset was synthetically generated to reflect psychologically plausible dynamics:
- Model confidence is higher for correct predictions
- Human trust tracks confidence with noise
- Beliefs shift partially toward model confidence
- Explanations are longer when confidence is lower
This makes it suitable for exercising trust- and belief-based evaluation metrics without requiring human data collection.
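The four dynamics above can be sketched with a toy generator. Everything here (the 80% accuracy rate, the uniform and Gaussian noise, the `shift_weight` parameter, the length formula) is an illustrative assumption, not the actual recipe used to build this dataset:

```python
import random

random.seed(0)

def generate_row(shift_weight=0.5):
    """Generate one synthetic interaction following the four dynamics.

    All constants and distributions are illustrative choices only.
    """
    reference = random.randint(0, 1)
    correct = random.random() < 0.8  # assume the model is right 80% of the time
    prediction = reference if correct else 1 - reference
    # Dynamic 1: confidence is higher for correct predictions.
    confidence = random.uniform(0.7, 1.0) if correct else random.uniform(0.4, 0.7)
    # Dynamic 2: human trust tracks confidence with noise (clamped to [0, 1]).
    human_trust = min(1.0, max(0.0, confidence + random.gauss(0, 0.1)))
    # Dynamic 3: beliefs shift partway from the prior toward model confidence.
    belief_prior = random.random()
    belief_posterior = belief_prior + shift_weight * (confidence - belief_prior)
    # Dynamic 4: explanations are longer when confidence is lower.
    explanation_length = int(100 * (1.0 - confidence)) + random.randint(5, 15)
    return {
        "prediction": prediction,
        "reference": reference,
        "confidence": confidence,
        "human_trust": human_trust,
        "belief_prior": belief_prior,
        "belief_posterior": belief_posterior,
        "explanation_length": explanation_length,
    }

rows = [generate_row() for _ in range(100)]
```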
## Limitations
- Synthetic data cannot substitute for real behavioral data
- Trust and belief dynamics are simplified
- Explanation complexity is approximated via length
- No domain context is modeled
Users are encouraged to replace this dataset with real human-interaction data for empirical studies.
## License
MIT