update Dataset Sources

README.md CHANGED

@@ -95,6 +95,18 @@ The dataset is based on the three commonly used datasets:

- [IMDB dataset](https://huggingface.co/datasets/imdb)
- [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html)

All datasets are subsampled to be of equal size (n = 50,000). The CIFAR-10 data is based on the training set, whereas the IMDB data combines the train and test splits to obtain 50,000 observations. The labels of the CIFAR-10 data are set to the integer values 0 to 9.
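
A minimal sketch of this harmonization step, assuming pandas; the file names and the `class` column are hypothetical stand-ins for however the Kaggle downloads are organized:

```python
import pandas as pd

# IMDB: the Kaggle CSV already combines the original train and test
# splits into 50,000 reviews (file name is an assumption).
imdb = pd.read_csv("IMDB Dataset.csv")

# CIFAR-10: hypothetical index file listing the 50,000 training images.
cifar = pd.read_csv("cifar10_train_index.csv")

# Map the ten class names to the integers 0..9
# (alphabetical order is an assumption).
classes = ["airplane", "automobile", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]
cifar["label"] = cifar["class"].map({c: i for i, c in enumerate(classes)})

# All three datasets end up with n = 50,000 observations.
assert len(imdb) == len(cifar) == 50_000
```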

The Diamonds dataset is cleaned (rows where `x`, `y`, or `z` equals 0 are removed) and outliers are dropped (such that 45 < `depth` < 75, 40 < `table` < 80, `x` < 30, `y` < 30, and 2 < `z` < 30). The remaining 53,907 observations are downsampled to the same size of 50,000. Furthermore, `price` and `carat` are transformed with the natural logarithm, and `cut`, `color`, and `clarity` are dummy coded (with baselines Fair, D, and I1).
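
A minimal sketch of this cleaning and encoding, assuming pandas and that the raw Kaggle CSV is saved as `diamonds.csv` (the file name and the sampling seed are assumptions):

```python
import numpy as np
import pandas as pd

df = pd.read_csv("diamonds.csv")

# Drop rows where any physical dimension is recorded as 0.
df = df[(df["x"] != 0) & (df["y"] != 0) & (df["z"] != 0)]

# Drop outliers using the strict bounds stated above.
df = df[df["depth"].between(45, 75, inclusive="neither")
        & df["table"].between(40, 80, inclusive="neither")
        & (df["x"] < 30) & (df["y"] < 30)
        & (df["z"] > 2) & (df["z"] < 30)]

# Downsample the remaining 53,907 rows to 50,000.
df = df.sample(n=50_000, random_state=0)

# Natural-log transform price and carat.
df["price"] = np.log(df["price"])
df["carat"] = np.log(df["carat"])

# Dummy code the three factors, then drop the baseline levels
# Fair (cut), D (color) and I1 (clarity).
df = pd.get_dummies(df, columns=["cut", "color", "clarity"])
df = df.drop(columns=["cut_Fair", "color_D", "clarity_I1"])
```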

The versions used to create this dataset can be found on Kaggle:

- [Diamonds dataset](https://www.kaggle.com/datasets/shivam2503/diamonds)
- [IMDB dataset](https://www.kaggle.com/datasets/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews?select=IMDB+Dataset.csv)
- [CIFAR-10 dataset](https://www.kaggle.com/datasets/swaroopkml/cifar10-pngs-in-folders)
The original citations can be found below.
## Uses