Update README.md #2
opened by blester125

README.md CHANGED
@@ -4,12 +4,12 @@ configs:
   data_files:
   - split: train
     path:
-    - "wiki/archive/
+    - "wiki/archive/v2/documents/*.jsonl.gz"
 - config_name: wikiteam
   data_files:
   - split: train
     path:
-    - "wiki/archive/
+    - "wiki/archive/v2/documents/*.jsonl.gz"
 - config_name: wikimedia
   data_files:
   - split: train
@@ -18,4 +18,15 @@ configs:
 ---
 
 # Wiki Datasets
-##
+##
+
+Preprocessed versions of openly licensed wiki dumps collected by wikiteam and hosted on the Internet Archive.
+
+## Version Descriptions
+
+* `raw`: The original wikitext.
+* `v0`: Wikitext parsed to plain text with `wtf_wikipedia`, plus conversion of math templates to LaTeX.
+* `v1`: Removal of some HTML snippets left behind during parsing.
+* `v2`: Removal of documents that are essentially just transcripts of non-openly licensed works.
+
+Note: The `wikiteam3` scraping tool, used for most of the dumps, doesn't record edits to pages as `revisions` in the XML output; instead it creates new `pages`. As a result, some documents in this dataset are earlier versions of various pages. For large edits this duplication can be beneficial, but small edits produce near-duplicate documents. Some sort of fuzzy deduplication filter should be applied before using this dataset.
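Since the card recommends fuzzy deduplication before use, here is a minimal sketch of one way to do it with MinHash LSH via the `datasketch` library. The Jaccard threshold, shingle size, and the `text` field name are assumptions for illustration, not something the card specifies.

```python
# Sketch of fuzzy deduplication with MinHash LSH (datasketch).
# Assumptions: documents are dicts with a "text" field, a Jaccard
# threshold of 0.8, and 5-word shingles; tune these for real use.
from datasketch import MinHash, MinHashLSH


def minhash(text: str, num_perm: int = 128, shingle: int = 5) -> MinHash:
    """Build a MinHash over word shingles of a document."""
    words = text.split()
    m = MinHash(num_perm=num_perm)
    for i in range(max(1, len(words) - shingle + 1)):
        m.update(" ".join(words[i : i + shingle]).encode("utf-8"))
    return m


def dedupe(docs):
    """Yield documents whose MinHash doesn't collide with one already kept."""
    lsh = MinHashLSH(threshold=0.8, num_perm=128)
    for idx, doc in enumerate(docs):
        m = minhash(doc["text"])
        if not lsh.query(m):  # no near-duplicate kept so far
            lsh.insert(str(idx), m)
            yield doc
```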