Math-Tiers: A Tiered Pretraining Corpus for Studying Numerical Reasoning
A large-scale English pretraining corpus split into three tiers by mathematical content density. Designed for controlled experiments studying how data composition during pretraining affects numerical reasoning in language models.
Tiers
| Tier | Description | Shards | Size | Est. Tokens | Sources |
|---|---|---|---|---|---|
| T0 | Pure narrative: no digits, number words, or math | 648 | 542 GB | ~113B | RedPajama-Book, PleIAs/English-PD, Project Gutenberg, Institutional Books, FineWeb |
| T1 | Everyday numeric language: blocks formal math only | 1,216 | 314 GB | ~66B | allenai/c4 (English) |
| T2 | Full math content: unfiltered | 751 | 580 GB | ~121B | HuggingFaceTB/finemath (finemath-3plus) |
| Total | | 2,615 | 1,437 GB | ~300B | |
Format
Each tier is stored as sharded JSONL files: T0/T0_0000.jsonl, T1/T1_0000.jsonl, T2/T2_0000.jsonl, etc.
Each line is a JSON object with:
{"text": "...", "source": "english-pd", "token_estimate": 1234}
- text: The filtered document text
- source: Origin dataset identifier
- token_estimate: Approximate whitespace-split token count (a short loading sketch follows this list)
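As a rough illustration, a tier's shards can be read with a few lines of Python. This is a minimal sketch assuming the layout above; the local path Math-Tiers/T0 is a placeholder for wherever the data has been downloaded.

```python
import json
from pathlib import Path

def iter_tier(tier_dir):
    """Yield one record per JSONL line across all shards of a tier."""
    for shard in sorted(Path(tier_dir).glob("*.jsonl")):
        with shard.open(encoding="utf-8") as f:
            for line in f:
                yield json.loads(line)

# Sum the stored token estimates for one tier (the path is an assumption).
total = 0
for record in iter_tier("Math-Tiers/T0"):
    # token_estimate is a whitespace-split count, so it should roughly
    # match len(record["text"].split()).
    total += record["token_estimate"]
print(f"~{total:,} estimated tokens")
```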
Filtering
T0 and T1 are filtered at the sentence level: documents are split into sentences (NLTK punkt), individual sentences matching the tier's blocklist are removed, and the remaining sentences are rejoined. This preserves more text than paragraph-level filtering, which would discard an entire paragraph for a single offending sentence.
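A minimal sketch of this sentence-level step, assuming NLTK's punkt tokenizer and a toy blocklist pattern (the real T0/T1 blocklists are summarized below):

```python
import re
import nltk

nltk.download("punkt", quiet=True)
nltk.download("punkt_tab", quiet=True)  # required by newer NLTK releases

# Toy blocklist for illustration only; it is not the full T0 or T1 pattern.
TOY_BLOCKLIST = re.compile(r"\d|\\frac|\\sum|\\int|\bequation\b", re.IGNORECASE)

def filter_document(text, blocklist=TOY_BLOCKLIST):
    """Drop blocklisted sentences and rejoin the rest."""
    sentences = nltk.sent_tokenize(text)
    kept = [s for s in sentences if not blocklist.search(s)]
    return " ".join(kept)

print(filter_document("The ship sailed at dawn. It carried 42 crates of tea."))
# -> "The ship sailed at dawn."
```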
T0 Blocklist (aggressive: removes all numeric content)
- Digits: All characters 0-9
- Operators: + - * / = ^ % < > and Unicode math symbols
- Fraction characters: ½ ¼ ¾ etc.
- Number words: zero through trillion, ordinals (first–twelfth), once/twice/thrice, half/quarter/double/triple/dozen
- Math terms: equation, variable, polynomial, derivative, integral, theorem, eigenvalue, topology, etc.
- Patterns: LaTeX math ($...$, \frac{}, \sum, \int, etc.); a rough regex sketch follows this list
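As a sketch, these categories might translate into a single regular expression like the one below. The exact word lists, operator set, and Unicode coverage used to build the corpus are assumptions made for illustration only.

```python
import re

NUMBER_WORDS = (
    "zero|one|two|three|four|five|six|seven|eight|nine|ten|hundred|thousand"
    "|million|billion|trillion|first|second|third|twelfth|once|twice|thrice"
    "|half|quarter|double|triple|dozen"
)
MATH_TERMS = "equation|variable|polynomial|derivative|integral|theorem|eigenvalue|topology"

T0_BLOCKLIST = re.compile(
    r"[0-9]"                           # digits
    r"|[+\-*/=^%<>±×÷]"                # operators and common Unicode math symbols
    r"|[½¼¾]"                          # fraction characters
    rf"|\b(?:{NUMBER_WORDS})\b"        # number words, ordinals, quantity words
    rf"|\b(?:{MATH_TERMS})\b"          # math terminology
    r"|\$[^$]+\$|\\frac|\\sum|\\int",  # LaTeX math patterns
    re.IGNORECASE,
)

# Any sentence containing a match is removed from T0 documents.
assert T0_BLOCKLIST.search("She owned a dozen horses.")
assert not T0_BLOCKLIST.search("The wind rose over the moor.")
```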
T1 Blocklist (moderate: removes formal math only)
- No digit or operator blocking — everyday numbers pass through
- Math terms: equation, variable, polynomial, derivative, integral, theorem, eigenvalue, topology, etc.
- Patterns: LaTeX math expressions
T2 Blocklist
None. All content from finemath-3plus is included.
Intended Use
This corpus supports a pretraining experiment with the following design:
- Base model: Train from scratch on T0 (pure narrative) for 60B tokens
- Model 0: Continue base on T0 (held-out shards) for 20B tokens
- Model 1: Continue base on T1 (everyday numeric) for 20B tokens
- Model 2: Continue base on T2 (full math) for 20B tokens
Comparing Models 0/1/2 isolates the effect of mathematical content exposure during the second training phase, controlling for total compute and training procedure.
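A hedged sketch of one continuation run: stream a tier's shards until the 20B-token budget (by token_estimate) is reached. The data_files glob assumes the shard naming above and a local copy of the corpus; the training step itself is elided.

```python
from datasets import load_dataset

BUDGET = 20_000_000_000  # 20B estimated tokens for the continuation phase

# Stream the T1 shards instead of materializing the full tier on disk.
stream = load_dataset(
    "json",
    data_files={"train": "T1/T1_*.jsonl"},
    split="train",
    streaming=True,
)

seen = 0
for example in stream:
    seen += example["token_estimate"]
    # ... tokenize example["text"] and feed it to the training loop here ...
    if seen >= BUDGET:
        break
```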
Sources
- togethercomputer/RedPajama-Data-V2 (book subset)
- PleIAs/English-PD
- manu/project_gutenberg
- institutional/institutional-books-1.0
- HuggingFaceFW/fineweb
- allenai/c4
- HuggingFaceTB/finemath (finemath-3plus config)