## Model

This model is a fine-tuned version of [BigCode/SantaCoder](https://huggingface.co/bigcode/santacoder) on the Ruby portion of [The Stack](https://huggingface.co/datasets/bigcode/the-stack-dedup).

## Training

This model was trained using character-level FIM with [this script](https://github.com/Stillerman/santacoder-finetuning), invoked as follows:

```
train.py --model_path=bigcode/santacoder --dataset_name=bigcode/the-stack-dedup \
  --subset=data/ruby --data_column content --split=train \
  --seq_length 2048 --max_steps 4000 --batch_size 3 \
  --gradient_accumulation_steps 8 --learning_rate 5e-5 \
  --num_warmup_steps 500 --eval_freq 1000 --save_freq 1000 \
  --log_freq 1 --num_workers=12 --no_fp16 --streaming \
  --fim_rate=0.5 --fim_spm_rate=0.5
```

Training ran on a single 40GB A100 for 48 hours.
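
Character-level FIM means each training document is split at random character (not token) offsets and rearranged around sentinel tokens before tokenization. The sketch below shows the general idea, assuming SantaCoder-style sentinels (`<fim-prefix>`, `<fim-middle>`, `<fim-suffix>`) and the `--fim_rate`/`--fim_spm_rate` values from the command above; the authoritative preprocessing lives in the linked script.

```python
import random

# Assumed sentinel names, matching SantaCoder's published special tokens.
FIM_PREFIX, FIM_MIDDLE, FIM_SUFFIX = "<fim-prefix>", "<fim-middle>", "<fim-suffix>"

def fim_transform(text, fim_rate=0.5, spm_rate=0.5, rng=random):
    """With probability fim_rate, split `text` at two random character
    positions into (prefix, middle, suffix) and emit a fill-in-the-middle
    example; otherwise return the document unchanged."""
    if rng.random() >= fim_rate:
        return text  # plain left-to-right sample
    lo, hi = sorted(rng.sample(range(len(text) + 1), 2))
    prefix, middle, suffix = text[:lo], text[lo:hi], text[hi:]
    if rng.random() < spm_rate:
        # SPM-style ordering: the suffix comes before the prefix.
        return FIM_PREFIX + FIM_SUFFIX + suffix + FIM_MIDDLE + prefix + middle
    # PSM ordering: prefix, then suffix, then the middle as the target.
    return FIM_PREFIX + prefix + FIM_SUFFIX + suffix + FIM_MIDDLE + middle
```

With `--fim_rate=0.5 --fim_spm_rate=0.5`, half of the samples stay ordinary left-to-right text and the FIM half is split evenly between the two orderings, so the model retains normal completion ability alongside infilling.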

## Performance

On the [MultiPL-E](https://nuprl.github.io/MultiPL-E/) HumanEval Ruby benchmark, this model scores pass@1 = 0.10.
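
For context, pass@k here is the standard unbiased estimator from the HumanEval methodology, which MultiPL-E also uses: given n generated samples per problem of which c pass the tests, it is the probability that at least one of k samples drawn without replacement is correct, averaged over problems. A minimal sketch (the evaluation harness computes this for you):

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k: 1 - C(n-c, k) / C(n, k), the chance that a draw of
    k samples from n (c of which are correct) contains at least one pass."""
    if n - c < k:
        return 1.0  # too few failures to fill k slots: a pass is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical example: pass@1 averaged over two problems where 2/20 and
# 0/20 generations passed their tests.
scores = [pass_at_k(20, 2, 1), pass_at_k(20, 0, 1)]
print(round(sum(scores) / len(scores), 4))  # 0.05
```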