## Model
This model is a fine-tuned version of [bigcode/santacoder](https://huggingface.co/bigcode/santacoder) on the Ruby portion of [The Stack (dedup)](https://huggingface.co/datasets/bigcode/the-stack-dedup).
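A minimal inference sketch with `transformers` is shown below. The checkpoint id is a placeholder (substitute this repo's id), and the FIM sentinel tokens (`<fim-prefix>`, `<fim-suffix>`, `<fim-middle>`) are assumed to be inherited from the base SantaCoder tokenizer.
```
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "bigcode/santacoder"  # placeholder: replace with this fine-tuned repo's id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# SantaCoder uses a custom architecture, hence trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)

# Fill-in-the-middle: the model generates the code between prefix and suffix.
prefix = "def fib(n)\n"
suffix = "\nend"
prompt = f"<fim-prefix>{prefix}<fim-suffix>{suffix}<fim-middle>"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0]))
```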
## Training
This model was trained with character-level fill-in-the-middle (FIM) using [this script](https://github.com/Stillerman/santacoder-finetuning), invoked as follows:
```
python train.py --model_path=bigcode/santacoder --dataset_name=bigcode/the-stack-dedup \
        --subset=data/ruby --data_column content --split=train \
        --seq_length 2048 --max_steps 4000 --batch_size 3 \
        --gradient_accumulation_steps 8 --learning_rate 5e-5 \
        --num_warmup_steps 500 --eval_freq 1000 --save_freq 1000 \
        --log_freq 1 --num_workers=12 --no_fp16 --streaming \
        --fim_rate=0.5 --fim_spm_rate=0.5
```
Training ran for 48 hours on a single 40 GB A100.
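For reference, here is an illustrative sketch of what a character-level FIM transform with the `--fim_rate`/`--fim_spm_rate` settings above does. The exact implementation lives in the linked script; the sentinel-token layout shown is an assumption based on the base model's FIM format.
```
import random

def fim_transform(sample, fim_rate=0.5, fim_spm_rate=0.5):
    """Illustrative character-level FIM, not a verbatim copy of the training
    script: split the raw text at two random character offsets and rearrange
    it around sentinel tokens."""
    if len(sample) < 2 or random.random() >= fim_rate:
        return sample  # with fim_rate=0.5, half the samples stay left-to-right
    lo, hi = sorted(random.sample(range(len(sample) + 1), 2))
    prefix, middle, suffix = sample[:lo], sample[lo:hi], sample[hi:]
    if random.random() < fim_spm_rate:
        # SPM variant: suffix is presented before the prefix
        return f"<fim-prefix><fim-suffix>{suffix}<fim-middle>{prefix}{middle}"
    # PSM variant: prefix, suffix, then the middle to be predicted
    return f"<fim-prefix>{prefix}<fim-suffix>{suffix}<fim-middle>{middle}"
```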
## Performance
Scores on the Ruby portion of the [MultiPL-E](https://nuprl.github.io/MultiPL-E/) HumanEval benchmark:
- pass@1 = 0.10
- pass@10 = 0.14
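For context, pass@k is conventionally computed with the unbiased estimator from the Codex paper (Chen et al., 2021); a minimal sketch:
```
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: probability that at least one of k samples,
    drawn from n generations of which c are correct, passes the tests."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 20 completions per problem, 3 of them correct.
print(round(pass_at_k(n=20, c=3, k=10), 3))  # 0.895
```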