Language specific performance
Is the training data for this model Python-heavy? Is there any indication of performance on lower-level languages like C++ and Rust?
What debugging use cases is this best applied to (and what's out of scope)?
Hi, thanks for the comment. It is indeed Python-heavy, and our evaluation results were obtained on SWE-Bench-Verified (which is also Python only). That said, we do believe the technique (see https://microsoft.github.io/debug-gym/blog/2025/10/bug-pilot) used to generate the debugging trajectories could be applied to other programming languages for further finetuning.
Since we use the R2EGym scaffolding and SWE-Smith to generate synthetic bugs, the most suitable debugging cases are those that resemble the SWE-bench pipeline (i.e., given a codebase and a GitHub-like issue statement, the agent produces a script that reproduces the bug and a code patch that fixes it). Keep in mind that this is a model with many limitations (as stated in the model card), intended for research purposes.
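To make the intended task shape concrete, here is a minimal sketch of what a SWE-bench-style instance looks like from the agent's perspective. The field names and the `solve` stub are purely illustrative assumptions, not the actual R2EGym or SWE-Smith schema:

```python
from dataclasses import dataclass

@dataclass
class DebugTask:
    # Hypothetical task instance; field names are illustrative,
    # not the real SWE-bench / R2EGym data format.
    repo: str    # path or identifier of the codebase under test
    issue: str   # GitHub-like natural-language issue statement

@dataclass
class AgentOutput:
    repro_script: str  # script demonstrating the reported bug
    patch: str         # unified diff that fixes the issue

def solve(task: DebugTask) -> AgentOutput:
    # Placeholder for the agent loop: in practice the model iterates
    # between running the reproduction script, inspecting the codebase,
    # and editing files until the repro passes.
    repro = f"# reproduces issue: {task.issue}"
    patch = "--- a/module.py\n+++ b/module.py\n"
    return AgentOutput(repro_script=repro, patch=patch)

out = solve(DebugTask(repo="example/repo", issue="TypeError in parse()"))
print(out.patch.startswith("--- a/"))
```

The two-part output (reproduction script plus patch) mirrors the evaluation setup described above: a fix only counts if the issue can first be demonstrated and then no longer triggers after the patch is applied.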