Dataset schema (column: type, observed values):

- timestamp: string (2025-11-18 05:18:54 … 2025-11-18 05:23:43)
- end_timestamp: string (2025-11-18 05:19:48 … 2025-11-18 05:32:43)
- stage_name: string, 1 class
- stage_number: int64 (1 … 1)
- level: string, 1 class
- message: string, 1 class
- stdout_content: string, 2 values
- stderr_content: string, 2 values
- experiment_name: string, 1 class
- elapsed_time_seconds: float64 (54.1 … 540)
- stage_complete: bool, 1 class

| timestamp | end_timestamp | stage_name | stage_number | level | message | stdout_content | stderr_content | experiment_name | elapsed_time_seconds | stage_complete |
|---|---|---|---|---|---|---|---|---|---|---|
Row 1:

- timestamp: 2025-11-18T05:18:54.691144
- end_timestamp: 2025-11-18T05:19:48.755648
- stage_name: evaluation_eval_0
- stage_number: 1
- level: INFO
- message: Complete log capture for stage: evaluation_eval_0
- stdout_content:
[INFO] Starting stage: Evaluation - eval_0
[INFO] Starting evaluation pipeline for eval_0
[INFO] Evaluating model: Qwen/Qwen2.5-7B-Instruct
[INFO] Tasks: ['countdown_3arg', 'countdown_4arg', 'countdown_5arg', 'countdown_6arg', 'commonsenseQA', 'gsm8k', 'longmult_2dig', 'longmult_3dig', 'longmult_4dig', 'longmult_5dig', 'acronym_5o', 'acronym_4o', 'letter_countdown_5o', 'letter_countdown_4o']
[INFO] Annotators: ['best_of_n_atags']
🚀 Starting evaluation pipeline...
Loading from TAUR-dev/D-DATA-canonical_dataset_test_splits-v2-9_22_25 with configs: ['countdown_3arg', 'countdown_4arg', 'countdown_5arg', 'countdown_6arg', 'commonsenseQA', 'gsm8k', 'longmult_2dig', 'longmult_3dig', 'longmult_4dig', 'longmult_5dig', 'acronym_5o', 'acronym_4o', 'letter_countdown_5o', 'letter_countdown_4o'] and splits: ['test']
✅ Loaded 1000 samples from countdown_3arg/test
✅ Loaded 1000 samples from countdown_4arg/test
✅ Loaded 1000 samples from countdown_5arg/test
✅ Loaded 1000 samples from countdown_6arg/test
✅ Loaded 1221 samples from commonsenseQA/test
✅ Loaded 1319 samples from gsm8k/test
✅ Loaded 1000 samples from longmult_2dig/test
✅ Loaded 1000 samples from longmult_3dig/test
✅ Loaded 1000 samples from longmult_4dig/test
✅ Loaded 1000 samples from longmult_5dig/test
✅ Loaded 144 samples from acronym_5o/test
✅ Loaded 197 samples from acronym_4o/test
✅ Loaded 300 samples from letter_countdown_5o/test
✅ Loaded 300 samples from letter_countdown_4o/test
📊 Total dataset size: 11481 samples
🔧 Unpacking 'all_other_columns' JSON data...
Found extra columns: ['acronym', 'answer_index', 'answer_key', 'choices', 'difficulty', 'domain', 'evaluation_type', 'expected_answer_format', 'formed_acronym', 'id', 'length', 'letters', 'metadata', 'original_answer', 'source', 'task_config', 'task_source', 'task_type', 'variant', 'word_count', 'words']
Added 21 extra columns to dataset
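The unpacking step above merges a JSON-encoded string column into top-level dataset columns. A minimal sketch of that operation follows; the column name comes from the log, but the row structure and the helper function are illustrative assumptions, not the pipeline's actual code.

```python
import json

def unpack_json_column(rows, packed_key="all_other_columns"):
    """Merge a JSON-encoded string column into top-level columns."""
    unpacked = []
    for row in rows:
        # Missing or empty packed cells are treated as "{}".
        extra = json.loads(row.pop(packed_key, None) or "{}")
        # Existing top-level keys win on name collisions.
        unpacked.append({**extra, **row})
    return unpacked

rows = [{
    "question": "Make 24 from 3 and 8.",
    "all_other_columns": json.dumps({"id": "cd_001", "source": "countdown_3arg"}),
}]
print(unpack_json_column(rows))
```

With a real `datasets.Dataset` the same merge would typically be done via `map`, but the core pop-parse-merge logic is the same.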
🔓 No GPU restrictions specified for zaynes/runpod - allowing all available GPUs
🔄 Using runtime override: host_type=local (config default: local)
🎮 Available CUDA GPUs: ['2']
🚀 Starting 4 local servers asynchronously on 1 available GPUs...
⚠️ Could not load unified config: In 'config': Could not find 'machines/local'
Config search path:
provider=hydra, path=pkg://hydra.conf
provider=main, path=file:///workspace/skill-factory/skill_factory/config
provider=schema, path=structured://
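The Hydra warning above means the lookup for a `machines/local` config group found nothing under the listed search path. A placeholder file that would satisfy the lookup might look like the fragment below; the path follows the `provider=main` entry in the log, but every key inside is a hypothetical example, not the project's actual schema.

```yaml
# /workspace/skill-factory/skill_factory/config/machines/local.yaml
# Hypothetical contents; only the group name "machines/local" comes from the log.
host_type: local
gpus: null   # e.g. "0,1" to restrict visible GPUs; null = no restriction
```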
⚠️ Requested 4 servers but only 1 GPUs available
🔧 Starting 1 servers instead
Server 1: GPU 2, Port 8000
⚠️ Not in main thread, signal handlers not registered
🔧 vLLM command will be: python -m vllm.entrypoints.openai.api_server --model Qwen/Qwen2.5-7B-Instruct --tokenizer Qwen/Qwen2.5-1.5B-Instruct --port 8000 --host 0.0.0.0 --max-model-len 32768 --dtype auto --tensor-parallel-size 1 --max-num-seqs 256 --max-num-batched-tokens 65536 --enable-chunked-prefill --gpu-memory-utilization 0.85 --trust-remote-code --disable-log-requests --served-model-name Qwen/Qwen2.5-7B-Instruct --disable-frontend-multiprocessing
🔧 Provider args: {'port': 8000}
🚀 Starting local server...
Working directory: /workspace/skill-factory/skill_factory/analysis/scripts
Server command: CUDA_VISIBLE_DEVICES=2 TRANSFORMERS_CACHE=/tmp/transformers_cache_vllm_1763443173 HF_HOME=/workspace/.cache/huggingface/ TRANSFORMERS_OFFLINE=0 HF_HUB_DISABLE_TELEMETRY=1 VLLM_WORKER_MULTIPROC_METHOD=spawn VLLM_REQUEST_TIMEOUT=0 VLLM_ENGINE_ITERATION_TIMEOUT_S=0 VLLM_WORKER_TIMEOUT=0 VLLM_RPC_TIMEOUT=0 VLLM_KEEP_ALIVE_TIMEOUT=0 VLLM_SCHEDULER_DELAY_FACTOR=0.0 VLLM_TARGET_UTILIZATION=0.95 VLLM_NCCL_SO_PATH= CUDA_LAUNCH_BLOCKING=0 CUDA_DEVICE_MAX_CONNECTIONS=1 PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128,expandable_segments:True python -m vllm.entrypoints.openai.api_server --model Qwen/Qwen2.5-7B-Instruct --tokenizer Qwen/Qwen2.5-1.5B-Instruct --port 8000 --host 0.0.0.0 --max-model-len 32768 --dtype auto --tensor-parallel-size 1 --max-num-seqs 256 --max-num-batched-tokens 65536 --enable-chunked-prefill --gpu-memory-utilization 0.85 --trust-remote-code --disable-log-requests --served-model-name Qwen/Qwen2.5-7B-Instruct --disable-frontend-multiprocessing
Local log file: /workspace/skill-factory/skill_factory/logs/hosting/skillserver_local_8000_1763443173.log
⏳ Waiting for local server startup (timeout: 300s)...
📋 Monitoring local logs in: /workspace/skill-factory/skill_factory/logs/hosting/skillserver_local_8000_1763443173.log
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] Error in inspecting model architecture 'Qwen2ForCausalLM'
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] Traceback (most recent call last):
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/registry.py", line 834, in _run_in_subprocess
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] returned.check_returncode()
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/subprocess.py", line 502, in check_returncode
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] raise CalledProcessError(self.returncode, self.args, self.stdout,
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] subprocess.CalledProcessError: Command '['/root/miniconda/miniconda3/envs/vllm/bin/python', '-m', 'vllm.model_executor.models.registry']' returned non-zero exit status 1.
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424]
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] The above exception was the direct cause of the following exception:
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424]
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] Traceback (most recent call last):
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/registry.py", line 422, in _try_inspect_model_cls
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] return model.inspect_model_cls()
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] ^^^^^^^^^^^^^^^^^^^^^^^^^
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/registry.py", line 393, in inspect_model_cls
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] return _run_in_subprocess(
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] ^^^^^^^^^^^^^^^^^^^
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/registry.py", line 837, in _run_in_subprocess
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] raise RuntimeError(f"Error raised in subprocess:\n"
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] RuntimeError: Error raised in subprocess:
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] warnings.warn(
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] <frozen runpy>:128: RuntimeWarning: 'vllm.model_executor.models.registry' found in sys.modules after import of package 'vllm.model_executor.models', but prior to execution of 'vllm.model_executor.models.registry'; this may result in unpredictable behaviour
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] Traceback (most recent call last):
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "<frozen runpy>", line 198, in _run_module_as_main
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "<frozen runpy>", line 88, in _run_code
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/registry.py", line 858, in <module>
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] _run()
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/registry.py", line 851, in _run
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] result = fn()
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] ^^^^
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/registry.py", line 394, in <lambda>
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] lambda: _ModelInfo.from_model_cls(self.load_model_cls()))
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] ^^^^^^^^^^^^^^^^^^^^^
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/registry.py", line 397, in load_model_cls
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] mod = importlib.import_module(self.module_name)
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/importlib/__init__.py", line 90, in import_module
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] return _bootstrap._gcd_import(name[level:], package, level)
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "<frozen importlib._bootstrap_external>", line 999, in exec_module
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/models/qwen2.py", line 48, in <module>
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] from vllm.model_executor.model_loader.weight_utils import (
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/model_loader/__init__.py", line 11, in <module>
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] from vllm.model_executor.model_loader.bitsandbytes_loader import (
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/model_loader/bitsandbytes_loader.py", line 24, in <module>
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] from vllm.model_executor.layers.fused_moe import FusedMoE
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/layers/fused_moe/__init__.py", line 8, in <module>
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] from vllm.model_executor.layers.fused_moe.layer import (
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/layers/fused_moe/layer.py", line 26, in <module>
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] from vllm.model_executor.layers.fused_moe.modular_kernel import (
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/layers/fused_moe/modular_kernel.py", line 13, in <module>
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] from vllm.model_executor.layers.fused_moe.utils import ( # yapf: disable
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/layers/fused_moe/utils.py", line 9, in <module>
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] from vllm.model_executor.layers.quantization.utils.fp8_utils import (
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/layers/quantization/utils/fp8_utils.py", line 18, in <module>
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] from vllm.model_executor.layers.quantization.utils.w8a8_utils import (
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/layers/quantization/utils/w8a8_utils.py", line 70, in <module>
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] CUTLASS_FP8_SUPPORTED = cutlass_fp8_supported()
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] ^^^^^^^^^^^^^^^^^^^^^^^
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/model_executor/layers/quantization/utils/w8a8_utils.py", line 44, in cutlass_fp8_supported
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] capability_tuple = current_platform.get_device_capability()
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/platforms/cuda.py", line 46, in wrapper
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] return fn(*args, **kwargs)
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] ^^^^^^^^^^^^^^^^^^^
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/platforms/cuda.py", line 525, in get_device_capability
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] handle = pynvml.nvmlDeviceGetHandleByIndex(physical_device_id)
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/third_party/pynvml.py", line 2609, in nvmlDeviceGetHandleByIndex
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] _nvmlCheckReturn(ret)
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/third_party/pynvml.py", line 1047, in _nvmlCheckReturn
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] raise NVMLError(ret)
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] vllm.third_party.pynvml.NVMLError_InvalidArgument: Invalid Argument
🖥️ [LOCAL] (APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424]
🖥️ [LOCAL] (APIServer pid=11176) Traceback (most recent call last):
🖥️ [LOCAL] (APIServer pid=11176) Value error, Model architectures ['Qwen2ForCausalLM'] failed to be inspected. Please check the logs for more details. [type=value_error, input_value=ArgsKwargs((), {'model': ...gits_processors': None}), input_type=ArgsKwargs]
📋 Local log monitoring thread exiting cleanly
❌ Server process exited early
Exit code: 1
📋 Last 50 lines from /workspace/skill-factory/skill_factory/logs/hosting/skillserver_local_8000_1763443173.log:
================================================================================
(APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/third_party/pynvml.py", line 1047, in _nvmlCheckReturn
(APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] raise NVMLError(ret)
(APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424] vllm.third_party.pynvml.NVMLError_InvalidArgument: Invalid Argument
(APIServer pid=11176) ERROR 11-18 05:19:46 [registry.py:424]
(APIServer pid=11176) Traceback (most recent call last):
(APIServer pid=11176) File "<frozen runpy>", line 198, in _run_module_as_main
(APIServer pid=11176) File "<frozen runpy>", line 88, in _run_code
(APIServer pid=11176) File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 1918, in <module>
(APIServer pid=11176) uvloop.run(run_server(args))
(APIServer pid=11176) File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/uvloop/__init__.py", line 109, in run
(APIServer pid=11176) return __asyncio.run(
(APIServer pid=11176) ^^^^^^^^^^^^^^
(APIServer pid=11176) File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/asyncio/runners.py", line 195, in run
(APIServer pid=11176) return runner.run(main)
(APIServer pid=11176) ^^^^^^^^^^^^^^^^
(APIServer pid=11176) File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/asyncio/runners.py", line 118, in run
(APIServer pid=11176) return self._loop.run_until_complete(task)
(APIServer pid=11176) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=11176) File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
(APIServer pid=11176) File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/uvloop/__init__.py", line 61, in wrapper
(APIServer pid=11176) return await main
(APIServer pid=11176) ^^^^^^^^^^
(APIServer pid=11176) File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 1850, in run_server
(APIServer pid=11176) await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
(APIServer pid=11176) File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 1870, in run_server_worker
(APIServer pid=11176) async with build_async_engine_client(
(APIServer pid=11176) ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=11176) File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=11176) return await anext(self.gen)
(APIServer pid=11176) ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=11176) File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 178, in build_async_engine_client
(APIServer pid=11176) async with build_async_engine_client_from_engine_args(
(APIServer pid=11176) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=11176) File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=11176) return await anext(self.gen)
(APIServer pid=11176) ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=11176) File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 204, in build_async_engine_client_from_engine_args
(APIServer pid=11176) vllm_config = engine_args.create_engine_config(usage_context=usage_context)
(APIServer pid=11176) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=11176) File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/arg_utils.py", line 1057, in create_engine_config
(APIServer pid=11176) model_config = self.create_model_config()
(APIServer pid=11176) ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=11176) File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/arg_utils.py", line 904, in create_model_config
(APIServer pid=11176) return ModelConfig(
(APIServer pid=11176) ^^^^^^^^^^^^
(APIServer pid=11176) File "/root/miniconda/miniconda3/envs/vllm/lib/python3.12/site-packages/pydantic/_internal/_dataclasses.py", line 123, in __init__
(APIServer pid=11176) s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)
(APIServer pid=11176) pydantic_core._pydantic_core.ValidationError: 1 validation error for ModelConfig
(APIServer pid=11176) Value error, Model architectures ['Qwen2ForCausalLM'] failed to be inspected. Please check the logs for more details. [type=value_error, input_value=ArgsKwargs((), {'model': ...gits_processors': None}), input_type=ArgsKwargs]
(APIServer pid=11176) For further information visit https://errors.pydantic.dev/2.11/v/value_error
================================================================================
❌ GPU 2 (port 8000) failed: Failed to start local server for Qwen/Qwen2.5-7B-Instruct on GPU 2
❌ Local server on GPU 2 failed
⚠️ 1 local servers failed to start
❌ Evaluation failed: No local servers started successfully
[ERROR] Stage error: RuntimeError: No local servers started successfully
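The failure chain above bottoms out in `pynvml.nvmlDeviceGetHandleByIndex(physical_device_id)` raising `NVMLError_InvalidArgument`. NVML enumerates physical GPUs and ignores `CUDA_VISIBLE_DEVICES`, so code probing device capability must first translate the logical CUDA id into a physical index; a bad translation is one plausible way to hit this error. A minimal sketch of that mapping follows; the helper name is illustrative (vLLM's actual logic lives in `vllm/platforms/cuda.py`).

```python
import os

def logical_to_physical(logical_id: int) -> int:
    """Map a logical CUDA device id to the physical NVML index.

    With CUDA_VISIBLE_DEVICES=2, logical device 0 is physical GPU 2.
    Passing an index NVML does not recognize to
    nvmlDeviceGetHandleByIndex raises NVMLError_InvalidArgument.
    """
    visible = os.environ.get("CUDA_VISIBLE_DEVICES")
    if not visible:
        return logical_id  # all GPUs visible; ids coincide
    ids = [int(x) for x in visible.split(",") if x.strip()]
    return ids[logical_id]

os.environ["CUDA_VISIBLE_DEVICES"] = "2"
print(logical_to_physical(0))  # → 2
```

This is only a diagnostic sketch of the mapping, not a fix; the actual invalid index NVML rejected on this host is not recoverable from the log.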
|
- stderr_content (Hugging Face download progress, condensed from per-update progress bars; truncated filenames kept as captured):

- README.md: 15.3 kB downloaded
- countdown_3arg/test-00000-of-00001.parqu(…): 199 kB downloaded; test split generated (1000 examples)
- countdown_4arg/test-00000-of-00001.parqu(…): 215 kB downloaded; test split generated (1000 examples)
- countdown_5arg/test-00000-of-00001.parqu(…): 231 kB downloaded; test split generated (1000 examples)
- countdown_6arg/test-00000-of-00001.parqu(…): 245 kB downloaded; test split generated (1000 examples)
- commonsenseQA/test-00000-of-00001.parque(…): 331 kB downloaded; test split generated (1221 examples)
- gsm8k/test-00000-of-00001.parquet: 693 kB downloaded; test split generated (1319 examples)
- longmult_2dig/test-00000-of-00001.parque(…): 62.3 kB downloaded; test split generated (1000 examples)
- longmult_3dig/test-00000-of-00001.parque(…): 74.8 kB downloaded; test split generated (1000 examples)
- longmult_4dig/test-00000-of-00001.parque(…): 86.1 kB downloaded; test split generated (1000 examples)
- longmult_5dig/test-00000-of-00001.parque(…): 95.7 kB downloaded; test split generated (1000 examples)
- acronym_5o/test-00000-of-00001.parquet: 44.2 kB downloaded; test split generated (144 examples)
- acronym_4o/test-00000-of-00001.parquet: 54.0 kB downloaded; test split generated (197 examples)
- letter_countdown_5o/test-00000-of-00001.(…): 49.0 kB downloaded; test split generated (300 examples)
- letter_countdown_4o/test-00000-of-00001.(…): 47.7 kB downloaded; capture ends here (no split-generation line recorded)
letter_countdown_4o/test-00000-of-00001.(…): 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47.7k/47.7k [00:00<00:00, 56.9kB/s]
Generating test split: 0%| | 0/300 [00:00<?, ? examples/s]
Generating test split: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 300/300 [00:00<00:00, 32294.52 examples/s]
experiment_name: FinEval_16k_fulleval_3arg_Q7B_base | elapsed_time_seconds: 54.064504 | stage_complete: true

timestamp: 2025-11-18T05:23:44.204587 | end_timestamp: 2025-11-18T05:32:44.337640 | stage_name: evaluation_eval_0 | stage_number: 1 | level: INFO | message: Complete log capture for stage: evaluation_eval_0
stdout_content: "[INFO] Starting stage: Evaluation - eval_0\n[INFO] Starting evaluation pipeline for eval_0\n[INFO] (...TRUNCATED)
stderr_content: "\u001b[2;36m[11/18/25 05:24:56]\u001b[0m\u001b[2;36m \u001b[0m\u001b[34mINFO \u001b[0m Getting r(...TRUNCATED)
experiment_name: FinEval_16k_fulleval_3arg_Q7B_base | elapsed_time_seconds: 540.133053 | stage_complete: true
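The per-stage log rows above store each stage's captured stdout and stderr as single JSON-escaped strings, with the stderr stream additionally carrying ANSI color codes from rich-style logging. A minimal sketch of decoding them back into plain log lines — using short, hypothetical excerpts rather than the truncated cells above:

```python
import json
import re

# Matches ANSI SGR color/style sequences such as \x1b[2;36m and \x1b[0m.
ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")

# Hypothetical shortened excerpt of a stdout cell (a JSON-escaped string).
raw_stdout = '"[INFO] Starting stage: Evaluation - eval_0\\n[INFO] Starting evaluation pipeline for eval_0"'
# Decode the JSON string, then split the embedded \n escapes into real lines.
stdout_lines = json.loads(raw_stdout).split("\n")

# Hypothetical shortened excerpt of a stderr cell with ANSI color codes.
raw_stderr = "\x1b[2;36m[11/18/25 05:24:56]\x1b[0m \x1b[34mINFO\x1b[0m Getting results"
# Strip the color codes to recover the plain log text.
clean_stderr = ANSI_RE.sub("", raw_stderr)

print(stdout_lines)
print(clean_stderr)
```

The same two steps (JSON-decode, then ANSI-strip) apply to the full truncated cells once loaded from the dataset.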
# Experiment Tracker: FinEval_16k_fulleval_3arg_Q7B_base

Experiment Description: Simple test experiment for Skill Factory workflows.

Start Time: 2025-11-18T05:33:24.962935

Tracker Dataset: TAUR-dev/D-ExpTracker__FinEval_16k_fulleval_3arg_Q7B_base__v1

## Stages Completed

Total stages: 1

## Models Created

## Dataset Configurations

This tracker dataset contains the following configurations, each uploaded immediately as its stage completes:

- Training Data (Complete Datasets)
- Hyperparameters (Complete Configurations)
- Logs (Stage-Specific)
- Evaluation Results (Complete with Annotations)
- Metadata
  - `experiment_metadata`: Timeline and stage information

## Usage

Load specific configurations with:
```python
from datasets import load_dataset

# Load experiment metadata
metadata = load_dataset('TAUR-dev/D-ExpTracker__FinEval_16k_fulleval_3arg_Q7B_base__v1', 'experiment_metadata')

# Load complete training datasets
sft_data = load_dataset('TAUR-dev/D-ExpTracker__FinEval_16k_fulleval_3arg_Q7B_base__v1', 'training_data__sft')
sft_metadata = load_dataset('TAUR-dev/D-ExpTracker__FinEval_16k_fulleval_3arg_Q7B_base__v1', 'training_data__sft_metadata')

# Load complete configurations
sft_hyperparams = load_dataset('TAUR-dev/D-ExpTracker__FinEval_16k_fulleval_3arg_Q7B_base__v1', 'hyperparameters__sft')
rl_hyperparams = load_dataset('TAUR-dev/D-ExpTracker__FinEval_16k_fulleval_3arg_Q7B_base__v1', 'hyperparameters__rl')

# Load stage-specific logs
sft_logs = load_dataset('TAUR-dev/D-ExpTracker__FinEval_16k_fulleval_3arg_Q7B_base__v1', 'logs__sft')
rl_logs = load_dataset('TAUR-dev/D-ExpTracker__FinEval_16k_fulleval_3arg_Q7B_base__v1', 'logs__rl')

# Load evaluation results with annotations
sft_eval_results = load_dataset('TAUR-dev/D-ExpTracker__FinEval_16k_fulleval_3arg_Q7B_base__v1', 'evals_eval_sft')
rl_eval_results = load_dataset('TAUR-dev/D-ExpTracker__FinEval_16k_fulleval_3arg_Q7B_base__v1', 'evals_eval_rl')
```
## Models

### Registry

All models from this experiment are automatically registered in the SkillFactory Model Registry with:

- Complete training configuration (hyperparameters, datasets, methods)
- Experiment lineage (links back to this tracker dataset)
- Stage-specific metadata (SFT vs RL training details)
- Structured input data references (training datasets and configurations)

Registry entries follow the naming pattern: `Model - FinEval_16k_fulleval_3arg_Q7B_base - {stage_name} - {SFT/RL}`
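As a quick illustration, the naming pattern can be rendered with a small helper. This is a hypothetical sketch, not part of SkillFactory itself:

```python
def registry_entry_name(experiment: str, stage_name: str, phase: str) -> str:
    """Format a registry entry name following the documented pattern.

    Hypothetical helper: the actual registry builds these names internally.
    """
    if phase not in ("SFT", "RL"):
        raise ValueError("phase must be 'SFT' or 'RL'")
    return f"Model - {experiment} - {stage_name} - {phase}"


print(registry_entry_name("FinEval_16k_fulleval_3arg_Q7B_base", "eval_0", "SFT"))
# -> Model - FinEval_16k_fulleval_3arg_Q7B_base - eval_0 - SFT
```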
Generated by the SkillFactory Experiment Management System. All artifacts are uploaded immediately as stages complete, with full data provenance.