NeSyS World Model (ScienceWorld) — llama3-2-1b-instruct (filtered)

This repository contains a LoRA adapter (PEFT) for the paper “Neuro-Symbolic Synergy for Interactive World Modeling” (arXiv:2602.10480).

Summary

  • Environment: ScienceWorld
  • Base model: meta-llama/Llama-3.2-1B-Instruct
  • Adapter type: LoRA (PEFT)
  • Training data: filtered transitions — only those not covered by the symbolic rules
  • Re-included filtered transitions: 20%
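
The card does not spell out the filtering procedure; one plausible reading of the two bullets above is that transitions the symbolic rules already cover are dropped from training, and then a random 20% of those dropped transitions are added back. A minimal sketch of that split, where `rule_covers` is a hypothetical predicate standing in for the paper's symbolic rules:

```python
import random

def filter_transitions(transitions, rule_covers, reinclude_frac=0.2, seed=0):
    """Keep transitions NOT covered by the symbolic rules, then re-include a
    random reinclude_frac of the covered (filtered-out) ones.
    `rule_covers` and the split logic are assumptions for illustration."""
    rng = random.Random(seed)
    uncovered = [t for t in transitions if not rule_covers(t)]
    covered = [t for t in transitions if rule_covers(t)]
    reincluded = rng.sample(covered, k=int(len(covered) * reinclude_frac))
    return uncovered + reincluded

# Toy example: pretend the rules "cover" even-numbered transitions.
data = list(range(100))
kept = filter_transitions(data, rule_covers=lambda t: t % 2 == 0)
# 50 uncovered transitions survive, plus 20% of the 50 covered ones.
```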

Usage

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "meta-llama/Llama-3.2-1B-Instruct"
adapter_id = "cindermond/world-model-scienceworld-llama3-2-1b-instruct-filtered"

tokenizer = AutoTokenizer.from_pretrained(base_model, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    device_map="auto",
    torch_dtype=torch.bfloat16 if torch.cuda.is_available() else torch.float32,
)
# Attach the LoRA adapter on top of the frozen base model.
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()
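
The prompt format the adapter was trained on is not documented in this card. Purely as an illustration, a next-state query for a chat model might pack the current observation and the chosen action into chat messages; the system/user template below is an assumption, not the paper's format:

```python
# Hypothetical prompt assembly for a world-model query.
# The message template is an assumption, not the format used in the paper.
def build_world_model_messages(observation: str, action: str) -> list[dict]:
    """Pack the current observation and chosen action into chat messages."""
    return [
        {"role": "system",
         "content": "You are a world model for ScienceWorld. Given an "
                    "observation and an action, predict the next observation."},
        {"role": "user",
         "content": f"Observation: {observation}\nAction: {action}\nNext observation:"},
    ]

messages = build_world_model_messages("You are in the kitchen.", "open the fridge")
# With the tokenizer/model loaded above, generation would look like:
# inputs = tokenizer.apply_chat_template(
#     messages, add_generation_prompt=True, return_tensors="pt"
# ).to(model.device)
# with torch.no_grad():
#     output = model.generate(inputs, max_new_tokens=128, do_sample=False)
# print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```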

Citation

@article{zhao2026nesys,
  title        = {Neuro-Symbolic Synergy for Interactive World Modeling},
  author       = {Zhao, Hongyu and Zhou, Siyu and Yang, Haolin and Qin, Zengyi and Zhou, Tianyi},
  journal      = {arXiv preprint arXiv:2602.10480},
  year         = {2026}
}