Imagine-then-Plan: Agent Learning from Adaptive Lookahead with World Models
Abstract
The Imagine-then-Plan framework enables agent learning through adaptive lookahead imagination, combining imagined trajectories with current observations to guide policy learning in complex task scenarios.
Recent advances in world models have shown promise for modeling the future dynamics of environmental states, enabling agents to reason and act without accessing real environments. Current methods mainly perform single-step or fixed-horizon rollouts, leaving their potential for complex task planning under-exploited. We propose Imagine-then-Plan (ITP), a unified framework for agent learning via lookahead imagination, in which an agent's policy model interacts with the learned world model to yield multi-step "imagined" trajectories. Since the imagination horizon may vary across tasks and stages, we introduce a novel adaptive lookahead mechanism that trades off the ultimate goal against task progress. The resulting imagined trajectories provide rich signals about future consequences, such as achieved progress and potential conflicts, and are fused with current observations to form a partially observable and imaginable Markov decision process that guides policy learning. We instantiate ITP with both training-free and reinforcement-trained variants. Extensive experiments across representative agent benchmarks demonstrate that ITP significantly outperforms competitive baselines. Further analyses validate that our adaptive lookahead substantially enhances agents' reasoning capability, providing valuable insights into addressing broader, complex tasks.
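The abstract describes a core loop of roughly two stages: roll the policy forward inside the learned world model with an adaptively chosen horizon, then fuse the resulting imagined trajectory with the current observation before acting in the real environment. The snippet below is a minimal conceptual sketch of that loop, assuming hypothetical `world_model` and `policy` objects with `encode`, `step`, `goal_score`, `fuse`, and `act` methods; these names and the simple threshold-based stopping rule are placeholders standing in for the paper's goal/progress trade-off, not the actual ITP implementation.

```python
# Conceptual sketch of an Imagine-then-Plan style decision step.
# All interfaces (encode, step, goal_score, fuse, act) are hypothetical placeholders.

def adaptive_lookahead(policy, world_model, obs, goal, max_horizon=8, stop_threshold=0.9):
    """Roll the policy forward inside the learned world model, stopping early once
    the imagined state is judged close enough to the goal (a stand-in for the
    paper's trade-off between the ultimate goal and task progress)."""
    trajectory = []
    state = world_model.encode(obs)              # latent state inferred from the real observation
    for _ in range(max_horizon):
        action = policy.act(state, goal)         # propose an action from the imagined state
        state = world_model.step(state, action)  # imagine the next state; no real-environment call
        trajectory.append((action, state))
        if world_model.goal_score(state, goal) > stop_threshold:
            break                                # adaptive horizon: stop when the goal looks reached
    return trajectory

def imagine_then_plan(policy, world_model, obs, goal):
    """Fuse the imagined trajectory with the current observation before acting,
    mirroring the partially observable and imaginable MDP formulation."""
    imagined = adaptive_lookahead(policy, world_model, obs, goal)
    context = world_model.fuse(obs, imagined)    # augment the observation with lookahead signals
    return policy.act(context, goal)             # action actually executed in the real environment
```

In the training-free variant one would call `imagine_then_plan` directly at inference time, whereas the reinforcement-trained variant would additionally update the policy using the fused context as its input state.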
Community
TL;DR: An agent learning framework via lookahead imagination, where an agent's policy model interacts with the learned world model, yielding multi-step "imagined" trajectories. This imagination is conducted via a novel adaptive lookahead mechanism that trades off the ultimate goal against task progress.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Current Agents Fail to Leverage World Model as Tool for Foresight (2026)
- From Word to World: Can Large Language Models be Implicit Text-based World Models? (2025)
- Beyond Entangled Planning: Task-Decoupled Planning for Long-Horizon Agents (2026)
- Agent2World: Learning to Generate Symbolic World Models via Adaptive Multi-Agent Feedback (2025)
- Training One Model to Master Cross-Level Agentic Actions via Reinforcement Learning (2025)
- Beyond Monolithic Architectures: A Multi-Agent Search and Knowledge Optimization Framework for Agentic Search (2026)
- AstraNav-World: World Model for Foresight Control and Consistency (2025)