BagelVLA: Enhancing Long-Horizon Manipulation via Interleaved Vision-Language-Action Generation
Abstract
BagelVLA is a unified Vision-Language-Action model that integrates linguistic planning, visual forecasting, and action generation through residual flow guidance for improved manipulation tasks.
Equipping embodied agents with the ability to reason about tasks, foresee physical outcomes, and generate precise actions is essential for general-purpose manipulation. While recent Vision-Language-Action (VLA) models have leveraged pretrained foundation models, they typically focus on either linguistic planning or visual forecasting in isolation. These methods rarely integrate both capabilities to guide action generation, leading to suboptimal performance in complex, long-horizon manipulation tasks. To bridge this gap, we propose BagelVLA, a unified model that integrates linguistic planning, visual forecasting, and action generation within a single framework. Initialized from a pretrained unified understanding-and-generation model, BagelVLA is trained to interleave textual reasoning and visual prediction directly into the action execution loop. To couple these modalities efficiently, we introduce Residual Flow Guidance (RFG), which initializes generation from the current observation and uses single-step denoising to extract predictive visual features that guide action generation with minimal latency. Extensive experiments demonstrate that BagelVLA outperforms existing baselines by a significant margin on multiple simulated and real-world benchmarks, particularly on tasks requiring multi-stage reasoning.
Community
BagelVLA is a unified model that integrates linguistic planning, visual forecasting, and action generation within a single framework for long-horizon manipulation tasks.
🧠 Model Architecture
BagelVLA utilizes a Mixture-of-Transformers (MoT) architecture, comprising three independent transformers specialized for the linguistic, visual, and action modalities. To handle long-horizon tasks and improve semantic generalization, we formulate language-conditioned action learning as a long-sequence interleaved planning problem: the three modalities are arranged into a single unified sequence, and the model generates predictions for any of them conditioned on the full interleaved context.
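For concreteness, the sketch below shows one way a modality-routed Mixture-of-Transformers layer could look in PyTorch: each modality keeps its own projections and feed-forward weights, while self-attention runs jointly over the interleaved token sequence. All class names, dimensions, and the routing logic are illustrative assumptions, not the released BagelVLA implementation.

```python
# Minimal MoT-style layer sketch (assumed structure, not the authors' code):
# modality-specific QKV / FFN weights, shared attention over the interleaved sequence.
import torch
import torch.nn as nn
import torch.nn.functional as F

MODALITIES = ("text", "vision", "action")

class MoTLayer(nn.Module):
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.dim, self.heads = dim, heads
        # modality-specific parameters
        self.qkv = nn.ModuleDict({m: nn.Linear(dim, 3 * dim) for m in MODALITIES})
        self.out = nn.ModuleDict({m: nn.Linear(dim, dim) for m in MODALITIES})
        self.ffn = nn.ModuleDict({
            m: nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for m in MODALITIES
        })
        self.norm1 = nn.ModuleDict({m: nn.LayerNorm(dim) for m in MODALITIES})
        self.norm2 = nn.ModuleDict({m: nn.LayerNorm(dim) for m in MODALITIES})

    def forward(self, x, modality_ids):
        # x: (B, T, D) interleaved tokens; modality_ids: (T,) indices into MODALITIES
        B, T, D = x.shape
        q = torch.zeros(B, T, D, device=x.device)
        k, v, h = torch.zeros_like(q), torch.zeros_like(q), torch.zeros_like(q)
        # route each token through its modality-specific QKV projection
        for i, m in enumerate(MODALITIES):
            idx = modality_ids == i
            if idx.any():
                qkv = self.qkv[m](self.norm1[m](x[:, idx]))
                q[:, idx], k[:, idx], v[:, idx] = qkv.chunk(3, dim=-1)
        # joint attention over the full interleaved sequence
        def split(t):  # (B, T, D) -> (B, heads, T, D // heads)
            return t.view(B, T, self.heads, D // self.heads).transpose(1, 2)
        attn = F.scaled_dot_product_attention(split(q), split(k), split(v))
        attn = attn.transpose(1, 2).reshape(B, T, D)
        # modality-specific output projection and FFN, with residual connections
        for i, m in enumerate(MODALITIES):
            idx = modality_ids == i
            if idx.any():
                h[:, idx] = x[:, idx] + self.out[m](attn[:, idx])
                h[:, idx] = h[:, idx] + self.ffn[m](self.norm2[m](h[:, idx]))
        return h

# usage on a toy interleaved sequence
layer = MoTLayer()
tokens = torch.randn(2, 10, 512)                    # interleaved multimodal tokens
ids = torch.tensor([0, 0, 1, 1, 1, 1, 0, 2, 2, 2])  # 0=text, 1=vision, 2=action
out = layer(tokens, ids)                            # (2, 10, 512)
```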
To address the high latency in combining visual generation with control, we introduce Residual Flow Guidance (RFG). Instead of generating future frames from scratch, RFG conditions on the current observation as a strong structural prior and performs single-step denoising to predict the residual change toward the next keyframe. RFG provides a lightweight predictive visual representation that captures task-relevant dynamics with minimal overhead. This substantially reduces the computational cost of foresight while preserving its utility for action generation.
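As a rough illustration of the RFG idea, the sketch below starts from the current observation's latent, applies a single denoising step to predict the residual change toward the next keyframe, and conditions an action head on the resulting predictive features. The module names (`flow_net`, `action_head`), the dimensions, and the single-step update are assumptions for illustration, not the paper's actual implementation.

```python
# Hedged sketch of Residual Flow Guidance (RFG): single-step residual prediction
# from the current observation latent, used to guide action generation.
import torch
import torch.nn as nn

class ResidualFlowGuidance(nn.Module):
    def __init__(self, latent_dim=256, action_dim=7, horizon=16):
        super().__init__()
        # flow predictor: maps (current latent, timestep) to a residual toward the next keyframe
        self.flow_net = nn.Sequential(
            nn.Linear(latent_dim + 1, 512), nn.GELU(), nn.Linear(512, latent_dim)
        )
        # action head conditioned on current + predicted future latents
        self.action_head = nn.Sequential(
            nn.Linear(2 * latent_dim, 512), nn.GELU(), nn.Linear(512, action_dim * horizon)
        )
        self.horizon, self.action_dim = horizon, action_dim

    def forward(self, obs_latent):
        # start from the current observation latent (strong structural prior),
        # not from pure noise
        t = torch.ones(obs_latent.shape[0], 1, device=obs_latent.device)
        # single-step denoising: predict the residual change toward the next keyframe
        residual = self.flow_net(torch.cat([obs_latent, t], dim=-1))
        future_latent = obs_latent + residual  # lightweight predictive visual features
        # guide action generation with the predicted keyframe features
        actions = self.action_head(torch.cat([obs_latent, future_latent], dim=-1))
        return actions.view(-1, self.horizon, self.action_dim), future_latent

# usage on pooled features of the current frame
rfg = ResidualFlowGuidance()
obs = torch.randn(4, 256)
actions, pred = rfg(obs)  # (4, 16, 7) action chunk + predicted keyframe latent
```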
The following papers were recommended by the Semantic Scholar API
- InternVLA-A1: Unifying Understanding, Generation and Action for Robotic Manipulation (2026)
- Motus: A Unified Latent Action World Model (2025)
- PALM: Progress-Aware Policy Learning via Affordance Reasoning for Long-Horizon Robotic Manipulation (2026)
- Unified Embodied VLM Reasoning with Robotic Action via Autoregressive Discretized Pre-training (2025)
- Causal World Modeling for Robot Control (2026)
- LoLA: Long Horizon Latent Action Learning for General Robot Manipulation (2025)
- ACoT-VLA: Action Chain-of-Thought for Vision-Language-Action Models (2026)