Don't Retrieve, Navigate: Distilling Enterprise Knowledge into Navigable Agent Skills for QA and RAG
Abstract
Corpus2Skill enhances retrieval-augmented generation by structuring document corpora into hierarchical skill directories that enable language model agents to navigate and reason about information organization during query processing.
Retrieval-Augmented Generation (RAG) grounds LLM responses in external evidence but treats the model as a passive consumer of search results: it never sees how the corpus is organized or what it has not yet retrieved, limiting its ability to backtrack or combine scattered evidence. We present Corpus2Skill, which distills a document corpus into a hierarchical skill directory offline and lets an LLM agent navigate it at serve time. The compilation pipeline iteratively clusters documents, generates LLM-written summaries at each level, and materializes the result as a tree of navigable skill files. At serve time, the agent receives a bird's-eye view of the corpus, drills into topic branches via progressively finer summaries, and retrieves full documents by ID. Because the hierarchy is explicitly visible, the agent can reason about where to look, backtrack from unproductive paths, and combine evidence across branches. On WixQA, an enterprise customer-support benchmark for RAG, Corpus2Skill outperforms dense retrieval, RAPTOR, and agentic RAG baselines across all quality metrics.
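The offline compilation step described above (cluster documents, summarize each cluster, materialize the result as a tree of skill files) can be sketched as follows. This is a minimal one-level toy, not the paper's implementation: the real pipeline clusters iteratively into a multi-level hierarchy and uses an LLM to write the summaries, whereas here cluster_fn and summarize_fn are caller-supplied stand-ins and the file layout (SKILL.md plus a documents.txt of leaf IDs) is an assumption.

```python
from pathlib import Path
import tempfile

def compile_corpus(docs, cluster_fn, summarize_fn, out_dir):
    """Materialize a one-level skill tree: cluster the documents, write a
    SKILL.md summary per branch, and record document IDs at the leaves.
    (Toy stand-in for the paper's iterative, LLM-summarized pipeline.)"""
    root = Path(out_dir)
    root.mkdir(parents=True, exist_ok=True)
    clusters = cluster_fn(docs)  # {branch_name: [doc_id, ...]}
    # Top-level SKILL.md: the agent's bird's-eye view of the corpus.
    (root / "SKILL.md").write_text(
        "# Corpus overview\n" + "\n".join(f"- {name}/" for name in sorted(clusters)))
    for name, doc_ids in clusters.items():
        branch = root / name
        branch.mkdir(exist_ok=True)
        (branch / "SKILL.md").write_text(summarize_fn(name, doc_ids))
        (branch / "documents.txt").write_text("\n".join(doc_ids))
    return root

# Toy demo: cluster by doc-ID prefix, "summarize" by listing the IDs.
docs = {"billing-refunds": "...", "billing-invoices": "...", "editor-images": "..."}
def by_prefix(d):
    out = {}
    for doc_id in sorted(d):
        out.setdefault(doc_id.split("-")[0], []).append(doc_id)
    return out
summ = lambda name, ids: f"# {name}\nCovers: {', '.join(ids)}\n"
root = compile_corpus(docs, by_prefix, summ, tempfile.mkdtemp())
```

The key design point the sketch preserves is that every interior node carries a human-readable (here, trivially generated) summary, so an agent can decide where to descend without ever embedding the query.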
Community
We built and released Corpus2Skill (C2S), an agentic RAG system that replaces the traditional vector/BM25 retrieval stack with a navigable skill hierarchy the LLM browses directly at query time. On enterprise-style QA benchmarks, C2S matches or beats strong retrieval baselines with no retrieval system at serve time.
C2S compiles any corpus offline into a tree of Anthropic Skills — SKILL.md summaries at each level and document IDs at the leaves. At query time the LLM walks the tree (ls/cat via code execution) and pulls full documents via a get_document tool. No vector DB, no BM25 index, no embedding model at serve time.
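The serve-time tool surface described above (ls/cat over the tree via code execution, plus a get_document tool for full texts) might look like the following. This is a hypothetical sketch, not C2S's actual interface: the tool names follow the comment above, but the class, its signatures, and the dict-backed document store are assumptions for illustration.

```python
from pathlib import Path
import tempfile

class SkillNavigator:
    """Minimal tool surface an agent could call over a compiled skill tree.
    Hypothetical sketch: in C2S the LLM issues ls/cat through code
    execution; here the document store is a plain dict keyed by doc ID."""
    def __init__(self, root, doc_store):
        self.root = Path(root)
        self.doc_store = doc_store

    def ls(self, rel="."):
        # List a branch so the agent sees sibling topics before drilling in.
        return sorted(p.name for p in (self.root / rel).iterdir())

    def cat(self, rel):
        # Read a SKILL.md summary (or leaf file) along the chosen path.
        return (self.root / rel).read_text()

    def get_document(self, doc_id):
        # Pull the full document once the agent has located its ID.
        return self.doc_store[doc_id]

# Toy tree: one branch with a summary and a leaf list of document IDs.
root = Path(tempfile.mkdtemp())
(root / "billing").mkdir()
(root / "SKILL.md").write_text("# Corpus overview\n- billing/")
(root / "billing" / "SKILL.md").write_text("# billing\nRefund policy docs.")
(root / "billing" / "documents.txt").write_text("billing-refunds")
nav = SkillNavigator(root, {"billing-refunds": "Refunds are issued within 14 days."})
```

Because ls and cat expose the hierarchy itself, an unproductive branch costs the agent one directory listing rather than a blind re-query, which is what enables the backtracking behavior the abstract describes.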
The following similar papers were recommended by the Semantic Scholar API:
- SPD-RAG: Sub-Agent Per Document Retrieval-Augmented Generation (2026)
- IndexRAG: Bridging Facts for Cross-Document Reasoning at Index Time (2026)
- PAR²-RAG: Planned Active Retrieval and Reasoning for Multi-Hop Question Answering (2026)
- Do We Still Need GraphRAG? Benchmarking RAG and GraphRAG for Agentic Search Systems (2026)
- FRESCO: Benchmarking and Optimizing Re-rankers for Evolving Semantic Conflict in Retrieval-Augmented Generation (2026)
- TaSR-RAG: Taxonomy-guided Structured Reasoning for Retrieval-Augmented Generation (2026)
- Training the Knowledge Base through Evidence Distillation and Write-Back Enrichment (2026)