LLM Safety From Within: Detecting Harmful Content with Internal Representations Paper • 2604.18519 • Published 23 days ago • 26
ChessQA: Evaluating Large Language Models for Chess Understanding Paper • 2510.23948 • Published Oct 28, 2025
ThinkTwice: Jointly Optimizing Large Language Models for Reasoning and Self-Refinement Paper • 2604.01591 • Published Apr 2 • 42
Article Navigating the RLHF Landscape: From Policy Gradients to PPO, GAE, and DPO for LLM Alignment NormalUhr • Feb 11, 2025 • 119
SEAM: Semantically Equivalent Across Modalities Benchmark for Vision-Language Models Paper • 2508.18179 • Published Aug 25, 2025 • 9