# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
The Laban Movement Analysis project is a Gradio 5 custom component that performs video movement analysis using Laban Movement Analysis (LMA) principles combined with modern pose estimation models. It provides both a web UI and MCP-compatible API for AI agents.
## Core Architecture
- **Backend**: Custom Gradio component in `backend/gradio_labanmovementanalysis/`
- **Frontend**: Svelte components in `frontend/` for the Gradio UI
- **Demo**: Standalone Gradio app in `demo/` for testing and deployment
- **Main Entry**: `app.py` serves as the primary entry point for Hugging Face Spaces
## Key Components
- **LabanMovementAnalysis**: Main Gradio component (`labanmovementanalysis.py`)
- **Pose Estimation**: Multi-model support (MediaPipe, MoveNet, YOLO variants)
- **Notation Engine**: LMA analysis logic (`notation_engine.py`)
- **Visualizer**: Video annotation and overlay generation (`visualizer.py`)
- **Agent API**: MCP-compatible interface for AI agents (`agent_api.py`)
- **Video Processing**: Smart input handling including YouTube/Vimeo downloads (`video_downloader.py`)
## Development Commands
### Running the Application

```bash
# Main application (Hugging Face Spaces compatible)
python app.py

# Demo version
cd demo && python app.py

# Alternative demo with space configuration
python demo/space.py
```
### Package Management

```bash
# Install dependencies
pip install -r requirements.txt

# Install in development mode
pip install -e .

# Build package
python -m build

# Upload to PyPI
python -m twine upload dist/*
```
### Frontend Development

```bash
cd frontend
npm install
npm run build
```
## Pose Estimation Models
The system supports 15+ pose estimation variants:
- **MediaPipe**: `mediapipe-lite`, `mediapipe-full`, `mediapipe-heavy`
- **MoveNet**: `movenet-lightning`, `movenet-thunder`
- **YOLO v8**: `yolo-v8-n`, `yolo-v8-s`, `yolo-v8-m`, `yolo-v8-l`, `yolo-v8-x`
- **YOLO v11**: `yolo-v11-n`, `yolo-v11-s`, `yolo-v11-m`, `yolo-v11-l`, `yolo-v11-x`
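When scripting against the component or the agent API, it can help to validate a model identifier against the list above before starting a run. A minimal sketch; the `POSE_MODELS` mapping and `validate_model` helper are illustrative and not part of the package API:

```python
# Illustrative registry of the model identifiers listed above;
# not part of the package API.
POSE_MODELS = {
    "mediapipe": ["mediapipe-lite", "mediapipe-full", "mediapipe-heavy"],
    "movenet": ["movenet-lightning", "movenet-thunder"],
    "yolo-v8": ["yolo-v8-n", "yolo-v8-s", "yolo-v8-m", "yolo-v8-l", "yolo-v8-x"],
    "yolo-v11": ["yolo-v11-n", "yolo-v11-s", "yolo-v11-m", "yolo-v11-l", "yolo-v11-x"],
}

def validate_model(name: str) -> str:
    """Raise early on a typo instead of failing mid-pipeline."""
    all_models = {m for variants in POSE_MODELS.values() for m in variants}
    if name not in all_models:
        raise ValueError(f"Unknown model {name!r}; expected one of {sorted(all_models)}")
    return name
```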
## API Usage Patterns
### Standard Processing

```python
from gradio_labanmovementanalysis import LabanMovementAnalysis

analyzer = LabanMovementAnalysis()
result = analyzer.process(video_path, model="mediapipe-full")
```
### Agent API (MCP Compatible)

```python
from gradio_labanmovementanalysis.agent_api import LabanAgentAPI

api = LabanAgentAPI()
result = await api.analyze_video(video_path, model="mediapipe-full")
```
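Because `analyze_video` is a coroutine, the `await` call above must run inside an event loop. From a synchronous script, `asyncio.run` is the usual entry point; sketched here with a stub coroutine standing in for the real API call:

```python
import asyncio

async def analyze_stub(video_path: str, model: str) -> dict:
    # Hypothetical stand-in for api.analyze_video(...); the real call
    # returns the LMA analysis result.
    await asyncio.sleep(0)
    return {"video": video_path, "model": model}

# Drive the coroutine from synchronous code.
result = asyncio.run(analyze_stub("dance.mp4", "mediapipe-full"))
```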
### Enhanced Processing with Visualization

```python
json_result, viz_video = analyzer.process_video(
    video_path,
    model="mediapipe-full",
    enable_visualization=True,
    include_keypoints=True
)
```
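The returned `json_result` is JSON-serializable, so it can be persisted alongside the visualization video. A sketch; the result shape below is a placeholder, not the component's actual schema (see the samples in `examples/` for real output):

```python
import json

# Placeholder for the structure returned by process_video;
# the real schema is richer (see examples/ in the repository).
json_result = {"model": "mediapipe-full", "lma": {}, "keypoints": []}

# Serialize for storage, then restore to verify the round trip.
serialized = json.dumps(json_result, indent=2)
restored = json.loads(serialized)
```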
## File Organization
- **Examples**: JSON output samples in `examples/` (mediapipe.json, yolo*.json, etc.)
- **Version Info**: `version.py` contains package metadata
- **Configuration**: `pyproject.toml` for package building and dependencies
- **Deployment**: Both standalone (`app.py`) and demo (`demo/`) configurations
## Important Implementation Notes
- The component inherits from Gradio's base `Component` class
- Video processing supports both file uploads and URL inputs (YouTube, Vimeo, direct URLs)
- MCP server capability is enabled via `mcp_server=True` in launch configurations
- Error handling includes graceful fallbacks when optional features (such as the Agent API) are unavailable
- The system uses temporary files for video processing and cleans them up afterward
- JSON output includes both the LMA analysis and optional raw keypoint data
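The graceful-fallback pattern for optional features can be expressed with a small helper. This is a generic sketch of the pattern, not code from the repository:

```python
import importlib

def optional_import(module_name: str):
    """Return the module if importable, else None (feature disabled)."""
    try:
        return importlib.import_module(module_name)
    except ImportError:
        return None

# Example: the Agent API stays optional; the rest of the app still works
# when the import fails and agent_api is None.
agent_api = optional_import("gradio_labanmovementanalysis.agent_api")
HAS_AGENT_API = agent_api is not None
```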
## Development Considerations
- The codebase maintains backward compatibility between the demo and main app versions
- Component registration follows Gradio 5 patterns with proper export definitions
- The frontend uses modern Svelte with Gradio's component system
- Dependencies are managed through both `requirements.txt` and `pyproject.toml`
- The system is designed for both local development and cloud deployment (HF Spaces)