# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Overview

This project is a Gradio 5 custom component that analyzes movement in video using Laban Movement Analysis (LMA) principles combined with modern pose estimation models. It provides both a web UI and an MCP-compatible API for AI agents.

### Core Architecture

- **Backend**: Custom Gradio component in `backend/gradio_labanmovementanalysis/`
- **Frontend**: Svelte components in `frontend/` for the Gradio UI
- **Demo**: Standalone Gradio app in `demo/` for testing and deployment  
- **Main Entry**: `app.py` serves as the primary entry point for Hugging Face Spaces

### Key Components

1. **LabanMovementAnalysis**: Main Gradio component (`labanmovementanalysis.py`)
2. **Pose Estimation**: Multi-model support (MediaPipe, MoveNet, YOLO variants) 
3. **Notation Engine**: LMA analysis logic (`notation_engine.py`)
4. **Visualizer**: Video annotation and overlay generation (`visualizer.py`)
5. **Agent API**: MCP-compatible interface for AI agents (`agent_api.py`)
6. **Video Processing**: Smart input handling including YouTube/Vimeo downloads (`video_downloader.py`)

## Development Commands

### Running the Application
```bash
# Main application (Hugging Face Spaces compatible)
python app.py

# Demo version
cd demo && python app.py

# Alternative demo with space configuration
python demo/space.py
```

### Package Management
```bash
# Install dependencies
pip install -r requirements.txt

# Install in development mode
pip install -e .

# Build package
python -m build

# Upload to PyPI  
python -m twine upload dist/*
```

### Frontend Development
```bash
cd frontend
npm install
npm run build
```

## Pose Estimation Models

The system supports the following pose estimation model variants:

- **MediaPipe**: `mediapipe-lite`, `mediapipe-full`, `mediapipe-heavy`
- **MoveNet**: `movenet-lightning`, `movenet-thunder` 
- **YOLO v8**: `yolo-v8-n`, `yolo-v8-s`, `yolo-v8-m`, `yolo-v8-l`, `yolo-v8-x`
- **YOLO v11**: `yolo-v11-n`, `yolo-v11-s`, `yolo-v11-m`, `yolo-v11-l`, `yolo-v11-x`
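
Model variants are selected by passing one of the names above as a string. A minimal comparison sketch, assuming the `LabanMovementAnalysis.process` call documented under API Usage Patterns below (the clip path is a placeholder):

```python
from gradio_labanmovementanalysis import LabanMovementAnalysis

analyzer = LabanMovementAnalysis()
video_path = "path/to/clip.mp4"  # placeholder: local file or video URL

# Run the same clip through a few lightweight variants for comparison
results = {
    model: analyzer.process(video_path, model=model)
    for model in ("mediapipe-lite", "movenet-lightning", "yolo-v11-n")
}
```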

## API Usage Patterns

### Standard Processing
```python
from gradio_labanmovementanalysis import LabanMovementAnalysis

analyzer = LabanMovementAnalysis()
video_path = "path/to/video.mp4"  # local file or a YouTube/Vimeo/direct URL
result = analyzer.process(video_path, model="mediapipe-full")
```

### Agent API (MCP Compatible)
```python
import asyncio

from gradio_labanmovementanalysis.agent_api import LabanAgentAPI

api = LabanAgentAPI()
# analyze_video is a coroutine; await it inside async code or drive it with asyncio.run
result = asyncio.run(api.analyze_video(video_path, model="mediapipe-full"))
```

### Enhanced Processing with Visualization
```python
json_result, viz_video = analyzer.process_video(
    video_path,
    model="mediapipe-full", 
    enable_visualization=True,
    include_keypoints=True
)
```
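
The two return values can be persisted directly; a minimal sketch, assuming `json_result` is a JSON-serializable dict and `viz_video` is a path to the annotated video file (output file names are placeholders):

```python
import json
import shutil

# Save the LMA analysis alongside the annotated video
with open("analysis.json", "w") as f:
    json.dump(json_result, f, indent=2)

shutil.copy(viz_video, "annotated.mp4")
```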

## File Organization

- **Examples**: JSON output samples in `examples/` (mediapipe.json, yolo*.json, etc.)
- **Version Info**: `version.py` contains package metadata
- **Configuration**: `pyproject.toml` for package building and dependencies
- **Deployment**: Both standalone (`app.py`) and demo (`demo/`) configurations

## Important Implementation Notes

- The component inherits from Gradio's base `Component` class
- Video processing supports both file uploads and URL inputs (YouTube, Vimeo, direct URLs)
- MCP server capability is enabled via `mcp_server=True` in launch configurations (see the launch sketch after this list)
- Error handling includes graceful fallbacks when optional features (like Agent API) are unavailable
- The system uses temporary files during video processing and cleans them up afterwards
- JSON output includes both LMA analysis and optional raw keypoint data
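
A minimal launch sketch for the MCP note above, assuming a Gradio Blocks app like the one in `app.py` (the UI construction itself is elided):

```python
import gradio as gr

with gr.Blocks() as demo:
    ...  # build the LabanMovementAnalysis UI here (see app.py / demo/app.py)

# mcp_server=True exposes the app's endpoints to MCP-compatible agents
demo.launch(mcp_server=True)
```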

## Development Considerations

- The codebase maintains backward compatibility between demo and main app versions
- Component registration follows Gradio 5 patterns with proper export definitions
- Frontend uses modern Svelte with Gradio's component system
- Dependencies are managed through both requirements.txt and pyproject.toml
- The system is designed for both local development and cloud deployment (HF Spaces)