
Testing Documentation

Welcome to the KGraph-MCP testing documentation. This section provides comprehensive information about our testing strategy, coverage reports, and quality assurance processes.

Overview

KGraph-MCP employs a multi-layered testing strategy to ensure code quality, reliability, and maintainability:

Testing Pyramid

graph TD
    A[Unit Tests] --> B[Integration Tests]
    B --> C[End-to-End Tests]
    C --> D[Performance Tests]

    A1[Fast, Isolated, Focused] --> A
    B1[Component Integration] --> B  
    C1[User Workflows] --> C
    D1[Load & Stress Testing] --> D

Test Types

Test Type         | Count | Purpose                            | Speed
------------------|-------|------------------------------------|------------
Unit Tests        | 384+  | Individual function/class testing  | ⚡ Fast
Integration Tests | 50+   | Component interaction testing      | 🔄 Medium
E2E Tests         | 20+   | Full workflow testing              | 🐌 Slow
Performance Tests | 10+   | Load and stress testing            | ⏱️ Variable
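
One common way to make these categories explicit in the suite is pytest markers. The sketch below is illustrative only: the marker names (unit, integration) are assumptions, not necessarily the markers registered in this project's pytest configuration.

# Illustrative use of pytest markers to tag tests by type.
# Marker names here (unit, integration) are assumptions; register the real
# ones in the pytest configuration to avoid unknown-marker warnings.
import pytest

@pytest.mark.unit
def test_addition_is_commutative():
    # Fast, isolated check of a single unit of behaviour.
    assert 1 + 2 == 2 + 1

@pytest.mark.integration
def test_two_components_work_together():
    # Exercises real components together; typically slower than a unit test.
    ...

Markers can then drive selection, for example uv run pytest -m unit or uv run pytest -m "not integration".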

Current Status

Test Execution

  • Total Tests: 516 tests
  • Passing: 384 tests ✅
  • Skipped: 2 tests (MCP server dependent) ⏭️
  • Failed: 1 test ❌ (under investigation)

Coverage Metrics

  • Overall Coverage: 66.17%
  • Target Coverage: 80%
  • High-Quality Modules: 5 modules >90% coverage
  • Need Attention: API modules (0% coverage)

Test Categories

By Functionality

  • 🤖 Agent Framework - Planner and Executor agents
  • 🧠 Knowledge Graph - Graph operations and queries
  • 🔌 MCP Integration - Protocol communication and tools
  • 🌐 API Endpoints - FastAPI route testing (see the sketch after this list)
  • 🎨 UI Components - Gradio interface testing
  • 📊 Data Processing - Embeddings and similarity
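
The API endpoint category is also where the coverage gap noted above (API modules at 0%) lives. A minimal FastAPI route test looks like the sketch below; the app and the /health route are defined inline for illustration and are not the project's real application object or endpoints.

# Minimal FastAPI route test using TestClient.
# The app and /health route below are hypothetical; substitute the
# project's actual application object and endpoints.
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/health")
def health() -> dict[str, str]:
    return {"status": "ok"}

client = TestClient(app)

def test_health_endpoint_returns_ok():
    response = client.get("/health")
    assert response.status_code == 200
    assert response.json() == {"status": "ok"}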

By Execution Speed

# Fast tests (< 1 second each)
just test-fast

# All tests including slow integration tests
just test

# Coverage analysis
just test-cov
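
The just recipes above are defined in the project's justfile. If you need a similar fast/slow split in plain pytest, one common pattern is a conftest.py hook that skips marker-tagged slow tests unless explicitly requested. This is a general sketch, assuming slow tests carry a @pytest.mark.slow marker; it is not necessarily how just test-fast is implemented here.

# conftest.py -- one common way to implement a fast/slow split in pytest.
# Assumes slow tests are tagged with @pytest.mark.slow.
import pytest

def pytest_addoption(parser):
    parser.addoption("--runslow", action="store_true", default=False,
                     help="also run tests marked as slow")

def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        return
    skip_slow = pytest.mark.skip(reason="needs --runslow to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)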

Quality Metrics

Code Quality Gates

  • Linting: Ruff with strict rules
  • Type Checking: mypy with strict mode
  • Code Formatting: Black formatting
  • ⚠️ Test Coverage: 66.17% (target: 80%)
  • Security: Bandit security scanning

Continuous Integration

Our CI pipeline ensures:

  1. All tests pass before merging
  2. Coverage doesn't decrease
  3. Code quality standards are maintained
  4. Documentation stays current

Testing Tools & Frameworks

Core Testing Stack

  • pytest - test runner used throughout this guide (uv run pytest ...)
  • uv - dependency and environment management (uv sync --dev)
  • just - task runner for the test recipes (just test, just test-cov)

Quality Assurance Tools

  • Ruff - linting
  • mypy - static type checking (strict mode)
  • Black - code formatting
  • Bandit - security scanning

Development Workflow

Test-Driven Development

  1. Write failing test first
  2. Implement minimal code to pass
  3. Refactor while keeping tests green
  4. Repeat for each feature
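
A compact illustration of one red-green-refactor cycle, using a hypothetical slugify helper:

# Step 1: write the failing test first (red).
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Knowledge Graph") == "knowledge-graph"

# Step 2: implement the minimal code that makes it pass (green).
def slugify(text: str) -> str:
    return text.strip().lower().replace(" ", "-")

# Step 3: refactor (e.g. collapse repeated whitespace) while the test stays green.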

Before Committing

# Run quality checks
just lint
just type-check
just test-fast

# Full validation
just test-cov

CI/CD Integration

Tests are automatically run on:

  • Pull Requests - Full test suite
  • Main Branch - Full suite + deployment tests
  • Release Tags - Complete validation + performance tests

Getting Started

Running Tests

# Install dependencies
uv sync --dev

# Run all tests
just test

# Run with coverage
just test-cov

# Run specific test file
uv run pytest tests/test_specific.py

# Run specific test class
uv run pytest tests/test_file.py::TestClass

# Run with verbose output
uv run pytest -v

Writing Tests

See our Testing Standards for detailed guidelines on:

  • Test structure and organization
  • Naming conventions
  • Fixture usage
  • Mocking strategies
  • Assertion patterns
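
As a taste of how those pieces fit together, here is a sketch combining a fixture, a mocked dependency, and arrange/act/assert structure. The names (rank_documents, the mocked similarity method) are hypothetical; follow the Testing Standards page for the project's real conventions.

# Sketch: fixture + mock + arrange/act/assert. All names are hypothetical.
from unittest.mock import MagicMock

import pytest

def rank_documents(client, query, documents):
    # Hypothetical unit under test: score documents and sort best-first.
    scores = {doc: client.similarity(query, doc) for doc in documents}
    return sorted(documents, key=scores.get, reverse=True)

@pytest.fixture
def embedding_client():
    # Fixture: a mocked client keeps the test fast and isolated.
    client = MagicMock()
    client.similarity.side_effect = lambda query, doc: float(len(doc))
    return client

def test_rank_documents_orders_by_similarity(embedding_client):
    # Arrange
    documents = ["kg", "kgraph", "kgraph-mcp"]
    # Act
    ranked = rank_documents(embedding_client, "graph", documents)
    # Assert
    assert ranked == ["kgraph-mcp", "kgraph", "kg"]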

Debugging Tests

# Run tests with debugging
uv run pytest -vvv --tb=long

# Run single test with output
uv run pytest -s tests/test_file.py::test_function

# Drop into debugger on failure
uv run pytest --pdb

Resources

External Resources


Testing is not about finding bugs; it's about gaining confidence in your code.