# Deployment Guide for CX AI Agent

## Hugging Face Spaces Deployment

### Prerequisites

1. Hugging Face account
2. Hugging Face API token with write access

### Step 1: Create a New Space

1. Go to https://huggingface.co/spaces
2. Click "Create new Space"
3. Choose:
   - **Owner**: Your username or organization
   - **Space name**: `cx-ai-agent`
   - **License**: MIT
   - **Space SDK**: Gradio
   - **Space hardware**: CPU Basic (free) or upgrade for better performance

### Step 2: Upload Files

Upload these essential files to your Space:

**Required Files:**

```
app.py                   # Main Gradio app
requirements_gradio.txt  # Dependencies (rename to requirements.txt)
README_HF_SPACES.md      # Space README (rename to README.md)
app/                     # Application code
├── __init__.py
├── config.py
├── main.py
├── orchestrator.py
├── schema.py
└── logging_utils.py
agents/                  # Agent implementations
├── __init__.py
├── hunter.py
├── enricher.py
├── contactor.py
├── scorer.py
├── writer.py
├── compliance.py
├── sequencer.py
└── curator.py
mcp/                     # MCP servers
├── __init__.py
├── registry.py
└── servers/
    ├── __init__.py
    ├── calendar_server.py
    ├── email_server.py
    ├── search_server.py
    └── store_server.py
vector/                  # Vector store
├── __init__.py
├── embeddings.py
├── retriever.py
└── store.py
data/                    # Data files
├── companies.json
├── suppression.json
└── footer.txt
scripts/                 # Utility scripts
├── start_mcp_servers.sh
└── seed_vectorstore.py
```

### Step 3: Configure Secrets

In your Space settings, add these secrets:

1. Go to your Space settings
2. Click on "Repository secrets"
3. 
   Add:
   - `HF_API_TOKEN`: Your Hugging Face API token

### Step 4: Update README.md

Rename `README_HF_SPACES.md` to `README.md` and update:

- Space URL
- Social media post link
- Demo video link (after recording)

Make sure the README includes the frontmatter:

```yaml
---
title: CX AI Agent - Autonomous Multi-Agent System
emoji: 🤖
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 5.5.0
app_file: app.py
pinned: false
tags:
  - mcp-in-action-track-02
  - autonomous-agents
  - mcp
  - rag
license: mit
---
```

### Step 5: Start MCP Servers

For HF Spaces, you have two options:

#### Option A: Background Processes (Recommended for demo)

The MCP servers will start automatically when the app launches. Make sure `scripts/start_mcp_servers.sh` is executable.

#### Option B: Simplified Integration

If background processes don't work on HF Spaces, you can integrate the MCP server logic directly into the app by modifying `mcp/registry.py` to use in-memory implementations instead of separate processes.

### Step 6: Initialize Vector Store

The vector store will be initialized on first run. You can also pre-seed it by running:

```bash
python scripts/seed_vectorstore.py
```

### Step 7: Test the Deployment

1. Visit your Space URL
2. Check the System tab for health status
3. Run the pipeline with a test company
4. Verify MCP server interactions in the workflow log

---

## Local Development

### Setup

1. **Clone the repository:**

   ```bash
   git clone https://github.com/yourusername/cx_ai_agent
   cd cx_ai_agent
   ```

2. **Create virtual environment:**

   ```bash
   python3.11 -m venv .venv
   source .venv/bin/activate  # Windows: .venv\Scripts\activate
   ```

3. **Install dependencies:**

   ```bash
   pip install -r requirements_gradio.txt
   ```

4. **Set up environment:**

   ```bash
   cp .env.example .env
   # Edit .env and add your HF_API_TOKEN
   ```

5. **Start MCP servers:**

   ```bash
   bash scripts/start_mcp_servers.sh
   ```

6. **Seed vector store:**

   ```bash
   python scripts/seed_vectorstore.py
   ```

7. 
   **Run the app:**

   ```bash
   python app.py
   ```

   The app will be available at http://localhost:7860

---

## Troubleshooting

### MCP Servers Not Starting

**On HF Spaces:** If MCP servers fail to start as background processes, you can modify the implementation to use in-memory storage instead. Update `mcp/registry.py` to instantiate servers directly rather than connecting to them via HTTP.

**Locally:**

```bash
# Check if ports are already in use
lsof -i:9001,9002,9003,9004                   # Unix
netstat -ano | findstr "9001 9002 9003 9004"  # Windows

# Kill processes if needed
pkill -f "mcp/servers"  # Unix
```

### Vector Store Issues

```bash
# Rebuild the index
rm data/faiss.index
python scripts/seed_vectorstore.py
```

### HuggingFace API Issues

```bash
# Verify the token (whoami raises an error if the token is missing or invalid)
python -c "import os; from huggingface_hub import whoami; print(whoami(token=os.environ['HF_API_TOKEN'])['name'])"

# Try the fallback model if the main model is rate limited:
# edit app/config.py and change MODEL_NAME to MODEL_NAME_FALLBACK
```

---

## Performance Optimization

### For HF Spaces

1. **Upgrade Space Hardware:**
   - CPU Basic (free): Good for testing
   - CPU Upgraded: Better for demos
   - GPU: Best for production-like performance

2. **Model Selection:**
   - Default: `Qwen/Qwen2.5-7B-Instruct` (high quality)
   - Fallback: `mistralai/Mistral-7B-Instruct-v0.2` (faster)
   - For the free tier, consider smaller models like `HuggingFaceH4/zephyr-7b-beta`

3. 
   **Caching:**
   - Vector store is cached after first build
   - Consider pre-building the FAISS index in the repo

---

## Monitoring

### Health Checks

The System tab provides:

- MCP server status
- Vector store initialization status
- HF Inference API connectivity

### Logs

Check Space logs for:

- Agent execution flow
- MCP server interactions
- Error messages

---

## Security Notes

### Secrets Management

- Never commit the `.env` file
- Always use HF Spaces secrets for `HF_API_TOKEN`
- Rotate tokens regularly

### Data Privacy

- Sample data is for demonstration only
- For production, ensure GDPR/CCPA compliance
- Implement proper suppression list management

---

## Next Steps

After successful deployment:

1. **Record Demo Video:**
   - Show pipeline execution
   - Highlight MCP interactions
   - Demonstrate RAG capabilities
   - Record 1-5 minutes

2. **Create Social Media Post:**
   - Share on X/LinkedIn
   - Include the Space URL
   - Use hackathon hashtags
   - Add a demo video or GIF

3. **Submit to Hackathon:**
   - Verify the README includes the `mcp-in-action-track-02` tag
   - Add the social media link to the README
   - Add the demo video link to the README

---

## Support

For issues:

- Check HF Spaces logs
- Review the troubleshooting section
- Check GitHub issues
- Contact maintainers

---

**Good luck with your submission! 🚀**
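
---

## Appendix: In-Memory MCP Fallback Sketch

Step 5, Option B suggests replacing the HTTP-based MCP servers with in-memory implementations when background processes are unavailable on HF Spaces. The sketch below illustrates that idea only; every name in it (`InMemoryStoreServer`, `InMemoryRegistry`, the `call` dispatch) is hypothetical, since this guide does not show the real interface of `mcp/registry.py`:

```python
# Hypothetical sketch: illustrative names only, not the actual mcp/registry.py API.

class InMemoryStoreServer:
    """Stands in for mcp/servers/store_server.py without a separate process."""

    def __init__(self):
        self._data = {}

    def call(self, tool, **kwargs):
        # Dispatch tool calls that would otherwise arrive over HTTP.
        if tool == "put":
            self._data[kwargs["key"]] = kwargs["value"]
            return {"ok": True}
        if tool == "get":
            return {"ok": True, "value": self._data.get(kwargs["key"])}
        return {"ok": False, "error": f"unknown tool: {tool}"}


class InMemoryRegistry:
    """Maps server names to live objects instead of host:port pairs."""

    def __init__(self):
        self._servers = {}

    def register(self, name, server):
        self._servers[name] = server

    def call(self, name, tool, **kwargs):
        return self._servers[name].call(tool, **kwargs)


registry = InMemoryRegistry()
registry.register("store", InMemoryStoreServer())
registry.call("store", "put", key="company", value="Acme")
print(registry.call("store", "get", key="company"))  # {'ok': True, 'value': 'Acme'}
```

The design choice is simply that the registry hands back a live object instead of a URL, so agents keep calling the same `call(...)` surface whether the server runs in-process or behind HTTP.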