# Thunkable Integration Examples for Gradio API
This file contains examples of how to integrate the RetinaFace Gradio API with Thunkable.
## 1. Camera Capture and Face Detection
### Blocks Setup:
```
1. Camera1 → TakePicture
2. Convert image to base64
3. Make API call to Gradio endpoint
4. Process results
```
### Base64 Conversion Block:
```
Set app variable "imageBase64" to
    call CloudinaryAPI.convertToBase64
        mediaDB = Camera1.Picture
```
### API Call Block:
```
Web API1:
- URL: https://your-space-name.hf.space/api/predict
- Method: POST
- Headers: {"Content-Type": "application/json"}
- Body: {
    "data": [
      get app variable "imageBase64",
      "mobilenet",
      0.5,
      0.4
    ]
  }
```
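If you want to verify the endpoint and payload outside Thunkable first, a small Python script can send the same request. This is a minimal sketch, assuming the placeholder Space URL above and the four-element `data` payload (base64 image, model name, confidence threshold, NMS threshold); adjust both to match your deployment.
```python
# Minimal sketch of the same POST request in Python (requests library).
# "your-space-name.hf.space" and "test_face.jpg" are placeholders.
import base64
import requests

API_URL = "https://your-space-name.hf.space/api/predict"

with open("test_face.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {"data": [image_b64, "mobilenet", 0.5, 0.4]}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Content-Type": "application/json"},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```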
### Response Handling:
```
When Web API1 receives data:
    Set app variable "apiResponse" to responseBody
    Set app variable "detectionData" to get property "data" of apiResponse
    Set app variable "resultData" to get item 1 of list detectionData
    Set app variable "faces" to get property "faces" of resultData
    Set app variable "faceCount" to get property "total_faces" of resultData
    If faceCount > 0:
        For each item "face" in list "faces":
            Set app variable "confidence" to get property "confidence" of object "face"
            Set app variable "bbox" to get property "bbox" of object "face"
            // Draw bounding box or show results
            Set Label1.Text to join("Found face with confidence: ", confidence)
```
## 2. API Response Structure
### Gradio API Response Format:
```json
{
  "data": [
    {
      "faces": [
        {
          "bbox": {"x1": 100, "y1": 120, "x2": 200, "y2": 220},
          "confidence": 0.95,
          "landmarks": {
            "right_eye": [130, 150],
            "left_eye": [170, 150],
            "nose": [150, 170],
            "right_mouth": [135, 190],
            "left_mouth": [165, 190]
          }
        }
      ],
      "processing_time": 0.1,
      "model_used": "mobilenet",
      "total_faces": 1
    }
  ]
}
```
### Extracting Data in Thunkable:
```
// Get the main detection result
Set app variable "result" to get item 1 of list (get property "data" of responseBody)
// Extract face information
Set app variable "faces" to get property "faces" of result
Set app variable "totalFaces" to get property "total_faces" of result
Set app variable "processingTime" to get property "processing_time" of result
Set app variable "modelUsed" to get property "model_used" of result
// For each face detected
For each item "face" in list "faces":
    Set app variable "boundingBox" to get property "bbox" of face
    Set app variable "confidence" to get property "confidence" of face
    Set app variable "landmarks" to get property "landmarks" of face
```
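For reference, here is the same extraction written in Python against the sample response shown above. It is a sketch that assumes the exact field names from the format section; if your Space returns a different shape, adjust the keys accordingly.
```python
# Sketch: parse the sample Gradio response shown above.
sample_response = {
    "data": [
        {
            "faces": [
                {
                    "bbox": {"x1": 100, "y1": 120, "x2": 200, "y2": 220},
                    "confidence": 0.95,
                    "landmarks": {
                        "right_eye": [130, 150],
                        "left_eye": [170, 150],
                        "nose": [150, 170],
                        "right_mouth": [135, 190],
                        "left_mouth": [165, 190],
                    },
                }
            ],
            "processing_time": 0.1,
            "model_used": "mobilenet",
            "total_faces": 1,
        }
    ]
}

result = sample_response["data"][0]   # main detection result
faces = result["faces"]
total_faces = result["total_faces"]
processing_time = result["processing_time"]
model_used = result["model_used"]

for face in faces:
    bbox = face["bbox"]
    confidence = face["confidence"]
    landmarks = face["landmarks"]
    print(f"Face at ({bbox['x1']}, {bbox['y1']}) - confidence {confidence:.2f}")
```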
## 3. Error Handling
### Connection Error:
```
When Web API1 has error:
    Set Label_Error.Text to "Failed to connect to face detection service"
    Set Label_Error.Visible to true
```
### API Error Response:
```
When Web API1 receives data:
    If response status ≠ 200:
        Set Label_Error.Text to "API Error: Check your image format"
    Else:
        // Check for error in response data
        Set app variable "result" to get item 1 of list (get property "data" of responseBody)
        If get property "error" of result ≠ null:
            Set Label_Error.Text to get property "error" of result
        Else:
            // Process successful response
```
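The same two-level check (HTTP status first, then an `error` field inside the first `data` item) can be sketched in Python like this; the `error` key is an assumption carried over from the Thunkable block above.
```python
# Sketch: mirror the Thunkable error handling in Python.
import requests

def call_detection_api(api_url: str, payload: dict) -> dict:
    try:
        response = requests.post(api_url, json=payload, timeout=60)
    except requests.RequestException:
        # Connection-level failure (no response at all)
        raise RuntimeError("Failed to connect to face detection service")

    if response.status_code != 200:
        raise RuntimeError("API Error: Check your image format")

    result = response.json()["data"][0]
    if result.get("error"):        # error reported inside the response body
        raise RuntimeError(result["error"])
    return result                  # successful detection result
```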
## 4. Real-time Detection Loop
### Continuous Detection:
```
When Screen opens:
    Set app variable "isDetecting" to true
    Call function "startDetectionLoop"
Function startDetectionLoop:
    While app variable "isDetecting" = true:
        Camera1.TakePicture
        Wait 1 second // Adjust for performance - Gradio may be slower than FastAPI
When Camera1.AfterPicture:
    If app variable "isDetecting" = true:
        Call API for detection
```
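Outside Thunkable, the same capture-then-detect loop can be prototyped in Python with OpenCV to get a feel for the achievable frame rate. This is a sketch under the same assumptions as before (placeholder Space URL, four-element payload); OpenCV (`opencv-python`) is an extra dependency.
```python
# Sketch: poll the webcam roughly once per second and send each frame.
import base64
import time

import cv2
import requests

API_URL = "https://your-space-name.hf.space/api/predict"  # placeholder

cap = cv2.VideoCapture(0)
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, jpeg = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        image_b64 = base64.b64encode(jpeg.tobytes()).decode("utf-8")
        resp = requests.post(
            API_URL,
            json={"data": [image_b64, "mobilenet", 0.5, 0.4]},
            timeout=30,
        )
        result = resp.json()["data"][0]
        print("faces:", result.get("total_faces"))
        time.sleep(1)  # throttle; Gradio may be slower than FastAPI
finally:
    cap.release()
```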
## 5. Performance Optimization
### Image Compression:
```
Before API call:
    Set app variable "compressedImage" to
        call ImageUtils.compress
            image = Camera1.Picture
            quality = 0.7 // Reduce file size for faster upload
            maxWidth = 640 // Smaller images upload and process faster
```
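`ImageUtils.compress` above stands in for whatever compression block your project uses. An equivalent preprocessing step in Python with Pillow (resize to a 640 px maximum, re-encode as JPEG at ~70% quality, then base64 encode) looks like this sketch:
```python
# Sketch: downscale and re-encode an image before sending it to the API.
import base64
import io

from PIL import Image

def compress_to_base64(path: str, max_width: int = 640, quality: int = 70) -> str:
    img = Image.open(path).convert("RGB")
    img.thumbnail((max_width, max_width))          # keeps aspect ratio
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)  # smaller payload, faster upload
    return base64.b64encode(buf.getvalue()).decode("utf-8")
```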
### Model Selection for Performance:
```
// For real-time applications, always use MobileNet
Set app variable "modelType" to "mobilenet"
// For high-accuracy single shots, use ResNet
Set app variable "modelType" to "resnet"
```
## 6. Complete API Integration Function
### Thunkable Function: DetectFaces
```
Function DetectFaces(imageToAnalyze, selectedModel, confidenceLevel):
    // Convert image to base64
    Set local variable "imageBase64" to
        call CloudinaryAPI.convertToBase64
            mediaDB = imageToAnalyze
    // Prepare API request
    Set local variable "requestData" to create object with:
        "data" = create list with items:
            - imageBase64
            - selectedModel
            - confidenceLevel
            - 0.4 // NMS threshold
    // Make API call
    Call Web API1.POST with:
        url = "https://your-space-name.hf.space/api/predict"
        body = requestData
        headers = create object with "Content-Type" = "application/json"
    // Return to calling function
    Return "API call initiated"
```
### Response Handler Function:
```
Function ProcessDetectionResponse(responseBody):
    // Extract main result
    Set local variable "detectionResult" to get item 1 of list (get property "data" of responseBody)
    // Check for errors
    If get property "error" of detectionResult ≠ null:
        Set Label_Status.Text to get property "error" of detectionResult
        Return false
    // Process successful detection
    Set app variable "lastDetectionFaces" to get property "faces" of detectionResult
    Set app variable "lastDetectionCount" to get property "total_faces" of detectionResult
    Set app variable "lastProcessingTime" to get property "processing_time" of detectionResult
    // Update UI
    Set Label_FaceCount.Text to join("Faces detected: ", lastDetectionCount)
    Set Label_ProcessingTime.Text to join("Processing time: ", lastProcessingTime, "s")
    Return true
```
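The two functions above can be combined into a single end-to-end helper. The Python sketch below assumes the same placeholder URL, payload order, and response fields used throughout this guide:
```python
# Sketch: DetectFaces + ProcessDetectionResponse rolled into one helper.
import base64
import requests

API_URL = "https://your-space-name.hf.space/api/predict"  # placeholder

def detect_faces(image_path: str, model: str = "mobilenet",
                 confidence: float = 0.5, nms: float = 0.4) -> dict:
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = requests.post(
        API_URL,
        json={"data": [image_b64, model, confidence, nms]},
        headers={"Content-Type": "application/json"},
        timeout=60,
    )
    response.raise_for_status()

    result = response.json()["data"][0]
    if result.get("error"):
        raise RuntimeError(result["error"])

    print(f"Faces detected: {result['total_faces']}")
    print(f"Processing time: {result['processing_time']}s")
    return result
```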
## 7. Advanced Features
### Face Landmark Visualization:
```
For each face in lastDetectionFaces:
    Set local variable "landmarks" to get property "landmarks" of face
    // Extract landmark coordinates
    Set local variable "rightEye" to get property "right_eye" of landmarks
    Set local variable "leftEye" to get property "left_eye" of landmarks
    Set local variable "nose" to get property "nose" of landmarks
    Set local variable "rightMouth" to get property "right_mouth" of landmarks
    Set local variable "leftMouth" to get property "left_mouth" of landmarks
    // Draw landmarks (if using drawing components)
    Set Circle_RightEye.X to get item 1 of rightEye
    Set Circle_RightEye.Y to get item 2 of rightEye
    // ... repeat for other landmarks
```
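To check the landmark coordinates visually outside the app, you can draw them on the image with Pillow. This sketch assumes the bbox/landmark layout from the response format section:
```python
# Sketch: draw the bounding box and five landmarks on the source image.
from PIL import Image, ImageDraw

def draw_detections(image_path: str, faces: list, out_path: str = "annotated.jpg"):
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    for face in faces:
        b = face["bbox"]
        draw.rectangle([b["x1"], b["y1"], b["x2"], b["y2"]], outline="lime", width=2)
        for name, (x, y) in face["landmarks"].items():
            draw.ellipse([x - 3, y - 3, x + 3, y + 3], fill="red")  # small dot per landmark
    img.save(out_path)
```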
### Confidence Filtering:
```
Function FilterHighConfidenceFaces(allFaces, minConfidence):
    Set local variable "filteredFaces" to create empty list
    For each item "face" in list allFaces:
        Set local variable "confidence" to get property "confidence" of face
        If confidence ≥ minConfidence:
            Add face to filteredFaces
    Return filteredFaces
```
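The same filter in Python is a one-liner with a list comprehension; `minConfidence` plays the same role as in the block above:
```python
# Sketch: keep only faces at or above a confidence threshold.
def filter_high_confidence_faces(all_faces: list, min_confidence: float) -> list:
    return [face for face in all_faces if face["confidence"] >= min_confidence]
```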
## 8. UI Components for Gradio Integration
### Required Components:
```
- Camera1 (for image capture)
- Button_Detect (trigger detection)
- Label_Status (show current status)
- Label_FaceCount (display number of faces)
- Label_ProcessingTime (show API response time)
- Label_Error (error messages)
- WebAPI1 (API communication)
- Dropdown_Model (model selection)
- Slider_Confidence (confidence threshold)
```
### Component Properties:
```
Button_Detect:
  - Text: "Detect Faces"
  - Enabled: true when camera has image
Label_Status:
  - Text: "Ready to detect faces"
  - Font size: 16
Dropdown_Model:
  - Options: ["mobilenet", "resnet"]
  - Default: "mobilenet"
Slider_Confidence:
  - Min: 0.1
  - Max: 1.0
  - Default: 0.5
  - Step: 0.1
```
## 9. Testing Your Gradio Integration
### Test Checklist:
- [ ] Camera permission granted
- [ ] Internet connection available
- [ ] Gradio API endpoint accessible (test in browser first)
- [ ] Base64 conversion working correctly
- [ ] Response parsing handles Gradio format
- [ ] Error handling for API failures
- [ ] UI updates with detection results
### Debug Tips:
1. Test the Gradio web interface first at your Space URL
2. Use the built-in "API Testing" tab in Gradio
3. Verify the base64 encoding doesn't include a data URL prefix (see the snippet after this list)
4. Check that the response format matches the expected structure
5. Monitor processing times (Gradio may be slower than FastAPI)
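For tip 3, some base64 helpers return a data URL (`data:image/jpeg;base64,...`) rather than the raw string; a small guard like the following sketch strips the prefix before the payload is sent:
```python
# Sketch: remove a data URL prefix, if present, before sending the base64 string.
def strip_data_url_prefix(image_b64: str) -> str:
    if image_b64.startswith("data:") and "," in image_b64:
        return image_b64.split(",", 1)[1]   # keep only the raw base64 payload
    return image_b64
```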
## 10. Production Considerations
### Performance:
- Gradio apps may have slightly higher latency than pure FastAPI
- Use MobileNet for real-time applications
- Consider image compression for faster uploads
- Implement proper loading indicators
### Reliability:
- Handle Gradio app cold starts (the first request may time out)
- Implement retry logic for failed requests (see the sketch after this list)
- Cache successful results when appropriate
- Provide fallback options for offline scenarios
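The retry point above matters because a Space that has gone to sleep can take a while to answer its first request. A retry sketch in Python with exponential backoff (the delays and attempt count are arbitrary choices) could look like this:
```python
# Sketch: retry with exponential backoff to ride out cold starts and timeouts.
import time

import requests

def post_with_retry(url: str, payload: dict, attempts: int = 3) -> dict:
    delay = 2.0
    for attempt in range(1, attempts + 1):
        try:
            response = requests.post(url, json=payload, timeout=60)
            response.raise_for_status()
            return response.json()
        except requests.RequestException:
            if attempt == attempts:
                raise                 # give up after the last attempt
            time.sleep(delay)         # wait before retrying (cold start)
            delay *= 2                # exponential backoff: 2s, 4s, ...
```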
### User Experience:
- Show clear loading states during API calls
- Provide informative error messages
- Allow users to switch between models
- Display confidence scores and processing times
## 11. Sample Thunkable Blocks Layout
### Main Detection Flow:
```
When Button_Detect.Click:
    → Set Label_Status.Text to "Capturing image..."
    → Camera1.TakePicture
When Camera1.AfterPicture:
    → Set Label_Status.Text to "Converting to base64..."
    → Call CloudinaryAPI.convertToBase64
When CloudinaryAPI.GotBase64:
    → Set Label_Status.Text to "Detecting faces..."
    → Set app variable "imageB64" to base64Result
    → Call function DetectFaces
When WebAPI1.GotText:
    → Set Label_Status.Text to "Processing results..."
    → Call function ProcessDetectionResponse
    → Set Label_Status.Text to "Detection complete!"
```
This comprehensive guide should help you successfully integrate your Gradio-based RetinaFace API with Thunkable!