diff --git a/ESP32_CAM_IMPLEMENTATION_SUMMARY.md b/ESP32_CAM_IMPLEMENTATION_SUMMARY.md index 85dbb82..6b4fd03 100644 --- a/ESP32_CAM_IMPLEMENTATION_SUMMARY.md +++ b/ESP32_CAM_IMPLEMENTATION_SUMMARY.md @@ -1,521 +1,6 @@ -# ESP32-CAM Integration Implementation Summary -## Overview - -Successfully implemented a comprehensive, production-ready ESP32-CAM integration module for the Accelerapp platform. This implementation provides enterprise-grade camera functionality with seamless integration into existing infrastructure. - -## Implementation Statistics - -### Code Metrics -- **Total lines of code**: 2,763 lines (Python + Arduino + YAML) -- **Test lines**: 441 lines -- **Example code**: 369 lines -- **Documentation**: 497 lines -- **Files created**: 27 files -- **Test coverage**: 25 tests, 100% passing - -### Module Breakdown -- Core modules: 9 files (1,854 lines) -- Drivers: 2 files (216 lines) -- Protocols: 2 files (290 lines) -- Utilities: 3 files (416 lines) -- Configuration: 3 YAML files (103 lines) -- Firmware: 1 Arduino file (172 lines) - -## Features Implemented - -### 1. Core Camera Interface (core.py - 305 lines) - -**Classes:** -- `CameraResolution` - Enum for supported resolutions (QVGA to UXGA) -- `CameraModel` - Enum for sensor types (OV2640, OV3660, OV5640) -- `FrameFormat` - Enum for image formats (JPEG, RGB565, YUV422, Grayscale) -- `CameraConfig` - Dataclass for camera configuration -- `ESP32Camera` - Main camera interface class - -**Key Features:** -- Multi-board support (AI-Thinker, ESP32-S3-CAM, TTGO) -- Automatic pin configuration based on board type -- Multiple resolution support (320x240 to 1600x1200) -- Multiple frame formats -- Image adjustments (brightness, contrast, saturation, flip/mirror) -- Configurable JPEG quality (0-63) -- Frame rate control (1-60 fps) -- Statistics tracking -- Complete lifecycle management (initialize, capture, stream, shutdown) - -**Methods (14):** -- `initialize()` - Hardware initialization -- `capture_image()` - Single image capture -- `start_streaming()` / `stop_streaming()` - Streaming control -- `set_resolution()` - Change resolution -- `set_quality()` - Adjust JPEG quality -- `set_brightness()` - Brightness control -- `set_flip()` - Flip/mirror settings -- `get_status()` - Status reporting -- `get_config()` - Configuration retrieval -- `reset()` - Reset to defaults -- `shutdown()` - Clean shutdown - -### 2. Streaming Infrastructure (streaming.py - 215 lines) - -**Classes:** -- `StreamProtocol` - Enum for protocols (MJPEG, RTSP, WebRTC, WebSocket) -- `StreamConfig` - Dataclass for stream configuration -- `StreamingServer` - Multi-protocol streaming server -- `MJPEGStreamer` - MJPEG-specific implementation -- `RTSPServer` - RTSP-specific implementation - -**Key Features:** -- Multi-protocol support -- Multi-client management (up to 5 concurrent clients) -- Client tracking and statistics -- Frame callback system -- Automatic port assignment -- Stream URL generation -- Bitrate control - -**Methods (12):** -- `start()` / `stop()` - Server control -- `add_client()` / `remove_client()` - Client management -- `get_client_count()` / `get_clients()` - Client queries -- `register_frame_callback()` - Event registration -- `get_stream_url()` - URL generation -- `get_status()` - Status reporting - -### 3. Motion Detection (motion_detection.py - 215 lines) - -**Classes:** -- `MotionSensitivity` - Enum for sensitivity levels -- `MotionEvent` - Dataclass for motion events -- `MotionDetector` - Motion detection system - -**Key Features:** -- Configurable sensitivity (Low, Medium, High, Very High) -- Event-driven callback system -- Automatic recording on motion -- Configurable thresholds and cooldown -- Minimum area detection -- Multiple callback support - -**Methods (11):** -- `enable()` / `disable()` - Detection control -- `set_sensitivity()` - Sensitivity adjustment -- `register_callback()` / `unregister_callback()` - Event handling -- `start_recording_on_motion()` / `stop_recording_on_motion()` - Auto-record -- `get_status()` / `get_config()` / `set_config()` - Configuration - -### 4. Digital Twin Integration (digital_twin.py - 180 lines) - -**Classes:** -- `CameraDigitalTwin` - Digital twin interface - -**Key Features:** -- Real-time state synchronization -- State history tracking (1000 snapshots default) -- Telemetry data collection -- Health monitoring -- Predictive maintenance -- Performance analytics -- Complete data export - -**Methods (7):** -- `sync_state()` - State synchronization -- `get_telemetry()` - Telemetry collection -- `get_state_history()` - Historical data -- `predict_maintenance()` - Maintenance prediction -- `get_analytics()` - Performance analytics -- `export_twin_data()` - Data export - -### 5. Web Interface & API (web_interface.py - 175 lines) - -**Classes:** -- `CameraWebInterface` - Web-based control interface - -**Key Features:** -- RESTful API design -- Multiple endpoints (status, config, capture, stream, settings) -- Request handling -- Response formatting -- Settings management -- API information - -**API Endpoints (6):** -- `GET /api/status` - Camera status -- `GET /api/config` - Configuration -- `POST /api/capture` - Capture image -- `POST /api/stream/start` - Start streaming -- `POST /api/stream/stop` - Stop streaming -- `GET/POST /api/settings` - Settings management - -### 6. Storage Management (storage.py - 240 lines) - -**Classes:** -- `StorageType` - Enum for storage types -- `FileFormat` - Enum for file formats -- `StorageConfig` - Dataclass for storage configuration -- `StorageManager` - Storage management system - -**Key Features:** -- Multiple storage types (SD card, SPIFFS, RAM) -- Automatic file organization -- Timestamp-based naming -- Auto-cleanup with configurable thresholds -- File management operations -- Storage statistics -- Cloud upload support (extensible) - -**Methods (11):** -- `initialize()` - Storage initialization -- `save_image()` / `save_video()` - File saving -- `delete_file()` - File deletion -- `list_files()` - File listing -- `get_storage_info()` - Statistics -- `format_storage()` - Format/clear -- `upload_to_cloud()` - Cloud upload - -### 7. Security Management (security.py - 285 lines) - -**Classes:** -- `AuthMethod` - Enum for auth methods -- `AccessLevel` - Enum for access levels -- `SecurityConfig` - Dataclass for security configuration -- `CameraSecurityManager` - Security management system - -**Key Features:** -- Multiple authentication methods (None, Basic, Token, Certificate) -- Token-based authentication with secure generation -- Role-based access control (Guest, User, Admin, Owner) -- User management -- Failed login tracking -- Automatic lockout protection -- Permission checking -- Encryption support -- Audit logging (extensible) - -**Methods (13):** -- `add_user()` / `remove_user()` - User management -- `authenticate()` - Authentication -- `validate_token()` / `revoke_token()` - Token management -- `check_permission()` - Permission checking -- `enable_encryption()` / `disable_encryption()` - Encryption control -- `get_security_status()` - Status reporting -- `list_users()` - User listing -- `audit_log()` - Audit logging - -### 8. Camera Sensor Drivers - -**OV2640 Driver (ov2640.py - 105 lines):** -- 2 Megapixel CMOS sensor -- Maximum resolution: UXGA (1600x1200) -- Supported resolutions: 6 standard sizes -- JPEG/RGB/YUV format support -- Capabilities reporting -- Status monitoring - -**OV3660 Driver (ov3660.py - 105 lines):** -- 3 Megapixel CMOS sensor -- Maximum resolution: QXGA (2048x1536) -- Supported resolutions: 7 standard sizes -- Improved low-light performance -- Same interface as OV2640 - -### 9. Streaming Protocols - -**MJPEG Protocol (mjpeg.py - 110 lines):** -- HTTP-based streaming -- Sequential JPEG frames -- Multipart boundary formatting -- Default port: 81 -- Frame counting -- Status tracking - -**RTSP Protocol (rtsp.py - 140 lines):** -- Industry-standard RTSP -- Session management -- SDP description generation -- Default port: 8554 -- Multiple session support -- URL generation - -### 10. Utilities - -**Image Processing (image_processing.py - 110 lines):** -- Brightness calculation -- Motion region detection -- Image resizing -- Filter application -- Image information extraction - -**Network Helpers (network.py - 103 lines):** -- IP address validation -- Port number validation -- URL formatting -- Network info retrieval -- Connectivity checking - -**Configuration Validation (validation.py - 133 lines):** -- Camera config validation -- Streaming config validation -- Security config validation -- Comprehensive error reporting - -### 11. Configuration System - -**Default Configuration (default.yaml):** -- Device settings -- Camera parameters -- Streaming configuration -- Motion detection settings -- Storage configuration -- Security settings -- Network settings -- Web interface settings -- Digital twin settings - -**Board-Specific Configs:** -- AI-Thinker configuration (ai_thinker.yaml) -- ESP32-S3-CAM configuration (esp32_s3_cam.yaml) -- Pin mappings -- Power requirements -- Feature listings - -### 12. Arduino Firmware - -**Base Firmware (base_firmware.ino - 172 lines):** -- Camera initialization -- WiFi connectivity -- Web server implementation -- Image capture endpoint -- Stream endpoint -- Status endpoint -- PSRAM support -- AI-Thinker pin configuration ## Integration Points ### Hardware Abstraction Layer -- Exported from `accelerapp.hardware` module -- Compatible with existing HAL infrastructure -- Follows established component patterns - -### Digital Twin Platform -- `CameraDigitalTwin` class integrates seamlessly -- Real-time state synchronization -- Historical tracking -- Predictive maintenance -- Performance analytics - -### Observability -- Comprehensive logging throughout -- Metrics collection -- Error tracking -- Health monitoring -- Status reporting - -### Security Framework -- Token-based authentication -- Role-based access control -- Encryption support -- Audit logging hooks -- Compatible with enterprise security - -## Testing - -### Test Coverage (test_camera.py - 441 lines) - -**25 Comprehensive Tests:** -1. `test_camera_import` - Module import -2. `test_camera_config_creation` - Configuration creation -3. `test_camera_initialization` - Hardware initialization -4. `test_camera_capture` - Image capture -5. `test_camera_streaming` - Video streaming -6. `test_camera_settings` - Settings adjustment -7. `test_camera_status` - Status reporting -8. `test_streaming_server` - Streaming server -9. `test_streaming_url` - URL generation -10. `test_motion_detection` - Motion detection -11. `test_motion_callbacks` - Callback system -12. `test_digital_twin` - Digital twin integration -13. `test_predictive_maintenance` - Predictive maintenance -14. `test_web_interface` - Web interface -15. `test_storage_manager` - Storage management -16. `test_storage_file_operations` - File operations -17. `test_security_manager` - Security management -18. `test_security_failed_login` - Failed login handling -19. `test_pin_configurations` - Pin configurations -20. `test_camera_models` - Sensor drivers -21. `test_streaming_protocols` - Protocol implementations -22. `test_config_validation` - Configuration validation -23. `test_network_utilities` - Network utilities -24. `test_camera_reset` - Reset functionality -25. `test_camera_shutdown` - Shutdown process - -**All tests pass:** ✓ 25/25 (100%) - -## Examples & Documentation - -### Demo Script (esp32_cam_demo.py - 369 lines) - -**8 Demonstration Scenarios:** -1. Basic Camera Operations -2. Video Streaming -3. Motion Detection -4. Digital Twin Integration -5. Web Interface & API -6. Storage Management -7. Security Management -8. Advanced Features - -**Output:** -- Clear, formatted output -- Step-by-step demonstrations -- Status reporting -- Error handling - -### Documentation (ESP32_CAM_INTEGRATION.md - 497 lines) - -**Complete Guide Including:** -- Feature overview -- Quick start examples -- Configuration guide -- API reference -- Board support details -- Integration points -- Architecture diagram -- Performance metrics -- Troubleshooting -- Future enhancements - -## Quality Standards - -### Code Quality -✓ Follows existing Accelerapp patterns -✓ Type hints throughout -✓ Comprehensive docstrings -✓ Error handling -✓ Thread safety (locks where needed) -✓ Clean separation of concerns -✓ Modular design -✓ Extensible architecture - -### Testing -✓ 100% test pass rate -✓ Unit tests for all major features -✓ Integration tests included -✓ Edge case coverage -✓ Error condition testing - -### Documentation -✓ Module-level documentation -✓ Class documentation -✓ Method documentation -✓ Example code -✓ Configuration guides -✓ Troubleshooting guides - -### Security -✓ Secure password hashing (SHA-256) -✓ Token-based authentication -✓ Role-based access control -✓ Failed login tracking -✓ Automatic lockout -✓ Encryption support -✓ Audit logging hooks - -## Backward Compatibility - -✓ No breaking changes to existing code -✓ Additive only - new module added -✓ All existing tests still pass (32/32) -✓ Compatible with all existing infrastructure -✓ Follows established patterns - -## Performance Characteristics - -- Image capture: ~100ms (simulated) -- Stream latency: <200ms (MJPEG) -- Motion detection: Real-time capable -- Multi-client support: 5 concurrent streams -- Storage: Auto-cleanup at 80% capacity -- Memory efficient: Configurable history limits - -## Production Readiness - -### Ready for Deployment -✓ Complete feature set -✓ Comprehensive testing -✓ Full documentation -✓ Example implementations -✓ Security features -✓ Error handling -✓ Performance optimized -✓ Scalable architecture - -### Not Included (Future Enhancements) -- TinyML model deployment -- Real-time object detection -- Face recognition -- QR code/barcode scanning -- WebRTC implementation (protocol handler exists) -- Advanced image processing -- Cloud storage integration (upload hooks exist) -- Mobile app support - -## File Structure - -``` -src/accelerapp/hardware/camera/ -├── __init__.py # Module exports (23 lines) -├── esp32_cam/ -│ ├── __init__.py # ESP32-CAM exports (25 lines) -│ ├── core.py # Camera interface (305 lines) -│ ├── streaming.py # Streaming server (215 lines) -│ ├── motion_detection.py # Motion detection (215 lines) -│ ├── digital_twin.py # Digital twin (180 lines) -│ ├── web_interface.py # Web interface (175 lines) -│ ├── storage.py # Storage management (240 lines) -│ ├── security.py # Security (285 lines) -│ ├── firmware/ -│ │ └── base_firmware.ino # Arduino firmware (172 lines) -│ └── configs/ -│ ├── default.yaml # Default config (54 lines) -│ ├── ai_thinker.yaml # AI-Thinker config (41 lines) -│ └── esp32_s3_cam.yaml # ESP32-S3 config (54 lines) -├── drivers/ -│ ├── __init__.py # Driver exports (11 lines) -│ ├── ov2640.py # OV2640 driver (105 lines) -│ └── ov3660.py # OV3660 driver (105 lines) -├── protocols/ -│ ├── __init__.py # Protocol exports (11 lines) -│ ├── mjpeg.py # MJPEG protocol (110 lines) -│ └── rtsp.py # RTSP protocol (140 lines) -└── utils/ - ├── __init__.py # Utility exports (13 lines) - ├── image_processing.py # Image utilities (110 lines) - ├── network.py # Network helpers (103 lines) - └── validation.py # Config validation (133 lines) - -tests/ -└── test_camera.py # Test suite (441 lines) - -examples/ -└── esp32_cam_demo.py # Demo script (369 lines) - -docs/ -└── ESP32_CAM_INTEGRATION.md # Documentation (497 lines) -``` - -## Summary - -Successfully delivered a comprehensive, production-ready ESP32-CAM integration module that: - -1. **Meets all core requirements** from the problem statement -2. **Integrates seamlessly** with existing Accelerapp infrastructure -3. **Provides enterprise-grade features** (security, monitoring, management) -4. **Is fully tested** with 25 comprehensive tests (100% passing) -5. **Is well documented** with guide, examples, and inline documentation -6. **Follows best practices** in code quality, security, and architecture -7. **Is ready for production** deployment and scaling -The implementation provides a solid foundation for ESP32-CAM support that can be extended with additional features (TinyML, advanced AI, etc.) as needed. diff --git a/docs/ESP32_CAM_GUIDE.md b/docs/ESP32_CAM_GUIDE.md new file mode 100644 index 0000000..8b71092 --- /dev/null +++ b/docs/ESP32_CAM_GUIDE.md @@ -0,0 +1,520 @@ +# ESP32-CAM Comprehensive Guide + +## Overview + +The ESP32-CAM module provides comprehensive camera support for the Accelerapp hardware control platform. It integrates advanced features including TinyML AI processing, multi-protocol streaming, motion detection, remote access, and web-based management. + +## Features + +### 1. Core Camera Support +- **Multi-variant support**: AI-Thinker, TTGO, WROVER Kit, ESP-EYE, M5Stack +- **Multiple sensors**: OV2640, OV5640, OV3660, OV7670 +- **Flexible configuration**: Frame size, pixel format, quality settings +- **Advanced controls**: Brightness, contrast, saturation, sharpness, flip settings + +### 2. Multi-Protocol Streaming +- **MJPEG**: Simple HTTP-based streaming +- **RTSP**: Real-Time Streaming Protocol +- **WebRTC**: Low-latency peer-to-peer streaming +- **HTTP**: Standard HTTP streaming +- **Adaptive quality**: Automatic quality adjustment based on bandwidth +- **Multiple concurrent streams**: Support for multiple clients + +### 3. AI Processing +- **TinyML integration**: Edge AI inference using TensorFlow Lite Micro +- **Person detection**: Detect humans in frame +- **Face detection**: Detect and recognize faces +- **Object detection**: General object detection +- **Gesture recognition**: Detect hand gestures +- **Custom models**: Support for custom TFLite models + +### 4. Motion Detection +- **Multiple algorithms**: Frame difference, background subtraction, optical flow +- **Detection zones**: Define specific areas for motion detection +- **Event-driven**: Trigger callbacks on motion events +- **QR code scanning**: Detect and decode QR codes +- **Configurable sensitivity**: Adjust detection threshold + +### 5. Remote Access +- **Secure tunneling**: ngrok, Cloudflare, or custom tunnels +- **Multiple auth methods**: None, Basic, Token, OAuth, Certificate +- **Access control**: IP whitelisting, rate limiting +- **Session management**: Track and manage active sessions +- **Audit logging**: Record all access attempts + +### 6. Web Interface +- **RESTful API**: Complete camera control API +- **Web UI**: Built-in web interface for management +- **Live view**: Real-time camera preview +- **Settings management**: Configure camera from web browser +- **CORS support**: Cross-origin resource sharing + +## Installation + +```bash +pip install accelerapp +``` + +## Quick Start + +### Basic Camera Setup + +```python +from accelerapp.hardware.camera.esp32_cam import ( + ESP32Camera, + CameraVariant, + CameraConfig, +) + +# Create camera with AI-Thinker variant +config = CameraConfig( + variant=CameraVariant.AI_THINKER, + frame_size=FrameSize.VGA, + jpeg_quality=12, +) + +camera = ESP32Camera(config) +camera.initialize() + +# Capture a frame +frame = camera.capture_frame() +``` + +### Streaming Setup + +```python +from accelerapp.hardware.camera.esp32_cam import ( + StreamingManager, + StreamingProtocol, + StreamConfig, +) + +# Configure MJPEG streaming +stream_config = StreamConfig( + protocol=StreamingProtocol.MJPEG, + port=8080, + fps_target=15, +) + +streaming = StreamingManager(camera, stream_config) +stream_info = streaming.start_stream() + +print(f"Stream available at: {stream_info['urls']['mjpeg']}") +``` + +### AI Processing + +```python +from accelerapp.hardware.camera.esp32_cam import ( + AIProcessor, + DetectionModel, + ModelConfig, +) + +# Setup person detection +model_config = ModelConfig( + model_type=DetectionModel.PERSON_DETECTION, + confidence_threshold=0.7, +) + +ai = AIProcessor(camera, model_config) +ai.load_model() + +# Detect people in frame +detections = ai.detect() +for detection in detections: + print(f"Detected: {detection.label} ({detection.confidence:.2f})") +``` + +### Motion Detection + +```python +from accelerapp.hardware.camera.esp32_cam import ( + MotionDetector, + MotionConfig, +) + +# Setup motion detection +motion_config = MotionConfig( + threshold=20, + min_area=500, +) + +motion = MotionDetector(camera, motion_config) + +# Add event callback +def on_motion(event): + print(f"Motion detected! Confidence: {event.confidence}") + +motion.add_event_callback(on_motion) + +# Detect motion +if motion.detect_motion(): + print("Motion detected!") +``` + +### Remote Access + +```python +from accelerapp.hardware.camera.esp32_cam import ( + RemoteAccess, + AuthConfig, + TunnelConfig, + AuthMethod, + TunnelType, +) + +# Setup secure remote access +auth_config = AuthConfig( + method=AuthMethod.TOKEN, + access_token="your_secure_token", +) + +tunnel_config = TunnelConfig( + tunnel_type=TunnelType.NGROK, + ngrok_auth_token="your_ngrok_token", +) + +remote = RemoteAccess(camera, auth_config, tunnel_config) + +# Start tunnel +info = remote.start_tunnel() +print(f"Camera accessible at: {info['public_url']}") +``` + +### Web Interface + +```python +from accelerapp.hardware.camera.esp32_cam import ( + WebInterface, + APIConfig, +) + +# Setup web interface +api_config = APIConfig( + port=80, + enable_api=True, + enable_web_ui=True, +) + +web = WebInterface(camera, api_config) + +# API endpoints available: +# GET /api/camera/status +# GET /api/camera/capture +# GET /api/camera/config +# PUT /api/camera/config +# POST /api/stream/start +# POST /api/stream/stop +# PUT /api/settings/quality +# PUT /api/settings/brightness +# PUT /api/settings/flip +``` + +## Hardware Variants + +### AI-Thinker (Default) +Most common ESP32-CAM variant. + +```python +config = CameraConfig(variant=CameraVariant.AI_THINKER) +camera = ESP32Camera(config) +``` + +### WROVER Kit +ESP32-WROVER-KIT development board. + +```python +config = CameraConfig(variant=CameraVariant.WROVER_KIT) +camera = ESP32Camera(config) +``` + +### ESP-EYE +ESP32-EYE AI development board. + +```python +config = CameraConfig(variant=CameraVariant.ESP_EYE) +camera = ESP32Camera(config) +``` + +### M5Stack Camera +M5Stack camera modules. + +```python +config = CameraConfig(variant=CameraVariant.M5STACK_CAMERA) +camera = ESP32Camera(config) +``` + +## Firmware Generation + +Generate firmware code for ESP32: + +```python +# Camera configuration +firmware_code = camera.generate_firmware_config() + +# Streaming code +stream_code = streaming.generate_streaming_code() + +# AI inference code +ai_code = ai.generate_inference_code() + +# Motion detection code +motion_code = motion.generate_motion_detection_code() + +# Remote access code +remote_code = remote.generate_remote_access_code() +``` + +## Digital Twin Integration + +Integrate with Accelerapp's Digital Twin platform: + +```python +config = CameraConfig( + twin_id="camera_001", + twin_sync_interval=60, # seconds + enable_metrics=True, + enable_health_checks=True, +) + +camera = ESP32Camera(config) +``` + +## TinyML Integration + +Integrate with TinyMLAgent: + +```python +from accelerapp.agents import TinyMLAgent + +# Setup AI processor +ai = AIProcessor(camera) + +# Get TinyML integration spec +spec = ai.integrate_with_tinyml_agent() + +# Use with TinyMLAgent +tinyml_agent = TinyMLAgent() +result = tinyml_agent.generate(spec) +``` + +## Advanced Configuration + +### Custom Pin Configuration + +```python +config = CameraConfig( + variant=CameraVariant.GENERIC, + pin_pwdn=32, + pin_reset=-1, + pin_xclk=0, + pin_sscb_sda=26, + pin_sscb_scl=27, + # ... other pins +) +``` + +### Camera Settings + +```python +# Adjust quality +camera.set_quality(10) # 0-63, lower is better + +# Adjust brightness +camera.set_brightness(1) # -2 to 2 + +# Adjust contrast +camera.set_contrast(1) # -2 to 2 + +# Flip image +camera.set_flip(horizontal=True, vertical=False) + +# Set frame size +camera.set_frame_size(FrameSize.SVGA) +``` + +### Streaming Quality Presets + +```python +from accelerapp.hardware.camera.esp32_cam import StreamQuality + +# Low quality (320x240) +config = StreamConfig(quality=StreamQuality.LOW) + +# Medium quality (640x480) +config = StreamConfig(quality=StreamQuality.MEDIUM) + +# High quality (800x600) +config = StreamConfig(quality=StreamQuality.HIGH) + +# Ultra quality (1024x768+) +config = StreamConfig(quality=StreamQuality.ULTRA) +``` + +## API Reference + +### ESP32Camera + +Main camera interface class. + +**Methods:** +- `initialize()`: Initialize camera hardware +- `capture_frame()`: Capture a single frame +- `set_quality(quality)`: Set JPEG quality (0-63) +- `set_frame_size(size)`: Set frame size +- `set_brightness(value)`: Set brightness (-2 to 2) +- `set_contrast(value)`: Set contrast (-2 to 2) +- `set_flip(h, v)`: Set flip settings +- `get_status()`: Get camera status +- `get_config()`: Get configuration +- `generate_firmware_config()`: Generate firmware code +- `shutdown()`: Shutdown camera + +### StreamingManager + +Manages video streaming. + +**Methods:** +- `start_stream(stream_id)`: Start streaming +- `stop_stream(stream_id)`: Stop specific stream +- `stop_all_streams()`: Stop all streams +- `get_stream_stats()`: Get statistics +- `generate_streaming_code()`: Generate firmware code + +### AIProcessor + +AI processing engine. + +**Methods:** +- `load_model(path)`: Load TinyML model +- `detect(frame)`: Run detection +- `get_statistics()`: Get AI statistics +- `generate_inference_code()`: Generate firmware code +- `integrate_with_tinyml_agent()`: Get TinyML spec + +### MotionDetector + +Motion detection engine. + +**Methods:** +- `detect_motion(frame)`: Detect motion +- `add_event_callback(callback)`: Add event handler +- `get_statistics()`: Get statistics +- `generate_motion_detection_code()`: Generate firmware + +### QRScanner + +QR code scanner. + +**Methods:** +- `scan(frame)`: Scan for QR codes +- `get_statistics()`: Get scan statistics +- `generate_qr_scanner_code()`: Generate firmware + +### RemoteAccess + +Remote access manager. + +**Methods:** +- `start_tunnel()`: Start cloud tunnel +- `stop_tunnel()`: Stop tunnel +- `authenticate(credentials)`: Authenticate user +- `create_session(user, ip)`: Create session +- `end_session(session_id)`: End session +- `get_status()`: Get remote access status +- `get_access_log()`: Get access log + +### WebInterface + +Web interface and API. + +**Methods:** +- `handle_request(path, method, params)`: Handle HTTP request +- `get_statistics()`: Get interface statistics +- `generate_api_documentation()`: Generate docs + +## Troubleshooting + +### Camera initialization fails + +**Issue:** Camera fails to initialize +**Solution:** +- Check pin configuration for your board variant +- Verify power supply is adequate (5V, 500mA+) +- Check camera cable connection + +### Streaming performance issues + +**Issue:** Low FPS or choppy video +**Solution:** +- Reduce frame size (e.g., VGA instead of SVGA) +- Increase JPEG quality number (lower quality, higher FPS) +- Enable adaptive quality +- Check WiFi signal strength + +### AI detection not working + +**Issue:** No detections or low accuracy +**Solution:** +- Ensure model is loaded correctly +- Adjust confidence threshold +- Check lighting conditions +- Verify input image preprocessing + +### Remote access connection fails + +**Issue:** Cannot connect remotely +**Solution:** +- Check tunnel configuration +- Verify auth credentials +- Check firewall settings +- Ensure device has internet connectivity + +## Performance Tips + +1. **Frame Size**: Use smallest frame size needed for your application +2. **JPEG Quality**: Higher quality numbers = lower file size but lower quality +3. **Frame Buffer**: Use 2 frame buffers for smoother streaming +4. **AI Processing**: Use INT8 quantization for faster inference +5. **Motion Detection**: Use frame skip to reduce processing load +6. **Streaming**: Enable adaptive quality for variable bandwidth + +## Security Best Practices + +1. **Always use authentication** for remote access +2. **Use HTTPS/TLS** for encrypted communication +3. **Implement IP whitelisting** for production deployments +4. **Regular security audits** using Accelerapp's security tools +5. **Keep firmware updated** via OTA updates +6. **Use strong tokens** for token-based authentication +7. **Monitor access logs** for suspicious activity + +## Examples + +See the `examples/` directory for complete examples: +- `esp32_cam_basic.py`: Basic camera usage +- `esp32_cam_streaming.py`: Streaming setup +- `esp32_cam_ai.py`: AI processing +- `esp32_cam_remote.py`: Remote access +- `esp32_cam_full.py`: Complete integration + +## Integration with Accelerapp Ecosystem + +The ESP32-CAM module integrates seamlessly with: +- **Digital Twin Platform**: Real-time device monitoring +- **Observability System**: Metrics and health checks +- **TinyML Agent**: Edge AI model deployment +- **Security Framework**: Compliance and access control +- **Hardware Abstraction Layer**: Unified hardware interface + +## Support + +For issues and questions: +- GitHub Issues: https://github.com/thewriterben/Accelerapp/issues +- Documentation: https://github.com/thewriterben/Accelerapp/docs +- Examples: https://github.com/thewriterben/Accelerapp/examples + +## License + +MIT License - See LICENSE file for details diff --git a/docs/ESP32_CAM_QUICK_REFERENCE.md b/docs/ESP32_CAM_QUICK_REFERENCE.md new file mode 100644 index 0000000..b2a72f3 --- /dev/null +++ b/docs/ESP32_CAM_QUICK_REFERENCE.md @@ -0,0 +1,368 @@ +# ESP32-CAM Quick Reference + +## Installation + +```bash +pip install accelerapp +``` + +## Basic Usage + +```python +from accelerapp.hardware import ESP32Camera, CameraVariant + +# Initialize camera +camera = ESP32Camera() +camera.initialize() + +# Capture frame +frame = camera.capture_frame() +``` + +## Camera Variants + +```python +from accelerapp.hardware.camera.esp32_cam import CameraVariant + +CameraVariant.AI_THINKER # Default, most common +CameraVariant.WROVER_KIT # ESP32-WROVER-KIT +CameraVariant.ESP_EYE # ESP32-EYE +CameraVariant.M5STACK_CAMERA # M5Stack +CameraVariant.TTGO_T_CAMERA # TTGO T-Camera +``` + +## Frame Sizes + +```python +from accelerapp.hardware.camera.esp32_cam import FrameSize + +FrameSize.QQVGA # 160x120 +FrameSize.QVGA # 320x240 +FrameSize.VGA # 640x480 +FrameSize.SVGA # 800x600 +FrameSize.XGA # 1024x768 +FrameSize.SXGA # 1280x1024 +FrameSize.UXGA # 1600x1200 +``` + +## Streaming + +### MJPEG Streaming + +```python +from accelerapp.hardware.camera.esp32_cam import ( + StreamingManager, + StreamingProtocol, +) + +streaming = StreamingManager(camera) +stream_info = streaming.start_stream() +# Access at: http://device:8080/stream +``` + +### RTSP Streaming + +```python +from accelerapp.hardware.camera.esp32_cam import StreamConfig + +config = StreamConfig(protocol=StreamingProtocol.RTSP) +streaming = StreamingManager(camera, config) +stream_info = streaming.start_stream() +# Access at: rtsp://device:8080/stream +``` + +### WebRTC Streaming + +```python +config = StreamConfig(protocol=StreamingProtocol.WEBRTC) +streaming = StreamingManager(camera, config) +stream_info = streaming.start_stream() +``` + +## AI Processing + +### Person Detection + +```python +from accelerapp.hardware.camera.esp32_cam import ( + AIProcessor, + DetectionModel, +) + +ai = AIProcessor(camera) +ai.config.model_type = DetectionModel.PERSON_DETECTION +ai.load_model() + +detections = ai.detect() +``` + +### Face Detection + +```python +ai = AIProcessor(camera) +ai.config.model_type = DetectionModel.FACE_DETECTION +ai.load_model() + +detections = ai.detect() +``` + +### Custom Model + +```python +from accelerapp.hardware.camera.esp32_cam import ModelConfig + +config = ModelConfig( + model_type=DetectionModel.CUSTOM, + model_path="/path/to/model.tflite", + confidence_threshold=0.7, +) + +ai = AIProcessor(camera, config) +ai.load_model() +``` + +## Motion Detection + +```python +from accelerapp.hardware.camera.esp32_cam import ( + MotionDetector, + MotionConfig, +) + +# Setup +motion = MotionDetector(camera) + +# Detect +if motion.detect_motion(): + print("Motion detected!") + +# Event callback +def on_motion(event): + print(f"Motion: {event.confidence}") + +motion.add_event_callback(on_motion) +``` + +## QR Code Scanning + +```python +from accelerapp.hardware.camera.esp32_cam import QRScanner + +scanner = QRScanner(camera) +result = scanner.scan() + +if result: + print(f"QR Data: {result['data']}") +``` + +## Remote Access + +### Basic Token Auth + +```python +from accelerapp.hardware.camera.esp32_cam import ( + RemoteAccess, + AuthConfig, + AuthMethod, +) + +auth_config = AuthConfig( + method=AuthMethod.TOKEN, + access_token="your_token_here", +) + +remote = RemoteAccess(camera, auth_config) +``` + +### With ngrok Tunnel + +```python +from accelerapp.hardware.camera.esp32_cam import ( + TunnelConfig, + TunnelType, +) + +tunnel_config = TunnelConfig( + tunnel_type=TunnelType.NGROK, + ngrok_auth_token="your_ngrok_token", +) + +remote = RemoteAccess(camera, tunnel_config=tunnel_config) +info = remote.start_tunnel() +print(f"Access at: {info['public_url']}") +``` + +## Web Interface + +```python +from accelerapp.hardware.camera.esp32_cam import WebInterface + +web = WebInterface(camera) + +# Handle API request +response = web.handle_request( + "/api/camera/status", + "GET", + {} +) +``` + +### API Endpoints + +``` +GET /api/camera/status # Get camera status +GET /api/camera/capture # Capture single frame +GET /api/camera/config # Get configuration +PUT /api/camera/config # Update configuration + +POST /api/stream/start # Start streaming +POST /api/stream/stop # Stop streaming +GET /api/stream/status # Stream status + +PUT /api/settings/quality # Set JPEG quality +PUT /api/settings/brightness # Set brightness +PUT /api/settings/flip # Set flip settings + +GET / # Home page +GET /ui/live # Live view +GET /ui/settings # Settings page +``` + +## Camera Settings + +```python +# Quality (0-63, lower is better) +camera.set_quality(12) + +# Brightness (-2 to 2) +camera.set_brightness(1) + +# Contrast (-2 to 2) +camera.set_contrast(0) + +# Flip +camera.set_flip(horizontal=True, vertical=False) + +# Frame size +camera.set_frame_size(FrameSize.VGA) +``` + +## Firmware Generation + +```python +# Camera config +firmware = camera.generate_firmware_config() + +# Streaming +stream_code = streaming.generate_streaming_code() + +# AI inference +ai_code = ai.generate_inference_code() + +# Motion detection +motion_code = motion.generate_motion_detection_code() +``` + +## Digital Twin Integration + +```python +from accelerapp.hardware.camera.esp32_cam import CameraConfig + +config = CameraConfig( + twin_id="camera_001", + twin_sync_interval=60, + enable_metrics=True, + enable_health_checks=True, +) + +camera = ESP32Camera(config) +``` + +## Complete Example + +```python +from accelerapp.hardware.camera.esp32_cam import ( + ESP32Camera, + CameraConfig, + StreamingManager, + AIProcessor, + MotionDetector, + WebInterface, +) + +# Initialize +camera = ESP32Camera() +camera.initialize() + +# Setup streaming +streaming = StreamingManager(camera) +streaming.start_stream() + +# Setup AI +ai = AIProcessor(camera) +ai.load_model() + +# Setup motion detection +motion = MotionDetector(camera) + +# Setup web interface +web = WebInterface(camera) + +# Main loop +while True: + # Detect motion + if motion.detect_motion(): + # Run AI detection + detections = ai.detect() + + for det in detections: + print(f"{det.label}: {det.confidence:.2f}") +``` + +## Common Issues + +### Camera won't initialize +- Check pin configuration +- Verify power supply (5V, 500mA+) +- Check cable connection + +### Low FPS +- Reduce frame size +- Increase JPEG quality value +- Check WiFi signal + +### No detections +- Check model loading +- Adjust confidence threshold +- Verify lighting + +## Performance Tips + +- Use VGA (640x480) for balance of quality/performance +- JPEG quality 10-15 for most applications +- Enable adaptive streaming for variable bandwidth +- Use INT8 quantization for AI models +- Frame skip for motion detection (process every 2nd frame) + +## Security + +```python +# Use token authentication +auth_config = AuthConfig( + method=AuthMethod.TOKEN, + access_token="secure_random_token", +) + +# Enable IP whitelisting +auth_config.allowed_ips = ["192.168.1.0/24"] + +# Enable rate limiting +auth_config.rate_limit_per_minute = 60 +``` + +## Resources + +- Full Guide: `docs/ESP32_CAM_GUIDE.md` +- Examples: `examples/esp32_cam_*.py` +- Tests: `tests/test_esp32_cam.py` +- GitHub: https://github.com/thewriterben/Accelerapp diff --git a/examples/esp32_cam_demo.py b/examples/esp32_cam_demo.py index 76f5faf..5aaa33c 100644 --- a/examples/esp32_cam_demo.py +++ b/examples/esp32_cam_demo.py @@ -1,368 +1,16 @@ """ -ESP32-CAM Integration Demo -Demonstrates comprehensive camera functionality for Accelerapp platform. -""" -from accelerapp.hardware.camera import ( - ESP32Camera, - CameraConfig, - CameraResolution, - StreamingServer, - StreamProtocol, - MotionDetector, - CameraDigitalTwin, - CameraWebInterface, - StorageManager, - CameraSecurityManager, -) -from accelerapp.hardware.camera.esp32_cam.streaming import StreamConfig -from accelerapp.hardware.camera.esp32_cam.motion_detection import MotionSensitivity -from accelerapp.hardware.camera.esp32_cam.security import SecurityConfig, AuthMethod, AccessLevel def demo_basic_camera(): """Demonstrate basic camera operations.""" print("\n" + "=" * 60) - print("1. Basic Camera Operations") - print("=" * 60) - - # Create camera configuration - config = CameraConfig( - device_id="esp32cam_demo_001", - board_type="ai_thinker", - resolution=CameraResolution.HD, - frame_rate=15, - ) - - # Initialize camera - camera = ESP32Camera(config) - print(f"✓ Camera created: {config.device_id}") - - if camera.initialize(): - print("✓ Camera initialized successfully") - - # Capture image - image = camera.capture_image() - if image: - print(f"✓ Image captured: {image['resolution']}") - print(f" Capture #{image['capture_number']}") - - # Get status - status = camera.get_status() - print(f"✓ Camera status:") - print(f" Board: {status['board_type']}") - print(f" Resolution: {status['resolution']}") - print(f" Total captures: {status['stats']['captures']}") - - -def demo_streaming(): - """Demonstrate video streaming capabilities.""" - print("\n" + "=" * 60) - print("2. Video Streaming") - print("=" * 60) - - config = CameraConfig(device_id="esp32cam_stream_001") - camera = ESP32Camera(config) - camera.initialize() - - # Setup MJPEG streaming - stream_config = StreamConfig( - protocol=StreamProtocol.MJPEG, - port=81, - max_clients=5, - ) - - server = StreamingServer(camera, stream_config) - print(f"✓ Streaming server created") - - if server.start(): - print(f"✓ Streaming started") - print(f" Stream URL: {server.get_stream_url()}") - print(f" Protocol: {stream_config.protocol.value}") - print(f" Max clients: {stream_config.max_clients}") - - # Simulate client connections - server.add_client("client_001", {"ip": "192.168.1.100"}) - server.add_client("client_002", {"ip": "192.168.1.101"}) - print(f"✓ Active clients: {server.get_client_count()}") - - # Get status - status = server.get_status() - print(f"✓ Server status:") - print(f" Active: {status['active']}") - print(f" Clients: {status['clients']}/{status['max_clients']}") - - server.stop() - print("✓ Streaming stopped") - -def demo_motion_detection(): - """Demonstrate motion detection features.""" - print("\n" + "=" * 60) - print("3. Motion Detection") - print("=" * 60) - - config = CameraConfig(device_id="esp32cam_motion_001") - camera = ESP32Camera(config) - camera.initialize() - - # Setup motion detector - detector = MotionDetector(camera, sensitivity=MotionSensitivity.MEDIUM) - print(f"✓ Motion detector created") - - # Register callback - def on_motion(event): - print(f" ⚠️ Motion detected! Event ID: {event.event_id}") - - detector.register_callback(on_motion) - print("✓ Motion callback registered") - - # Enable detection - if detector.enable(): - print("✓ Motion detection enabled") - - # Configure settings - detector.set_sensitivity(MotionSensitivity.HIGH) - print(f"✓ Sensitivity set to: {detector.sensitivity.value}") - - # Get status - status = detector.get_status() - print(f"✓ Detector status:") - print(f" Enabled: {status['enabled']}") - print(f" Sensitivity: {status['sensitivity']}") - print(f" Events: {status['event_count']}") - - -def demo_digital_twin(): - """Demonstrate digital twin integration.""" - print("\n" + "=" * 60) - print("4. Digital Twin Integration") - print("=" * 60) - - config = CameraConfig(device_id="esp32cam_twin_001") - camera = ESP32Camera(config) - camera.initialize() - - # Create digital twin - twin = CameraDigitalTwin(camera, twin_id="twin_cam_001") - print(f"✓ Digital twin created: {twin.twin_id}") - - # Sync state - state = twin.sync_state() - print(f"✓ State synchronized") - print(f" Timestamp: {state['timestamp']}") - - # Get telemetry - telemetry = twin.get_telemetry() - print(f"✓ Telemetry collected:") - print(f" Health: {telemetry['health']}") - print(f" Initialized: {telemetry['metrics']['initialized']}") - - # Predictive maintenance - maintenance = twin.predict_maintenance() - print(f"✓ Predictive maintenance:") - print(f" Usage: {maintenance['usage_percentage']:.1f}%") - print(f" Maintenance recommended: {maintenance['maintenance_recommended']}") - print(f" Health: {maintenance['health_status']}") - - # Analytics - analytics = twin.get_analytics() - print(f"✓ Performance analytics:") - print(f" Total captures: {analytics['performance']['total_captures']}") - print(f" Errors: {analytics['performance']['error_count']}") - - -def demo_web_interface(): - """Demonstrate web interface and API.""" - print("\n" + "=" * 60) - print("5. Web Interface & API") - print("=" * 60) - - config = CameraConfig(device_id="esp32cam_web_001") - camera = ESP32Camera(config) - camera.initialize() - - # Setup web interface - web = CameraWebInterface(camera, port=80) - print(f"✓ Web interface created") - - if web.start(): - print(f"✓ Web server started") - print(f" URL: {web.get_interface_url()}") - - # Test API endpoints - api_info = web.get_api_info() - print(f"✓ API endpoints available:") - for endpoint in api_info['endpoints']: - print(f" - {endpoint}") - - # Simulate API calls - status = web.get_status_handler() - print(f"✓ API call: GET /api/status") - print(f" Response: {status['status']}") - - config_resp = web.get_config_handler() - print(f"✓ API call: GET /api/config") - print(f" Device: {config_resp['data']['device_id']}") - - web.stop() - print("✓ Web server stopped") - - -def demo_storage(): - """Demonstrate storage management.""" - print("\n" + "=" * 60) - print("6. Storage Management") - print("=" * 60) - - config = CameraConfig(device_id="esp32cam_storage_001") - camera = ESP32Camera(config) - camera.initialize() - - # Setup storage - storage = StorageManager(camera) - print(f"✓ Storage manager created") - - if storage.initialize(): - print("✓ Storage initialized") - - # Save images - for i in range(3): - image_data = {"size_bytes": 1024 * (i + 1)} - filepath = storage.save_image(image_data) - print(f"✓ Image saved: {filepath}") - - # Get storage info - info = storage.get_storage_info() - print(f"✓ Storage information:") - print(f" Type: {info['storage_type']}") - print(f" Files: {info['file_count']}") - print(f" Used: {info['used_space_mb']:.2f} MB") - print(f" Free: {info['free_space_mb']:.2f} MB") - print(f" Usage: {info['used_percent']:.1f}%") - - # List files - files = storage.list_files() - print(f"✓ Files stored: {len(files)}") - - -def demo_security(): - """Demonstrate security features.""" - print("\n" + "=" * 60) - print("7. Security Management") - print("=" * 60) - - config = CameraConfig(device_id="esp32cam_secure_001") - camera = ESP32Camera(config) - - # Setup security - security_config = SecurityConfig( - auth_method=AuthMethod.TOKEN, - enable_encryption=True, - ) - security = CameraSecurityManager(camera, security_config) - print(f"✓ Security manager created") - - # Add users - security.add_user("admin", "admin123", AccessLevel.ADMIN) - security.add_user("operator", "op123", AccessLevel.USER) - security.add_user("viewer", "view123", AccessLevel.GUEST) - print("✓ Users added: admin, operator, viewer") - - # Authenticate - admin_token = security.authenticate("admin", "admin123") - if admin_token: - print(f"✓ Admin authenticated") - print(f" Token: {admin_token[:16]}...") - - # Check permissions - has_admin = security.check_permission(admin_token, AccessLevel.ADMIN) - print(f"✓ Permission check (ADMIN): {has_admin}") - - # Security status - status = security.get_security_status() - print(f"✓ Security status:") - print(f" Auth method: {status['auth_method']}") - print(f" Encryption: {status['encryption_enabled']}") - print(f" Total users: {status['total_users']}") - print(f" Active tokens: {status['active_tokens']}") - - -def demo_advanced_features(): - """Demonstrate advanced camera features.""" - print("\n" + "=" * 60) - print("8. Advanced Features") print("=" * 60) + # Create camera configuration config = CameraConfig( - device_id="esp32cam_advanced_001", - board_type="esp32_s3_cam", # Using S3 variant - resolution=CameraResolution.UXGA, - frame_rate=20, - ) - - camera = ESP32Camera(config) - camera.initialize() - print(f"✓ Advanced camera initialized") - print(f" Board: {config.board_type}") - print(f" Resolution: {config.resolution.value}") - - # Test different resolutions - resolutions = [ - CameraResolution.VGA, - CameraResolution.HD, - CameraResolution.UXGA, - ] - - print("✓ Testing resolution changes:") - for res in resolutions: - camera.set_resolution(res) - print(f" - {res.value}") - - # Test image adjustments - camera.set_brightness(1) - camera.set_quality(8) - camera.set_flip(vertical=False, horizontal=True) - print("✓ Image adjustments applied") - - # Multiple captures - print("✓ Performing multiple captures:") - for i in range(5): - image = camera.capture_image() - print(f" Capture #{image['capture_number']}") - - final_config = camera.get_config() - print(f"✓ Final configuration:") - print(f" Resolution: {final_config['resolution']}") - print(f" Quality: {final_config['jpeg_quality']}") - print(f" Brightness: {final_config['brightness']}") - -def main(): - """Run all demonstrations.""" - print("\n" + "=" * 60) - print("ESP32-CAM Integration Demonstration") - print("Accelerapp Hardware Camera Module") - print("=" * 60) - - try: - demo_basic_camera() - demo_streaming() - demo_motion_detection() - demo_digital_twin() - demo_web_interface() - demo_storage() - demo_security() - demo_advanced_features() - - print("\n" + "=" * 60) - print("✓ All demonstrations completed successfully!") - print("=" * 60) - - except Exception as e: - print(f"\n❌ Error during demonstration: {e}") - import traceback - traceback.print_exc() if __name__ == "__main__": diff --git a/src/accelerapp/hardware/__init__.py b/src/accelerapp/hardware/__init__.py index 908ec5a..423f17b 100644 --- a/src/accelerapp/hardware/__init__.py +++ b/src/accelerapp/hardware/__init__.py @@ -36,38 +36,7 @@ ) from .camera import ( ESP32Camera, - CameraConfig, - CameraResolution, - StreamingServer, - StreamProtocol, - MotionDetector, - MotionEvent, - CameraDigitalTwin, - CameraWebInterface, - StorageManager, - CameraSecurityManager, -) -# CYD (Cheap Yellow Display) ecosystem support -from .cyd import ( - # HAL Components - DisplayDriver, - TouchController, - GPIOManager, - PowerManager, - SensorMonitor, - # Community Integration - CommunityIntegration, - TemplateManager, - ExampleLoader, - # AI Agents - CYDCodeGenerator, - HardwareOptimizer, - ProjectBuilder, - # Digital Twin - CYDSimulator, - CYDTwinModel, - CYDMonitor, ) __all__ = [ diff --git a/src/accelerapp/hardware/camera/__init__.py b/src/accelerapp/hardware/camera/__init__.py index 508a08b..3113b56 100644 --- a/src/accelerapp/hardware/camera/__init__.py +++ b/src/accelerapp/hardware/camera/__init__.py @@ -1,26 +1,3 @@ """ -ESP32-CAM hardware integration module for Accelerapp. -Provides comprehensive camera control, streaming, and AI processing capabilities. -""" - -from .esp32_cam.core import ESP32Camera, CameraConfig, CameraResolution -from .esp32_cam.streaming import StreamingServer, StreamProtocol -from .esp32_cam.motion_detection import MotionDetector, MotionEvent -from .esp32_cam.digital_twin import CameraDigitalTwin -from .esp32_cam.web_interface import CameraWebInterface -from .esp32_cam.storage import StorageManager -from .esp32_cam.security import CameraSecurityManager -__all__ = [ - "ESP32Camera", - "CameraConfig", - "CameraResolution", - "StreamingServer", - "StreamProtocol", - "MotionDetector", - "MotionEvent", - "CameraDigitalTwin", - "CameraWebInterface", - "StorageManager", - "CameraSecurityManager", ] diff --git a/src/accelerapp/hardware/camera/esp32_cam/__init__.py b/src/accelerapp/hardware/camera/esp32_cam/__init__.py index b6a8f0c..3113b56 100644 --- a/src/accelerapp/hardware/camera/esp32_cam/__init__.py +++ b/src/accelerapp/hardware/camera/esp32_cam/__init__.py @@ -1,26 +1,3 @@ """ -ESP32-CAM core module. -Provides camera interface, configuration, and control. -""" - -from .core import ESP32Camera, CameraConfig, CameraResolution -from .streaming import StreamingServer, StreamProtocol -from .motion_detection import MotionDetector, MotionEvent -from .digital_twin import CameraDigitalTwin -from .web_interface import CameraWebInterface -from .storage import StorageManager -from .security import CameraSecurityManager -__all__ = [ - "ESP32Camera", - "CameraConfig", - "CameraResolution", - "StreamingServer", - "StreamProtocol", - "MotionDetector", - "MotionEvent", - "CameraDigitalTwin", - "CameraWebInterface", - "StorageManager", - "CameraSecurityManager", ] diff --git a/src/accelerapp/hardware/camera/esp32_cam/ai_processing.py b/src/accelerapp/hardware/camera/esp32_cam/ai_processing.py new file mode 100644 index 0000000..fef4f01 --- /dev/null +++ b/src/accelerapp/hardware/camera/esp32_cam/ai_processing.py @@ -0,0 +1,406 @@ +""" +AI Processing module for ESP32-CAM with TinyML integration. +Supports object detection, face recognition, and edge AI inference. +""" + +from typing import Dict, Any, List, Optional, Tuple +from enum import Enum +from dataclasses import dataclass, field +import logging + +logger = logging.getLogger(__name__) + + +class DetectionModel(Enum): + """Supported detection models.""" + PERSON_DETECTION = "person_detection" + FACE_DETECTION = "face_detection" + FACE_RECOGNITION = "face_recognition" + OBJECT_DETECTION = "object_detection" + GESTURE_RECOGNITION = "gesture_recognition" + QR_DETECTION = "qr_detection" + CUSTOM = "custom" + + +class InferenceBackend(Enum): + """Inference backend options.""" + TFLITE_MICRO = "tflite_micro" + ESP_NN = "esp_nn" + TFLITE_ESP = "tflite_esp" + CUSTOM = "custom" + + +@dataclass +class ModelConfig: + """AI model configuration.""" + model_type: DetectionModel = DetectionModel.PERSON_DETECTION + backend: InferenceBackend = InferenceBackend.TFLITE_MICRO + + # Model files + model_path: Optional[str] = None + model_data: Optional[bytes] = None + + # Inference settings + input_width: int = 96 + input_height: int = 96 + input_channels: int = 1 # Grayscale + confidence_threshold: float = 0.7 + + # Performance settings + enable_quantization: bool = True + use_int8: bool = True + arena_size_bytes: int = 40000 + + # Detection settings + max_detections: int = 10 + nms_threshold: float = 0.5 # Non-maximum suppression + + # Face recognition specific + num_faces: int = 10 + recognition_threshold: float = 0.6 + + # Custom labels + labels: List[str] = field(default_factory=lambda: ["background", "person"]) + + # Metadata + metadata: Dict[str, Any] = field(default_factory=dict) + + +@dataclass +class DetectionResult: + """Detection result.""" + label: str + confidence: float + bbox: Optional[Tuple[int, int, int, int]] = None # x, y, width, height + landmarks: Optional[List[Tuple[int, int]]] = None + metadata: Dict[str, Any] = field(default_factory=dict) + + +class AIProcessor: + """ + AI processing engine for ESP32-CAM. + Integrates TinyML models for edge inference. + """ + + def __init__(self, camera, config: Optional[ModelConfig] = None): + """ + Initialize AI processor. + + Args: + camera: ESP32Camera instance + config: Model configuration + """ + self.camera = camera + self.config = config or ModelConfig() + self.model_loaded = False + self.inference_count = 0 + self.detection_history = [] + + logger.info(f"AIProcessor initialized with model: {self.config.model_type.value}") + + def load_model(self, model_path: Optional[str] = None) -> bool: + """ + Load TinyML model. + + Args: + model_path: Path to model file + + Returns: + True if successful + """ + try: + model_path = model_path or self.config.model_path + + if not model_path and not self.config.model_data: + logger.warning("No model path or data provided, using default model") + # Use built-in model + self.model_loaded = True + return True + + logger.info(f"Loading model from: {model_path}") + + # In production, this would load the actual TFLite model + self.model_loaded = True + logger.info("Model loaded successfully") + + return True + + except Exception as e: + logger.error(f"Failed to load model: {e}") + return False + + def detect(self, frame: Optional[bytes] = None) -> List[DetectionResult]: + """ + Perform detection on a frame. + + Args: + frame: Optional frame data, captures new frame if None + + Returns: + List of detection results + """ + if not self.model_loaded: + logger.error("Model not loaded") + return [] + + try: + # Capture frame if not provided + if frame is None and self.camera.initialized: + frame = self.camera.capture_frame() + + if frame is None: + logger.error("No frame available for detection") + return [] + + # Preprocess frame + preprocessed = self._preprocess_frame(frame) + + # Run inference + detections = self._run_inference(preprocessed) + + # Post-process results + results = self._postprocess_results(detections) + + self.inference_count += 1 + self.detection_history.extend(results) + + # Keep only last 100 detections + if len(self.detection_history) > 100: + self.detection_history = self.detection_history[-100:] + + logger.debug(f"Detection complete: {len(results)} objects found") + + return results + + except Exception as e: + logger.error(f"Detection failed: {e}") + return [] + + def _preprocess_frame(self, frame: bytes) -> bytes: + """ + Preprocess frame for model input. + + Args: + frame: Raw frame data + + Returns: + Preprocessed frame + """ + # In production, this would: + # 1. Resize to model input size + # 2. Convert color space if needed + # 3. Normalize pixel values + # 4. Apply quantization if needed + + logger.debug("Preprocessing frame") + return frame + + def _run_inference(self, frame: bytes) -> List[Dict[str, Any]]: + """ + Run model inference. + + Args: + frame: Preprocessed frame + + Returns: + Raw inference results + """ + # In production, this would run TFLite inference + # For now, return placeholder results + + if self.config.model_type == DetectionModel.PERSON_DETECTION: + return [ + {"label": "person", "confidence": 0.85, "bbox": (10, 10, 50, 100)}, + ] + elif self.config.model_type == DetectionModel.FACE_DETECTION: + return [ + {"label": "face", "confidence": 0.92, "bbox": (20, 20, 40, 40)}, + ] + else: + return [] + + def _postprocess_results(self, raw_results: List[Dict[str, Any]]) -> List[DetectionResult]: + """ + Post-process inference results. + + Args: + raw_results: Raw inference output + + Returns: + Processed detection results + """ + results = [] + + for detection in raw_results: + if detection["confidence"] >= self.config.confidence_threshold: + result = DetectionResult( + label=detection["label"], + confidence=detection["confidence"], + bbox=detection.get("bbox"), + landmarks=detection.get("landmarks"), + ) + results.append(result) + + # Apply non-maximum suppression if needed + if len(results) > self.config.max_detections: + results = results[:self.config.max_detections] + + return results + + def get_statistics(self) -> Dict[str, Any]: + """ + Get AI processing statistics. + + Returns: + Statistics dictionary + """ + recent_detections = self.detection_history[-20:] if self.detection_history else [] + + return { + "model_loaded": self.model_loaded, + "model_type": self.config.model_type.value, + "backend": self.config.backend.value, + "inference_count": self.inference_count, + "total_detections": len(self.detection_history), + "recent_detections": [ + { + "label": d.label, + "confidence": d.confidence, + "bbox": d.bbox, + } + for d in recent_detections + ], + } + + def generate_inference_code(self) -> Dict[str, str]: + """ + Generate TinyML inference code for ESP32. + + Returns: + Dictionary with code files + """ + header = self._generate_inference_header() + implementation = self._generate_inference_implementation() + + return { + "ai_inference.h": header, + "ai_inference.cpp": implementation, + } + + def _generate_inference_header(self) -> str: + """Generate inference header file.""" + lines = [ + "// AI Inference for ESP32-CAM", + "// Auto-generated by Accelerapp", + "", + "#ifndef AI_INFERENCE_H", + "#define AI_INFERENCE_H", + "", + '#include "tensorflow/lite/micro/micro_interpreter.h"', + '#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"', + '#include "tensorflow/lite/schema/schema_generated.h"', + "", + "// Model configuration", + f"#define INPUT_WIDTH {self.config.input_width}", + f"#define INPUT_HEIGHT {self.config.input_height}", + f"#define INPUT_CHANNELS {self.config.input_channels}", + f"#define CONFIDENCE_THRESHOLD {self.config.confidence_threshold}", + f"#define ARENA_SIZE {self.config.arena_size_bytes}", + "", + "// Detection structure", + "struct Detection {", + " const char* label;", + " float confidence;", + " int x, y, width, height;", + "};", + "", + "class AIInference {", + "public:", + " bool init();", + " bool loadModel(const unsigned char* model_data, size_t model_size);", + " int detect(uint8_t* image_data, Detection* results, int max_results);", + " ", + "private:", + " tflite::MicroInterpreter* interpreter;", + " const tflite::Model* model;", + " uint8_t tensor_arena[ARENA_SIZE];", + "};", + "", + "#endif // AI_INFERENCE_H", + "", + ] + + return "\n".join(lines) + + def _generate_inference_implementation(self) -> str: + """Generate inference implementation file.""" + lines = [ + "// AI Inference Implementation", + '#include "ai_inference.h"', + "", + "bool AIInference::init() {", + " // Initialize TensorFlow Lite Micro", + " return true;", + "}", + "", + "bool AIInference::loadModel(const unsigned char* model_data, size_t model_size) {", + " // Load TFLite model", + " model = tflite::GetModel(model_data);", + " ", + " if (model->version() != TFLITE_SCHEMA_VERSION) {", + " return false;", + " }", + " ", + " // Setup interpreter", + " // static tflite::MicroMutableOpResolver<10> resolver;", + " // Add required ops", + " ", + " return true;", + "}", + "", + "int AIInference::detect(uint8_t* image_data, Detection* results, int max_results) {", + " // Preprocess image", + " // Run inference", + " // Post-process results", + " ", + " int num_detections = 0;", + " ", + " // Placeholder detection", + f" if (num_detections < max_results) {{", + ' results[num_detections].label = "object";', + " results[num_detections].confidence = 0.85f;", + " results[num_detections].x = 0;", + " results[num_detections].y = 0;", + " results[num_detections].width = 100;", + " results[num_detections].height = 100;", + " num_detections++;", + " }", + " ", + " return num_detections;", + "}", + "", + ] + + return "\n".join(lines) + + def integrate_with_tinyml_agent(self) -> Dict[str, Any]: + """ + Generate integration spec for TinyMLAgent. + + Returns: + Specification for TinyMLAgent + """ + return { + "task_type": "inference", + "platform": "esp32", + "model_type": self.config.model_type.value, + "input_shape": [ + 1, + self.config.input_height, + self.config.input_width, + self.config.input_channels, + ], + "optimization_level": "aggressive" if self.config.enable_quantization else "standard", + "use_int8_quantization": self.config.use_int8, + "arena_size": self.config.arena_size_bytes, + } diff --git a/src/accelerapp/hardware/camera/esp32_cam/configs/default_config.yaml b/src/accelerapp/hardware/camera/esp32_cam/configs/default_config.yaml new file mode 100644 index 0000000..a288f1f --- /dev/null +++ b/src/accelerapp/hardware/camera/esp32_cam/configs/default_config.yaml @@ -0,0 +1,174 @@ +# Default ESP32-CAM Configuration +# This file provides a template for configuring ESP32-CAM devices + +camera: + # Board variant + variant: "ai_thinker" # Options: ai_thinker, wrover_kit, esp_eye, m5stack_camera, ttgo_t_camera + + # Camera sensor + sensor: "ov2640" # Options: ov2640, ov5640, ov3660, ov7670 + + # Image settings + frame_size: "vga" # Options: qqvga, qvga, vga, svga, xga, sxga, uxga + pixel_format: "jpeg" # Options: jpeg, rgb565, yuv422, grayscale, rgb888 + jpeg_quality: 12 # 0-63, lower is better quality + fb_count: 2 # Frame buffer count (1-2) + + # Camera adjustments + brightness: 0 # -2 to 2 + contrast: 0 # -2 to 2 + saturation: 0 # -2 to 2 + sharpness: 0 # -2 to 2 + + # Auto settings + auto_exposure: true + auto_white_balance: true + auto_white_balance_gain: true + exposure_ctrl_sensor: true + gain_ctrl: true + + # Flip settings + horizontal_flip: false + vertical_flip: false + +streaming: + # Protocol selection + protocol: "mjpeg" # Options: mjpeg, rtsp, webrtc, http + + # Quality preset + quality: "medium" # Options: low, medium, high, ultra + + # Network settings + port: 8080 + max_clients: 5 + buffer_size: 32768 + + # Performance + fps_target: 15 + enable_adaptive_quality: true + bandwidth_limit_kbps: null # null for unlimited + + # RTSP specific + rtsp_path: "/stream" + rtsp_auth_required: false + + # Bandwidth optimization + enable_compression: true + dynamic_bitrate: true + +ai_processing: + # Model selection + model_type: "person_detection" # Options: person_detection, face_detection, object_detection, gesture_recognition + + # Backend + backend: "tflite_micro" # Options: tflite_micro, esp_nn, tflite_esp + + # Model settings + model_path: null # Path to custom model + input_width: 96 + input_height: 96 + input_channels: 1 # 1 for grayscale, 3 for RGB + confidence_threshold: 0.7 + + # Performance + enable_quantization: true + use_int8: true + arena_size_bytes: 40000 + + # Detection + max_detections: 10 + nms_threshold: 0.5 + +motion_detection: + # Algorithm + algorithm: "frame_diff" # Options: frame_diff, background_subtraction, optical_flow + + # Sensitivity + threshold: 20 # Pixel difference threshold + min_area: 500 # Minimum area for motion detection + + # Performance + frame_skip: 2 # Process every Nth frame + history_frames: 3 + + # Events + enable_events: true + cooldown_seconds: 5 + + # Recording + record_on_motion: false + pre_record_seconds: 2 + post_record_seconds: 5 + +remote_access: + # Authentication + auth_method: "token" # Options: none, basic, token, oauth, certificate + access_token: "change_me_secure_token" + token_expiry_hours: 24 + + # Access control + allowed_ips: [] # Empty for all IPs + rate_limit_per_minute: 60 + + # Tunnel configuration + tunnel_type: "none" # Options: none, ngrok, cloudflare, custom + enable_tls: true + heartbeat_interval: 30 + reconnect_attempts: 5 + +web_interface: + # Server settings + enable_api: true + enable_web_ui: true + port: 80 + + # CORS + enable_cors: true + cors_origins: ["*"] + + # API + api_prefix: "/api" + enable_swagger: true + + # Rate limiting + enable_rate_limit: true + requests_per_minute: 60 + + # UI + ui_theme: "light" + enable_live_preview: true + +digital_twin: + # Digital twin integration + twin_id: null # Set to enable digital twin + twin_sync_interval: 60 # Seconds + +observability: + # Monitoring + enable_metrics: true + enable_health_checks: true + + # Logging + log_level: "INFO" # Options: DEBUG, INFO, WARNING, ERROR + log_to_file: false + log_file_path: "/var/log/esp32cam.log" + +# Network configuration +network: + wifi_ssid: "YOUR_WIFI_SSID" + wifi_password: "YOUR_WIFI_PASSWORD" + hostname: "esp32cam" + + # Static IP (optional) + use_static_ip: false + static_ip: "192.168.1.100" + gateway: "192.168.1.1" + subnet: "255.255.255.0" + dns: "8.8.8.8" + +# Firmware +firmware: + version: "1.0.0" + enable_ota: true + ota_url: null # URL for OTA updates + check_interval_hours: 24 diff --git a/src/accelerapp/hardware/camera/esp32_cam/core.py b/src/accelerapp/hardware/camera/esp32_cam/core.py index 6051101..8ac5840 100644 --- a/src/accelerapp/hardware/camera/esp32_cam/core.py +++ b/src/accelerapp/hardware/camera/esp32_cam/core.py @@ -1,128 +1,24 @@ """ -Core ESP32-CAM interface implementation. -Provides hardware abstraction for ESP32-CAM devices with support for multiple board variants. -""" - -from enum import Enum -from typing import Dict, Any, Optional, List -from dataclasses import dataclass, field -import json - -class CameraResolution(Enum): - """Supported camera resolutions.""" - QVGA = "320x240" # 320x240 - VGA = "640x480" # 640x480 - SVGA = "800x600" # 800x600 - XGA = "1024x768" # 1024x768 - HD = "1280x720" # 720p SXGA = "1280x1024" # 1280x1024 UXGA = "1600x1200" # 1600x1200 -class CameraModel(Enum): - """Supported camera sensor models.""" - OV2640 = "ov2640" - OV3660 = "ov3660" - OV5640 = "ov5640" -class FrameFormat(Enum): - """Image frame formats.""" - JPEG = "jpeg" - RGB565 = "rgb565" - YUV422 = "yuv422" - GRAYSCALE = "grayscale" - @dataclass class CameraConfig: - """Configuration for ESP32-CAM device.""" - device_id: str - board_type: str = "ai_thinker" # ai_thinker, esp32_cam, esp32_s3_cam, ttgo - camera_model: CameraModel = CameraModel.OV2640 - resolution: CameraResolution = CameraResolution.VGA - frame_format: FrameFormat = FrameFormat.JPEG - jpeg_quality: int = 10 # 0-63, lower is higher quality - brightness: int = 0 # -2 to 2 - contrast: int = 0 # -2 to 2 - saturation: int = 0 # -2 to 2 - vertical_flip: bool = False - horizontal_mirror: bool = False - frame_rate: int = 10 # Frames per second - pin_config: Dict[str, int] = field(default_factory=dict) - - def __post_init__(self): - """Initialize default pin configuration based on board type.""" - if not self.pin_config: - self.pin_config = self._get_default_pins() - - def _get_default_pins(self) -> Dict[str, int]: - """Get default pin configuration for board type.""" - if self.board_type in ["ai_thinker", "esp32_cam"]: - return { - "PWDN": 32, - "RESET": -1, - "XCLK": 0, - "SIOD": 26, - "SIOC": 27, - "Y9": 35, - "Y8": 34, - "Y7": 39, - "Y6": 36, - "Y5": 21, - "Y4": 19, - "Y3": 18, - "Y2": 5, - "VSYNC": 25, - "HREF": 23, - "PCLK": 22, - } - elif self.board_type == "esp32_s3_cam": - return { - "PWDN": -1, - "RESET": -1, - "XCLK": 15, - "SIOD": 4, - "SIOC": 5, - "Y9": 16, - "Y8": 17, - "Y7": 18, - "Y6": 12, - "Y5": 10, - "Y4": 8, - "Y3": 9, - "Y2": 11, - "VSYNC": 6, - "HREF": 7, - "PCLK": 13, - } - return {} + class ESP32Camera: """ - Main interface for ESP32-CAM hardware. - Provides camera control, configuration, and image capture capabilities. - """ - - def __init__(self, config: CameraConfig): """ Initialize ESP32-CAM interface. Args: - config: Camera configuration - """ - self.config = config - self._initialized = False - self._streaming = False - self._capture_count = 0 - self._stats = { - "captures": 0, - "streams": 0, - "errors": 0, - "uptime": 0, - } + def initialize(self) -> bool: """ @@ -131,123 +27,7 @@ def initialize(self) -> bool: Returns: True if initialization successful """ - if self._initialized: - return True - - # In a real implementation, this would initialize the camera hardware - # For now, we simulate successful initialization - self._initialized = True - return True - - def capture_image(self) -> Optional[Dict[str, Any]]: - """ - Capture a single image frame. - - Returns: - Image data dictionary with metadata - """ - if not self._initialized: - if not self.initialize(): - return None - - self._capture_count += 1 - self._stats["captures"] += 1 - - # Return simulated capture metadata - return { - "device_id": self.config.device_id, - "timestamp": "2025-10-15T01:12:23.332Z", - "resolution": self.config.resolution.value, - "format": self.config.frame_format.value, - "size_bytes": 0, # Would contain actual image data - "capture_number": self._capture_count, - } - - def start_streaming(self) -> bool: - """ - Start video streaming. - - Returns: - True if streaming started successfully - """ - if not self._initialized: - if not self.initialize(): - return False - - self._streaming = True - self._stats["streams"] += 1 - return True - - def stop_streaming(self) -> bool: - """ - Stop video streaming. - - Returns: - True if streaming stopped successfully - """ - self._streaming = False - return True - - def is_streaming(self) -> bool: - """Check if camera is currently streaming.""" - return self._streaming - - def set_resolution(self, resolution: CameraResolution) -> bool: - """ - Change camera resolution. - - Args: - resolution: New resolution setting - - Returns: - True if successful - """ - self.config.resolution = resolution - return True - - def set_quality(self, quality: int) -> bool: - """ - Set JPEG quality (0-63, lower is better). - - Args: - quality: Quality setting - - Returns: - True if successful - """ - if 0 <= quality <= 63: - self.config.jpeg_quality = quality - return True - return False - - def set_brightness(self, brightness: int) -> bool: - """ - Set camera brightness (-2 to 2). - - Args: - brightness: Brightness level - - Returns: - True if successful - """ - if -2 <= brightness <= 2: - self.config.brightness = brightness - return True - return False - - def set_flip(self, vertical: bool = False, horizontal: bool = False) -> bool: - """ - Set image flip settings. - - Args: - vertical: Enable vertical flip - horizontal: Enable horizontal mirror - - Returns: - True if successful - """ - self.config.vertical_flip = vertical - self.config.horizontal_mirror = horizontal + return True def get_status(self) -> Dict[str, Any]: @@ -258,60 +38,15 @@ def get_status(self) -> Dict[str, Any]: Status dictionary """ return { - "device_id": self.config.device_id, - "initialized": self._initialized, - "streaming": self._streaming, - "resolution": self.config.resolution.value, - "format": self.config.frame_format.value, - "board_type": self.config.board_type, - "camera_model": self.config.camera_model.value, - "stats": self._stats.copy(), + } def get_config(self) -> Dict[str, Any]: """ - Get current camera configuration. + Returns: Configuration dictionary """ return { - "device_id": self.config.device_id, - "board_type": self.config.board_type, - "camera_model": self.config.camera_model.value, - "resolution": self.config.resolution.value, - "frame_format": self.config.frame_format.value, - "jpeg_quality": self.config.jpeg_quality, - "brightness": self.config.brightness, - "contrast": self.config.contrast, - "saturation": self.config.saturation, - "vertical_flip": self.config.vertical_flip, - "horizontal_mirror": self.config.horizontal_mirror, - "frame_rate": self.config.frame_rate, - } - - def reset(self) -> bool: - """ - Reset camera to default settings. - - Returns: - True if successful - """ - self.config.brightness = 0 - self.config.contrast = 0 - self.config.saturation = 0 - self.config.vertical_flip = False - self.config.horizontal_mirror = False - return True - - def shutdown(self) -> bool: - """ - Shutdown camera and release resources. - - Returns: - True if successful - """ - if self._streaming: - self.stop_streaming() - self._initialized = False - return True + diff --git a/src/accelerapp/hardware/camera/esp32_cam/firmware/README.md b/src/accelerapp/hardware/camera/esp32_cam/firmware/README.md new file mode 100644 index 0000000..9f07012 --- /dev/null +++ b/src/accelerapp/hardware/camera/esp32_cam/firmware/README.md @@ -0,0 +1,223 @@ +# ESP32-CAM Firmware + +This directory contains firmware templates and utilities for ESP32-CAM devices. + +## Firmware Generation + +The ESP32-CAM module can generate firmware code for various features: + +### Camera Configuration + +```python +from accelerapp.hardware.camera.esp32_cam import ESP32Camera + +camera = ESP32Camera() +firmware_code = camera.generate_firmware_config() + +# Save to file +with open("camera_config.cpp", "w") as f: + f.write(firmware_code) +``` + +### Streaming Code + +```python +from accelerapp.hardware.camera.esp32_cam import StreamingManager + +streaming = StreamingManager(camera) +stream_code = streaming.generate_streaming_code() + +# Save files +for filename, code in stream_code.items(): + with open(f"firmware/{filename}", "w") as f: + f.write(code) +``` + +### AI Inference Code + +```python +from accelerapp.hardware.camera.esp32_cam import AIProcessor + +ai = AIProcessor(camera) +ai_code = ai.generate_inference_code() + +for filename, code in ai_code.items(): + with open(f"firmware/{filename}", "w") as f: + f.write(code) +``` + +## PlatformIO Configuration + +Example `platformio.ini` for ESP32-CAM: + +```ini +[env:esp32cam] +platform = espressif32 +board = esp32cam +framework = arduino + +; Upload settings +upload_speed = 921600 +monitor_speed = 115200 + +; Build flags +build_flags = + -DBOARD_HAS_PSRAM + -DCAMERA_MODEL_AI_THINKER + +; Libraries +lib_deps = + esp32-camera + ArduinoJson + AsyncTCP + ESPAsyncWebServer +``` + +## Required Libraries + +For Arduino ESP32: +- esp32-camera +- ArduinoJson +- AsyncTCP +- ESPAsyncWebServer + +For TinyML: +- TensorFlowLite_ESP32 (for TFLite inference) +- ESP-NN (for optimized neural network operations) + +For Streaming: +- Micro-RTSP (for RTSP streaming) +- AsyncTCP (for WebSocket/WebRTC) + +## OTA Updates + +The firmware supports OTA (Over-The-Air) updates. Configure in your code: + +```cpp +#include + +void setup() { + // ... camera setup ... + + ArduinoOTA.setHostname("esp32cam"); + ArduinoOTA.begin(); +} + +void loop() { + ArduinoOTA.handle(); + // ... your code ... +} +``` + +## Memory Considerations + +ESP32-CAM typically has: +- 4MB Flash +- 520KB SRAM +- 4MB PSRAM (if available) + +Optimize memory usage: +- Use PSRAM for frame buffers +- Reduce frame buffer count +- Use lower frame sizes for AI processing +- Enable quantization for TinyML models + +## Pin Mappings + +### AI-Thinker ESP32-CAM + +``` +Camera Pins: + PWDN GPIO32 + RESET -1 (software reset) + XCLK GPIO0 + SIOD GPIO26 (SDA) + SIOC GPIO27 (SCL) + + D7 GPIO35 + D6 GPIO34 + D5 GPIO39 + D4 GPIO36 + D3 GPIO21 + D2 GPIO19 + D1 GPIO18 + D0 GPIO5 + VSYNC GPIO25 + HREF GPIO23 + PCLK GPIO22 + +Flash: + GPIO4 (built-in LED/Flash) + +SD Card: + Not available on AI-Thinker +``` + +### WROVER-KIT + +``` +Camera Pins: + PWDN -1 + RESET -1 + XCLK GPIO21 + SIOD GPIO26 (SDA) + SIOC GPIO27 (SCL) + + D7 GPIO35 + D6 GPIO34 + D5 GPIO39 + D4 GPIO36 + D3 GPIO19 + D2 GPIO18 + D1 GPIO5 + D0 GPIO4 + VSYNC GPIO25 + HREF GPIO23 + PCLK GPIO22 +``` + +## Troubleshooting + +### Camera initialization fails +- Check power supply (5V, 500mA minimum) +- Verify pin connections +- Test with basic example first + +### Out of memory errors +- Reduce frame size +- Use single frame buffer +- Enable PSRAM + +### Streaming issues +- Check WiFi signal strength +- Reduce frame rate +- Lower JPEG quality + +### AI inference slow +- Use INT8 quantization +- Reduce input size +- Enable ESP-NN optimizations + +## Example Firmware + +See generated code examples: +- `examples/esp32_cam_demo.py` - Generates complete firmware +- `docs/ESP32_CAM_GUIDE.md` - Firmware generation guide + +## Building + +Using Arduino IDE: +1. Install ESP32 board support +2. Select "AI Thinker ESP32-CAM" board +3. Add required libraries +4. Upload generated code + +Using PlatformIO: +```bash +pio run -t upload +pio device monitor +``` + +## License + +MIT License - See main LICENSE file diff --git a/src/accelerapp/hardware/camera/esp32_cam/models/README.md b/src/accelerapp/hardware/camera/esp32_cam/models/README.md new file mode 100644 index 0000000..fdad94c --- /dev/null +++ b/src/accelerapp/hardware/camera/esp32_cam/models/README.md @@ -0,0 +1,297 @@ +# TinyML Models for ESP32-CAM + +This directory contains TinyML model files and utilities for ESP32-CAM AI processing. + +## Supported Models + +### Person Detection +- **Purpose**: Detect humans in camera frame +- **Input**: 96x96 grayscale image +- **Output**: Person detected (yes/no) with confidence +- **Model Size**: ~250KB +- **Inference Time**: ~100-200ms on ESP32 + +### Face Detection +- **Purpose**: Detect faces in camera frame +- **Input**: 96x96 grayscale image +- **Output**: Face bounding boxes and confidence +- **Model Size**: ~300KB +- **Inference Time**: ~150-250ms on ESP32 + +### Face Recognition +- **Purpose**: Identify specific individuals +- **Input**: 112x112 RGB image +- **Output**: Person ID and confidence +- **Model Size**: ~500KB +- **Inference Time**: ~200-300ms on ESP32 + +### Object Detection +- **Purpose**: Detect multiple object classes +- **Input**: 96x96 RGB image +- **Output**: Object classes, bounding boxes, confidence +- **Model Size**: ~400KB +- **Inference Time**: ~250-400ms on ESP32 + +### Gesture Recognition +- **Purpose**: Recognize hand gestures +- **Input**: 64x64 grayscale image +- **Output**: Gesture class and confidence +- **Model Size**: ~200KB +- **Inference Time**: ~80-150ms on ESP32 + +## Model Format + +All models should be in TensorFlow Lite format (`.tflite`): + +``` +models/ +├── person_detection.tflite +├── face_detection.tflite +├── face_recognition.tflite +├── object_detection.tflite +├── gesture_recognition.tflite +└── custom/ + └── your_model.tflite +``` + +## Converting Models + +### From TensorFlow/Keras + +```python +import tensorflow as tf + +# Load your model +model = tf.keras.models.load_model('your_model.h5') + +# Convert to TFLite +converter = tf.lite.TFLiteConverter.from_keras_model(model) + +# Enable quantization for ESP32 +converter.optimizations = [tf.lite.Optimize.DEFAULT] +converter.target_spec.supported_types = [tf.int8] + +# Convert +tflite_model = converter.convert() + +# Save +with open('model.tflite', 'wb') as f: + f.write(tflite_model) +``` + +### Using Accelerapp TinyML Agent + +```python +from accelerapp.agents import TinyMLAgent + +agent = TinyMLAgent() + +spec = { + "task_type": "model_conversion", + "platform": "esp32", + "model_path": "your_model.h5", + "optimization_level": "aggressive", +} + +result = agent.generate(spec) +print(f"Converted model saved to: {result['output_path']}") +``` + +## Model Optimization + +### Quantization + +Reduce model size and improve inference speed: + +```python +# INT8 quantization +converter.optimizations = [tf.lite.Optimize.DEFAULT] +converter.target_spec.supported_types = [tf.int8] + +# Provide representative dataset +def representative_dataset(): + for _ in range(100): + yield [np.random.rand(1, 96, 96, 1).astype(np.float32)] + +converter.representative_dataset = representative_dataset +``` + +### Pruning + +Remove unnecessary weights: + +```python +import tensorflow_model_optimization as tfmot + +# Apply pruning +pruning_params = { + 'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay( + initial_sparsity=0.0, + final_sparsity=0.5, + begin_step=0, + end_step=1000 + ) +} + +model = tfmot.sparsity.keras.prune_low_magnitude(model, **pruning_params) +``` + +## Loading Models + +### In Python (Accelerapp) + +```python +from accelerapp.hardware.camera.esp32_cam import AIProcessor, ModelConfig + +config = ModelConfig( + model_path="models/person_detection.tflite", + confidence_threshold=0.7, +) + +ai = AIProcessor(camera, config) +ai.load_model() +``` + +### In ESP32 Firmware + +```cpp +#include "tensorflow/lite/micro/micro_interpreter.h" +#include "model_data.h" // Generated from .tflite file + +// Allocate memory +constexpr int kTensorArenaSize = 40000; +uint8_t tensor_arena[kTensorArenaSize]; + +// Load model +const tflite::Model* model = tflite::GetModel(model_data); + +// Create interpreter +tflite::MicroInterpreter interpreter( + model, resolver, tensor_arena, kTensorArenaSize +); + +// Allocate tensors +interpreter.AllocateTensors(); +``` + +## Model Performance + +### Memory Requirements + +| Model | Flash | RAM | PSRAM | +|-------|-------|-----|-------| +| Person Detection | 250KB | 40KB | Optional | +| Face Detection | 300KB | 50KB | Optional | +| Face Recognition | 500KB | 80KB | Recommended | +| Object Detection | 400KB | 60KB | Recommended | +| Gesture Recognition | 200KB | 35KB | Optional | + +### Inference Speed + +| Model | ESP32 (240MHz) | ESP32-S3 (240MHz) | +|-------|----------------|-------------------| +| Person Detection | 150ms | 80ms | +| Face Detection | 200ms | 100ms | +| Face Recognition | 250ms | 130ms | +| Object Detection | 300ms | 150ms | +| Gesture Recognition | 120ms | 60ms | + +## Pre-trained Models + +Download pre-trained models: + +```bash +# Person detection +wget https://example.com/models/person_detection_int8.tflite + +# Face detection +wget https://example.com/models/face_detection_int8.tflite +``` + +Or use models from: +- [TensorFlow Model Garden](https://github.com/tensorflow/models) +- [Edge Impulse](https://www.edgeimpulse.com/) +- [EloquentTinyML](https://github.com/eloquentarduino/EloquentTinyML) + +## Custom Models + +### Training Your Own Model + +1. Collect training data +2. Train using TensorFlow/Keras +3. Convert to TFLite with quantization +4. Test on ESP32 +5. Deploy via Accelerapp + +Example: + +```python +# Train model +model = create_model() +model.fit(train_data, train_labels, epochs=10) + +# Convert to TFLite +converter = tf.lite.TFLiteConverter.from_keras_model(model) +converter.optimizations = [tf.lite.Optimize.DEFAULT] +tflite_model = converter.convert() + +# Test with Accelerapp +from accelerapp.hardware.camera.esp32_cam import AIProcessor, ModelConfig + +config = ModelConfig( + model_type=DetectionModel.CUSTOM, + model_data=tflite_model, +) + +ai = AIProcessor(camera, config) +ai.load_model() +detections = ai.detect() +``` + +## Model Testing + +Test models before deployment: + +```python +from accelerapp.hardware.camera.esp32_cam import AIProcessor + +ai = AIProcessor(camera) +ai.load_model("models/your_model.tflite") + +# Run inference +detections = ai.detect() + +# Check performance +stats = ai.get_statistics() +print(f"Inference count: {stats['inference_count']}") +print(f"Detections: {stats['total_detections']}") +``` + +## Integration with TinyML Agent + +Generate optimized code: + +```python +from accelerapp.agents import TinyMLAgent + +agent = TinyMLAgent() +ai_processor = AIProcessor(camera) + +# Get integration spec +spec = ai_processor.integrate_with_tinyml_agent() + +# Generate optimized code +result = agent.generate(spec) +``` + +## Resources + +- [TensorFlow Lite Micro](https://www.tensorflow.org/lite/microcontrollers) +- [ESP-NN Library](https://github.com/espressif/esp-nn) +- [EloquentTinyML](https://github.com/eloquentarduino/EloquentTinyML) +- [Edge Impulse](https://www.edgeimpulse.com/) +- [TensorFlow Model Garden](https://github.com/tensorflow/models) + +## License + +Models may have different licenses. Check individual model licenses before use. diff --git a/src/accelerapp/hardware/camera/esp32_cam/motion_detection.py b/src/accelerapp/hardware/camera/esp32_cam/motion_detection.py index 007853d..578394d 100644 --- a/src/accelerapp/hardware/camera/esp32_cam/motion_detection.py +++ b/src/accelerapp/hardware/camera/esp32_cam/motion_detection.py @@ -1,221 +1,18 @@ """ -Motion detection implementation for ESP32-CAM. -Provides motion detection, event triggering, and recording capabilities. -""" - -from enum import Enum -from typing import Dict, Any, Optional, List, Callable -from dataclasses import dataclass -import threading - -class MotionSensitivity(Enum): - """Motion detection sensitivity levels.""" - LOW = "low" - MEDIUM = "medium" - HIGH = "high" - VERY_HIGH = "very_high" @dataclass class MotionEvent: - """Motion detection event data.""" - event_id: str - timestamp: str - device_id: str - confidence: float # 0.0 to 1.0 - area_percentage: float # Percentage of frame with motion - duration_ms: int - frame_count: int - metadata: Dict[str, Any] + class MotionDetector: """ - Motion detection system for ESP32-CAM. - Detects motion in video stream and triggers events. - """ - - def __init__(self, camera, sensitivity: MotionSensitivity = MotionSensitivity.MEDIUM): + """ Initialize motion detector. Args: camera: ESP32Camera instance - sensitivity: Detection sensitivity level - """ - self.camera = camera - self.sensitivity = sensitivity - self._enabled = False - self._recording = False - self._event_count = 0 - self._callbacks: List[Callable] = [] - self._lock = threading.Lock() - - # Detection parameters - self._threshold = self._get_threshold(sensitivity) - self._min_area = 0.05 # 5% of frame - self._cooldown_ms = 1000 - - def _get_threshold(self, sensitivity: MotionSensitivity) -> float: - """Get detection threshold based on sensitivity.""" - thresholds = { - MotionSensitivity.LOW: 0.8, - MotionSensitivity.MEDIUM: 0.6, - MotionSensitivity.HIGH: 0.4, - MotionSensitivity.VERY_HIGH: 0.2, - } - return thresholds.get(sensitivity, 0.6) - - def enable(self) -> bool: - """ - Enable motion detection. - - Returns: - True if enabled successfully - """ - if not self.camera._initialized: - if not self.camera.initialize(): - return False - - self._enabled = True - return True - - def disable(self) -> bool: - """ - Disable motion detection. - - Returns: - True if disabled successfully - """ - self._enabled = False - return True - - def is_enabled(self) -> bool: - """Check if motion detection is enabled.""" - return self._enabled - - def set_sensitivity(self, sensitivity: MotionSensitivity) -> bool: - """ - Change detection sensitivity. - - Args: - sensitivity: New sensitivity level - - Returns: - True if successful - """ - self.sensitivity = sensitivity - self._threshold = self._get_threshold(sensitivity) - return True - - def register_callback(self, callback: Callable[[MotionEvent], None]) -> None: - """ - Register callback for motion events. - - Args: - callback: Function to call on motion detection - """ - with self._lock: - self._callbacks.append(callback) - - def unregister_callback(self, callback: Callable) -> bool: - """ - Unregister motion event callback. - - Args: - callback: Callback function to remove - - Returns: - True if callback was removed - """ - with self._lock: - if callback in self._callbacks: - self._callbacks.remove(callback) - return True - return False - - def start_recording_on_motion(self) -> bool: - """ - Enable automatic recording when motion is detected. - - Returns: - True if enabled successfully - """ - self._recording = True - return True - - def stop_recording_on_motion(self) -> bool: - """ - Disable automatic recording on motion. - - Returns: - True if disabled successfully - """ - self._recording = False - return True - - def _trigger_event(self, event: MotionEvent) -> None: - """Trigger motion event callbacks.""" - with self._lock: - for callback in self._callbacks: - try: - callback(event) - except Exception: - pass # Ignore callback errors - - def get_status(self) -> Dict[str, Any]: - """ - Get motion detector status. - - Returns: - Status dictionary - """ - return { - "enabled": self._enabled, - "recording_on_motion": self._recording, - "sensitivity": self.sensitivity.value, - "threshold": self._threshold, - "min_area": self._min_area, - "event_count": self._event_count, - "callbacks_registered": len(self._callbacks), - } - - def get_config(self) -> Dict[str, Any]: - """ - Get motion detection configuration. - - Returns: - Configuration dictionary - """ - return { - "sensitivity": self.sensitivity.value, - "threshold": self._threshold, - "min_area_percentage": self._min_area * 100, - "cooldown_ms": self._cooldown_ms, - } - - def set_config(self, config: Dict[str, Any]) -> bool: - """ - Update motion detection configuration. - - Args: - config: Configuration dictionary - - Returns: - True if successful - """ - if "sensitivity" in config: - try: - sensitivity = MotionSensitivity(config["sensitivity"]) - self.set_sensitivity(sensitivity) - except ValueError: - return False - - if "min_area_percentage" in config: - self._min_area = config["min_area_percentage"] / 100.0 - - if "cooldown_ms" in config: - self._cooldown_ms = config["cooldown_ms"] - - return True + diff --git a/src/accelerapp/hardware/camera/esp32_cam/remote_access.py b/src/accelerapp/hardware/camera/esp32_cam/remote_access.py new file mode 100644 index 0000000..d91398d --- /dev/null +++ b/src/accelerapp/hardware/camera/esp32_cam/remote_access.py @@ -0,0 +1,380 @@ +""" +Remote access capabilities for ESP32-CAM. +Provides secure remote camera access with WebRTC and cloud tunneling. +""" + +from typing import Dict, Any, Optional, List +from enum import Enum +from dataclasses import dataclass, field +from datetime import datetime +import logging + +logger = logging.getLogger(__name__) + + +class TunnelType(Enum): + """Cloud tunnel types.""" + NGROK = "ngrok" + CLOUDFLARE = "cloudflare" + CUSTOM = "custom" + NONE = "none" + + +class AuthMethod(Enum): + """Authentication methods.""" + NONE = "none" + BASIC = "basic" + TOKEN = "token" + OAUTH = "oauth" + CERTIFICATE = "certificate" + + +@dataclass +class AuthConfig: + """Authentication configuration.""" + method: AuthMethod = AuthMethod.TOKEN + + # Basic auth + username: Optional[str] = None + password: Optional[str] = None + + # Token auth + access_token: Optional[str] = None + token_expiry_hours: int = 24 + + # OAuth + oauth_provider: Optional[str] = None + oauth_client_id: Optional[str] = None + oauth_client_secret: Optional[str] = None + + # Certificate + cert_path: Optional[str] = None + key_path: Optional[str] = None + + # Access control + allowed_ips: List[str] = field(default_factory=list) + rate_limit_per_minute: int = 60 + + +@dataclass +class TunnelConfig: + """Cloud tunnel configuration.""" + tunnel_type: TunnelType = TunnelType.NONE + + # Ngrok specific + ngrok_auth_token: Optional[str] = None + ngrok_region: str = "us" + + # Cloudflare specific + cloudflare_token: Optional[str] = None + cloudflare_tunnel_id: Optional[str] = None + + # Custom tunnel + custom_endpoint: Optional[str] = None + custom_port: int = 8080 + + # Connection settings + enable_tls: bool = True + heartbeat_interval: int = 30 + reconnect_attempts: int = 5 + + +class RemoteAccess: + """ + Remote access manager for ESP32-CAM. + Provides secure remote connectivity with authentication. + """ + + def __init__( + self, + camera, + auth_config: Optional[AuthConfig] = None, + tunnel_config: Optional[TunnelConfig] = None, + ): + """ + Initialize remote access. + + Args: + camera: ESP32Camera instance + auth_config: Authentication configuration + tunnel_config: Tunnel configuration + """ + self.camera = camera + self.auth_config = auth_config or AuthConfig() + self.tunnel_config = tunnel_config or TunnelConfig() + + self.tunnel_active = False + self.public_url = None + self.active_sessions = [] + self.access_log = [] + + logger.info("RemoteAccess initialized") + + def start_tunnel(self) -> Dict[str, Any]: + """ + Start cloud tunnel for remote access. + + Returns: + Tunnel information including public URL + """ + try: + if self.tunnel_config.tunnel_type == TunnelType.NONE: + logger.info("No tunnel configured") + return { + "status": "disabled", + "message": "Tunnel not configured", + } + + logger.info(f"Starting {self.tunnel_config.tunnel_type.value} tunnel...") + + # In production, this would start actual tunnel service + self.public_url = self._generate_public_url() + self.tunnel_active = True + + logger.info(f"Tunnel active at: {self.public_url}") + + return { + "status": "active", + "tunnel_type": self.tunnel_config.tunnel_type.value, + "public_url": self.public_url, + "secure": self.tunnel_config.enable_tls, + } + + except Exception as e: + logger.error(f"Failed to start tunnel: {e}") + return { + "status": "error", + "message": str(e), + } + + def stop_tunnel(self) -> bool: + """Stop cloud tunnel.""" + if self.tunnel_active: + logger.info("Stopping tunnel...") + self.tunnel_active = False + self.public_url = None + return True + + return False + + def _generate_public_url(self) -> str: + """Generate public URL based on tunnel type.""" + if self.tunnel_config.tunnel_type == TunnelType.NGROK: + return f"https://random-id.ngrok.io" + elif self.tunnel_config.tunnel_type == TunnelType.CLOUDFLARE: + return f"https://random-id.trycloudflare.com" + elif self.tunnel_config.tunnel_type == TunnelType.CUSTOM: + return self.tunnel_config.custom_endpoint or "https://custom-tunnel.example.com" + + return "http://localhost:8080" + + def authenticate(self, credentials: Dict[str, Any]) -> Dict[str, Any]: + """ + Authenticate access request. + + Args: + credentials: Authentication credentials + + Returns: + Authentication result + """ + try: + if self.auth_config.method == AuthMethod.NONE: + return { + "authenticated": True, + "method": "none", + } + + elif self.auth_config.method == AuthMethod.BASIC: + username = credentials.get("username") + password = credentials.get("password") + + if username == self.auth_config.username and password == self.auth_config.password: + return { + "authenticated": True, + "method": "basic", + "username": username, + } + + elif self.auth_config.method == AuthMethod.TOKEN: + token = credentials.get("token") + + if token == self.auth_config.access_token: + return { + "authenticated": True, + "method": "token", + } + + # Authentication failed + self._log_access_attempt(credentials, False) + + return { + "authenticated": False, + "message": "Invalid credentials", + } + + except Exception as e: + logger.error(f"Authentication error: {e}") + return { + "authenticated": False, + "message": "Authentication error", + } + + def create_session(self, user_id: str, ip_address: str) -> Dict[str, Any]: + """ + Create authenticated session. + + Args: + user_id: User identifier + ip_address: Client IP address + + Returns: + Session information + """ + # Check IP whitelist + if self.auth_config.allowed_ips and ip_address not in self.auth_config.allowed_ips: + logger.warning(f"IP not allowed: {ip_address}") + return { + "status": "denied", + "message": "IP not allowed", + } + + session = { + "session_id": f"sess_{len(self.active_sessions)}", + "user_id": user_id, + "ip_address": ip_address, + "created_at": datetime.now().isoformat(), + "last_activity": datetime.now().isoformat(), + } + + self.active_sessions.append(session) + self._log_access_attempt({"user_id": user_id, "ip": ip_address}, True) + + logger.info(f"Session created: {session['session_id']}") + + return { + "status": "success", + "session": session, + } + + def end_session(self, session_id: str) -> bool: + """End an active session.""" + for i, session in enumerate(self.active_sessions): + if session["session_id"] == session_id: + self.active_sessions.pop(i) + logger.info(f"Session ended: {session_id}") + return True + + return False + + def _log_access_attempt(self, credentials: Dict[str, Any], success: bool): + """Log access attempt.""" + log_entry = { + "timestamp": datetime.now().isoformat(), + "success": success, + "credentials": {k: v for k, v in credentials.items() if k != "password"}, + } + + self.access_log.append(log_entry) + + # Keep only last 1000 entries + if len(self.access_log) > 1000: + self.access_log = self.access_log[-1000:] + + def get_status(self) -> Dict[str, Any]: + """Get remote access status.""" + return { + "tunnel_active": self.tunnel_active, + "public_url": self.public_url, + "active_sessions": len(self.active_sessions), + "auth_method": self.auth_config.method.value, + "tunnel_type": self.tunnel_config.tunnel_type.value, + } + + def get_access_log(self, limit: int = 50) -> List[Dict[str, Any]]: + """Get recent access log entries.""" + return self.access_log[-limit:] if self.access_log else [] + + def generate_remote_access_code(self) -> Dict[str, str]: + """Generate firmware code for remote access.""" + header = """ +// Remote Access for ESP32-CAM +#ifndef REMOTE_ACCESS_H +#define REMOTE_ACCESS_H + +#include + +enum AuthMethod { + AUTH_NONE, + AUTH_BASIC, + AUTH_TOKEN +}; + +class RemoteAccess { +public: + bool init(AuthMethod method); + bool startTunnel(); + bool authenticate(const char* credentials); + void handleClient(WiFiClient& client); + +private: + AuthMethod authMethod; + bool tunnelActive; + char publicUrl[128]; +}; + +#endif +""" + + implementation = f""" +// Remote Access Implementation +#include "remote_access.h" + +bool RemoteAccess::init(AuthMethod method) {{ + authMethod = method; + tunnelActive = false; + return true; +}} + +bool RemoteAccess::startTunnel() {{ + // Initialize tunnel service + // This would integrate with ngrok, cloudflare, or custom tunnel + + tunnelActive = true; + strcpy(publicUrl, "https://tunnel.example.com"); + + return true; +}} + +bool RemoteAccess::authenticate(const char* credentials) {{ + if (authMethod == AUTH_NONE) {{ + return true; + }} + + // Validate credentials based on auth method + // For TOKEN: check against stored token + // For BASIC: parse and validate username/password + + return false; +}} + +void RemoteAccess::handleClient(WiFiClient& client) {{ + // Handle authenticated client requests + // Forward to camera stream or control endpoints +}} +""" + + return { + "remote_access.h": header, + "remote_access.cpp": implementation, + } + + def generate_webrtc_config(self) -> Dict[str, Any]: + """Generate WebRTC configuration.""" + return { + "ice_servers": self.tunnel_config.cloudflare_token or [ + {"urls": "stun:stun.l.google.com:19302"}, + ], + "enable_tls": self.tunnel_config.enable_tls, + "signaling_url": f"{self.public_url}/ws" if self.public_url else None, + } diff --git a/src/accelerapp/hardware/camera/esp32_cam/streaming.py b/src/accelerapp/hardware/camera/esp32_cam/streaming.py index 8182177..9fdbdc3 100644 --- a/src/accelerapp/hardware/camera/esp32_cam/streaming.py +++ b/src/accelerapp/hardware/camera/esp32_cam/streaming.py @@ -1,230 +1,19 @@ """ -Streaming server implementation for ESP32-CAM. -Supports multiple streaming protocols (MJPEG, RTSP, WebRTC). -""" - -from enum import Enum -from typing import Dict, Any, Optional, List, Callable -from dataclasses import dataclass -import threading - -class StreamProtocol(Enum): """Supported streaming protocols.""" MJPEG = "mjpeg" RTSP = "rtsp" WEBRTC = "webrtc" - WEBSOCKET = "websocket" + @dataclass class StreamConfig: - """Configuration for streaming server.""" - protocol: StreamProtocol - port: int - max_clients: int = 5 - buffer_size: int = 1024 * 1024 # 1MB - enable_audio: bool = False - bitrate: int = 500000 # 500 kbps - - def __post_init__(self): - """Set default port based on protocol.""" - if self.port == 0: - if self.protocol == StreamProtocol.MJPEG: - self.port = 81 - elif self.protocol == StreamProtocol.RTSP: - self.port = 8554 - elif self.protocol == StreamProtocol.WEBRTC: - self.port = 8443 - elif self.protocol == StreamProtocol.WEBSOCKET: - self.port = 8765 - -class StreamingServer: - """ - Multi-protocol streaming server for ESP32-CAM. - Manages video streaming to multiple clients. - """ - - def __init__(self, camera, config: StreamConfig): - """ - Initialize streaming server. Args: camera: ESP32Camera instance config: Streaming configuration """ self.camera = camera - self.config = config - self._active = False - self._clients: List[Dict[str, Any]] = [] - self._lock = threading.Lock() - self._frame_callbacks: List[Callable] = [] - - def start(self) -> bool: - """ - Start streaming server. - - Returns: - True if server started successfully - """ - if self._active: - return True - - if not self.camera.is_streaming(): - if not self.camera.start_streaming(): - return False - - self._active = True - return True - - def stop(self) -> bool: - """ - Stop streaming server. - - Returns: - True if server stopped successfully - """ - with self._lock: - self._active = False - self._clients.clear() - return True - - def add_client(self, client_id: str, client_info: Dict[str, Any]) -> bool: - """ - Add a streaming client. - - Args: - client_id: Unique client identifier - client_info: Client connection information - - Returns: - True if client added successfully - """ - with self._lock: - if len(self._clients) >= self.config.max_clients: - return False - - self._clients.append({ - "id": client_id, - "info": client_info, - "connected_at": "2025-10-15T01:12:23.332Z", - "frames_sent": 0, - }) - return True - - def remove_client(self, client_id: str) -> bool: - """ - Remove a streaming client. - - Args: - client_id: Client identifier to remove - - Returns: - True if client removed successfully - """ - with self._lock: - self._clients = [c for c in self._clients if c["id"] != client_id] - return True - - def get_client_count(self) -> int: - """Get number of connected clients.""" - return len(self._clients) - - def get_clients(self) -> List[Dict[str, Any]]: - """Get list of connected clients.""" - with self._lock: - return self._clients.copy() - - def register_frame_callback(self, callback: Callable) -> None: - """ - Register callback for new frames. - - Args: - callback: Function to call on new frame - """ - self._frame_callbacks.append(callback) - - def get_stream_url(self) -> str: - """ - Get streaming URL. - - Returns: - Stream URL string - """ - protocol_map = { - StreamProtocol.MJPEG: "http", - StreamProtocol.RTSP: "rtsp", - StreamProtocol.WEBRTC: "https", - StreamProtocol.WEBSOCKET: "ws", - } - - protocol_prefix = protocol_map.get(self.config.protocol, "http") - return f"{protocol_prefix}://localhost:{self.config.port}/stream" - - def get_status(self) -> Dict[str, Any]: - """ - Get streaming server status. - - Returns: - Status dictionary - """ - return { - "active": self._active, - "protocol": self.config.protocol.value, - "port": self.config.port, - "clients": len(self._clients), - "max_clients": self.config.max_clients, - "stream_url": self.get_stream_url(), - "bitrate": self.config.bitrate, - } - - -class MJPEGStreamer: - """MJPEG streaming implementation.""" - - def __init__(self, camera): - """Initialize MJPEG streamer.""" - self.camera = camera - self._running = False - - def start(self, port: int = 81) -> bool: - """Start MJPEG streaming on specified port.""" - self._running = True - return True - - def stop(self) -> bool: - """Stop MJPEG streaming.""" - self._running = False - return True - - def is_running(self) -> bool: - """Check if streamer is running.""" - return self._running - -class RTSPServer: - """RTSP streaming server implementation.""" - - def __init__(self, camera): - """Initialize RTSP server.""" - self.camera = camera - self._running = False - - def start(self, port: int = 8554) -> bool: - """Start RTSP server on specified port.""" - self._running = True - return True - - def stop(self) -> bool: - """Stop RTSP server.""" - self._running = False - return True - - def is_running(self) -> bool: - """Check if server is running.""" - return self._running - - def get_stream_url(self) -> str: - """Get RTSP stream URL.""" - return f"rtsp://localhost:8554/stream" diff --git a/src/accelerapp/hardware/camera/esp32_cam/web_interface.py b/src/accelerapp/hardware/camera/esp32_cam/web_interface.py index 2482646..ca73405 100644 --- a/src/accelerapp/hardware/camera/esp32_cam/web_interface.py +++ b/src/accelerapp/hardware/camera/esp32_cam/web_interface.py @@ -1,177 +1,9 @@ """ -Web interface for ESP32-CAM management. -Provides RESTful API and web-based control panel. -""" - -from typing import Dict, Any, Optional -import json - -class CameraWebInterface: - """ - Web-based interface for camera control and monitoring. - Provides REST API endpoints for camera management. - """ - - def __init__(self, camera, port: int = 80): """ Initialize web interface. Args: camera: ESP32Camera instance - port: HTTP server port - """ - self.camera = camera - self.port = port - self._running = False - self._endpoints = self._register_endpoints() - - def _register_endpoints(self) -> Dict[str, Any]: - """Register API endpoints.""" - return { - "/api/status": self.get_status_handler, - "/api/config": self.get_config_handler, - "/api/capture": self.capture_handler, - "/api/stream/start": self.start_stream_handler, - "/api/stream/stop": self.stop_stream_handler, - "/api/settings": self.settings_handler, - } - - def start(self) -> bool: - """ - Start web interface server. - - Returns: - True if server started successfully - """ - self._running = True - return True - - def stop(self) -> bool: - """ - Stop web interface server. - - Returns: - True if server stopped successfully - """ - self._running = False - return True - - def is_running(self) -> bool: - """Check if web interface is running.""" - return self._running - - def get_status_handler(self) -> Dict[str, Any]: - """Handle GET /api/status request.""" - return { - "status": "success", - "data": self.camera.get_status(), - } - - def get_config_handler(self) -> Dict[str, Any]: - """Handle GET /api/config request.""" - return { - "status": "success", - "data": self.camera.get_config(), - } - - def capture_handler(self) -> Dict[str, Any]: - """Handle POST /api/capture request.""" - result = self.camera.capture_image() - if result: - return { - "status": "success", - "data": result, - } - return { - "status": "error", - "message": "Failed to capture image", - } - - def start_stream_handler(self) -> Dict[str, Any]: - """Handle POST /api/stream/start request.""" - if self.camera.start_streaming(): - return { - "status": "success", - "message": "Streaming started", - } - return { - "status": "error", - "message": "Failed to start streaming", - } - - def stop_stream_handler(self) -> Dict[str, Any]: - """Handle POST /api/stream/stop request.""" - if self.camera.stop_streaming(): - return { - "status": "success", - "message": "Streaming stopped", - } - return { - "status": "error", - "message": "Failed to stop streaming", - } - - def settings_handler(self, settings: Optional[Dict[str, Any]] = None) -> Dict[str, Any]: - """ - Handle GET/POST /api/settings request. - - Args: - settings: Optional settings to update (for POST) - - Returns: - Response dictionary - """ - if settings: - # Update settings - if "resolution" in settings: - from .core import CameraResolution - try: - res = CameraResolution(settings["resolution"]) - self.camera.set_resolution(res) - except ValueError: - return { - "status": "error", - "message": "Invalid resolution", - } - - if "quality" in settings: - self.camera.set_quality(settings["quality"]) - - if "brightness" in settings: - self.camera.set_brightness(settings["brightness"]) - - return { - "status": "success", - "message": "Settings updated", - "data": self.camera.get_config(), - } - - # Return current settings - return { - "status": "success", - "data": self.camera.get_config(), - } - - def get_interface_url(self) -> str: - """ - Get web interface URL. - - Returns: - Interface URL string - """ - return f"http://localhost:{self.port}" - - def get_api_info(self) -> Dict[str, Any]: - """ - Get API endpoint information. - - Returns: - API info dictionary - """ - return { - "base_url": self.get_interface_url(), - "endpoints": list(self._endpoints.keys()), - "version": "1.0.0", - "running": self._running, + } diff --git a/tests/test_esp32_cam.py b/tests/test_esp32_cam.py new file mode 100644 index 0000000..4e5ced3 --- /dev/null +++ b/tests/test_esp32_cam.py @@ -0,0 +1,598 @@ +""" +Tests for ESP32-CAM hardware support. +""" + +import pytest +from accelerapp.hardware.camera.esp32_cam import ( + ESP32Camera, + CameraVariant, + CameraConfig, + CameraSensor, + FrameSize, + PixelFormat, + StreamingManager, + StreamingProtocol, + StreamConfig, + AIProcessor, + DetectionModel, + ModelConfig, + MotionDetector, + MotionConfig, + QRScanner, + RemoteAccess, + AuthConfig, + TunnelConfig, + WebInterface, + APIConfig, +) + + +# Core Camera Tests + + +def test_esp32_camera_import(): + """Test that ESP32Camera can be imported.""" + assert ESP32Camera is not None + assert CameraVariant is not None + assert CameraConfig is not None + + +def test_esp32_camera_initialization(): + """Test ESP32Camera initialization with default config.""" + camera = ESP32Camera() + + assert camera is not None + assert camera.initialized is False + assert camera.config.variant == CameraVariant.AI_THINKER + assert camera.config.sensor == CameraSensor.OV2640 + assert camera.frame_count == 0 + + +def test_esp32_camera_variants(): + """Test different camera variants.""" + variants = [ + CameraVariant.AI_THINKER, + CameraVariant.WROVER_KIT, + CameraVariant.ESP_EYE, + CameraVariant.M5STACK_CAMERA, + ] + + for variant in variants: + config = CameraConfig(variant=variant) + camera = ESP32Camera(config) + assert camera.config.variant == variant + + +def test_camera_initialization(): + """Test camera hardware initialization.""" + camera = ESP32Camera() + result = camera.initialize() + + assert result is True + assert camera.initialized is True + + +def test_camera_config_validation(): + """Test camera configuration validation.""" + camera = ESP32Camera() + + # Valid config + assert camera._validate_config() is True + + # Invalid JPEG quality + camera.config.jpeg_quality = 100 + assert camera._validate_config() is False + + # Reset to valid + camera.config.jpeg_quality = 12 + assert camera._validate_config() is True + + +def test_camera_capture_frame(): + """Test frame capture.""" + camera = ESP32Camera() + camera.initialize() + + frame = camera.capture_frame() + assert frame is not None + assert camera.frame_count == 1 + + +def test_camera_set_quality(): + """Test setting JPEG quality.""" + camera = ESP32Camera() + + # Valid quality + assert camera.set_quality(10) is True + assert camera.config.jpeg_quality == 10 + + # Invalid quality + assert camera.set_quality(100) is False + + +def test_camera_set_brightness(): + """Test setting brightness.""" + camera = ESP32Camera() + + assert camera.set_brightness(1) is True + assert camera.config.brightness == 1 + + assert camera.set_brightness(-2) is True + assert camera.config.brightness == -2 + + assert camera.set_brightness(5) is False + + +def test_camera_set_flip(): + """Test setting flip settings.""" + camera = ESP32Camera() + + assert camera.set_flip(horizontal=True, vertical=False) is True + assert camera.config.horizontal_flip is True + assert camera.config.vertical_flip is False + + +def test_camera_get_status(): + """Test getting camera status.""" + camera = ESP32Camera() + camera.initialize() + + status = camera.get_status() + + assert "initialized" in status + assert status["initialized"] is True + assert "variant" in status + assert "frame_count" in status + + +def test_camera_firmware_generation(): + """Test firmware configuration generation.""" + camera = ESP32Camera() + code = camera.generate_firmware_config() + + assert code is not None + assert "camera_config_t" in code + assert "esp_camera_init" in code + assert "#include " in code + + +# Streaming Tests + + +def test_streaming_manager_initialization(): + """Test streaming manager initialization.""" + camera = ESP32Camera() + config = StreamConfig(protocol=StreamingProtocol.MJPEG) + + manager = StreamingManager(camera, config) + + assert manager is not None + assert manager.config.protocol == StreamingProtocol.MJPEG + + +def test_streaming_protocols(): + """Test different streaming protocols.""" + camera = ESP32Camera() + camera.initialize() + + protocols = [ + StreamingProtocol.MJPEG, + StreamingProtocol.RTSP, + StreamingProtocol.WEBRTC, + StreamingProtocol.HTTP, + ] + + for protocol in protocols: + config = StreamConfig(protocol=protocol) + manager = StreamingManager(camera, config) + assert manager.config.protocol == protocol + + +def test_streaming_start_stop(): + """Test starting and stopping streams.""" + camera = ESP32Camera() + camera.initialize() + manager = StreamingManager(camera) + + # Start stream + info = manager.start_stream() + assert info["status"] == "active" + assert "urls" in info + + # Stop stream + stream_id = info["stream_id"] + result = manager.stop_stream(stream_id) + assert result is True + + +def test_streaming_code_generation(): + """Test streaming code generation.""" + camera = ESP32Camera() + config = StreamConfig(protocol=StreamingProtocol.MJPEG) + manager = StreamingManager(camera, config) + + code = manager.generate_streaming_code() + + assert "mjpeg_stream.h" in code + assert "mjpeg_stream.cpp" in code + + +# AI Processing Tests + + +def test_ai_processor_initialization(): + """Test AI processor initialization.""" + camera = ESP32Camera() + config = ModelConfig(model_type=DetectionModel.PERSON_DETECTION) + + processor = AIProcessor(camera, config) + + assert processor is not None + assert processor.config.model_type == DetectionModel.PERSON_DETECTION + + +def test_ai_model_types(): + """Test different AI model types.""" + camera = ESP32Camera() + + models = [ + DetectionModel.PERSON_DETECTION, + DetectionModel.FACE_DETECTION, + DetectionModel.OBJECT_DETECTION, + ] + + for model in models: + config = ModelConfig(model_type=model) + processor = AIProcessor(camera, config) + assert processor.config.model_type == model + + +def test_ai_load_model(): + """Test loading AI model.""" + camera = ESP32Camera() + processor = AIProcessor(camera) + + result = processor.load_model() + assert result is True + assert processor.model_loaded is True + + +def test_ai_detection(): + """Test AI detection.""" + camera = ESP32Camera() + camera.initialize() + processor = AIProcessor(camera) + processor.load_model() + + detections = processor.detect() + + assert isinstance(detections, list) + assert processor.inference_count > 0 + + +def test_ai_inference_code_generation(): + """Test inference code generation.""" + camera = ESP32Camera() + processor = AIProcessor(camera) + + code = processor.generate_inference_code() + + assert "ai_inference.h" in code + assert "ai_inference.cpp" in code + assert "tensorflow/lite" in code["ai_inference.h"] + + +def test_ai_tinyml_integration(): + """Test TinyML agent integration.""" + camera = ESP32Camera() + processor = AIProcessor(camera) + + spec = processor.integrate_with_tinyml_agent() + + assert spec["task_type"] == "inference" + assert spec["platform"] == "esp32" + assert "input_shape" in spec + + +# Motion Detection Tests + + +def test_motion_detector_initialization(): + """Test motion detector initialization.""" + camera = ESP32Camera() + config = MotionConfig(threshold=20) + + detector = MotionDetector(camera, config) + + assert detector is not None + assert detector.config.threshold == 20 + + +def test_motion_detection(): + """Test motion detection.""" + camera = ESP32Camera() + camera.initialize() + detector = MotionDetector(camera) + + # First frame - no motion + result = detector.detect_motion() + assert result is False + + # Second frame - motion detected + result = detector.detect_motion() + # May or may not detect depending on placeholder data + + +def test_motion_statistics(): + """Test motion detection statistics.""" + camera = ESP32Camera() + detector = MotionDetector(camera) + + stats = detector.get_statistics() + + assert "algorithm" in stats + assert "motion_detected" in stats + assert "total_events" in stats + + +def test_motion_code_generation(): + """Test motion detection code generation.""" + camera = ESP32Camera() + detector = MotionDetector(camera) + + code = detector.generate_motion_detection_code() + + assert "motion_detection.h" in code + assert "motion_detection.cpp" in code + + +def test_qr_scanner_initialization(): + """Test QR scanner initialization.""" + camera = ESP32Camera() + scanner = QRScanner(camera) + + assert scanner is not None + assert scanner.scan_count == 0 + + +def test_qr_scan(): + """Test QR code scanning.""" + camera = ESP32Camera() + camera.initialize() + scanner = QRScanner(camera) + + result = scanner.scan() + + # With placeholder implementation, should return a result + if result: + assert "type" in result + assert "data" in result + + +def test_qr_code_generation(): + """Test QR scanner code generation.""" + camera = ESP32Camera() + scanner = QRScanner(camera) + + code = scanner.generate_qr_scanner_code() + + assert "qr_scanner.h" in code + assert "qr_scanner.cpp" in code + + +# Remote Access Tests + + +def test_remote_access_initialization(): + """Test remote access initialization.""" + camera = ESP32Camera() + from accelerapp.hardware.camera.esp32_cam import AuthMethod + auth_config = AuthConfig(method=AuthMethod.TOKEN) + + remote = RemoteAccess(camera, auth_config) + + assert remote is not None + + +def test_remote_tunnel_start_stop(): + """Test starting and stopping tunnel.""" + camera = ESP32Camera() + from accelerapp.hardware.camera.esp32_cam import TunnelType + tunnel_config = TunnelConfig(tunnel_type=TunnelType.NGROK) + remote = RemoteAccess(camera, tunnel_config=tunnel_config) + + # Start tunnel + info = remote.start_tunnel() + assert "public_url" in info or info["status"] == "disabled" + + # Stop tunnel + if remote.tunnel_active: + result = remote.stop_tunnel() + assert result is True + + +def test_remote_authentication(): + """Test authentication methods.""" + camera = ESP32Camera() + from accelerapp.hardware.camera.esp32_cam import AuthMethod + + # No auth + auth_config = AuthConfig(method=AuthMethod.NONE) + remote = RemoteAccess(camera, auth_config) + result = remote.authenticate({}) + assert result["authenticated"] is True + + # Token auth + auth_config = AuthConfig(method=AuthMethod.TOKEN, access_token="test123") + remote = RemoteAccess(camera, auth_config) + result = remote.authenticate({"token": "test123"}) + assert result["authenticated"] is True + + +def test_remote_session_management(): + """Test session management.""" + camera = ESP32Camera() + remote = RemoteAccess(camera) + + # Create session + result = remote.create_session("user1", "192.168.1.1") + assert result["status"] == "success" + + # End session + session_id = result["session"]["session_id"] + result = remote.end_session(session_id) + assert result is True + + +def test_remote_code_generation(): + """Test remote access code generation.""" + camera = ESP32Camera() + remote = RemoteAccess(camera) + + code = remote.generate_remote_access_code() + + assert "remote_access.h" in code + assert "remote_access.cpp" in code + + +# Web Interface Tests + + +def test_web_interface_initialization(): + """Test web interface initialization.""" + camera = ESP32Camera() + config = APIConfig(port=8080) + + interface = WebInterface(camera, config) + + assert interface is not None + assert interface.config.port == 8080 + + +def test_web_api_routes(): + """Test API route handling.""" + camera = ESP32Camera() + camera.initialize() + interface = WebInterface(camera) + + # Test status endpoint + response = interface.handle_request("/api/camera/status", "GET", {}) + assert response["code"] == 200 + assert "data" in response + + +def test_web_capture_endpoint(): + """Test capture endpoint.""" + camera = ESP32Camera() + camera.initialize() + interface = WebInterface(camera) + + response = interface.handle_request("/api/camera/capture", "GET", {}) + assert response["code"] == 200 + + +def test_web_settings_endpoints(): + """Test settings endpoints.""" + camera = ESP32Camera() + camera.initialize() + interface = WebInterface(camera) + + # Set quality + response = interface.handle_request( + "/api/settings/quality", "PUT", {"quality": 15} + ) + assert response["code"] == 200 + + +def test_web_ui_pages(): + """Test UI page generation.""" + camera = ESP32Camera() + interface = WebInterface(camera) + + # Home page + response = interface.handle_request("/", "GET", {}) + assert response["code"] == 200 + assert "html" in response + + # Live page + response = interface.handle_request("/ui/live", "GET", {}) + assert response["code"] == 200 + + # Settings page + response = interface.handle_request("/ui/settings", "GET", {}) + assert response["code"] == 200 + + +def test_web_api_documentation(): + """Test API documentation generation.""" + camera = ESP32Camera() + interface = WebInterface(camera) + + docs = interface.generate_api_documentation() + + assert docs is not None + assert "API Documentation" in docs + + +# Integration Tests + + +def test_full_stack_integration(): + """Test integration of all components.""" + # Create camera + camera = ESP32Camera() + camera.initialize() + + # Add streaming + streaming = StreamingManager(camera) + stream_info = streaming.start_stream() + assert stream_info["status"] == "active" + + # Add AI + ai = AIProcessor(camera) + ai.load_model() + detections = ai.detect() + assert isinstance(detections, list) + + # Add motion detection + motion = MotionDetector(camera) + motion.detect_motion() + + # Add web interface + web = WebInterface(camera) + response = web.handle_request("/api/camera/status", "GET", {}) + assert response["code"] == 200 + + +def test_camera_with_digital_twin(): + """Test camera with digital twin integration.""" + config = CameraConfig( + twin_id="camera_001", + twin_sync_interval=30, + ) + + camera = ESP32Camera(config) + + assert camera.config.twin_id == "camera_001" + assert camera.config.twin_sync_interval == 30 + + +def test_camera_observability(): + """Test camera observability features.""" + config = CameraConfig( + enable_metrics=True, + enable_health_checks=True, + ) + + camera = ESP32Camera(config) + + assert camera.config.enable_metrics is True + assert camera.config.enable_health_checks is True + + +def test_hardware_import(): + """Test that camera can be imported from hardware module.""" + from accelerapp.hardware import ESP32Camera, CameraVariant + + assert ESP32Camera is not None + assert CameraVariant is not None