Multi-Sensor Fusion for Defense Applications
Advanced multi-intelligence fusion system combining Overhead Persistent Infrared (OPIR) thermal detection with Radio Frequency (RF) geolocation for real-time threat detection and tracking
Sensor Enhanced Network for Threat Identification, Neutralization, Evaluation, and Location
- Overview
- Key Features
- System Architecture
- Installation
- Quick Start
- Module Reference
- Data Formats
- Performance Benchmarks
- API Reference
- Testing
- Roadmap
- Contributing
- License
SENTINEL is a machine-learning-based multi-INT system that integrates three intelligence disciplines for comprehensive situational awareness:
| Intelligence Type | Description | Primary Use |
|---|---|---|
| OPIR | Overhead Persistent Infrared | Thermal signature detection and event classification |
| RF | Radio Frequency | Emitter geolocation via TDOA/FDOA |
| Fusion | Multi-Sensor Integration | Track correlation and uncertainty reduction |
- Source Code: 5,375 lines
- Test Coverage: 1,299 lines
- Python Modules: 10+ across 8 subsystems
- Training Samples: 10,000+ labeled
- Event Classes: 5 thermal signatures
- 5 Event Classes: Missile launches, explosions, wildfires, aircraft, background
- CNN Classifier: 285K parameter 1D convolutional network
- 100% Test Accuracy on synthetic data (85-95% expected on real-world data)
- 4 Detection Methods: Temporal difference, MAD anomaly, Z-score, ensemble voting
- TDOA (Time Difference of Arrival): Hyperbolic positioning
- FDOA (Frequency Difference of Arrival): Doppler-based velocity estimation
- Hybrid TDOA/FDOA: Combined time-frequency geolocation
- Multilateration: Least-squares solver with GDOP computation
- Performance: <50m position error with 4+ sensors
- 3D Kalman Filter: 6-state (position + velocity) tracker
- Constant Velocity Model: Predictive motion estimation
- Track Management: Automatic initiation, maintenance, and pruning
- Error Reduction: >65% improvement over raw measurements
- Covariance-Weighted Fusion: Optimal combination of sensor estimates
- Data Association: Mahalanobis distance gating with nearest-neighbor assignment
- Uncertainty Quantification: CEP, covariance matrices, confidence scores
- Track Quality Scoring: Multi-factor assessment for analyst decision support
sentinel/
├── src/ # Source code (5,375 lines)
│ ├── models/ # Signal generation & CNN
│ │ ├── opir_generator.py # Physics-based thermal signatures
│ │ ├── rf_generator.py # RF signal simulation
│ │ └── cnn_classifier.py # OPIREventCNN architecture
│ │
│ ├── detection/ # Event detection
│ │ └── opir_detectors.py # 4 detection algorithms
│ │
│ ├── geolocation/ # RF positioning
│ │ ├── tdoa.py # Time difference methods
│ │ ├── fdoa.py # Frequency difference methods
│ │ └── multilateration.py # Combined solver
│ │
│ ├── tracking/ # Target tracking
│ │ └── kalman_tracker.py # 3D Kalman filter
│ │
│ ├── fusion/ # Multi-sensor fusion
│ │ └── sensor_fusion.py # CIWF engine
│ │
│ ├── training/ # Model training
│ │ └── train_cnn.py # Training pipeline
│ │
│ ├── inference/ # Production inference
│ │ └── opir_inference.py # Optimized predictor
│ │
│ ├── pipeline/ # Integrated pipelines
│ │ ├── phase2_pipeline.py # OPIR-only processing
│ │ └── phase3_pipeline.py # Full multi-INT
│ │
│ └── utils/ # Utilities
│ ├── visualization.py # Plotting functions
│ └── geo_utils.py # Geospatial helpers
│
├── tests/ # Test suite (1,299 lines)
│ ├── test_0_generator.py # Signal generation tests
│ ├── test_1_detection.py # Detection algorithm tests
│ ├── test_2_cnn.py # CNN architecture tests
│ ├── test_3_classifier.py # Classifier wrapper tests
│ ├── test_4_kalman.py # Kalman filter tests
│ ├── test_5_tracker.py # Multi-target tracking tests
│ ├── test_6_tdoa_fdoa.py # Geolocation tests
│ ├── test_7_multilateration.py # MLAT solver tests
│ ├── test_8_sensor_fusion.py # Fusion engine tests
│ └── test_9_full_system.py # Integration tests
│
├── data/synthetic/opir/ # Training data
│ ├── train/ # 7,000 samples (1,400/class)
│ ├── validation/ # 1,500 samples (300/class)
│ └── test/ # 1,500 samples (300/class)
│
├── notebooks/ # Analysis notebooks
│ └── 01_model_inference_demo.ipynb # Comprehensive demo
│
├── outputs/ # Generated artifacts
│ ├── training/ # Model checkpoints
│ └── evaluation/ # Test results
│
├── scripts/ # Utility scripts
│ ├── generate_opir_dataset.py # Data generation
│ ├── train_cnn_simple.py # Training script
│ └── evaluate_cnn.py # Evaluation script
│
└── requirements.txt # Dependencies
- Python 3.9+
- CUDA 11.0+ (optional, for NVIDIA GPU acceleration) or PyTorch's Metal (MPS) backend on Apple silicon Macs
# Clone repository
git clone https://github.com/yourusername/sentinel.git
cd sentinel
# Create virtual environment
python -m venv venv
source venv/bin/activate # Linux/macOS
# or: venv\Scripts\activate # Windows
# Install dependencies
pip install -r requirements.txt
# Verify installation
python -c "from src.inference.opir_inference import OPIRInference; print('OK')"torch>=2.0.0
numpy>=1.24.0
scipy>=1.10.0
scikit-learn>=1.2.0
filterpy>=1.4.5
matplotlib>=3.7.0
seaborn>=0.12.0
pandas>=2.0.0
from src.models.opir_generator import OPIRSignalGenerator
generator = OPIRSignalGenerator(sequence_length=256)
# Generate a missile launch signature
signal, metadata = generator.generate_signal('launch')
print(f"Peak temperature: {metadata['peak_temperature']}K")from src.detection.opir_detectors import OPIRDetector
detector = OPIRDetector()
# Run ensemble detection
result = detector.detect_ensemble(signal)
print(f"Detected: {result['detected']} (confidence: {result['confidence']:.2f})")from src.inference.opir_inference import OPIRInference
classifier = OPIRInference(model_path='outputs/training/best_model.pth')
# Classify the signal
prediction = classifier.predict(signal)
print(f"Class: {prediction['class_name']} ({prediction['confidence']*100:.1f}%)")from src.geolocation.multilateration import Multilateration
# Define sensor positions (x, y, z in meters)
sensors = [
[0, 0, 500],
[1000, 500, 1500],
[500, 1200, 1000],
[800, 800, 2000],
]
mlat = Multilateration(sensors)
# Solve for emitter position using TDOA measurements
tdoa_measurements = [0.0, 1.2e-6, 0.8e-6, 1.5e-6] # seconds
result = mlat.solve_tdoa(tdoa_measurements)
print(f"Position: {result['position']} (GDOP: {result['gdop']:.2f})")from src.tracking.kalman_tracker import KalmanTracker3D
tracker = KalmanTracker3D(dt=1.0)
# Initialize with first measurement
tracker.initialize([500, 300, 1000])
# Update with new measurements
for measurement in measurements:
    tracker.predict()
    tracker.update(measurement)
print(f"Current position: {tracker.get_position()}")
print(f"Current velocity: {tracker.get_velocity()}")from src.fusion.sensor_fusion import SensorFusionEngine, SensorMeasurement
fusion = SensorFusionEngine()
# Add OPIR detection
opir_meas = SensorMeasurement(
sensor_type='OPIR',
position=[600, 700, 0],
covariance=np.eye(3) * 900, # 30m std
confidence=0.95,
event_type='launch'
)
# Add RF geolocation
rf_meas = SensorMeasurement(
sensor_type='RF',
position=[605, 695, 5],
covariance=np.eye(3) * 400, # 20m std
confidence=0.88
)
# Fuse measurements
fused_track = fusion.fuse([opir_meas, rf_meas])
print(f"Fused position: {fused_track.position}")
print(f"Track quality: {fused_track.track_quality:.2f}")Generates physics-based thermal signatures for 5 event classes.
class OPIRSignalGenerator:
"""
Physics-based OPIR thermal signature generator.
Parameters
----------
sequence_length : int
Number of time samples (default: 256)
sample_rate : float
Samples per second (default: 10.0)
noise_level : float
Gaussian noise standard deviation (default: 5.0)
"""
def generate_signal(self, event_type: str) -> Tuple[np.ndarray, dict]:
"""Generate a thermal signature for the specified event type."""
def generate_batch(self, event_type: str, n_samples: int) -> List[np.ndarray]:
"""Generate multiple samples for training."""Event Type Parameters:
| Event | Peak Temp (K) | Rise Time (s) | Duration (s) | Decay Rate |
|---|---|---|---|---|
| launch | 2500-3500 | 0.5-2.0 | 5-15 | Fast |
| explosion | 3000-5000 | 0.1-0.5 | 2-8 | Very Fast |
| fire | 800-1500 | 5-30 | 60-300 | Slow |
| aircraft | 600-900 | 2-5 | Sustained | Minimal |
| background | 280-320 | N/A | N/A | N/A |
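
The parameter ranges above translate into signature shapes roughly as follows. This is a minimal, illustrative sketch, not the project's generator: a launch-like profile is modeled as a fast exponential rise toward a hot peak followed by a slower decay, with Gaussian sensor noise added.

```python
import numpy as np

def launch_like_signature(sequence_length=256, sample_rate=10.0, peak_temp=3000.0,
                          rise_time=1.0, duration=10.0, ambient=290.0,
                          noise_level=5.0, seed=0):
    """Toy launch-like profile: fast exponential rise, slower decay, Gaussian noise (Kelvin)."""
    rng = np.random.default_rng(seed)
    t = np.arange(sequence_length) / sample_rate
    rise = 1.0 - np.exp(-t / rise_time)                           # fast onset
    decay = np.exp(-np.maximum(t - rise_time, 0.0) / duration)    # slower cool-down
    signal = ambient + (peak_temp - ambient) * rise * decay
    return signal + rng.normal(0.0, noise_level, sequence_length)

sig = launch_like_signature()
print(sig.shape, round(sig.max()))   # (256,) with a peak far above the ~290 K background
```
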
opir_detectors.py: Four complementary detection methods with ensemble voting.
class OPIRDetector:
"""
Multi-method thermal event detector.
Methods
-------
detect_temporal_difference(signal, threshold=3.0)
Frame-to-frame change detection using z-score
detect_anomaly_mad(signal, threshold=3.5)
Median Absolute Deviation outlier detection
detect_zscore(signal, threshold=2.5)
Standard statistical z-score detection
detect_rise_time(signal, min_rise_rate=10.0)
Characteristic rise-time pattern detection
detect_ensemble(signal)
Voting-based combination of all methods
"""Detection Method Comparison:
| Method | Strengths | Best For | Typical Accuracy |
|---|---|---|---|
| Temporal Diff | Fast transients | Explosions, launches | 85% |
| MAD Anomaly | Robust to outliers | All event types | 90% |
| Z-Score | Simple, interpretable | Strong signals | 88% |
| Ensemble | Balanced performance | Production use | 95% |
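
For reference, the MAD-based method boils down to a robust z-score test. The sketch below is a simplified stand-in for detect_anomaly_mad with made-up inputs, not the project's implementation.

```python
import numpy as np

def detect_anomaly_mad_sketch(signal: np.ndarray, threshold: float = 3.5) -> bool:
    """Flag a detection if any sample's MAD-based robust z-score exceeds the threshold."""
    median = np.median(signal)
    mad = np.median(np.abs(signal - median))
    robust_z = 0.6745 * (signal - median) / (mad + 1e-9)   # 0.6745 scales MAD to ~sigma
    return bool(np.any(np.abs(robust_z) > threshold))

rng = np.random.default_rng(1)
background = 290 + rng.normal(0, 5, 256)      # quiet background scene
event = background.copy()
event[100:120] += 800                         # inject a hot transient

print(detect_anomaly_mad_sketch(background))  # typically False for pure noise (depends on the draw)
print(detect_anomaly_mad_sketch(event))       # True: the injected transient dominates
```
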
cnn_classifier.py: 1D convolutional neural network for thermal event classification.
class OPIREventCNN(nn.Module):
"""
CNN architecture for OPIR event classification.
Architecture
------------
Input: (batch, 1, 256) - 256 time samples
Conv1: (batch, 32, 125) - kernel=7, stride=1, pool=2
Conv2: (batch, 64, 61) - kernel=5, stride=1, pool=2
Conv3: (batch, 128, 29) - kernel=3, stride=1, adaptive_pool=32
FC1: (batch, 256) - with dropout=0.3
Output: (batch, 5) - 5 class probabilities
Total Parameters: 285,189
"""Training Configuration:
Training Configuration:

{
'batch_size': 32,
'learning_rate': 0.001,
'optimizer': 'Adam',
'loss': 'CrossEntropyLoss',
'epochs': 50,
'early_stopping_patience': 10,
'lr_scheduler': 'ReduceLROnPlateau'
}
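
A minimal loop wiring together the optimizer, loss, and scheduler named in this configuration might look like the following. The data and model here are stand-ins so the sketch runs end to end; the project's actual pipeline lives in src/training/train_cnn.py.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

# Stand-in data and model; real training uses the generated OPIR dataset and OPIREventCNN.
X = torch.randn(320, 1, 256)
y = torch.randint(0, 5, (320,))
train_loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = nn.Sequential(nn.Conv1d(1, 16, 7), nn.ReLU(),
                      nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 5))
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-3)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=5)

for epoch in range(3):                 # the real run uses 50 epochs plus early stopping
    for signals, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(signals), labels)
        loss.backward()
        optimizer.step()
    scheduler.step(loss.item())        # the real pipeline steps on validation loss instead
```
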
tdoa.py: Time Difference of Arrival positioning.

class TDOASolver:
"""
TDOA-based emitter geolocation.
Parameters
----------
sensors : np.ndarray
Sensor positions, shape (N, 3)
c : float
Speed of light (default: 299792458 m/s)
Methods
-------
solve(tdoa_measurements) -> GeolocationResult
Solve for emitter position using least-squares
"""Frequency Difference of Arrival for velocity estimation.
class FDOASolver:
"""
FDOA-based velocity estimation.
Uses Doppler shift differences between sensor pairs
to estimate emitter velocity vector.
"""Combined TDOA/FDOA solver with GDOP computation.
class Multilateration:
"""
Hybrid geolocation using multiple measurement types.
Methods
-------
solve_tdoa(measurements) -> GeolocationResult
solve_fdoa(measurements, sensor_velocities) -> GeolocationResult
solve_hybrid(tdoa, fdoa, sensor_velocities) -> GeolocationResult
compute_gdop(position) -> float
"""Geolocation Performance:
| Sensors | Mean Error | GDOP | Convergence Rate |
|---|---|---|---|
| 3 | 85m | 8.5 | 75% |
| 4 | 35m | 4.2 | 95% |
| 5 | 22m | 2.8 | 98% |
| 6 | 18m | 2.2 | 99% |
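
To make the TDOA numbers concrete, here is a self-contained least-squares solve over a hypothetical sensor geometry. It is illustrative only: the project's solvers in tdoa.py and multilateration.py have their own interfaces and add GDOP computation.

```python
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light, m/s

# Hypothetical 5-sensor geometry and true emitter position (illustrative values)
sensors = np.array([[0, 0, 500], [1000, 500, 1500], [500, 1200, 1000],
                    [800, 800, 2000], [-500, 900, 800]], dtype=float)
emitter_true = np.array([600.0, 700.0, 0.0])

def tdoas(p):
    """TDOAs of each sensor relative to sensor 0 for a candidate position p."""
    ranges = np.linalg.norm(sensors - p, axis=1)
    return (ranges[1:] - ranges[0]) / C

measured = tdoas(emitter_true)  # noise-free measurements for the sketch

# Hyperbolic positioning: minimize the TDOA residuals over candidate positions
solution = least_squares(lambda p: tdoas(p) - measured, x0=sensors.mean(axis=0))
print(solution.x)  # should land near emitter_true for this geometry
```
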
kalman_tracker.py: 6-state Kalman filter for 3D target tracking.
class KalmanTracker3D:
"""
3D Kalman filter with constant velocity model.
State Vector: [x, y, z, vx, vy, vz]
Parameters
----------
dt : float
Time step between updates
process_noise : float
Process noise covariance multiplier
measurement_noise : float
Measurement noise covariance multiplier
Methods
-------
initialize(position)
Initialize tracker with first measurement
predict()
Predict next state
update(measurement)
Update with new measurement
get_position() -> np.ndarray
get_velocity() -> np.ndarray
get_covariance() -> np.ndarray
"""Manages multiple concurrent tracks.
Manages multiple concurrent tracks.

class MultiTargetTracker:
"""
Multi-target tracking with track management.
Features
--------
- Automatic track initiation
- Track-to-measurement association
- Track coasting (prediction without update)
- Track deletion after missed updates
"""Covariance-Intersection Weighted Fusion (CIWF) engine.
class SensorFusionEngine:
"""
Multi-sensor data fusion engine.
Methods
-------
fuse(measurements: List[SensorMeasurement]) -> FusedTrack
Fuse multiple sensor measurements into single track
associate(measurement, tracks, gate_threshold=9.21)
Associate measurement to existing track using Mahalanobis distance
compute_track_quality(track) -> float
Compute overall track quality score
"""@dataclass
@dataclass
class SensorMeasurement:
sensor_type: str # 'OPIR', 'RF', or 'FUSED'
position: np.ndarray # [x, y, z] in meters
velocity: np.ndarray # [vx, vy, vz] in m/s (optional)
covariance: np.ndarray # 3x3 or 6x6 covariance matrix
confidence: float # [0, 1] confidence score
timestamp: float # Unix timestamp
event_type: str # For OPIR: 'launch', 'explosion', etc.
@dataclass
class FusedTrack:
track_id: int
position: np.ndarray
velocity: np.ndarray
covariance: np.ndarray
confidence: float
track_quality: float # [0, 1] overall quality
opir_detections: int # Count of OPIR updates
rf_detections: int # Count of RF updates
position_uncertainty: float # CEP in meters
velocity_uncertainty: float # Speed uncertainty in m/s
last_opir_update: float
last_rf_update: float

phase2_pipeline.py: OPIR-only processing (Detection → Classification → Tracking)
class Phase2Pipeline:
"""
OPIR signal processing pipeline.
Flow: Raw Signal → Detection → Classification → Tracking → Output
"""
def process(self, signal: np.ndarray) -> PipelineResult:
"""Process single OPIR signal through full pipeline."""Full multi-INT processing with sensor fusion.
class Phase3Pipeline:
"""
Complete multi-sensor integration pipeline.
Flow:
OPIR Signals ─┐
├─→ Fusion Engine ─→ Tracker ─→ Fused Tracks
RF Measurements┘
"""
def process(self,
opir_signals: List[np.ndarray],
rf_measurements: List[dict]) -> List[FusedTrack]:
"""Process multi-sensor data through fusion pipeline."""# NumPy array: (sequence_length,) or (1, sequence_length)
signal = np.load('data/synthetic/opir/train/launch/launch_00001.npy')
# Shape: (256,)
# Dtype: float32
# Units: Temperature in Kelvin

Dataset metadata:

{
"num_samples_per_class": 2000,
"sequence_length": 256,
"sample_rate": 10.0,
"classes": ["launch", "explosion", "fire", "aircraft", "background"],
"splits": {
"train": 1400,
"validation": 300,
"test": 300
},
"generation_params": {
"noise_level": 5.0,
"random_seed": 42
}
}
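
Given the directory layout and file format above, a training set can be assembled as follows. This is a sketch; the exact file naming beyond the example path shown earlier is an assumption.

```python
from pathlib import Path
import numpy as np

# Assemble (N, 256) training arrays and integer labels from the per-class directories
classes = ["launch", "explosion", "fire", "aircraft", "background"]
root = Path("data/synthetic/opir/train")

signals, labels = [], []
for label, name in enumerate(classes):
    for npy_file in sorted((root / name).glob("*.npy")):
        signals.append(np.load(npy_file))
        labels.append(label)

X = np.stack(signals).astype(np.float32)   # expected shape: (7000, 256)
y = np.array(labels)
print(X.shape, np.bincount(y))             # 1,400 samples per class in the train split
```
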
@dataclass
class GeolocationResult:
position: np.ndarray # [x, y, z] in meters
velocity: np.ndarray # [vx, vy, vz] in m/s (if FDOA used)
covariance: np.ndarray # Position covariance matrix
gdop: float # Geometric Dilution of Precision
residual: float # RMS fit residual
converged: bool # Solver convergence flag
iterations: int # Number of solver iterations

CNN Classification (synthetic test set):

| Metric | Value |
|---|---|
| Test Accuracy | 100% |
| Macro F1-Score | 1.00 |
| Inference Time | <5ms |
| Model Size | 1.1 MB |
Per-Class Results (Test Set: 1,500 samples)
| Class | Precision | Recall | F1-Score | Support |
|---|---|---|---|---|
| launch | 1.00 | 1.00 | 1.00 | 300 |
| explosion | 1.00 | 1.00 | 1.00 | 300 |
| fire | 1.00 | 1.00 | 1.00 | 300 |
| aircraft | 1.00 | 1.00 | 1.00 | 300 |
| background | 1.00 | 1.00 | 1.00 | 300 |
Note: Results on synthetic data. Expected real-world performance: 85-95%
RF Geolocation Accuracy:

| Configuration | Mean Error | Max Error | GDOP |
|---|---|---|---|
| 4 sensors, low noise | 25m | 45m | 3.2 |
| 4 sensors, high noise | 48m | 85m | 3.2 |
| 6 sensors, low noise | 15m | 28m | 2.0 |
| 6 sensors, high noise | 32m | 55m | 2.0 |
Kalman Tracking Performance:

| Metric | Value |
|---|---|
| Mean Position Error | 4.2m |
| Max Position Error | 12.5m |
| Error Reduction vs Raw | 65% |
| Track Continuity | 95% |
Multi-Sensor Fusion Performance:

| Metric | Single Sensor | Fused | Improvement |
|---|---|---|---|
| Position Error | 35m | 18m | 49% |
| Track Quality | 0.65 | 0.88 | 35% |
| False Alarm Rate | 12% | 4% | 67% |
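
The error reduction in the table follows from how covariances combine. As a simplified illustration (independent estimates with plain inverse-covariance weighting, rather than the full CIWF update), fusing the 30 m OPIR and 20 m RF estimates from the Quick Start yields an uncertainty smaller than either input:

```python
import numpy as np

# Two independent estimates of the same target (the Quick Start fusion example's numbers)
p_opir, P_opir = np.array([600.0, 700.0, 0.0]), np.eye(3) * 30.0**2   # ~30 m std
p_rf,   P_rf   = np.array([605.0, 695.0, 5.0]), np.eye(3) * 20.0**2   # ~20 m std

# Inverse-covariance (information-form) weighting of the two estimates
info = np.linalg.inv(P_opir) + np.linalg.inv(P_rf)
P_fused = np.linalg.inv(info)
p_fused = P_fused @ (np.linalg.inv(P_opir) @ p_opir + np.linalg.inv(P_rf) @ p_rf)

print(p_fused)                    # between the inputs, weighted toward the lower-variance RF fix
print(np.sqrt(P_fused[0, 0]))     # ~16.6 m: below both the 20 m and 30 m input uncertainties
```
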
Planned REST endpoints (coming in Phase 4):
POST /api/v1/detect
POST /api/v1/classify
POST /api/v1/geolocate
POST /api/v1/track
POST /api/v1/fuse
GET /api/v1/tracks
GET /api/v1/tracks/{track_id}
WS /api/v1/stream
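
Once the Phase 4 service exists, a client call might look like the sketch below. Only the endpoint paths are defined so far; the port, request payload, and response fields here are assumptions, not a published contract.

```python
import numpy as np
import requests

# Hypothetical call against the planned /api/v1/classify endpoint
signal = np.load("data/synthetic/opir/train/launch/launch_00001.npy")

resp = requests.post(
    "http://localhost:8000/api/v1/classify",
    json={"signal": signal.tolist(), "sample_rate": 10.0},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # expected to mirror the classifier output (class name, confidence)
```

The planned in-process Python API, shown next, wraps the same pipeline without the HTTP layer.
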
# Full pipeline inference
from sentinel import SentinelPipeline
pipeline = SentinelPipeline(
model_path='outputs/training/best_model.pth',
config='configs/production.yaml'
)
results = pipeline.process(
opir_signals=[signal1, signal2],
rf_measurements=[rf1, rf2],
return_intermediate=True
)

# Run all tests
pytest tests/ -v
# Run with coverage
pytest tests/ --cov=src --cov-report=html
# Run specific test module
pytest tests/test_8_sensor_fusion.py -v

| Test File | Description | Tests |
|---|---|---|
| test_0_generator.py | Signal generation | 8 |
| test_1_detection.py | Detection algorithms | 12 |
| test_2_cnn.py | CNN architecture | 6 |
| test_3_classifier.py | Classifier wrapper | 8 |
| test_4_kalman.py | Kalman filter | 10 |
| test_5_tracker.py | Multi-target tracking | 8 |
| test_6_tdoa_fdoa.py | Geolocation methods | 12 |
| test_7_multilateration.py | MLAT solver | 10 |
| test_8_sensor_fusion.py | Fusion engine | 14 |
| test_9_full_system.py | Integration tests | 6 |
========================= test session starts ==========================
collected 94 items
tests/test_0_generator.py ........ [ 8%]
tests/test_1_detection.py ............ [ 21%]
tests/test_2_cnn.py ...... [ 27%]
tests/test_3_classifier.py ........ [ 36%]
tests/test_4_kalman.py .......... [ 46%]
tests/test_5_tracker.py ........ [ 55%]
tests/test_6_tdoa_fdoa.py ............ [ 68%]
tests/test_7_multilateration.py .......... [ 78%]
tests/test_8_sensor_fusion.py .............. [ 93%]
tests/test_9_full_system.py ...... [100%]
========================= 94 passed in 12.34s ==========================
- Phase 1: Signal generation and data pipeline
- Phase 2: OPIR detection, classification, tracking
- Phase 3: RF geolocation and sensor fusion
- Phase 4: Production deployment
  - REST API with FastAPI
  - Streamlit dashboard
  - Docker containerization
  - Real-time streaming support
- Phase 5: Real-world integration
  - NASA FIRMS thermal data integration
  - ADS-B aircraft tracking validation
  - Atmospheric noise modeling
  - Field testing with live sensors
- Phase 6: Advanced features
  - Multi-hypothesis tracking (MHT)
  - Neural network-based fusion
  - Adaptive Kalman filtering
  - Distributed processing support
# Install dev dependencies
pip install -r requirements-dev.txt
# Install pre-commit hooks
pre-commit install
# Run linting
flake8 src/ tests/
black src/ tests/ --check
# Run type checking
mypy src/

- Follow PEP 8 guidelines
- Use type hints for function signatures
- Write docstrings in NumPy format
- Maintain >80% test coverage for new code
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Write tests for new functionality
- Ensure all tests pass (pytest tests/ -v)
- Submit a pull request with a detailed description
This project is licensed under the MIT License - see the LICENSE file for details.
- Physics models based on infrared signature research literature
- Geolocation algorithms adapted from standard MLAT techniques
- Kalman filtering implementation inspired by FilterPy library
Built by Michael Gurule
Data and methods: All algorithms and methodologies are based on publicly available research and unclassified (public) information.
Building production-grade intelligence systems for portfolio demonstration