The PhoenixAI Fusion-ISR F100 is an integrated aerial Counter-UAS system that addresses the Air Force Global Strike Command (AFGSC) requirement for a jamming-proof, autonomous, low Size, Weight, and Power (SWaP) Group 2 ISR UAS solution capable of conducting intelligence, surveillance, reconnaissance, and targeting (ISR-T) operations in highly secured operational environments.
The system combines multiple sensing modalities—including RF passive detection, RF active analysis, EO/IR imaging, thermal sensors, and LiDAR—within a unified AI-driven fusion framework. Local edge processing on NVIDIA Jetson Orin enables sub-100ms detection and classification of low, slow, and small (LSS) drones, even in GPS-denied or electronically contested environments.
The Problem: Limitations of Existing ISR Systems
Current Intelligence, Surveillance, and Reconnaissance (ISR) and Counter-UAS systems face significant limitations when deployed against modern, low-observable, and autonomous aerial threats. Most legacy platforms rely on single-sensor modalities that are insufficient for detecting LSS drones operating at low altitudes or using passive flight profiles.
Single-Sensor Dependency
Traditional ISR systems depend on radar or optical sensors that perform well against large targets but fail when facing smaller, agile drones with low radar cross-section or operating in cluttered environments like urban or mountainous terrain.
Lack of True Sensor Fusion
Single-modality systems lack the ability to fuse data from multiple modes (RF, camera, infrared), leading to false alarms, missed detections, and incorrect classification of targets.
Electronic Warfare Vulnerability
Many systems are highly dependent on GPS and unprotected communication links. Adversaries exploit these weaknesses through RF jamming, GNSS spoofing, or link denial attacks, degrading situational awareness and potentially causing loss of control.
Processing Latency
Current systems rely on centralized processing architectures that introduce latency between detection, decision, and response—preventing timely interception of rapidly maneuvering targets, especially during swarming operations.
High False Positive Rates
Without multi-modal data correlation combining RF emissions, heat signatures, and optical tracking, ISR systems frequently misclassify birds, civilian drones, or background clutter as potential threats, burdening operators and delaying responses to genuine threats.
Limited Scalability and Interoperability
Existing solutions are often proprietary and siloed, making it difficult to integrate new sensors, deploy multi-vendor solutions, and rapidly scale across multiple sites—limiting coverage and interoperability with tactical command networks.
The Solution: Integrated Hardware and Software
The PhoenixAI Fusion-ISR F100 platform addresses all identified shortcomings through an integrated approach. Unlike traditional radar-based systems that fail to detect small autonomous drones, the platform leverages reinforcement learning and real-time edge analytics to autonomously correlate heterogeneous sensor data at the tactical edge.
Core System Capabilities
Physics-Informed AI Multi-Sensor Fusion
Physics-informed machine learning trained on multi-modal sensor inputs detects LSS drones by combining RF passive detection, RF active analysis, EO/IR stereo vision, thermal imaging, and LiDAR for precision ranging and 3D mapping.
MOSA-Compliant Modular Architecture
Modular Open Systems Approach (MOSA) compliant architecture supports multiple vendor sensor feeds, enabling best-in-class integration and avoiding vendor lock-in. Third-party sensors integrate via open APIs without complex integration cycles.
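As a hedged illustration of what such an open integration contract could look like, the sketch below defines a minimal Python sensor-feed interface. The class names, fields, and the `latest_returns()` driver call are hypothetical stand-ins, not the actual F100 API.

```python
# Hypothetical sketch of a vendor-neutral sensor plug-in interface.
# All names here are illustrative, not the published F100 API.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Detection:
    timestamp: float    # seconds since epoch
    position_enu: tuple # (east, north, up) in meters
    covariance: list    # 3x3 position covariance, row-major
    modality: str       # e.g. "rf_passive", "eo_ir", "lidar"
    confidence: float   # 0.0 - 1.0

class SensorFeed(ABC):
    """Common contract every third-party sensor adapter implements."""

    @abstractmethod
    def detections(self) -> list[Detection]:
        """Return all detections from the latest sensor frame."""

class VendorLidarFeed(SensorFeed):
    """Example adapter wrapping a hypothetical third-party LiDAR SDK."""

    def __init__(self, driver):
        self.driver = driver  # vendor SDK handle (assumed)

    def detections(self) -> list[Detection]:
        return [
            Detection(p.t, p.xyz, p.cov, "lidar", p.score)
            for p in self.driver.latest_returns()  # assumed SDK call
        ]
```

With an interface of this shape, integrating a new vendor sensor reduces to writing one adapter class rather than modifying the fusion pipeline.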
Edge Deployment and Autonomous Operation
Deploys on-premises at the edge with all critical processing occurring locally without cloud dependency. Operates fully autonomously with GPS-denied navigation, real-time threat detection, dynamic mission planning, and autonomous obstacle avoidance.
Optional 5G Network Integration
While fully operational at the edge, can leverage public and private 5G networks when available for extended communication range, multi-platform synchronization, secure encrypted channels, and low-latency coordination.
Hardware Platform Specifications
F100 Drone Platform Architecture
The PhoenixAI Fusion-ISR F100 drone is built on a carbon-fiber based chassis with foldable propeller arms—an X4 frame with collapsible legs and arms for easy storage and transportation. The structural design prioritizes strength-to-weight ratio while maintaining modularity for sensor integration.
| Component | Specification |
|---|---|
| Platform Designation | PhoenixAI Fusion-ISR F100 |
| Frame | X4 carbon-fiber with foldable arms, collapsible legs |
| Deployed Dimensions | 530mm x 530mm x 360mm (H) |
| Frame Weight | 2.5 kg (without battery) |
| Total Weight | ~3.5 kg (with 6S battery pack) |
| Motors | 4x 330KV high-torque brushless |
| Total Thrust | 20 kg combined (5 kg per motor) |
| Power System | 6S Lithium-Ion battery pack |
| Continuous Power | 2kW available for all systems |
| Flight Controller | Pixhawk 6X with custom firmware |
| Main Compute | NVIDIA Jetson Orin |
| Compute Power | 100W processing capacity |
| Primary Camera | ZED 2i Stereo Camera System |
| Camera Resolution | 4MP per sensor @ 30 FPS |
Key Subsystems
Propulsion System
330KV high-torque brushless motors with foldable propellers. Each motor generates 5 kg of thrust at full power, for a combined 20 kg of thrust that provides substantial lift capacity for sensor payloads. Motor controllers implement advanced ESC algorithms for precise thrust control and energy optimization.
Power and Energy Management
6S Lithium-Ion battery pack delivers 2kW continuous power. Battery management system implements over-current, over-temperature, and cell-balancing protection. Dedicated power regulation module provides NVIDIA Jetson Orin with up to 100W. Intelligent load management prioritizes critical systems during battery depletion.
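The sketch below illustrates one plausible priority-based load-shedding policy. The load names, wattages, and the capacity-scaling rule are illustrative assumptions, not the F100's actual power-management firmware.

```python
# Illustrative load-shedding sketch; loads and thresholds are assumed.
LOADS = [  # (name, watts, priority: lower number = more critical)
    ("flight_control", 15, 0),
    ("jetson_orin",   100, 1),
    ("rf_frontend",    25, 2),
    ("payload_gimbal", 40, 3),
]

def shed_loads(battery_fraction, budget_w=2000):
    """Return the loads to keep powered, dropping the least critical
    first as the battery depletes."""
    # Assumed policy: scale the usable budget with remaining capacity.
    available = budget_w * max(battery_fraction, 0.2)
    keep, used = [], 0.0
    for name, watts, _prio in sorted(LOADS, key=lambda l: l[2]):
        if used + watts <= available:
            keep.append(name)
            used += watts
    return keep

print(shed_loads(0.9))   # all loads powered
print(shed_loads(0.05))  # only the most critical loads survive
```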
Thermal Management System
NVIDIA Jetson Orin cooled by heatsink with active fan. Under normal conditions, natural airflow maintains optimal temperatures. During high-computational-load scenarios, active cooling engages. Carbon-fiber chassis incorporates ventilation pathways for passive cooling while maintaining structural integrity.
RF Sensing Technologies
Passive RF Sensing
Passive RF Sensing is a groundbreaking, stealthy approach to drone detection that operates by listening rather than transmitting. Unlike traditional radar systems that send out signals and can be easily jammed or detected, passive sensors are completely covert and non-emissive—they don't give away the platform's location or interfere with existing radio communications.
The process begins with modeling the drone's Radar Cross Section (RCS) to understand how its physical shape reflects cellular signals from different angles of incidence. By continuously calculating the dynamic geometric relationship between the cellular base station, the drone, and ground-based receiving sensors, the system precisely tracks changing scatter angles.
This allows the platform to use the scattered signal's strength to determine the location of the specular reflection on the ground, enabling localization, classification, and size estimation of rogue drones. This innovative use of ambient cellular signals enables covert threat detection where traditional active systems fail.
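The scatter-strength reasoning above follows the standard bistatic radar equation. The sketch below evaluates it for an assumed 3.5 GHz 5G downlink and a small-drone RCS; all parameter values are illustrative, not measured F100 figures.

```python
import math

def bistatic_received_power(pt_w, gt, gr, wavelength_m, rcs_m2, r_tx_m, r_rx_m):
    """Bistatic radar equation: power scattered by the drone from a
    cellular downlink, as seen at a passive receiver.
    pt_w: base-station transmit power; gt/gr: antenna gains (linear);
    rcs_m2: bistatic RCS for the current scatter angle."""
    num = pt_w * gt * gr * wavelength_m ** 2 * rcs_m2
    den = (4 * math.pi) ** 3 * r_tx_m ** 2 * r_rx_m ** 2
    return num / den

# Assumed scenario: 3.5 GHz 5G downlink, small drone (RCS ~0.01 m^2),
# tower 1 km from the drone, passive sensor 300 m from the drone.
wavelength = 3e8 / 3.5e9
p_rx = bistatic_received_power(40.0, 100.0, 10.0, wavelength, 0.01, 1000.0, 300.0)
print(f"received scatter power: {10 * math.log10(p_rx / 1e-3):.1f} dBm")
```

The resulting level (roughly -108 dBm under these assumptions) shows why coherent processing gain matters for this sensing mode.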
Active RF Sensing
Active RF Sensing enables covert detection and identification of drones by analyzing radio frequency signals they actively transmit or receive from networks. This is achieved by exploiting both drone-specific links (Wi-Fi) and the drone's interaction with major cellular networks (4G/5G).
Wi-Fi and Proprietary Link Sensing
Many small UAS platforms rely on 2.4 GHz and 5 GHz ISM bands used by consumer Wi-Fi for command-and-control, telemetry, and FPV video links. By passively detecting OFDM beacon packets and recording attributes such as frequency, received power, time of arrival, and short-term channel state, the system builds a layered map of RF emitters in the area.
Feature extraction (temporal beacon patterns, RSSI trends, MAC address and manufacturer OUI) supports classification: distinguishing between fixed infrastructure APs, mobile ground stations, and airborne Wi-Fi emitters. This sensing approach is non-intrusive and protocol-independent—exploiting mandated physical-layer behavior that all 802.11-compliant devices must produce.
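A minimal sketch of this style of beacon monitoring, using the open-source scapy library, is shown below. The interface name and the crude RSSI-trend heuristic for flagging mobile emitters are illustrative assumptions, not the platform's actual classifier.

```python
# Beacon-sniffing sketch with scapy; heuristic and interface are assumed.
from collections import defaultdict
from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11Beacon

rssi_history = defaultdict(list)  # transmitter MAC -> RSSI trend

def on_packet(pkt):
    if not pkt.haslayer(Dot11Beacon):
        return
    mac = pkt[Dot11].addr2                       # OUI prefix = vendor
    rssi = getattr(pkt, "dBm_AntSignal", None)   # from the RadioTap header
    if rssi is None:
        return
    rssi_history[mac].append(rssi)
    # Assumed heuristic: fixed APs show stable RSSI, while an airborne
    # emitter closing on the sensor shows a rising trend.
    trend = rssi_history[mac][-10:]
    if len(trend) == 10 and trend[-1] - trend[0] > 15:
        print(f"possible mobile emitter {mac}: RSSI {trend[0]} -> {trend[-1]} dBm")

# Requires a Wi-Fi interface in monitor mode (name is an assumption).
sniff(iface="wlan0mon", prn=on_packet, store=False)
```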
Vision System and Object Detection
The F100 integrates a sophisticated multi-sensor vision platform centered around a high-resolution stereo camera system capable of capturing 4MP imagery at 30 frames per second, complemented by precision inertial sensors creating a comprehensive multi-modal perception suite.
AI-Powered Object Detection and Classification
The platform's machine learning stack incorporates an optimized neural network architecture specifically trained for real-time aerial object detection. These models execute locally on the NVIDIA Jetson Orin, enabling sub-100ms detection and classification of dynamic obstacles, including other aircraft and birds.
The system's predictive tracking algorithms allow the drone to anticipate object trajectories and execute preemptive flight path adjustments, ensuring safe separation margins are maintained. The neural network has been trained on extensive datasets of aerial scenarios, achieving high accuracy even in challenging lighting and weather conditions.
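One practical way to express the sub-100ms requirement is as a latency budget that any candidate detector must pass on the target device. The sketch below times a stand-in torchvision detector; the F100's proprietary network is not public and is not reproduced here.

```python
# Latency-budget sketch; the model is an off-the-shelf stand-in.
import time
import torch
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torchvision.models.detection.ssdlite320_mobilenet_v3_large(weights="DEFAULT")
model.to(device).eval()

frame = torch.rand(3, 320, 320, device=device)  # mock camera frame

with torch.inference_mode():
    for _ in range(5):                  # warm-up iterations
        model([frame])
    if device == "cuda":
        torch.cuda.synchronize()        # flush queued GPU work
    t0 = time.perf_counter()
    detections = model([frame])[0]
    if device == "cuda":
        torch.cuda.synchronize()
    latency_ms = (time.perf_counter() - t0) * 1e3

print(f"inference latency: {latency_ms:.1f} ms (budget: 100 ms)")
```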
Depth Perception and 3D Mapping
The dual-camera configuration features optimized baseline separation that maximizes depth estimation accuracy across the operational range. Through advanced stereo disparity algorithms, the system generates dense depth maps with millimeter-level precision for obstacles within a 20-meter detection envelope.
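The underlying triangulation is the standard pinhole stereo relation Z = f·B/d. A minimal sketch, using an assumed focal length and baseline rather than the ZED 2i's actual calibration:

```python
def depth_from_disparity(disparity_px, focal_px=1000.0, baseline_m=0.12):
    """Z = f * B / d: pinhole stereo triangulation.
    focal_px and baseline_m are illustrative, not ZED 2i calibration."""
    return focal_px * baseline_m / disparity_px

# With these parameters a 10 m target produces d = f*B/Z = 12 px.
print(depth_from_disparity(12.0))  # -> 10.0 m
```

Because depth error grows quadratically with range for a fixed disparity error, the baseline choice directly sets the usable precision envelope.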
Detection Capabilities
The ZED 2i camera system can detect objects as small as 0.3m x 0.3m at 32 meters with 10x10 pixel resolution. For larger objects measuring 1m x 1m, detection range extends beyond 106 meters, providing substantial early-warning capability. Point cloud data is continuously updated at 30 Hz, maintaining real-time situational awareness even during high-speed maneuvers.
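These detection figures are consistent with a simple angular-size calculation. The sketch below reproduces the roughly 10-pixel footprint at both ranges, assuming an approximately 110-degree horizontal field of view and a 2208-pixel-wide sensor (typical published ZED 2i figures, treated here as assumptions):

```python
import math

def pixels_on_target(size_m, range_m, hfov_deg=110.0, width_px=2208):
    """Approximate pixels subtended by a target of a given size at range,
    assuming pixels are evenly distributed across the field of view."""
    angle_rad = size_m / range_m                  # small-angle approximation
    px_per_rad = width_px / math.radians(hfov_deg)
    return angle_rad * px_per_rad

print(f"{pixels_on_target(0.3, 32):.1f} px")   # ~10 px, matching the spec
print(f"{pixels_on_target(1.0, 106):.1f} px")  # ~10 px at 106 m
```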
Environmental Mapping and SLAM
The sensor fusion framework leverages state-of-the-art Simultaneous Localization and Mapping (SLAM) algorithms to combine visual-inertial data streams into coherent 3D environmental representations. The system constructs detailed mesh models of terrain features, structures, and persistent obstacles.
Visual-inertial odometry fuses camera-based motion estimation with high-rate IMU measurements, providing robust localization even when GPS is unavailable or unreliable. The fusion algorithm uses extended Kalman filtering with outlier rejection to maintain centimeter-level position accuracy.
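The predict/update pattern behind this visual-inertial fusion can be sketched in a few lines. The example below is a deliberately minimal 1D constant-velocity EKF with a Mahalanobis outlier gate; the flight code runs a full multi-state estimator that is not reproduced here.

```python
# Minimal 1D EKF sketch: IMU-driven prediction, vision position update.
import numpy as np

x = np.array([0.0, 0.0])          # state: [position, velocity]
P = np.eye(2)                     # state covariance
F = lambda dt: np.array([[1, dt], [0, 1]])
H = np.array([[1.0, 0.0]])        # vision measures position only
Q = np.eye(2) * 1e-3              # process noise (IMU integration error)
R = np.array([[0.05 ** 2]])       # vision noise (assumed 5 cm sigma)

def predict(x, P, dt, accel):
    """Propagate the state with a high-rate IMU acceleration sample."""
    Fk = F(dt)
    x = Fk @ x + np.array([0.5 * dt ** 2, dt]) * accel
    P = Fk @ P @ Fk.T + Q
    return x, P

def update(x, P, z_pos, gate=9.0):
    """Fuse a camera-derived position; reject outliers by chi-square gate."""
    y = z_pos - H @ x                          # innovation
    S = H @ P @ H.T + R
    if float(y.T @ np.linalg.inv(S) @ y) > gate:
        return x, P                            # outlier rejected
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P
```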
Multi-Sensor Fusion Pipeline
The PhoenixAI Fusion-ISR software platform implements a sophisticated sensor fusion pipeline that integrates detections from multiple heterogeneous sensors to produce a unified track picture with accuracy exceeding any individual sensor.
Per-Sensor Processing
Each sensor's processing pipeline independently computes location estimates for all detected intruders, along with error estimates for each detection (see the covariance sketch after this list). Each sensor modality contributes its unique perspective:
- RF Passive: Location derived from cellular scatter geometry with RCS-based confidence
- RF Active: Direction finding from Wi-Fi signal strength (RSSI), plus cellular communications signal analysis and emitter identification
- EO/IR Vision: Pixel-based detection with depth estimation and angular uncertainty
- Thermal: Heat signature localization with temperature-based confidence scoring
- LiDAR: Precise ranging with sub-centimeter accuracy but limited field of view
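To make these per-sensor error estimates concrete, the sketch below shows how two of the modalities might map their native uncertainties into a common 3x3 position covariance. The sigma values are illustrative assumptions, not calibrated F100 figures.

```python
# Modality-specific errors mapped to a common covariance (sigmas assumed).
import numpy as np

def eo_ir_covariance(range_m, ang_sigma_rad=2e-3, range_sigma_m=1.0):
    """EO/IR in a sensor-aligned frame (cross, cross, range): lateral
    error grows linearly with range via the angular uncertainty, while
    stereo depth error dominates the range axis."""
    lateral = (range_m * ang_sigma_rad) ** 2
    return np.diag([lateral, lateral, range_sigma_m ** 2])

def lidar_covariance(range_sigma_m=0.005):
    """LiDAR: sub-centimeter ranging in all axes inside its narrow FOV."""
    return np.eye(3) * range_sigma_m ** 2
```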
Track Correlation and Fusion
Individual sensor detections are correlated using a multi-hypothesis tracking algorithm that considers spatial proximity, velocity consistency, and temporal coherence. The fusion process comprises the following stages (a numerical sketch of the weighting and fusion steps follows the list):
1. Association: Detections within error bounds are grouped as potential track candidates.
2. Weighting: Each detection is weighted by its inverse covariance (uncertainty).
3. Fusion: Weighted least-squares optimization produces the fused position estimate.
4. Filtering: A Kalman filter smooths the trajectory and predicts future states.
5. Master Track Generation: A consolidated track is produced with a reduced uncertainty ellipse.
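The weighting and fusion stages amount to inverse-covariance (weighted least-squares) combination: the fused estimate is pulled toward whichever sensor is most certain. A minimal sketch with illustrative values:

```python
# Inverse-covariance fusion sketch; positions/covariances are examples.
import numpy as np

def fuse(positions, covariances):
    """Fuse N position estimates with 3x3 covariances:
    x_fused = (sum W_i)^-1 * sum(W_i @ x_i), where W_i = P_i^-1."""
    info = np.zeros((3, 3))
    vec = np.zeros(3)
    for x, P in zip(positions, covariances):
        W = np.linalg.inv(P)
        info += W
        vec += W @ x
    P_fused = np.linalg.inv(info)
    return P_fused @ vec, P_fused

# EO/IR fix (meter-level) fused with a LiDAR fix (decimeter-level):
x_eo, P_eo = np.array([102.0, 48.5, 30.2]), np.eye(3) * 1.0
x_li, P_li = np.array([101.2, 48.9, 30.0]), np.eye(3) * 0.01
x_f, P_f = fuse([x_eo, x_li], [P_eo, P_li])
print(x_f)                    # dominated by the more certain LiDAR fix
print(np.sqrt(np.diag(P_f)))  # fused sigma is smaller than either input
```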
Golden Dome Architecture: Layered Defense
PhoenixAI proposes the 'Golden Dome' solution as an autonomous, layered Counter-UAS defense system designed for comprehensive military airspace protection. This architecture enables Beyond Visual Line of Sight (BVLOS) operations and utilizes distributed AI across a protected perimeter to create an impenetrable dome of awareness and response capability.
Three Integrated Layers
Core Intelligence - Fixed 5G Edge AI
A 5G Edge fixed AI system operates on-premises at the protected facility, providing ultra-low latency data fusion (sub-50ms), centralized decision-making, persistent 24/7/365 monitoring, and integration with external C2 systems.
Threat Detection & Fusion - Distributed Sensor Network
Integrates multiple heterogeneous sensor streams including drone-based wireless sensing (F100 platforms), third-party sensor integration via OpenAPI interfaces, multi-modal data streams (RF passive/active and video), and master track generation from correlated detections.
Kinetic Response - Autonomous Interception
Phase II capability featuring on-drone neural agents, optimal path planning via reinforcement learning, physical neutralization of hostile drones, proprietary RL patents for agile robotics deployment, and human-on-the-loop architecture with operator override.
Operational Workflow
1. Continuous Surveillance: Distributed sensor network maintains 360-degree coverage of protected airspace
2. Detection and Classification: Multi-modal sensors detect intrusions, FusionAI classifies threats
3. Threat Assessment: Core intelligence evaluates threat level and determines response requirement
4. Response Coordination: Optimal interceptor selection and launch authorization
5. Autonomous Interception: Kinetic AI guides interceptor to optimal engagement point
6. Neutralization and Assessment: Threat neutralized, results logged, operators notified
Edge Processing and Distributed Intelligence
Dual-Mode Architecture Philosophy
The F100 platform implements a sophisticated dual-mode computational architecture that balances autonomous edge capability with distributed intelligence when infrastructure is available. This approach directly addresses SWaP-C constraints while providing a clear evolutionary path from Phase I standalone operations to Phase II distributed swarm intelligence.
Phase I: Standalone Edge Processing
In Phase I configuration, each F100 platform operates as a fully autonomous unit with all mission-critical processing performed locally on the NVIDIA Jetson Orin compute module. This mode ensures operational capability in GPS-denied, network-denied, or contested electromagnetic environments.
Phase II: Distributed Edge Intelligence Architecture
The next evolutionary step migrates computationally intensive AI/ML processing from individual drones to strategically positioned Edge Servers at cellular towers or forward operating bases. This approach allows drones to remain lightweight platforms with only essential onboard safety features, significantly reducing onboard power consumption and extending mission duration.
Centralized Swarm Intelligence
The edge server acts as a powerful centralized 'AI-brain' that aggregates real-time data from the entire fleet, performing global path planning, collaborative task allocation, fleet-level conflict resolution, and coordinated swarm tactics for search, surveillance, and engagement missions.
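One way to picture the collaborative task-allocation role is as an assignment problem solved at the edge server. The sketch below uses SciPy's Hungarian-algorithm solver to pair interceptors with fused threat tracks by distance; a real deployment would also fold in fuel state, intercept geometry, and threat priority, so treat this as an assumed simplification.

```python
# Fleet task-allocation sketch for the edge-server "AI-brain".
import numpy as np
from scipy.optimize import linear_sum_assignment

drones = np.array([[0, 0], [500, 0], [0, 500]])          # fleet positions (m)
threats = np.array([[450, 100], [50, 480], [300, 300]])  # fused track positions (m)

# Cost matrix: straight-line distance from every drone to every threat.
cost = np.linalg.norm(drones[:, None, :] - threats[None, :, :], axis=2)

rows, cols = linear_sum_assignment(cost)   # optimal one-to-one assignment
for d, t in zip(rows, cols):
    print(f"drone {d} -> threat {t} ({cost[d, t]:.0f} m)")
```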
Technology Readiness and Development Roadmap
Phase I Features (Current Demonstration Capabilities)
Phase I features represent current technology demonstrations at Technology Readiness Level (TRL) 6-7, having been demonstrated in relevant operational environments. Upon Phase II SBIR funding approval, these features will be commercialized, hardened for production deployment, and integrated into a unified system architecture.
| Feature | Description |
|---|---|
| Multi-Sensor Fusion | Integrates RF passive, RF active, EO/IR, thermal, and LiDAR for comprehensive threat detection |
| AI Object Detection | Sub-100ms LSS drone classification using physics-informed ML with neural network architecture |
| Autonomous Navigation | GPS-denied operation via SLAM, visual-inertial odometry, and 3D environmental mapping |
| Onboard AI Processing | 100W NVIDIA Jetson Orin for real-time local AI without cloud dependency |
| 5G Communications | Encrypted BVLOS communication when network available |
| Passive RF Detection | Covert sensing via 4G/5G cellular network scatter analysis |
| Active RF Analysis | 4G/5G/Wi-Fi beacon monitoring and channel fingerprinting |
| Stereo Vision System | ZED 2i 4MP@30FPS, 100m+ detection range with depth perception |
| MOSA Architecture | Open interfaces supporting vendor-neutral sensor integration |
Phase II Features (Future Development Roadmap)
Phase II features represent advanced capabilities planned for development, building upon the Phase I foundation to address emerging operational requirements, advanced threat scenarios, and enhanced multi-platform coordination. These will advance from concept (TRL 3-4) to demonstration in operational environments (TRL 7-8).
| Feature | Description |
|---|---|
| Multi-Platform Coordination | Drone-to-drone communication with synchronized data fusion across fleet |
| Enhanced Geo-location | Multi-ISR platform triangulation for precise target localization |
| Detection Characterization | Probability of detection analysis by RCS, RF strength, UAV size/group |
| Acoustic Sensing | Passive acoustic array for silent drone detection via motor/propeller signatures |
| CBRN Detection | Chemical, Biological, Radiological, Nuclear sensor integration |
| Counter-Swarm Capabilities | Coordinated multi-target tracking and threat prioritization algorithms |
| Hardened Communications | Cognitive radio, adaptive modulation, AI-based anti-jamming and frequency hopping |
| Quantum-Resistant Encryption | Post-quantum cryptographic algorithms for long-term security |
Strategic Impact
The PhoenixAI Fusion-ISR F100 platform represents a transformational advancement in autonomous aerial Counter-UAS capabilities. By integrating multi-sensor fusion, edge AI processing, and secure communication in a compact, deployable platform, the F100 addresses critical capability gaps in modern threat environments.
The Future of Autonomous Counter-UAS
PhoenixAI's Fusion-ISR F100 platform delivers unprecedented capabilities for intelligence, surveillance, reconnaissance, and autonomous counter-UAS operations. From standalone edge processing to distributed swarm intelligence, the F100 adapts to mission requirements while maintaining operational effectiveness in the most challenging environments.