AI-Enabled RF Passive Sensing for Counter-UAS

Resolving the scale-range ambiguity of computer vision by fusing AI-driven passive RF sensing with visual detection


Computer vision (CV) techniques are among the most accurate means of passively detecting and locating rogue drones. They are, however, limited by environmental and lighting conditions and by the scale-range ambiguity that stems from projecting a three-dimensional object onto a two-dimensional image.

RF passive techniques, by contrast, can detect drones in adverse weather conditions but suffer from localization challenges. In this research, we describe how RF passive sensing, coupled with the power of artificial intelligence (AI), can be combined with CV techniques to pinpoint the location of drones with high accuracy.

By combining RF passive sensing with computer vision, we can resolve the fundamental scale-range ambiguity problem that plagues traditional visual detection systems—enabling accurate drone tracking regardless of size or distance.

The Problem: Scale-Range Ambiguity in Computer Vision

Recent strategic defense initiatives, including the U.S. Department of Defense's Golden Dome concept for integrated homeland defense, emphasize the need for persistent sensing capable of addressing a broad spectrum of aerial threats, ranging from ballistic and hypersonic systems to low-cost unmanned platforms.

A variety of sensing modalities have been explored for UAV detection, including radar, acoustic sensing, computer vision, and radio frequency (RF)–based approaches. Vision-based systems provide high spatial resolution and intuitive situational awareness, benefiting from recent advances in deep learning–based object detection and classification.

The Fundamental Challenge

However, computer vision–based approaches inherently suffer from scale and range ambiguity due to their reliance on two-dimensional image projections. Object detection algorithms typically represent targets using bounding boxes in image space, where the apparent size of a detected object depends jointly on its physical dimensions and its distance from the sensor.

The Scale-Range Ambiguity Problem

A large UAV at long range can produce a bounding box similar in size to that of a smaller UAV operating at closer proximity. For example, a small drone flying near the camera can produce the same bounding box as a commercial airliner flying far away. Without additional information about the object, this ambiguity can lead to misclassification and inaccurate location estimation.

The Solution: RF Passive Sensing with AI

Passive RF sensing is proposed as a technique that can help resolve this ambiguity. By exploiting UAS-related RF emissions such as control links, telemetry, and navigation signals, passive RF systems provide information that is largely independent of visual scale effects.

Signal characteristics such as received power, spectral occupancy, modulation structure, and temporal behavior can be used to infer operational range, link geometry, or platform type, thereby providing context unavailable to purely image-based systems.

01. Weather Independence: RF passive sensing works effectively in adverse weather conditions where computer vision fails, including fog, rain, and low-light environments.

02. Scale Independence: Unlike visual detection, RF signals provide range information independent of the drone's physical size, resolving the scale-range ambiguity.

03. AI-Powered Accuracy: Convolutional neural networks trained on RF power maps can predict drone distance with mean errors as low as 7 meters.

04. Complementary Fusion: When combined with computer vision outputs, passive RF observations reduce uncertainty in target interpretation and improve overall situational awareness.

Technical Approach: RF Simulations

Mathematical Setup

We developed a simplified simulation approach wherein a 5G base station, whose location and antenna pattern are known, is assumed to be the transmitter. The power scattered by the drone and received at different points on the ground is simulated using the well-known radar equation.

The simulation setup assumes a drone flying at a fixed altitude along a straight line from the transmitter toward the receiver. The power at the receiver is computed with the bistatic radar equation, which accounts for the transmitted power, antenna gains, distances, wavelength, and the radar cross-section (RCS) of the drone.
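
For reference, the bistatic form of the radar equation relates these quantities as

$$
P_r = \frac{P_t \, G_t \, G_r \, \lambda^2 \, \sigma}{(4\pi)^3 \, R_t^2 \, R_r^2}
$$

where $P_t$ is the transmitted power, $G_t$ and $G_r$ are the transmit and receive antenna gains, $\lambda$ is the wavelength, $\sigma$ is the bistatic RCS of the drone, and $R_t$ and $R_r$ are the transmitter-to-drone and drone-to-receiver distances, respectively.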

Key Parameters

Frequency: 3.6 GHz (aligned with CBRS spectrum for 5G)
Altitude: 200m constant flight altitude
Antenna Configuration: Array of linear dipoles with the beam down-tilted 7° below the horizon
Receiver Gain: 0 dBi baseline (optimizable based on receiver sensitivity)
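
A minimal sketch of how a received power map can be computed with these parameters; the transmit power, transmit gain, and RCS values below are illustrative assumptions, not the values used in the study:

```python
import numpy as np

# Illustrative constants; transmit power, transmit gain, and RCS are
# assumptions for this sketch, not the paper's exact values.
C = 3e8                   # speed of light, m/s
FREQ = 3.6e9              # carrier frequency (CBRS band), Hz
LAM = C / FREQ            # wavelength, m
P_T = 10.0                # transmit power, W (assumed)
G_T = 10 ** (15 / 10)     # transmit gain, 15 dBi (assumed)
G_R = 1.0                 # receiver gain, 0 dBi baseline
SIGMA = 0.05              # bistatic RCS of the drone, m^2 (assumed)
ALT = 200.0               # drone altitude, m

def received_power(drone_z, rx_y, rx_z):
    """Bistatic radar equation: transmitter at the origin of the YZ ground
    plane, drone at altitude ALT above the point (y=0, z=drone_z),
    receiver on the ground at (rx_y, rx_z)."""
    r_t = np.sqrt(drone_z**2 + ALT**2)                      # Tx -> drone
    r_r = np.sqrt(rx_y**2 + (rx_z - drone_z)**2 + ALT**2)   # drone -> Rx
    return P_T * G_T * G_R * LAM**2 * SIGMA / ((4 * np.pi)**3 * r_t**2 * r_r**2)

# Receivers on a ground grid in the YZ plane; drone 300 m downrange.
yy, zz = np.meshgrid(np.linspace(-200, 200, 41), np.linspace(0, 700, 141))
power_map = received_power(300.0, yy, zz)   # one 41 x 141 power map
```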

Radar Cross-Section Modeling

A critical unknown in the simulation is the RCS of the drone, which depends on the drone characteristics and frequency. To obtain this, a simplified drone model was imported into FEKO, and the RCS for different angles of incidence was simulated.

The angle of incidence was chosen to lie between 90° and 180°: 90° corresponds to the drone directly above the base station, and 180° to the drone very far from it. For each angle of incidence, the scattered fields over the entire sphere were computed, although only the bottom hemisphere is required, since the only concern is the fields scattered toward the earth.
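
Within the power-map simulation, the FEKO sweep is most naturally consumed as a lookup table indexed by incidence angle. A minimal sketch, where the sample values are placeholders rather than actual FEKO outputs:

```python
import numpy as np

# Placeholder bistatic RCS samples vs. incidence angle (degrees);
# in practice these values come from the FEKO sweep described above.
INC_ANGLES = np.array([90, 105, 120, 135, 150, 165, 180])
RCS_SAMPLES = np.array([0.08, 0.06, 0.05, 0.04, 0.05, 0.07, 0.09])  # m^2

def rcs_lookup(angle_deg):
    """Linearly interpolate the simulated RCS between sampled angles."""
    return np.interp(angle_deg, INC_ANGLES, RCS_SAMPLES)
```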

Base Station Antenna Modeling

The base-station antenna was modeled as an array of linear dipoles spaced λ/2 apart and phased to down-tilt the beam 7° below the horizon. Notably, the drone flies predominantly through the sidelobes of the base-station antenna, leading to rapid fluctuations of the incident power as the drone travels.
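
A minimal sketch of the array factor for such a phased dipole array; the element count is an assumption:

```python
import numpy as np

C, FREQ = 3e8, 3.6e9
LAM = C / FREQ
K = 2 * np.pi / LAM          # wavenumber
N = 8                        # number of dipole elements (assumed)
D = LAM / 2                  # λ/2 element spacing
TILT = np.deg2rad(7)         # 7° down-tilt below the horizon

# Progressive inter-element phase that steers the beam 7° below broadside.
beta = K * D * np.sin(TILT)

theta = np.linspace(0, np.pi, 1801)          # angle from zenith, rad
n = np.arange(N)
psi = K * D * np.cos(theta)[:, None] + beta  # per-element phase term
array_factor = np.abs(np.exp(1j * n * psi).sum(axis=1))
# The main beam peaks at theta ≈ 97° from zenith, i.e. 7° below the
# horizon; the drone mostly traverses the sidelobes above it.
```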

Simulation Results: Received Power Maps

Based on the formulations presented, received power maps were computed for two drone sizes: (a) the baseline drone, and (b) a model scaled up by a factor of 4.5 to represent a larger drone.

In these plots, the YZ plane represents the ground where the transmitter and receiver are positioned. The drone location is swept along the Z-axis, keeping the altitude constant at 200m.

Key Observation 01: The scattered power maximizes in the region around the specular reflection zone, the point on the earth where the angle of incidence equals the angle of scattering. This trend holds even when the drone size is scaled by 4.5x.

Key Observation 02: The larger drone concentrates the power on a smaller region of the earth, which aligns with physical expectations. These results show specific trends in the received power, making it well suited for training an ML model.

Key Observation 03: The received power patterns exhibit consistent characteristics that a neural network can learn in order to predict the range of the drone from the transmitter, based on the power measured by different receivers on the ground.

CNN Architecture for Range Prediction

Based on the received power maps created through RF simulations, a convolutional neural network (CNN) model was developed to process the maps as input and predict the distance of the drone from the transmitter base station at the output.

Network Architecture

LAYER 01

Input Layer

The CNN takes the flattened version of the power maps as input. Each map consists of 41×141 pixels, leading to 5,781 neurons at the input layer. This layer processes the spatial distribution of received power across the ground plane.

LAYER 02

Hidden Layer

A hidden layer consisting of 128 neurons processes the input features. This layer learns to extract relevant patterns from the power distribution that correlate with drone distance.

LAYER 03

Output Layer

The output layer predicts the distance of the drone from the transmitting base station. For the demonstration, we assume the drone is flying at a constant altitude of 200m along a straight line.
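
A minimal PyTorch sketch of the layers described above (since the power map is flattened at the input, these layers form a fully connected network; the ReLU activation is an assumption):

```python
import torch
import torch.nn as nn

# Flattened 41x141 power map -> 128 hidden neurons -> scalar range.
model = nn.Sequential(
    nn.Flatten(),                 # 41x141 map -> 5,781 input neurons
    nn.Linear(41 * 141, 128),     # 128-neuron hidden layer
    nn.ReLU(),                    # activation is an assumption
    nn.Linear(128, 1),            # predicted drone-to-base-station distance
)

dummy_map = torch.randn(1, 41, 141)   # one simulated power map
print(model(dummy_map).shape)         # torch.Size([1, 1])
```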

Training Methodology

To generate the training dataset, a 2.5x scaled drone model was used. The position of the transmitter base station was incremented by 0.5m between -500m and -100m, generating 800 heatmaps. The training was performed on 80% of the maps, and evaluation was performed on the remaining 20%, both randomly selected.
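
A minimal sketch of the described 80/20 training split, with placeholder data standing in for the simulated maps; the loss function, optimizer, and hyperparameters are assumptions:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, random_split

# Random tensors stand in for the 800 simulated power maps and the
# corresponding drone-to-base-station distances.
maps = torch.randn(800, 41, 141)
ranges = torch.rand(800, 1) * 400 + 100   # placeholder distances, m

# Same network as in the previous sketch.
model = nn.Sequential(
    nn.Flatten(), nn.Linear(41 * 141, 128), nn.ReLU(), nn.Linear(128, 1)
)

train_set, test_set = random_split(TensorDataset(maps, ranges), [640, 160])
loader = DataLoader(train_set, batch_size=32, shuffle=True)  # assumed batch size

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)    # assumed optimizer/lr
loss_fn = nn.MSELoss()                                       # assumed loss

for epoch in range(50):                                      # assumed epochs
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```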

Performance Results

The results achieved by the CNN model clearly illustrate the potential of using convolutional neural networks to estimate the range of the drone, thereby successfully augmenting computer-vision techniques and alleviating the bounding-box ambiguity described earlier.

Exceptional Accuracy on Training Data

The trained model is capable of predicting the range (distance of drone from the base station) with a mean error of only 7 meters when evaluated on test data from the same 2.5x scaled drone model used in training.

Generalization Performance

To further evaluate the CNN algorithm, the model that was trained on the 2.5x scaled model was evaluated on simulated power maps for:

  • 2.5x scaled drone (training data): excellent performance up to 350m, with a mean error of 7 meters.
  • Baseline drone (smaller): successful up to 350m (a linear distance of roughly 300m at 200m altitude).
  • 4.5x scaled drone (larger): successful up to 350m, demonstrating size-independent prediction.

The CNN model is remarkably successful in predicting range for both the baseline and scaled drone configurations up to a range of 350m at an altitude of 200m, which translates to a linear (horizontal) distance of roughly 300m. Critically, RF passive sensing, unlike its CV counterpart, can accurately predict range independent of the size of the drone, thereby resolving the scale-range ambiguity suffered by CV techniques.

Strategic Advantages of the Hybrid Approach

  • Resolves Scale-Range Ambiguity: RF passive sensing provides accurate range estimation independent of drone size, eliminating the fundamental limitation of computer vision systems.
  • Weather-Independent Operation: RF signals penetrate fog, rain, and other environmental conditions that degrade visual detection, ensuring continuous monitoring capability.
  • Size-Agnostic Detection: The trained CNN model successfully predicts range for drones both smaller and larger than those in the training dataset, demonstrating robust generalization.
  • Complementary Information Fusion: By combining RF range estimation with visual bounding box data, the system achieves superior tracking accuracy compared to either modality alone.
  • Passive Operation: Unlike active radar systems, passive RF sensing does not emit signals, reducing detectability and enabling covert monitoring operations.
  • AI-Enhanced Precision: Machine learning models extract complex patterns from RF power distributions that would be difficult or impossible to capture with traditional analytical approaches.

Practical Applications

Counter-UAS Systems

The hybrid RF-CV approach provides comprehensive drone detection and tracking for critical infrastructure protection, airport security, and military installations. By combining visual identification with RF range estimation, security operators gain precise location information regardless of lighting conditions or drone size.

Airspace Security

Integration with the Department of Defense's Golden Dome concept enables persistent monitoring of aerial threats. The system can distinguish between legitimate aircraft and unauthorized drones while providing accurate tracking data for response coordination.

Urban Operations

In complex urban environments where visual line-of-sight is frequently obstructed, RF passive sensing maintains tracking capability by detecting scattered signals from 5G infrastructure, providing continuous situational awareness even when drones move behind buildings.

Future Directions

This research demonstrates a proof of concept for AI-enabled RF passive sensing augmenting computer vision techniques. Several avenues exist for extending this work.

Conclusion

This research proposed a methodology wherein artificial intelligence was employed to predict the location of a drone via RF passive sensing. Specifically, the transmitter was assumed to be a 5G base station whose location is known, and the drone was assumed to be the scatterer.

The received power (after being scattered by the drone) was simulated using the radar equation. The RCS of representative drones was simulated using FEKO electromagnetic modeling software. The received power maps were then used to train a convolutional neural network to predict the distance of the drone from the base-station antenna.

The results clearly show the potential of the CNN to accurately predict the distance of the drone out to 350m. This technique, when combined with CV techniques, can alleviate the well-known scale and range ambiguity problem and lead to extremely precise drone tracking.

By combining RF passive sensing with computer vision, PhoenixAI has demonstrated a powerful approach to counter-UAS operations that overcomes the fundamental limitations of single-modality detection systems. This hybrid approach represents a significant advancement in drone detection and tracking technology, with immediate applications in defense, security, and critical infrastructure protection.

Advancing Counter-UAS Technology

PhoenixAI continues to push the boundaries of multi-modal sensor fusion for counter-UAS applications. Our research demonstrates how combining RF passive sensing with computer vision—enhanced by artificial intelligence—creates detection systems far more capable than either technology alone.