The US Navy is working to take the human operator out of the loop when countering drone swarm attacks. An effort led by the Naval Postgraduate School (NPS) used AI to make laser weapons better able to target and destroy multiple attacking drones.
With their ability to engage targets at the speed of light, lasers are being seriously developed by the major military powers as a counter to many threats – not the least of which is the presence of increasingly sophisticated drones.
However, lasers are hardly a panacea, and they have a number of problems that need to be overcome if they are to become practical weapons. For starters, current laser systems require a human operator with a certain degree of finesse when it comes to identifying and firing on targets.
Essentially, the problem can be divided into two tasks. In the case of attacking drones, the first is to identify what kind of drone it is in order to determine which weak spot to attack. The second is to hold the laser beam on that weak spot long enough to destroy or neutralize the target – a tricky challenge that's bound to get trickier as autonomous drones become quicker and more agile in flight.
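For illustration only, here's a minimal sketch of that two-task split in Python. The drone types, weak-spot offsets, and the simple 2-D pose handling are all invented for the example and don't reflect the Navy's actual system:

```python
"""Toy two-stage counter-drone pipeline. All names and numbers are
illustrative assumptions, not the Navy's implementation."""
from dataclasses import dataclass
import math

@dataclass
class Track:
    drone_type: str  # task 1 output: what an image classifier decided this is
    yaw: float       # estimated target orientation relative to the observer, radians

# Hypothetical table mapping drone type to a weak-spot offset in the
# target's body frame (meters): where the beam should be held.
WEAK_SPOT = {
    "fixed_wing": (1.2, 0.0),  # say, a nose-mounted sensor package
    "quadcopter": (0.0, 0.1),  # say, a battery pack under the hull
}

def aimpoint(track: Track) -> tuple[float, float]:
    """Task 2: rotate the body-frame weak spot into the observer's frame
    so the beam director can servo onto it as the target maneuvers."""
    dx, dy = WEAK_SPOT[track.drone_type]
    c, s = math.cos(track.yaw), math.sin(track.yaw)
    # A real system would do full 3-D pose estimation; this 2-D rotation
    # just shows why orientation matters for picking the aimpoint.
    return (dx * c - dy * s, dx * s + dy * c)

print(aimpoint(Track("fixed_wing", math.pi / 6)))
```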
Human operators still have a chance of succeeding against a single drone, but swarms of the things are another matter. True, a laser can flick from one target to the next in a fraction of a second, but identifying a weak spot and holding the beam on it is a far harder problem. In a combat situation, a human operator would quickly be overwhelmed. And as lasers advance to handle hypersonic missiles, the problem only gets worse.
In a collaboration between NPS, Naval Surface Warfare Center Dahlgren Division, Lockheed Martin, Boeing, and the Air Force Research Laboratory (AFRL), a new tracking system for anti-drone lasers is being developed that uses AI to overcome human limitations, not only in targeting but also in handling the atmospheric distortions that can send a laser beam off target over long distances.
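Compensation of that kind is typically done as a closed loop: measure where the beam spot actually lands relative to the chosen aimpoint, then steer to cancel the error. Here's a toy proportional-correction loop with made-up gains and noise levels, sketched purely to show the idea rather than the team's controller:

```python
"""Toy closed-loop beam steering: a proportional controller nudges the
beam director to cancel the measured pointing error. The gain, noise,
and turbulence figures are invented for illustration only."""
import random

def measure_error(true_offset: float) -> float:
    """Stand-in for a sensor reading of spot-vs-aimpoint error (radians)."""
    return true_offset + random.gauss(0.0, 1e-6)  # sensor noise

offset = 50e-6  # initial pointing error, radians
gain = 0.5      # proportional gain of the correction loop
for step in range(20):
    error = measure_error(offset)
    offset -= gain * error             # steer to cancel the measured error...
    offset += random.gauss(0.0, 2e-6)  # ...while turbulence keeps pushing back

print(f"residual pointing error: {offset:.2e} rad")
```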
The team trained an AI system using a miniature model of a Reaper drone, 3D printed out of titanium alloy. This was scanned in infrared light and with radar to simulate how a full-sized drone would look through a telescope from various angles and distances under conditions of less-than-perfect visibility.
The image catalogs produced two datasets of 100,000 images that were used to train an AI system so that it could identify the drone, confirm its angle relative to the observer, seek out the weak spot, and hold the beam on that spot. Meanwhile, radar input provided the data for determining the drone's course and distance. Three training scenarios were then posed: the first used only synthetic data, the second a combination of synthetic and real-world data, and the third only real-world data.
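In dataset terms, the three scenarios amount to swapping out the training split while evaluating against the same kind of held-out real imagery. A schematic sketch, where the catalogs, train(), and evaluate() are all stand-ins rather than the paper's code:

```python
"""Schematic of the three training regimes described above. The datasets
and the train()/evaluate() functions are placeholders, not the paper's code."""

def load(catalog: str) -> list[str]:
    """Stand-in for loading one of the 100,000-image catalogs."""
    return [f"{catalog}/img_{i:06d}.png" for i in range(100_000)]

def train(images: list[str]) -> str:
    """Stand-in: fit the classifier + aimpoint model on these images."""
    return f"model trained on {len(images)} images"

def evaluate(model: str) -> float:
    """Stand-in: measure aimpoint error on held-out real imagery."""
    return 0.0

synthetic = load("synthetic")  # rendered views of the printed scale model
real = load("real")            # real-world sensor imagery

scenarios = {
    "synthetic_only": synthetic,              # scenario 1
    "synthetic_plus_real": synthetic + real,  # scenario 2
    "real_only": real,                        # scenario 3
}
for name, images in scenarios.items():
    print(name, evaluate(train(images)))
```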
According to the US Navy, the third scenario worked best, with the smallest margin of error.
The next step will be field testing against real targets using radar and optical tracking, with a semi-autonomous system in which a human operator still controls some aspects of tracking.
"We now have the model running in real-time inside of our tracking system,” says Eric Montag, an imaging scientist at Dahlgren. "Sometime this calendar year, we're planning a demo of the automatic aimpoint selection inside the tracking framework for a simple proof of concept,” Montag adds. “We don't need to shoot a laser to test the automatic aimpoint capabilities. There are already projects — [The High Energy Laser Expeditionary (HELEX) demonstrator] being one of them — that are interested in this technology. We've been partnering with them and shooting from their platform with our tracking system.”
The research was published in Machine Vision and Applications.
Source: US Navy