Aerial Warfare

General Atomics Aeronautical Systems Inc Flies Multiple Missions Using Artificially Intelligent Pilots


General Atomics Aeronautical Systems, Inc. (GA-ASI) further advanced its Collaborative Combat Aircraft (CCA) ecosystem by flying three unique missions with artificially intelligent (AI) pilots on an operationally relevant Open Mission System (OMS) software stack. A company-owned Avenger® Unmanned Aircraft System (UAS) was paired with “digital twin” aircraft to autonomously conduct Live, Virtual, and Constructive (LVC) multi-objective collaborative combat missions. The flights, which took place on Dec. 14, 2022, from GA-ASI’s Desert Horizons flight operations facility in El Mirage, Calif., demonstrated the company’s commitment to maturing its CCA ecosystem for Autonomous Collaborative Platform (ACP) UAS using AI and Machine Learning (ML). This provides a new and innovative tool that lets next-generation military platforms make decisions under dynamic and uncertain real-world conditions. The flights used GA-ASI’s novel Reinforcement Learning (RL) architecture, built with agile software development methodology and industry-standard tools such as Docker and Kubernetes, to develop and validate three deep-learning RL algorithms in an operationally relevant environment.
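GA-ASI has not published details of its RL architecture, so the following is only a minimal sketch of the kind of deep-RL training loop such a containerized pipeline might develop and validate. The toy threat-avoidance environment, policy network, and hyperparameters are illustrative assumptions, not details of the flight program.

```python
# Minimal sketch of a policy-gradient (REINFORCE) training loop, assuming PyTorch.
# The toy environment, network size, and hyperparameters are illustrative
# stand-ins, not details of GA-ASI's RL architecture.
import torch
import torch.nn as nn
from torch.distributions import Categorical


class ToyAvoidanceEnv:
    """1-D corridor: reach the goal cell while never landing on the threat cell."""

    def __init__(self, size: int = 10, threat: int = 5, goal: int = 9):
        self.size, self.threat, self.goal = size, threat, goal

    def reset(self) -> torch.Tensor:
        self.pos = 0
        return self._obs()

    def _obs(self) -> torch.Tensor:
        # Observation: normalized own position and threat position.
        return torch.tensor([self.pos / self.size, self.threat / self.size])

    def step(self, action: int):
        # action 0 = advance one cell, action 1 = jump two cells (can skip the threat)
        self.pos = min(self.pos + (1 if action == 0 else 2), self.size - 1)
        if self.pos == self.threat:
            return self._obs(), -1.0, True   # flew into the threat
        if self.pos >= self.goal:
            return self._obs(), 1.0, True    # mission objective reached
        return self._obs(), -0.01, False     # small cost per step


policy = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

for episode in range(500):
    env, log_probs, rewards = ToyAvoidanceEnv(), [], []
    obs, done = env.reset(), False
    while not done:
        dist = Categorical(logits=policy(obs))
        action = dist.sample()
        obs, reward, done = env.step(int(action))
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)
    # Discounted returns, then the REINFORCE gradient estimate.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + 0.99 * g
        returns.insert(0, g)
    loss = -(torch.stack(log_probs) * torch.tensor(returns)).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a pipeline like the one described, each algorithm under development would typically be packaged in its own container image so that training and validation runs can be scheduled side by side; that packaging is omitted here.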

“The concepts demonstrated by these flights set the standard for operationally relevant mission systems capabilities on CCA platforms,” said GA-ASI Senior Director of Advanced Programs Michael Atwood. “The combination of airborne high-performance computing, sensor fusion, human-machine teaming, and AI pilots making decisions at the speed of relevance shows how quickly GA-ASI’s capabilities are maturing as we move to operationalize autonomy for CCAs.”

Reinforcement Learning agents demonstrated single-agent, multi-agent, and hierarchical-agent behaviors. The single-agent RL model navigated the live aircraft while dynamically avoiding threats to accomplish its mission. Multi-agent RL models flew a live and a virtual Avenger to collaboratively chase a target while avoiding threats. The hierarchical RL agent used sensor information to select courses of action based on its understanding of the world state, demonstrating the AI pilot’s ability to process and act on live, real-time information independently of a human operator and make mission-critical decisions at the speed of relevance. For the missions, flight paths were updated in real time based on fused sensor tracks provided by virtual Advanced Framework for Simulation, Integration, and Modeling (AFSIM) models, and operators dynamically selected RL agent missions while the aircraft was airborne, demonstrating live, effective human-machine teaming for autonomy. The live operational data on AI pilot performance will be fed into GA-ASI’s rapid retraining process for analysis and used to refine future agent performance.
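The article does not describe how the hierarchical agent maps its world-state estimate to a course of action. The sketch below shows one common pattern, a high-level selector delegating to sub-policies; the world-state fields, sub-policy names, and thresholds are hypothetical and purely illustrative, not taken from GA-ASI’s implementation.

```python
# Hedged sketch of a hierarchical selector-over-sub-policies pattern.
# World-state fields, sub-policy names, and thresholds are invented for illustration.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class WorldState:
    """Simplified fused picture built from (hypothetical) sensor tracks."""
    threat_range_km: float
    target_range_km: float
    fuel_fraction: float


def chase_policy(state: WorldState) -> str:
    return "close on target track"


def evade_policy(state: WorldState) -> str:
    return "maneuver away from threat"


def rtb_policy(state: WorldState) -> str:
    return "return to base"


class HierarchicalAgent:
    """High-level agent that picks a course of action, then delegates to a sub-policy."""

    def __init__(self) -> None:
        self.sub_policies: Dict[str, Callable[[WorldState], str]] = {
            "chase": chase_policy,
            "evade": evade_policy,
            "rtb": rtb_policy,
        }

    def select(self, state: WorldState) -> str:
        # In a learned system this selection would itself be an RL policy;
        # simple hand-written rules stand in for it here.
        if state.fuel_fraction < 0.2:
            return "rtb"
        if state.threat_range_km < 10.0:
            return "evade"
        return "chase"

    def act(self, state: WorldState) -> str:
        return self.sub_policies[self.select(state)](state)


# Example: a fused track update arrives and the agent re-plans on the fly.
agent = HierarchicalAgent()
print(agent.act(WorldState(threat_range_km=8.0, target_range_km=40.0, fuel_fraction=0.7)))
```

In a fielded system the selector itself would be a trained RL policy and the returned strings would be replaced by guidance commands sent to the flight-control and mission-system layers.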

The team used a government-furnished Collaborative Operations in Denied Environment (CODE) autonomy engine and the government-standard OMS messaging protocol to enable communication between the RL agents and the LVC system. Using government standards such as OMS enables rapid integration of autonomy onto CCAs. In addition, GA-ASI ran the autonomy architecture on General Dynamics Mission Systems’ EMC2, an open-architecture Multi-Function Processor with multi-level security infrastructure. Hosting the autonomy stack on EMC2 demonstrated the ability to bring high-performance computing resources to CCAs for mission sets that can be quickly tailored to the operational environment. This flight is another in an ongoing series of autonomous flights performed with internal research and development funding to prove out important AI/ML concepts for UAS.
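OMS message definitions are a controlled government standard and CODE is a government-furnished engine, so the sketch below reproduces neither; it only illustrates, with an invented publish/subscribe envelope and made-up topic names, how a standard messaging layer decouples RL agents from the LVC system and the vehicle.

```python
# Hedged sketch of message-based decoupling between an autonomy agent and a
# mission-system bus. The JSON envelope and topic names are invented for
# illustration; they are NOT the actual OMS message definitions.
import json
from typing import Callable, Dict, List


class MessageBus:
    """Tiny in-process publish/subscribe bus standing in for the real transport."""

    def __init__(self) -> None:
        self.subscribers: Dict[str, List[Callable[[dict], None]]] = {}

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic: str, payload: dict) -> None:
        message = json.loads(json.dumps(payload))  # simulate serialization across the bus
        for handler in self.subscribers.get(topic, []):
            handler(message)


def on_fused_track(track: dict) -> None:
    # Stand-in for agent inference: consume a fused track, emit a steering command.
    command = {"vehicle_id": track["vehicle_id"], "heading_deg": 270.0, "reason": "avoid threat"}
    bus.publish("vehicle.command", command)


bus = MessageBus()
bus.subscribe("sensor.fused_track", on_fused_track)
bus.subscribe("vehicle.command", lambda cmd: print("command out:", cmd))

# A (simulated) fused sensor track arriving from the LVC environment.
bus.publish("sensor.fused_track", {"vehicle_id": "avenger-01", "threat_range_km": 9.5})
```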
