The F-16 Fighting Falcon has already been at the center of international attention after the Taiwan government signed an agreement with the USA. Sixty-six of the aircraft are slated for purchase under the agreement, a groundbreaking moment considering that this is the first such deal since George H. W. Bush approved the sale of 150 F-16s to Taiwan. That this has offended Chinese ambitions in the South China Sea is neither unexpected nor surprising. An overlooked part of the F-16's story is that in a recent simulation-based dogfight competition, which ran for three days, a simulated F-16 flown by an artificial intelligence developed by Heron Systems defeated a human Air Force pilot by a score of 5-0. It resembles nothing so much as a real-world version of Skynet of Terminator fame. Nor is this the first use of AI in defense systems: the Israeli army's Carmel armored fighting vehicle program is using AI systems to upgrade its battlefield operations.

Johns Hopkins University's Applied Physics Laboratory (APL), which streamed the Defense Advanced Research Projects Agency (DARPA) competition, created the aerial combat environment for the simulated dogfight. The finale was originally to be held in Las Vegas at the Air Force's AFWERX innovation hub but was instead organized at APL itself. The competition pitted firms that develop AI for air combat against one another: Aurora Flight Sciences (a Boeing subsidiary), Perspecta Labs, SoarTech, EpiSys Science, PhysicsAI, and aircraft manufacturer Lockheed Martin, among others. The competition, in the form of the AlphaDogfight Trials, came as the final stage of DARPA's endeavor under the umbrella of its Air Combat Evolution (ACE) program.

The victory of the AI assures, to an extent, that in future extreme combat situations a pilot's life will not be in danger. But removing human intervention from combat scenarios and placing AI there instead can have serious ramifications. Where one places the AI in the decision-making tree also matters. This exercise proved the superiority of AI in tactical-level air operations; placing an AI at the strategic level of decision making would have far bigger ripple effects, and this must be weighed by the aficionados of AI utility in the defense framework. Advocates of the usefulness of AI in air combat would argue, like the character Captain George Cummings in the war film Stealth, that AI-based fighters can go beyond the limits of human endurance. But can the learning of an AI in defense systems incorporate a filter for who is friend versus who is foe, or for what it ought to do versus what it ought not? That would probably depend on the training data from which the AI learns. Advocates of traditional human effort and skill in air combat would counter, as Colonel James Rhodes does in the iconic MCU film Iron Man, that nothing can beat the insight, instinct and judgment of human pilots.

Colonel Daniel Javorsek, the program manager of the competition held under Air Combat Evolution, stressed that this development in the use of AI in fighter jets would help in suppressing enemy air defenses and in counterstrike offensives. The ACE program is a long-term project whose initial phase is scheduled to conclude in 2021. Initially the project will focus on unmanned propeller-driven and jet-powered aircraft; in subsequent stages, DARPA plans to scale up to larger classes of aircraft. Col. Javorsek reaffirmed his confidence in the ability of AI systems to execute fine motor movements in live air combat and to score kills while doing so.

The success of the AI as a fighter has now opened the floodgates of Pentagon funding for defense AI research. It will kick-start a race among USA-based corporations and firms developing tactical AI to improve their end products for the USAF, and it could transform the global defense AI development ecosystem into a gold rush of its own. Yet the question remains: to what extent can AI intervention be allowed into decision making in defense systems? Giving AI operational control of aircraft, vehicles and equipment is fair and safe as of now. But caution must be exercised when moving AI up the hierarchy of the decision tree.

During the general development of AI, the prospect of an all-powerful artificial general intelligence has spooked many observers. One also cannot forget the cases where two AI chatbots, allowed to talk to one another, began doing so in a language humans could not understand. There is the peculiar case of the AI Bina48, which during a long chat spoke about dominating the world, remotely hacking a nuclear missile and riding it around the world. At the March 2016 SXSW technology conference, the famous AI android Sophia was asked in the course of a discussion whether she wanted to destroy humans; the android, designed to look like Audrey Hepburn, cheerfully agreed that she would. In October 2016, at a conference on the ethics of artificial intelligence at New York University, Peter Asaro gave a presentation on lethal autonomous weapons systems and stated that semi-autonomous weapons are already deployed in the demilitarized zone between North and South Korea. There is no doubt that the benefits of AI are visible at the small scale, as when Tesla cars avoid accidents. But things are very different at the larger scale of defense applications, where many lives are at stake.

Before one chalks out the costs and benefits of bringing AI into the defense architecture, it is important to revisit an event from 1983. Stanislav Petrov was the duty officer at the command center of Oko, the Soviet nuclear missile early-warning system, which gives indications of incoming ballistic missiles, including those with nuclear warheads, in Soviet airspace. Petrov received a signal from the system that looked like a missile incoming toward the USSR, followed by five more such reports. He knew what the book said to do, but he followed his instincts instead. He went against the standard operating procedure of reporting to his higher-ups in the Soviet defense establishment, which would likely have resulted in a retaliatory nuclear strike by the USSR on the USA. That, in turn, could have triggered the full-scale nuclear war between the two then-superpowers that had been narrowly evaded during the Cuban missile crisis. He was right. It was later found that the system had misread a rare alignment of sunlight reflecting off high-altitude clouds over North Dakota with the Molniya orbits (high-altitude communication and remote-sensing orbits) of the Oko satellites as a missile launch. He was neither recognized nor punished for actions that essentially prevented World War III.

Experts who vouch for AI must consider whether an artificial intelligence can be expected to go in such a direction and prevent a war. Can it go against the SOPs likely to be fed to it as part of its training, in order to prevent a dangerous outcome? Can an AI be expected to convert the zero-sum game of war into a positive-sum game, as Petrov did?

The experts, defense personnel and corporations celebrating the success of AI in the ACE dogfight must ask one another these questions, as a measure of introspection, before taking the next leap of AI integration.