1. Introduction
On 9 August 1945, President Harry Truman weighed the decision to use nuclear bombs on Japan in World War II. He wrote to Senator Richard Russell, “…I am not going to do it unless it is absolutely necessary… My object is to save as many American lives as possible but I also have a humane feeling for the women and children in Japan” (Truman, 1945). Truman’s dilemma exemplifies the extremes of presidential decision-making in conflict: contemplating the use of a novel wartime technology after years of intense fighting throughout World War II. Nearly 80 years later, the integral human element of Truman’s decision-making remains salient, reflected in contemporary debates over proposals to prohibit autonomous systems from initiating nuclear launch without meaningful human control (Klare & Rostampour, 2023).
We conjure overwhelming images when characterising a modern conflict zone: fleets of drones tethered by fibre-optic cables across fields in Ukraine; fifth-generation aircraft decimating an entire country’s air defence infrastructure without loss; satellite constellations enabling mesh networks; and uncrewed aircraft loitering for persistent full-motion video. The image of a modern conflict zone is fast, information-dense, and deeply dependent on networks. The obvious response is to question how human decision-making will keep up—and the obvious criticism is that humans are too slow, may make mistakes, and might lose. The obvious answer is to turn to machines, and with the rise of artificial intelligence (AI), the temptation to integrate AI into modern conflict aligns with the expectation of a decisive edge.
In this modern conflict, the Royal Australian Air Force (RAAF) requires a decisive edge. On-boarding new technological capabilities will sharpen the RAAF’s ability to deliver precise effects. Current RAAF capabilities include: a crewed multi-role stealth fighter, the F-35A Lightning II; crewed aircraft for advanced airborne intelligence, surveillance and reconnaissance (ISR) and electronic warfare, such as the MC-55A Peregrine; a multi-role uncrewed combat aerial vehicle, the MQ-28A Ghost Bat; a high-altitude long-endurance uncrewed aerial vehicle (UAV), the MQ-4C Triton; and standoff weapons such as the extended-range Joint Air-to-Surface Standoff Missile (JASSM-ER). These capabilities describe a future of warfare for the RAAF that will provide, and depend upon, extensive situational awareness at ever-increasing ranges, a key priority in the Integrated Investment Program 2024 (Department of Defence, 2024), with the linchpin being accurate information at the right time, delivered fast. Militaries worldwide are adapting decision-making processes to provide a fighting edge. Brînză (2025), for example, argues that AI’s role is central to emerging decision-making situations in modern conflict and must be considered, but must be paired with security and ethical safeguards. And so, the RAAF turns to autonomy and to the question: when information is overwhelming and speed is critical, can AI be trusted to make an autonomous command decision?
In a military context, there are levels of autonomy already exercised by human operators as they relate to decision-making. There is system-level autonomy, whereby decision-making is enabled by policy and direction set at the strategic level and guided by senior decision-makers. There is delegated autonomy, whereby decision-making power is deferred to an autonomous process that follows set direction and rules in the absence of input. Finally, there is operational and tactical autonomy, whereby decision-makers are empowered to execute decisions to achieve intent, and an autonomous process works within restrictions and limitations to set its path to achieving a commander’s intent. For this paper, autonomy is defined as the ability to execute desired tasks in order to achieve a commander’s intent with minimal input and in the absence of additional guidance or direction. The focus is specifically on air power and the human implementation of air power effects.
Arguing that AI and autonomous decision-making are inevitable in the application of air power rests on a number of assumptions: reliable sensors providing accurate and timely information; contested information environments in which communication is degraded and disrupted; a shift in political and legal constraints on the military use of autonomy; and the capability and resources to research, acquire and implement autonomous processes faster than an adversary. This article focuses on the integration of AI and autonomous decision-making with a human operator, rather than the replacement of that operator. The sources are contemporary, and the method is a doctrinal and literature-based synthesis: reviewing publicly available Australian and allied doctrine, official program releases, and open research to derive a practical framework and vignettes for autonomy in RAAF operations.
2. Autonomy in contemporary air power
Aviation has gradually delegated discrete functions to machines, especially where safety is the foremost concern. Autonomous functions have proliferated as information requirements have increased and distances have expanded across areas of operation. In civil aviation, aircraft have progressed from heavy operator input to streamlined autonomous processes that, particularly given the rate of effort civil aviation maintains, create a safe flying experience. Autopilot is the quintessential example of an autonomous process in aviation that has replaced constant operator input and enabled long-distance aviation. It encompasses a diverse range of automated processes that maintain routes in conjunction with an array of information inputs, except during the critical phases of take-off and landing, when a human operator flies the aircraft. However, these autonomous processes have not entirely replaced the human operator. They have extended the operator’s endurance and enhanced their awareness without removing the operator’s ability to take over should it become necessary at any moment. These autonomous processes are integrated with human operators.
Autonomy in aviation already demonstrates how operator-centric processes can be enhanced through autonomous processes aimed at augmenting the human operator. In a military context, these processes continue to evolve and embed themselves wherever decision-makers and human operators demand accuracy and efficiency. The interconnectedness of modern fighter jets with an ecosystem of information provided by uncrewed platforms demonstrates a progressive delegation of discrete functions to autonomous machines, enabling speed and precision but still centred on a human decision-maker. Likewise, the rise of drone warfare points to a future of conflict in which swarms of autonomous aircraft are coordinated to produce lethal effects, planned by human operators. Added to these technological improvements, the rise of AI has accelerated data fusion, pattern recognition, and dynamic replanning in contested environments. Despite these advancements, the human operator remains at the centre of decision-making.
3.1. Command autonomy
There is an inherent hierarchy to military decision-making, devised in orders and explained with intent, to ensure that military actions are understood and actioned to achieve desired outcomes. Command authority sets direction, tone, discipline, and tempo, and provides the justification for action. Adding autonomy to command authority may provide opportunities for commanders at every level to organise oversaturated information and competing demands more quickly and precisely—especially in a force so reliant on information for a competitive edge.
However, the intangible elements of command authority are responsibility and accountability. Who is blamed if something goes wrong, and who bears the consequences? These elements are conferred to commanders with their authority. Weighing responsibility and accountability is a subjective, ethically and morally bound judgement reflecting an individual’s leadership. International humanitarian law constrains the application of force, including proportionality and civilian protection—requirements that underscore the need for human judgement (Négyesi, 2025). AI may not reliably fulfil such principles without human oversight, particularly where target discrimination and proportionality are at stake (Négyesi, 2025).
There is nonetheless a case for the autonomous commanding officer in the sense of autonomy-enabled command support. Air power often fights at long range in dense, cluttered electromagnetic environments that can cut off tactical elements from higher headquarters. Australian doctrine already accounts for this with centralised command and decentralised execution supported by distributed control, contingent on shared understanding and commander’s intent (Australian Defence Force [ADF], 2023, p. 88). In protracted, distributed conflict, air power may operate disaggregated from main bases under agile operations and must execute orders as delivered. In such conditions, autonomy can help interpret data, propose courses of action, and maintain alignment with intent—even beyond a complete air tasking cycle—provided a human retains authority for judgement and direction.
3.2. Functional lethal autonomy
Separate from the subjective art of decision-making responsibilities and accountabilities, there is the objective appreciation of rational autonomous systems programmed to make decisions. The rise of AI has been rapid, and the pursuit of self-learning AI tools is strong. At the consumer level, AI has created the impression that small, non-consequential decisions can be delegated to ease everyday life. The military debate is more consequential. Advocates contend AI can accelerate autonomy for vehicles, targeting, surveillance, aerial combat, and cyber operations, improving interpretation and response times during crises (Agarwala, 2023). Others emphasise that AI’s predictive capabilities can improve planning, coordination, and simultaneous execution, but still require careful human direction (Agarwala, 2023).
Technology is progressing incrementally towards a scenario where lethal force can be applied autonomously. In Ukraine, a ground-based robotic system outfitted with a .50 calibre machine gun, operated remotely by a human operator, held back Russian troops along a line of advance for several months (Khomenko, 2025). The system was fielded to minimise risks to Ukrainian troops, who are at a numerical disadvantage against Russian forces. However, military industry is clear that implementing autonomous programming within lethal robotic systems is the next step, if it is not already being used iteratively within the fast-paced Ukrainian theatre. One example is the Sky Sentinel system used in Ukraine to autonomously find, track, and provide firing solutions as an anti-drone capability, shooting down Shahed drones and, in some cases, even cruise missiles (Baker, 2025). The bridge between AI enabling lethal force and deciding to use lethal force is narrowing in modern conflict.
At its core, AI is about data—fusing, collating, and summarising to support decision rules and generate answers. There are opportunities in using AI as a form of autonomy for the RAAF, especially in data-rich environments where speed matters. As Tadjdeh (2020) summarises from United States Air Force leadership, the aim is to streamline repetitive tasks and free human cognition for higher-order reasoning and judgement. The obvious extension in air power is greater autonomy in uncrewed systems. These systems already exist and have been used in recent conflicts. For example, public reporting describes target-recommendation systems in Israel that accelerate ISR-to-strike workflows while retaining human decision authority (D’Evereux, 2024).
3.3. The human in the loop
Ultimately, the role of autonomy in Australian air power comes down to individual operators delivering effects. As Goztepe et al. (2015) note, the human factor remains decisive even in sophisticated decision systems, and intuitive skills bridge the gap between machine output and human judgement.
Four subjective aspects underscore the need for a human in the loop. First, prioritisation: while data collection can be automated, deciding what matters most is inherently contextual and subjective, shaped by factors like personalities, politics, international relations, economic effects, and violent conflict. Second, intent: capability can be measured objectively, but the intent to deploy an aircraft, strike a target, or disrupt an operation remains a human judgement. Third, piloting and control: while some responsibility is delegated to automation, keeping a human operator in the loop preserves subjective decision-making in high-stakes, high-tempo situations—and reflects the observe-orient-decide-act (OODA) loop’s human-centred design (Boyd, 1996). Fourth, human reaction: subjective responses to complex inputs can generate ‘micro-doses of chaos’ that create disruption and opportunity. For the RAAF, creating opportunities to generate such micro-doses of creative chaos through aviator ingenuity makes spaces for decisive action against a better-equipped adversary.
4. Hypothetical operational case studies
The following case studies illustrate hypothetical scenarios in which autonomous processes augment human operators to create an operator-centric operational environment: AI does not make decisions, but enables human operators to make the best decision at the right time. Three scenarios are explored: (1) a long-range strike under degraded communications; (2) teaming between manned and unmanned aircraft; and (3) distributed ISR informing a strike package. Each scenario is assessed with respect to functional autonomy, operator judgement, trust mechanisms and execution. Here, functional autonomy refers to autonomous processes operating within existing capabilities. Operator judgement refers to the human operator’s decision-making, complemented by functional autonomous processes. Trust mechanisms refer to how functional autonomy and operator judgement are integrated to enhance decision-making. Finally, execution refers to the combined outcome of functional autonomy and human judgement.
A careful calibration of autonomous processes reinforces the designation of a human in the loop as the ultimate decision-maker, in line with doctrine. Calibrating autonomous processes also emphasises how critical it remains for the RAAF to build familiarity with autonomous processes, in order to bolster trust in human-made decisions and to identify space for a human operator to exercise their subjective decision-making autonomy.
4.1. Strike under degraded communications
Scenario: In a high-threat environment, an F-35A pair receives a pre-planned strike tasking against a hardened facility using JASSM-ER. Shortly after entering the area of operations, the aircraft experience severe communications degradation, reflecting the expected contested electromagnetic environments in which the RAAF will operate (Vedula et al., 2023).
Functional Autonomy. The F-35A’s mission systems continuously assess the threat environment via integrated sensor fusion, in line with the platform’s mission system design. Once released, JASSM-ER autonomously refines its approach and aim-point via on-board guidance and survivability features designed for GPS-challenged environments (Hansen, 2006).
Operator Judgement. The pilots retain authority to proceed, abort, or re-task in line with mission command principles that emphasise human judgement under degraded command and control (C2). With limited contact from higher headquarters, they assess fuel, threat levels, and collateral risks from cockpit data and decide to continue with an adjusted ingress to reduce exposure.
Trust Mechanisms. Training intimately with the platform’s operating systems, aided by clear displays of system status and logic within the pilot interface, builds trust in autonomous processes. The aircrew validate that autonomy-driven updates remain within commander’s intent and pre-briefed parameters before weapon release.
Execution. From standoff range, the F-35As release JASSM-ER and the weapons navigate a contested environment (Henley, 2026). The aircraft withdraw under emissions control with no further C2 contact, demonstrating an autonomy model where machines handle adaptation and optimisation, while humans exercise intent, risk judgement, and go/no-go authority.
4.2. Manned–unmanned teaming (MUM-T)
Scenario: During a major air campaign, an F-35A element conducts a Suppression of Enemy Air Defence (SEAD) mission alongside uncrewed MQ-28A Ghost Bats. The operating environment is dynamic, with pop-up surface-to-air threats and heightened radar activity typical of modern integrated air defence systems.
Functional Autonomy. Ghost Bat employs on-board autonomy to maintain formation and conduct sensor-led search and classification (Boeing, n.d.). Through its autonomous functions, Ghost Bat produces a machine-generated recommendation to adjust formation geometry and sensor tasking to manage an emergent threat.
Operator Judgement. The F-35A mission commander evaluates the recommendation against mission priorities and imposes human-defined constraints, retaining authority over lethal effects and broader sequencing in line with concepts of operator supervision within a MUM-T system (Howitt, 2009). The mission commander authorises the Ghost Bats to execute micro-level manoeuvre decisions while maintaining control of the overall execution of the SEAD plan.
Trust Mechanisms. Building trust in a MUM-T process requires repeated training events, transparent confidence indicators, and consistent autonomous behaviour to develop human and machine teaming principles (BAE Systems, n.d.; Mayer, 2021).
Execution. Ghost Bat executes the authorised manoeuvres to manage radar attention, enabling the F-35A formation to prosecute SEAD tasks while the human commander retains responsibility for effects and risk. This case study illustrates shared autonomy in which humans set direction and boundaries and retain overall discretion and judgement for the mission, while autonomous systems deliver speed, optimisation, and threat-reactive agility.
4.3. ISR-to-Strike
Scenario: As part of distributed maritime surveillance, an MQ-4C Triton and an MC-55A Peregrine operate as a paired ISR constellation monitoring a contested archipelago, according to their platform roles and the requirements of maritime patrol and reconnaissance missions (Laird, 2023). An unexpected hostile surface action group enters the area of operations. Intelligence gaps emerge as the vessels disperse and vary emissions.
Functional Autonomy. The MQ-4C autonomously re-prioritises sensor tasks based on what is detected, while the MC-55A fuses multi-source ISR to tag vessels for further attention—per open descriptions of the MQ-4C’s autonomous sensor management and MC-55A’s strategic ISR role.
Operator Judgement. The ISR mission commander evaluates the reprioritisation and retains authority to escalate to a strike package, balancing proportionality and reporting confidence. They further consider whether additional information is required before recommending action.
Trust Mechanisms. Consistent machine performance in autonomous functions, correlated with track-quality indicators and clear explanations for any autonomous prioritisation, builds trust and confidence. Prior exercises and rehearsals using similar logic, in which human operators work through scenarios where autonomous processes augment their capabilities, further reinforce confidence in autonomous processes.
Execution. The ISR constellation executes the autonomous reprioritisation while the mission commander informs higher headquarters. Where a strike package is then tasked, it receives a target derived from machine-generated information and human validation, demonstrating a division of labour in which autonomous processes conduct continuous sensing, fusion, and prioritisation, and humans apply operational judgement and the authority to effect, ensuring the appropriate decision-makers are informed (Laird, 2023).
5. Progressing towards operator-centric autonomy
Throughout the case studies, AI performs discrete tasks and human operators remain in charge of command decisions. The next step for AI, therefore, is to ask: can AI be trusted to make a command decision? Experiments demonstrate significant potential for complex autonomy in air operations. For example, the Have Raider demonstration showed autonomous planning and execution of strike missions, formation flight with a manned aircraft, and dynamic reaction to changing threats (Lockheed Martin, n.d.). Proponents argue effective manned–unmanned teaming reduces cognitive load and allows warfighters to focus on creative and complex planning (Lockheed Martin, n.d.). But where a command decision results in the delivery of lethal effects likely to include human casualties, the primacy of a human decision-maker remains. The future of autonomy in Australian air power ultimately resides with the aviators who will make critical decisions daily. Operator-centric autonomy will be achieved by equipping operators with the tools and training to deliver effects more precisely and efficiently than the adversary—and through clear commander’s intent that can be enacted even when cut off from higher command.
Several recommendations could be considered to progress towards operator-centric autonomy, and towards the increased lethality that information-processing efficiencies enabled by AI would deliver, without replacing operators or their decision-making responsibilities.
Exercising degraded C2 environments. Air Force must prioritise exercising in environments where communications are degraded and information is disrupted. Integrating AI processes under these conditions is critical to understanding both the advantages of accelerated information processing and the limitations of relying on technological input for decision-making, highlighting the key role of the operator in making decisions.
Practicing delegated autonomy and decision-making. As with degraded C2 environments, specifically exercising the delegation of decision-making to different tasked entities will iteratively improve procedures and processes in degraded environments. In particular, practicing delegated autonomy and decision-making in subjective grey areas is critical to ensure that human operators make ethical, legal decisions enabled by AI, and that the enabling effects of AI can be seen to be contributing to, not contradicting, ethical and legal decision-making.
Iterative tactics, techniques and procedures (TTPs). Air Force should adopt software-style iterative development cycles for TTPs, replacing rigid, fixed development approaches with continual refinement based on operational learning. An iterative TTP development pipeline can be red-teamed by AI before being understood and executed by operators. Further, the greater number and tempo of serials exercised can expose procedural improvements at an increased rate.
“Micro-doses of creative chaos”— exercising the unplanned and unexpected. Instead of replacing the operator with AI, and despite the efficiencies AI may bring to iterative planning, pitting operators against AI planners with a focus on subjective decision-making presents an opportunity to exercise a creative chaos that is not easily replicated by AI. Shifting the emphasis away from whether AI makes a better decision, and towards where a human operator decides differently, offers a way to work out how a human operator might integrate into an AI-enabled environment without being solely reliant on AI to execute.
On-boarding new capabilities while upscaling existing ones. The rapid pace of technological change puts pressure on legacy systems acquired to do the job today that might not be able to do the job tomorrow. The temptation to acquire a new capability as soon as it is available is compelling, but unsustainable to resource. A shift in perspective is required towards capability management that emphasises upscaling existing capabilities where possible, in addition to on-boarding new capabilities to address emerging mission sets. Fundamentally, this approach achieves two objectives: 1) air power is ‘fight tonight’ capable; and 2) air power has redundancy in processes, procedures and knowledge that AI might enhance but that, in its absence, can still achieve a desired effect.
Operator-level integration with AI. Finally, there is an imperative to exercise with AI enabling and assisting decision-making, both to rapidly increase operator familiarity with AI processes and to consolidate operator-centric integration with AI. Failing to exercise this integration now means that habits, processes and familiarity with AI will instead be formed through the commercial use cases individuals already rely on, and implementing AI into military decision-making will then require a reprogramming of AI usage to ensure operators remain capable of their own autonomous processes.
6. Conclusion
Truman displayed the intrinsically human element of decision-making in conflict as he rationalised the use of a nuclear weapon in World War II, and the continuing debates around autonomous decision-making and nuclear weapons show that human decision-making in conflict still matters. However, autonomous command support can provide commanders with information and analysis that improves decision-making. A human still, ultimately, pilots autonomous lethal machines, whether through programming or direct input. The decisive factor in future conflicts will still involve human decision-making and judgement because, ultimately, conflict remains a human endeavour suffused with subjective and emotive elements. AI should enable autonomous actions; it should not implement autonomous decisions. Somewhere, a human must still be in the loop to judge the action to be taken. Whether a human remains in or on the loop, autonomy should be integrated as a tool to enable decision-making, not to make decisions.
