AI and autonomous drones now sit at the centre of military modernisation, autonomous logistics, industrial inspection, and contested-environment operations. India's autonomy roadmap accelerated after two anchor events. The Defence Acquisition Council cleared the Ghatak UCAV programme on 27 March 2026 (Ministry of Defence, 27 March 2026). The Indian Army issued its sovereign swarm Request for Proposal in May 2026 (Indian Army RFP, May 2026). The sections below explain how autonomous drones work end to end: how unmanned systems classify objects, fuse sensor inputs, execute autonomous flight control, and operate inside GPS-denied environments.

Why this matters now

Autonomous capability has shifted from a supporting software layer to the architecture that defines modern unmanned systems. Indian defence procurement now prioritises sovereign autonomy frameworks, offline mission execution, and distributed swarm coordination ahead of manually piloted drone fleets. The Indian Army's autonomous swarm demonstration on Army Day 2021 set the operational direction (Indian Army, 15 January 2021). Commercial adoption tracks the same shift, with BVLOS inspection corridors, autonomous survey missions, and drone logistics programmes all requiring onboard autonomy. Edge AI processors operating within a 5 to 15 watt envelope now execute computer vision workloads directly onboard (Ministry of Civil Aviation, 2025). Commercial autonomy now sits alongside India's military drone programme as a procurement priority. Autonomous drone technology is no longer a feature added to a platform; it is the architecture the platform is built around.

The difference between AI drones and autonomous drones

The difference between AI drones and autonomous drones is structural, not cosmetic. AI in drones refers to algorithms trained for a specific task such as object classification, route optimisation, obstacle identification, or terrain mapping. Autonomy refers to the system architecture that allows the unmanned platform to execute actions without continuous human control. AI is the component; autonomy is the system property.

This distinction matters because operational unmanned systems are partially autonomous rather than fully autonomous. A drone may use machine learning for object recognition while still requiring human approval for navigation or payload release. In those systems, AI operates as a subsystem inside a larger human-supervised control structure. Across the types of unmanned aerial systems in service today, the dominant pattern is supervised automation with AI inside specific modules.

The policy language reflects this. DGCA's Drone Rules 2021 treat commercial UAS operations as supervised automation rather than unsupervised autonomous activity. The operator remains legally responsible for mission execution (DGCA, 25 August 2021). The regulatory frame assumes a human in or on the loop, even where the platform can technically fly a complete mission without intervention.

The technical frame is therefore a three-layer stack. Perception (what the drone senses). Decision (what the drone chooses). Action (how the drone executes). AI lives inside each layer, but autonomy is the property that emerges when all three layers run without human input. The next six sections walk the stack end to end.
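
A minimal Python sketch makes the layering concrete. Every name here (sense_environment, choose_action, execute_command) is a hypothetical placeholder rather than a reference to any real autopilot API, and the loop rate is illustrative.

```python
import time

# Minimal perception -> decision -> action skeleton (illustrative sketch only).

def sense_environment():
    """Perception layer: fuse raw sensor inputs into a world-state estimate."""
    return {"position_m": (0.0, 0.0, 30.0), "obstacles": [], "battery_pct": 82}

def choose_action(world_state):
    """Decision layer: map the estimated world state to a mission action."""
    if world_state["battery_pct"] < 20:
        return "return_to_home"
    if world_state["obstacles"]:
        return "avoid_obstacle"
    return "continue_mission"

def execute_command(action):
    """Action layer: hand the chosen action to the flight controller."""
    print(f"executing: {action}")

# Autonomy is the property that emerges when this loop runs onboard without
# a human in the cycle. Ten iterations at roughly 10 Hz for illustration.
for _ in range(10):
    state = sense_environment()
    execute_command(choose_action(state))
    time.sleep(0.1)
```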

The five levels of drone autonomy

The drone industry grades autonomy on a ladder that mirrors the SAE taxonomy used for self-driving cars: a Level 0 manual baseline plus five levels of increasing autonomy. The drone autonomy levels therefore run from Level 0 manual control through Level 5 full autonomy.

Level 0 is fully manual. The pilot controls every input and the platform has no automated behaviours. Level 1 introduces pilot-assist features such as altitude hold, GPS hold, and auto-hover, but the pilot remains in command of navigation. Level 2 is partial autonomy. The platform executes waypoint missions, return-to-home logic, and basic obstacle warnings, but the pilot must take over when conditions change.

Level 3 is conditional autonomy. The platform performs detect-and-avoid, dynamic re-planning, and BVLOS flight inside defined geofences, with the pilot monitoring the mission and intervening only on exception. Level 4 is high autonomy. The platform completes the full mission without human input inside a defined operational envelope, with human-on-the-loop oversight available. Level 5 is full autonomy across all operational conditions, with the platform holding decision authority across every contingency.

Indian and global commercial fleets cluster between Level 3 and Level 4 today. Level 5 remains a research target, not a deployed state in either civil or military fleets. The DRDO Autonomous Flying Wing Technology Demonstrator validated Level 4 equivalent capability on 1 July 2022. The platform executed autonomous take-off, waypoint navigation, and autonomous landing without external ground radar assistance (DRDO, 1 July 2022). That trial validated the control architecture feeding into India's Ghatak unmanned combat aerial vehicle programme.

The autonomy level a platform reaches determines how much of the perception, decision, and action stack runs onboard. Level 2 systems run perception and basic action onboard but defer decision to the pilot. Level 3 and Level 4 systems run all three layers onboard inside the operational envelope. Level 5 systems run all three layers onboard across every condition. The ladder is therefore the cleanest single test of how much intelligence the unmanned system carries.
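
One way to make that mapping concrete is a small lookup of which layers run onboard at each level. The entries for Levels 2 to 5 follow the descriptions above; the Level 0 and Level 1 entries are assumptions added for completeness, and the data structure itself is purely illustrative.

```python
# Which layers run onboard at each autonomy level (illustrative only).
ONBOARD_LAYERS = {
    0: set(),                                  # fully manual
    1: {"action"},                             # pilot-assist hold modes (assumption)
    2: {"perception", "action"},               # decision deferred to the pilot
    3: {"perception", "decision", "action"},   # inside defined geofences
    4: {"perception", "decision", "action"},   # inside the operational envelope
    5: {"perception", "decision", "action"},   # across every condition
}

def runs_full_stack_onboard(level: int) -> bool:
    """True when perception, decision, and action all execute on the platform."""
    return ONBOARD_LAYERS[level] == {"perception", "decision", "action"}

print([level for level in ONBOARD_LAYERS if runs_full_stack_onboard(level)])  # [3, 4, 5]
```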

The perception stack - how drones see the world

Autonomous drones operate through a layered perception stack that converts raw sensor data into a navigational model of the environment. Cameras, radar, LiDAR, inertial measurement units, GNSS receivers, ultrasonic sensors, magnetometers, and infrared payloads all contribute different forms of situational awareness.

Each sensor has an operational failure mode. Cameras degrade in smoke, fog, or low light. GNSS receivers become unreliable during jamming or spoofing. LiDAR performance drops in heavy rain. Radar penetrates weather better but returns lower object detail. Sensor fusion is the discipline of combining inputs so the perception stack holds a usable picture when any one sensor fails. Sensor fusion is therefore the layer that separates a fragile autonomy stack from a resilient one.

Kalman filtering remains the dominant method for state estimation. It allows the flight computer to merge noisy sensor inputs into a stable estimate of position, orientation, and velocity. Above the state-estimation layer, more advanced systems use neural-network-based fusion to classify terrain, moving objects, or target signatures from multiple simultaneous sensor streams.
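
A one-dimensional Kalman filter shows the predict-and-correct pattern: propagate position with an IMU-derived velocity, then blend in a noisy GNSS fix weighted by the Kalman gain. The noise variances, speeds, and timestep below are illustrative assumptions, not parameters from any fielded flight stack.

```python
import numpy as np

dt = 0.1     # timestep, seconds
q = 0.05     # process noise variance (trust in the motion model)
r = 4.0      # measurement noise variance (GNSS position, metres squared)

def kalman_step(x, p, velocity, gnss_pos):
    # Predict: propagate the position estimate with the measured velocity.
    x_pred = x + velocity * dt
    p_pred = p + q
    # Update: blend in the GNSS fix, weighted by the Kalman gain.
    k = p_pred / (p_pred + r)
    return x_pred + k * (gnss_pos - x_pred), (1 - k) * p_pred

rng = np.random.default_rng(0)
x, p, true_pos = 0.0, 10.0, 0.0
for _ in range(50):
    true_pos += 2.0 * dt                        # drone moves at 2 m/s
    velocity = 2.0 + rng.normal(0.0, 0.1)       # noisy IMU-derived velocity
    gnss = true_pos + rng.normal(0.0, 2.0)      # noisy GNSS position fix
    x, p = kalman_step(x, p, velocity, gnss)

print(f"true position {true_pos:.2f} m, fused estimate {x:.2f} m")
```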

SLAM, or simultaneous localisation and mapping, forms the other core perception capability. SLAM allows an unmanned system to construct a map of an unfamiliar environment while estimating its own location inside that map. A SLAM autonomous drone can therefore navigate indoor inspection missions, urban canyons, and electronic warfare environments where GNSS is unavailable. SLAM is the perception module that enables the GPS-denied behaviour examined later in this article.

The DRDO Autonomous Flying Wing Technology Demonstrator validated autonomous landing on surveyed coordinates without ground radar infrastructure (DRDO, 1 July 2022). That validation depended on a perception stack that could hold position from onboard sensors alone. Perception accuracy also determines swarm reliability. Distributed autonomous systems rely on consistent state-sharing between unmanned nodes, and weak perception data degrades collective coordination and increases collision probability during dense formation operations.

The decision layer - how drones make decisions

How do drones make decisions? The decision layer converts sensor inputs into mission actions through three distinct architectures running in parallel, not one model running alone.

Rule-based systems form the safety foundation. These systems execute deterministic instructions such as geofence enforcement, low-battery return logic, altitude restrictions, or mission abort procedures. Rule-based architectures are the easiest to certify because every response path is predefined. Regulators and safety auditors can verify each rule against expected behaviour.

Behaviour-based systems sit above the safety layer. Instead of one rigid mission script, the unmanned system selects from a library of behaviours such as obstacle avoidance, formation keeping, terrain following, or target tracking. A supervisory arbiter prioritises the behaviour suited to the mission state. Behaviour arbitration is how production autonomy stacks handle the open-ended middle ground between safety rules and learned models.

Learning-based systems represent the research frontier. Reinforcement learning trains the drone through repeated simulations in which the algorithm receives rewards for successful mission completion. Imitation learning trains the model on expert flight behaviour captured from human operators. These architectures improve adaptability but remain harder to certify for safety-critical operations. Production systems therefore combine all three. Safety logic stays rule-based. Mission execution uses behaviour arbitration. Perception and prediction modules use machine learning. AI-driven mission planning follows this same hybrid pattern across Indian defence programmes.
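
A compact sketch of that hybrid pattern: deterministic safety rules are checked first, and a supervisory arbiter then picks the highest-priority applicable behaviour. The behaviour names, thresholds, and state fields are illustrative assumptions.

```python
def safety_rules(state):
    """Rule-based layer: predefined, certifiable responses."""
    if state["battery_pct"] < 15:
        return "return_to_home"
    if state["outside_geofence"]:
        return "mission_abort"
    return None

# Behaviour library, ordered by priority (highest first).
BEHAVIOURS = [
    ("avoid_obstacle",   lambda s: s["obstacle_distance_m"] < 10.0),
    ("track_target",     lambda s: s["target_visible"]),
    ("follow_waypoints", lambda s: True),          # default behaviour
]

def decide(state):
    override = safety_rules(state)
    if override is not None:
        return override
    for behaviour, applies in BEHAVIOURS:
        if applies(state):
            return behaviour

state = {"battery_pct": 62, "outside_geofence": False,
         "obstacle_distance_m": 7.5, "target_visible": True}
print(decide(state))   # avoid_obstacle: safety rules pass, arbiter picks the top behaviour
```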

The OODA loop, originally formalised by US Air Force Colonel John Boyd, remains the doctrinal frame behind autonomous decision speed. The observe, orient, decide, and act cycle explains why compressed decision timelines create operational advantage. Autonomous systems shorten this cycle by processing sensor data onboard instead of routing every action through a remote operator. The cluster blog on the OODA loop in drone combat treats this doctrine at length. Inside contested environments where communication delays or electronic interference degrade human response speed, the OODA advantage is what separates a survivable unmanned system from a vulnerable one.

The action layer - flight control and execution

The action layer is where the decision becomes motion. Three control loops, running at three different timescales, translate a chosen action into stable flight.

The inner loop runs the attitude controller at 400 to 1,000 hertz. Flight controllers such as Pixhawk-class open hardware, indigenous DRDO controllers, and proprietary military flight boards execute this loop. The inner loop converts attitude commands (pitch, roll, yaw rates) into individual motor inputs that hold the platform stable against wind, payload shifts, and aerodynamic disturbance. Without a stable inner loop, no higher autonomy layer can function.

The outer loop runs trajectory generation, waypoint sequencing, and mission abort logic at 10 to 50 hertz. It accepts position targets from the mission planner and produces the attitude references the inner loop will execute. The outer loop is where detect-and-avoid manoeuvres, return-to-home logic, and dynamic re-planning live. Above the outer loop, the strategic layer evaluates next-mission decisions at roughly 1 hertz, or only on event triggers. Three loops, three timescales, one motion output.
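
A toy single-axis simulation illustrates the cascade: an outer position loop at 50 hertz produces attitude references, and an inner attitude loop at 500 hertz tracks them. The gains and the simplified dynamics are illustrative assumptions, not tuned values for any airframe.

```python
INNER_HZ, OUTER_HZ = 500, 50
KP_POS, KD_POS, KP_ATT = 0.8, 0.6, 6.0     # illustrative proportional/damping gains

target_position = 10.0                      # metres along one axis
position = velocity = attitude = attitude_ref = 0.0
dt = 1.0 / INNER_HZ

for tick in range(INNER_HZ * 5):            # simulate five seconds
    # Outer loop: runs every tenth tick, converts position error (with
    # velocity damping) into an attitude reference for the inner loop.
    if tick % (INNER_HZ // OUTER_HZ) == 0:
        attitude_ref = KP_POS * (target_position - position) - KD_POS * velocity

    # Inner loop: runs every tick, drives attitude toward the reference;
    # tilt then produces lateral acceleration in this crude model.
    attitude += KP_ATT * (attitude_ref - attitude) * dt
    velocity += 3.0 * attitude * dt
    position += velocity * dt

print(f"position after 5 s: {position:.2f} m (target {target_position:.1f} m)")
```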

Human oversight maps cleanly onto this stack. A human in the loop must approve engagement actions before the platform proceeds. Human-on-the-loop systems allow supervisory intervention during autonomous execution. Human-off-the-loop systems continue execution independently after activation. The Indian defence position emphasises supervised operational structures rather than unrestricted autonomous targeting (Ministry of Defence, 2025). Treatment of human-on-the-loop oversight and the regulatory implications sits in the cluster.
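
The oversight modes can be expressed as a simple gate on engagement actions. The enum and the approval mechanism below are an illustrative sketch, not a representation of any real mission system.

```python
from enum import Enum

class Oversight(Enum):
    IN_THE_LOOP = "in"     # a human must approve before the action proceeds
    ON_THE_LOOP = "on"     # the action proceeds; a human can intervene
    OFF_THE_LOOP = "off"   # the action proceeds without supervision

def authorise_engagement(mode: Oversight, operator_approved: bool) -> bool:
    """Only the in-the-loop mode blocks on explicit operator approval."""
    if mode is Oversight.IN_THE_LOOP:
        return operator_approved
    return True

print(authorise_engagement(Oversight.IN_THE_LOOP, operator_approved=False))   # False
print(authorise_engagement(Oversight.ON_THE_LOOP, operator_approved=False))   # True
```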

The action layer is also where certification authorities focus their attention. DGCA, AAI, and military test ranges assess the inner loop's stability margins and the outer loop's failure modes. They also test the strategic layer's abort logic before approving an autonomy stack for operational use (DGCA, 2025). The control loop architecture is therefore not just an engineering choice; it is the surface against which certification happens.

Edge AI and the move to onboard intelligence

Until the late 2010s, complex drone perception ran on the ground. The drone streamed sensor data over a radio link to a cloud or a base station, where heavy compute handled object detection and decision logic. Two pressures broke that architecture. First, electronic warfare denies the radio link. Second, latency over a contested link is too slow for sub-second decisions.

The response is edge AI. Onboard AI drone capability now runs the neural networks for object detection, target tracking, terrain classification, and route planning directly on the platform. Compact AI accelerators (NPUs in the 5 to 15 watt envelope) execute the models that previously needed a server rack. Edge AI is the architecture that makes contested-environment autonomy possible.

It is also the architecture that India's sovereign swarm RFP explicitly demands. The May 2026 RFP requires fully offline capability and bans third-party black-box autonomy engines. Source code and intellectual property are to be jointly shared with the selected partner (Indian Army RFP, May 2026). Systems dependent on permanent cloud connectivity or external processing introduce unacceptable operational risk during jamming conditions, and the procurement language now reflects that lesson.

Edge AI also reshapes commercial autonomy. BVLOS inspection of power lines, pipelines, and wind turbines now runs on platforms that process sensor data onboard and surface only summary alerts to the ground operator. Precision agriculture spraying decisions, infrastructure inspection anomaly detection, and logistics routing all benefit from onboard decision-making because latency and connectivity limits reduce efficiency during complex missions. This is the architecture inflection that separates 2026 autonomy from the radio-link-tethered platforms of the previous decade.
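
A minimal sketch of that onboard-first pattern, assuming a hypothetical detector function standing in for an accelerator-hosted model; the labels, confidence threshold, and alert schema are illustrative assumptions, and only the filtered summary ever crosses the radio link.

```python
def detect_onboard(frame):
    """Pretend onboard inference: returns (label, confidence, pixel location)."""
    return [("corrosion", 0.91, (412, 128)), ("vegetation", 0.34, (88, 300))]

def summarise(detections, min_confidence=0.8):
    """Keep only high-confidence findings; the full frame never leaves the drone."""
    return [{"label": label, "confidence": conf, "pixel": loc}
            for label, conf, loc in detections if conf >= min_confidence]

frame = object()                               # placeholder for a captured camera frame
for alert in summarise(detect_onboard(frame)): # only these few bytes go over the downlink
    print(alert)
```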

Swarm autonomy - when many drones think together

A drone swarm is not a fleet. A fleet has a centralised controller and the platforms are subordinate to that controller. A swarm has distributed intelligence. Each drone exchanges state with its neighbours, contributes to a collective decision, and continues to function if a peer is lost. AI drone swarm coordination is therefore an architecture question as much as an algorithm question.

Swarm autonomy splits into two architectural classes. Centralised swarms run a leader-follower model or a ground-station-coordinated formation. They are easier to certify because mission logic sits inside one command architecture, but they are brittle under jamming. Decentralised swarms run the same algorithm on every drone, with no designated leader and redundancy built in. They are harder to engineer but resilient under arbitrary node loss. The cluster on centralised and decentralised swarm coordination treats this trade-off in depth.
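
A decentralised formation rule can be sketched with a boids-style update that every drone runs independently, reacting only to neighbours within radio range and with no designated leader. The gains, ranges, and node count are illustrative assumptions, not parameters from any fielded swarm.

```python
import numpy as np

COMM_RANGE, SEPARATION = 50.0, 10.0            # metres
K_COHESION, K_SEPARATION = 0.05, 0.5

def step(positions):
    """One update of the shared rule, applied independently by each drone."""
    updated = positions.copy()
    for i, p in enumerate(positions):
        neighbours = [q for j, q in enumerate(positions)
                      if j != i and np.linalg.norm(q - p) < COMM_RANGE]
        if not neighbours:
            continue
        move = K_COHESION * (np.mean(neighbours, axis=0) - p)   # drift toward neighbours
        for q in neighbours:                                     # but hold spacing
            d = np.linalg.norm(q - p)
            if d < SEPARATION:
                move += K_SEPARATION * (p - q) / max(d, 1e-6)
        updated[i] = p + move
    return updated

positions = np.random.default_rng(1).uniform(0.0, 100.0, size=(8, 2))
positions = np.delete(positions, 3, axis=0)    # lose one node mid-mission
for _ in range(200):                           # the remaining seven keep formation
    positions = step(positions)
print(np.round(positions, 1))
```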

The Indian Army's Army Day demonstration on 15 January 2021 showcased a swarm with no designated leader. Each drone contributed to collective decisions, and the formation reorganised itself when individual nodes were lost (Indian Army, 15 January 2021). That demonstration set the operational direction for India's swarm doctrine. The May 2026 sovereign swarm RFP is the procurement vehicle that will scale the demonstrated capability into a unified autonomy framework. The RFP is structured in two phases over six months. It ends in a live field demonstration of multi-drone surveillance and coordinated payload delivery under a unified command structure (Indian Army RFP, May 2026).

The April 2026 Indian Air Force Mehar Baba Competition MBC-3 funded a parallel concept. The proposal is a swarm of small autonomous nodes acting as a distributed airborne radar network. Each node feeds data to a centralised monitoring station, with no single point of failure (Indian Air Force, April 2026). Swarm autonomy and distributed sensing are converging into the same architectural pattern.

How autonomous drones operate when GPS fails

GPS-denied autonomous navigation is the capability that separates a battlefield drone from a consumer one. The toolkit is layered and combines machine learning drone navigation techniques with classical state-estimation methods.

Visual-inertial odometry uses a camera and an inertial measurement unit to estimate position by tracking visual features over time. The technique works because feature motion across successive frames, combined with the IMU's measured acceleration, yields a velocity estimate that integrates into position. SLAM extends this further by building a map of the environment while estimating the drone's position inside the map. Terrain matching compares onboard sensor observations against a pre-loaded terrain model and locks position when the match score crosses a threshold.

Scene matching does the same with overhead imagery rather than terrain elevation. Optical homing locks onto a target's visual signature in the final seconds of a strike; it is the dominant terminal-phase guidance method for India's loitering munition fleet and other kamikaze drones. Each method has a failure mode, so production systems run two or three in parallel and fuse the outputs.
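
A minimal fusion sketch along those lines: each navigation source reports a position estimate, a variance, and a health flag, and the fused fix is the inverse-variance weighted mean of the healthy sources. All numbers are illustrative assumptions.

```python
import numpy as np

def fuse(estimates):
    """estimates: list of (position array, variance, healthy) tuples."""
    usable = [(pos, var) for pos, var, healthy in estimates if healthy]
    if not usable:
        raise RuntimeError("no healthy position source: trigger fallback behaviour")
    weights = np.array([1.0 / var for _, var in usable])
    positions = np.array([pos for pos, _ in usable])
    return (weights[:, None] * positions).sum(axis=0) / weights.sum()

vio_fix     = (np.array([120.4, 63.1]), 2.0, True)    # drifts slowly over time
terrain_fix = (np.array([118.9, 64.0]), 5.0, True)    # coarse but drift-free
gnss_fix    = (np.array([0.0, 0.0]),    1.0, False)   # jammed, marked unhealthy

print(np.round(fuse([vio_fix, terrain_fix, gnss_fix]), 2))
```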

The DRDO Autonomous Flying Wing Technology Demonstrator validated autonomous landing on surveyed coordinates without ground radar infrastructure. The trial established a Level 4 equivalent capability in GPS-denied conditions (DRDO, 1 July 2022). Operation Sindoor in May 2025 stressed these capabilities at operational scale, with indigenous loitering munitions executing terminal guidance under contested electromagnetic conditions (Ministry of Defence, May 2025). The operational lesson is consistent: a platform that cannot fly without GPS cannot survive a contested environment, and procurement language now reflects that.

Where India's indigenous autonomy stack stands today

India's autonomous drone programmes now span the full autonomy spectrum, from loitering munitions to autonomous flying-wing UCAVs.

The Ghatak Unmanned Combat Aerial Vehicle programme sits at the top end. The Defence Acquisition Council cleared procurement of sixty Ghatak units on 27 March 2026 (Ministry of Defence, 27 March 2026; Business Standard, 4 March 2026). The clearance fell under the Ministry of Defence capital acquisition framework. The Ghatak is a stealth flying-wing UCAV under DRDO development, designed for autonomous deep-strike missions. The Autonomous Flying Wing Technology Demonstrator validated the autonomous-landing and stealth controls feeding into the Ghatak (DRDO, 1 July 2022).

India's loitering munition fleet incorporates autonomous target-tracking capability during terminal phase. The Nagastra-1 was inducted in June 2024, and Sheshnaag-150 was demonstrated at the World Defence Show 2026 in Riyadh (Ministry of Defence, 2024; PIB, February 2026). Both systems carry onboard guidance and autonomous navigation modules designed for precision strike under communication degradation.

The April 2026 Indian Air Force Mehar Baba Competition MBC-3 expanded the autonomy roadmap further. The competition funded indigenous swarm-as-radar concepts that distribute sensing across many small autonomous nodes. The model replaces a single high-value airborne warning platform with a redundant network (Indian Air Force, April 2026). The Indian Army's May 2026 sovereign swarm RFP is the procurement instrument that may unify these strands into a single indigenous autonomy framework. Source code and intellectual property are held jointly with the selected partner, and third-party black-box autonomy engines are banned (Indian Army RFP, May 2026).

India's challenge is no longer basic unmanned flight. The challenge is sovereign autonomous coordination across air, ground, and maritime unmanned systems operating inside degraded communication environments. The procurement signal across the Ghatak clearance, the loitering munition inductions, and the sovereign swarm RFP points in one direction: indigenous edge autonomy as the architectural default.

[ALT TEXT: India's DRDO Autonomous Flying Wing Technology Demonstrator validated autonomous landing without ground radar infrastructure during its July 2022 flight trial.]

Where the autonomy frontier is heading next

Three frontiers will define the next three years of unmanned autonomy.

The first is embodied AI. Large vision-language models running on the drone allow the platform to reason about unexpected events using common-sense priors rather than hand-coded recovery rules. Academic testbeds have demonstrated drones interpreting urban environments and selecting safe-landing manoeuvres from scratch when a primary plan fails (UC Santa Cruz and KTH research, October 2025). Embodied AI moves autonomy from rule-bound behaviour to context-aware reasoning. It is the layer where general AI capability begins to meet operational drone design.

The second is decentralised swarm autonomy. The shift from centralised command-and-control to fully distributed decision-making is the explicit target of Phase 2 of the Indian Army's May 2026 RFP. Phase 2 tests adaptive formations, task reassignment, collision avoidance, and sustained operations under communication disruption and drone losses in realistic battlefield conditions (Indian Army RFP Phase 2, 2026). A decentralised swarm continues to function when arbitrary nodes are lost, and it removes the single point of failure that a centralised architecture creates.

The third frontier is human-on-the-loop oversight as the regulatory boundary. The United Nations Convention on Certain Conventional Weapons Group of Governmental Experts on Lethal Autonomous Weapon Systems continues to negotiate this question. The negotiation centres on the line between supervised autonomy and fully autonomous lethal targeting (UN CCW GGE on LAWS, ongoing). AI inside counter-drone systems faces the same regulatory question from the defensive side. Procurement language inside India's indigenous autonomy roadmap already reflects a position closer to human-on-the-loop than to off-the-loop targeting.

Autonomy is no longer a feature added to a drone; it is the architecture the drone is built around. India's indigenous stack is now mature enough to set the autonomy procurement standard rather than chase one. The next two years will be defined by how quickly the sovereign swarm framework moves from RFP to fielded capability across the three services.