“AI can inform decisions. Only humans can make the judgment and take responsibility,” said Lt. Gen. Vipul Singhal during a defence-focused session at the India AI Impact Summit.
Singhal’s remark framed a broader debate at the summit on how India’s armed forces are deploying artificial intelligence across operations, logistics, intelligence, and decision-support systems. However, speakers repeatedly stressed that faster analysis and compressed decision timelines cannot dilute command responsibility, particularly in situations involving the use of force.
Taken together, military leaders, defence scientists, industry executives, and academics converged on a central message: India must deploy AI as a force multiplier without surrendering moral agency, operational control, or strategic autonomy.
How is AI already reshaping military operations?
Senior Army officers described AI as operational rather than experimental.
“AI is totally transforming the way we analyse, decide and act, and transforming warfare,” said Brig. Deepak Kumar.
Lt. Gen. Rajiv Kumar Sahni, Director General of Electronics and Mechanical Engineering, said military effectiveness increasingly depends not on platforms alone but on engineering support, sustainment, and decision velocity.
“It is the engineering support which provides the flexibility, endurance, and stamina to commanders in the field,” Sahni said.
Sahni outlined three priority areas where the Army is actively seeking collaboration with industry and academia:
- Improving sensor-to-shooter linkages, or the ability to rapidly connect surveillance data to weapons systems for faster targeting
- Using predictive insights to regenerate combat forces faster under resource constraints
- Designing, producing, and sustaining drones with minimal external dependence
“Help us place sensors at the right place, manipulate the data elements we already have, and give us predictive insights,” Sahni said.
He added that drones are no longer peripheral systems but a central focus of Army engineering, with emphasis on indigenous navigation, control analytics, production quality, and adversarial simulation to test performance in contested environments.
Why does the Army want to smartise legacy systems?
Army leaders rejected the idea that modernisation requires replacing large parts of India’s existing arsenal.
“Legacy is not equal to obsolete,” said Maj. Gen. Mohit Gandhi.
Gandhi said cost, logistical familiarity, and operational constraints make wholesale replacement unrealistic. Instead, the Army has prioritised embedding sensors, analytics, and AI into existing platforms.
Accordingly, he said smartisation aims to:
- Improve mission readiness and accuracy
- Enable predictive and prescriptive maintenance
- Reduce human error under combat stress
- Extend the lifecycle of legacy platforms
However, Gandhi flagged a structural challenge.
“There are limited labelled datasets available for military equipment,” he said, referring to the lack of high-quality historical data needed to train AI systems reliably in combat settings.
He added that AI systems must remain explainable, resilient to jamming and spoofing, and capable of operating on secure or offline networks, with humans firmly in the loop to comply with the laws of armed conflict.
Beyond battlefield decision systems, the Army is also applying AI to core sustainment functions.
How does predictive maintenance improve readiness?
Maj. Gen. P. S. Bindra framed predictive maintenance as a direct battlefield advantage, particularly for armoured fighting vehicles operating in extreme climates.
“These machines are speaking to us,” Bindra said. “Are we listening? Yes. But we need to now listen to them better.”
Bindra said the Army plans to move from scheduled maintenance to condition-based monitoring using sensors, data loggers, and AI models that predict residual useful life, or how long a component can safely operate before failure.
He said this work is moving beyond conceptual pilots: the Army has initiated indigenous R&D projects, plans to float bids on the Government e-Marketplace (GeM), the government's online procurement platform, and will follow a pilot-to-scale approach, with successful systems eventually deployed across platforms and commands.
Why must humans control lethal decisions?
Ethical concerns sharpened when speakers discussed AI-enabled decision-making in combat.
Lt. Gen. Vipul Singhal described a high-tempo operation in which a machine-generated analysis recommended an immediate strike.
“The commander paused,” Singhal said. “What does the machine not know?”
The data showed adversary troops. However, it failed to capture an ongoing civilian evacuation. As a result, the commander stopped the strike.
Speakers said AI increases the leadership burden rather than reducing it, as compressed decision cycles raise the risk of escalation if human judgment is sidelined.
They raised core questions:
- Which decisions must commanders retain?
- Which decisions can algorithms support?
- Can rules of engagement function alongside black-box systems, where the reasoning behind outputs cannot be fully explained?
“Are we subjecting AI systems to the same rigor as other weapon systems?” Singhal asked.
Can militaries shift accountability to machines?
Maj. Gen. Harsh Chhibber warned against treating AI outputs as morally neutral.
“The requirement is to make better decisions, not bad decisions faster,” Chhibber said.
He said AI systems fail when battlefield context changes because they lack abductive reasoning, the ability to infer the most plausible explanation from incomplete evidence.
Referring to the Israeli military’s Lavender database, Chhibber highlighted the ethical consequences of statistical error, noting that even high accuracy rates can translate into large numbers of wrongful deaths at scale.
“Command responsibility is absolute in the military,” he said. “You cannot do cognitive offloading to a machine.”
He said any decision involving lethality must remain under human agency, with accountability resting squarely with the command, not with algorithms, developers, or statistical thresholds.
Why do defence systems need “glass box” AI?
Academic speakers stressed that defence AI must prioritise explainability, observability, and override mechanisms.
“AI can be used for situation summarisation and pattern recognition,” said Prof. Ramakrishna, “but domain experts are better at precision.”
Moreover, he said commanders must remain in the driver’s seat, with visibility into what is being delegated to data pipelines, models, and hardware accelerators.
“We need a glass box model,” Ramakrishna said, referring to systems whose logic and decision paths can be inspected and overridden, unlike opaque black-box models.
Where are the governance gaps?
Despite growing deployment, several speakers said defence AI governance remains underdeveloped.
Pawan Anand said military AI differs fundamentally from civilian applications.
“This is probabilistic technology being used for deterministic outcomes,” Anand said, referring to systems that produce statistical predictions but are used in life-and-death decisions.
He identified gaps that remain largely unaddressed:
- Bias detection and mitigation
- Drift detection in deployed systems, where models degrade as real-world conditions change
- Bounded operational envelopes, or clearly defined limits on where and how systems can operate
- Lifecycle controls, including decommissioning
- Oversight of agentic AI systems, which can act with limited human intervention
“You have to ensure you can destroy it at the right time if you lose control,” Anand said, adding that responsibility must be embedded across the entire AI lifecycle, not just at deployment.
How do sovereignty and supply chains shape defence AI?
Speakers repeatedly warned against dependence on foreign AI systems.
“Off-the-shelf AI is strategic suicide,” one speaker said.
Singhal also flagged India’s reliance on imported Graphics Processing Units (GPUs), or specialised chips used to train and run AI models, as a long-term vulnerability.
“We are reliant on imports,” he said. “We need indigenous capability over the long term.”
Additionally, industry leaders stressed the need for sovereign platforms, alternative compute approaches, and on-premise edge systems.
“The algorithms that run on the cloud may not come to your rescue in a battlefield scenario,” one speaker said.
What do military leaders want next?
Speakers also outlined several proposals:
- A dedicated defence AI mission
- A sovereign defence AI platform
- A joint military AI command
- Specialised AI training for commanders and staff
Notably, Sahni acknowledged that experimentation would involve failure.
“Failures are part of our success story,” he said, signalling a shift from traditional zero-failure defence acquisition models toward iterative AI development.
Ultimately, speakers drew a firm boundary.
“Technology without warrior spirit is hollow,” one speaker said. “But warrior spirit without technology in the modern battlefield is a tragedy.”
As India accelerates defence AI adoption, military leaders said the challenge lies not in deploying AI faster, but in ensuring humans remain firmly responsible for its consequences.