
Japan Requires Human Control for Artificial Intelligence in Defense


Tokyo draws a red line against fully autonomous lethal weapons while aiming to contribute to the formation of global norms

The Acquisition, Technology & Logistics Agency (ATLA) of Japan’s Ministry of Defense announced its first comprehensive guidelines for defense-related artificial intelligence (AI) research and development projects in the summer of 2025. The guidelines take a “human-centric” approach and specifically state that no support will be given to the development of lethal autonomous weapon systems (LAWS) that can select and fire at targets without human intervention (ATLA, 2025).

Officials say the step aligns with both Japan’s national security strategy and its commitment to ethical responsibility in the international arena. ATLA’s statement read: “Integration of artificial intelligence in defense is inevitable; however, the final decision on the use of lethal force must always belong to humans” (Japan Ministry of Defense, 2025).

Three-Stage Risk Management

The guidelines present a framework consisting of risk classification, legal-political review, and technical evaluation stages for newly developed AI-based systems:

  • Risk classification: Projects will be marked as “low” or “high risk” based on AI’s contribution to the weapon’s destructive capacity.
  • Legal-political review: High-risk projects will be scrutinized in terms of international humanitarian law and “meaningful human control” criteria.
  • Technical evaluation: Systems will be tested against technical standards such as transparency, reliability, safety, and bias reduction (ATLA, 2025).

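The three-stage review described above could be sketched as a simple pipeline. The sketch below is purely illustrative: the class names, criteria flags, and pass/fail logic are assumptions for clarity, not details from ATLA’s actual guidelines.

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    ai_controls_lethal_effects: bool  # does AI contribute to the weapon's destructive capacity?
    human_in_the_loop: bool           # is there meaningful human control over firing decisions?

def classify_risk(p: Project) -> str:
    """Stage 1: mark the project 'high' or 'low' risk."""
    return "high" if p.ai_controls_lethal_effects else "low"

def legal_policy_review(p: Project) -> bool:
    """Stage 2: high-risk projects must preserve meaningful human control;
    fully autonomous lethal targeting (LAWS) is rejected outright."""
    return p.human_in_the_loop

def technical_evaluation(p: Project) -> bool:
    """Stage 3: placeholder for transparency, reliability, safety, and bias tests."""
    return True  # illustrative: assume the technical tests pass

def review(p: Project) -> str:
    risk = classify_risk(p)
    if risk == "high" and not legal_policy_review(p):
        return "rejected"
    if not technical_evaluation(p):
        return "rejected"
    return f"approved ({risk} risk)"

print(review(Project("autonomous sentry", True, False)))   # rejected
print(review(Project("logistics planner", False, True)))   # approved (low risk)
```

The key design point the guidelines imply is that the legal-political gate applies only to high-risk projects, while the technical evaluation applies to everything; the sketch encodes that ordering.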
This approach will serve as a guide not only for military R&D processes but also for collaborations with the private sector. According to Tokyo-based analyst Hiroyasu Harada, “Timing is critical. Japan wants both to rapidly integrate advanced technology into defense and to build trust in international norms” (Harada, 2025).

Parallel to Record Defense Budget

Japan is rapidly expanding its defense capacity at a time of increasing regional security concerns. The record budget of approximately $60 billion (8.8 trillion yen) requested by the government for fiscal year 2026 was unveiled in the same period as the guidelines (Nikkei Asia, 2025).

A significant portion of the appropriations has been allocated to unmanned air, land, and sea vehicles under the multi-layered coastal defense concept SHIELD (Defense News, 2025). The allocation underscores that Japan is trying to remain committed to ethical boundaries even as it rapidly adopts advanced technologies.

International and Civil Society Responses

The guidelines are also directly connected to the ongoing “autonomous weapons” debate at the global level.

  • Civil society: Human Rights Watch and the Stop Killer Robots campaign have long called for “a complete ban on lethal autonomous systems.” HRW’s 2025 report states, “Without meaningful human control, these systems are unacceptable both legally and ethically” (Human Rights Watch, 2025). Although Japan’s guidelines align with this demand, the government’s advocacy for guiding principles rather than a binding global ban is considered insufficient in some quarters.
  • Diplomacy: Japan supports the principle of “meaningful human oversight” at United Nations Conventional Weapons Convention (CCW) meetings. However, Tokyo’s approach is based on seeking international consensus rather than calling for a unilateral strict ban (UN CCW, 2024–2025).
  • Western think tanks: Organizations such as the US-based CSIS argue that Japan’s emphasis on “soft law,” i.e., non-binding guidelines, may increase the pace of innovation but weaken enforcement and oversight mechanisms (CSIS, 2025).

Ethics and Strategy Balance

The “AI Promotion Act” adopted in March 2025 similarly emphasizes the principles of human dignity, social trust, and international harmony while accelerating innovation. However, this law also relies on voluntary cooperation and guidance mechanisms rather than sanctions (Japanese Diet, 2025).

Defense analyst Harada summarizes this dual approach as follows: “Japan wants neither to fall behind in technology nor to blur its ethical red lines. The real issue will be the extent to which these red lines can be implemented in the field” (Harada, 2025).

Open Questions

  • Oversight power: How high-risk projects will be halted in practice, and what sanctions will apply, remains unclear.
  • Global impact: Japan’s guidelines could serve as an example for other democracies. However, which coalitions Tokyo can build at the UN toward a binding global standard remains an open question.
  • Technology-ethics balance: As actors such as China, Russia, and North Korea press ahead with aggressive AI integration, how much will Japan’s “human-centric” line affect its deterrence?

In conclusion, Tokyo’s new guidelines make a strong contribution to the global ethical debate on military artificial intelligence. As Japan continues to adopt advanced technology in defense, it stands out as one of the countries emphasizing the indispensable role of humans in the use of lethal force. However, the real impact of the guidelines will be measured not only on paper but by the oversight power in practice.

Source Notes

  1. ATLA (JMOD). (2025). Responsible AI Guidelines.
  2. Japan Ministry of Defense. (2025). Press Release.
  3. Harada, H. (2025). Crisis Intelligence.
  4. Nikkei Asia. (2025). Japan requests record ¥8.8 trillion defense budget.
  5. Defense News. (2025). Japan’s SHIELD program for unmanned defense systems.
  6. Human Rights Watch. (2025). Stop Killer Robots.
  7. UN CCW. (2024–2025). Meeting Records.
  8. CSIS. (2025). Japan’s AI policy: soft law approach.
  9. Japanese Diet. (2025). AI Promotion Act.
