
James Johnson
Dr James Johnson is a Senior Lecturer (Associate Professor) and Director of Strategic Studies in the Department of Politics & International Relations at the University of Aberdeen. He is the founder of the Strategic Studies Wargaming Society. He is also an Honorary Fellow at the University of Leicester, a Non-Resident Research Associate on the European Research Council-funded Towards a Third Nuclear Age Project, and a Mid-Career Cadre Member with the Center for Strategic and International Studies (CSIS) Project on Nuclear Issues. He advises various parts of the US, UK, and EU governments on AI and nuclear policy, including the US Department of Defense, the UK Office for Artificial Intelligence, the NATO Nuclear Planning Group, and the Dutch Foreign Ministry's Global Commission on Responsible AI in the Military Domain (REAIM). Previously, he was an Assistant Professor at Dublin City University, a Non-Resident Fellow with the Modern War Institute at West Point, and a Postdoctoral Research Fellow at the James Martin Center for Nonproliferation Studies in Monterey, USA.
His research examines the intersection of nuclear weapons, artificial intelligence, political psychology, and strategic affairs. His work has been featured in Journal of Strategic Studies, The Washington Quarterly, Strategic Studies Quarterly, Defence Studies, European Journal of International Security, Asian Security, Pacific Review, Journal for Peace & Nuclear Disarmament, Defense and Security Analysis, RUSI Journal, Journal of Cyber Policy, Journal of Military Ethics, War on the Rocks, and other outlets. He is the author of "AI Commander: Centaur Teaming, Command, and Ethical Dilemmas" (Oxford University Press, 2024), "AI and the Bomb: Nuclear Strategy and Risk in the Digital Age" (Oxford University Press, 2023), "Artificial Intelligence and the Future of Warfare: USA, China & Strategic Stability" (Manchester University Press, 2021), and "The US-China Military & Defense Relationship During the Obama Presidency" (Palgrave Macmillan, 2018).
For more, see jamesjohnsonphd.com
Supervisors: Andrew Futter and Benjamin Zala
Address: Aberdeen, United Kingdom
Peer-reviewed journal articles by James Johnson
cannot effectively or reliably complement (let alone replace) the role of humans in understanding and apprehending the strategic environment to make predictions and judgments that inform strategic decisions. Furthermore, the rapid diffusion of and growing dependency on AI technology at all levels of warfare will have strategic consequences that counterintuitively increase the importance of human involvement in these tasks. Therefore, restricting the use of AI technology to automate decision-making tasks at a tactical level will do little to contain or control the effects of this synthesis at a strategic level of warfare. The article revisits John Boyd's observation-orientation-decision-action metaphorical decision-making cycle (or "OODA loop") to advance an epistemological critique of AI-enabled capabilities (especially machine-learning approaches) to augment command-and-control decision-making processes. In particular, the article draws insights from Boyd's emphasis on "orientation" as a schema to elucidate the role of human cognition (perception, emotion, and heuristics) in defense planning in a non-linear world characterized by complexity, novelty, and uncertainty. It also engages with the Clausewitzian notion of "military genius" (and its role in "mission command"), human cognition, and systems and evolution theory to consider the strategic implications of automating the OODA loop.
(A2-AD) capabilities will put at risk US military assets and forward forces operating in the Western Pacific region, enabling China to deter, delay, and deny US intervention in future regional conflicts and crises. In their assessments, US defence analysts have frequently, and often erroneously, conflated a Chinese operational capability with an underlying strategic intention that conceptualises the United States as its primary (if not sole) target. The central argument this article proffers is that US perceptions of A2-AD have been framed by specific analytical baselines that have overlooked the evolution of Chinese operational and doctrinal preferences and have been over-reliant upon military material-based assessments to determine Beijing's strategic intentions and to formulate US military countervailing measures. The article concludes that the strategic ambiguities and opacity associated with Chinese A2-AD capabilities and China's 'active defence' concept reinforced Washington's reliance upon capacity-based assessments, which, in turn, exacerbated misperceptions of Chinese strategic intentions compounded by cognitive bias. The critical framing assumptions of this article draw heavily upon the ideas and rationale associated with the international relations concept of the 'security dilemma'.