Human oversight not good enough for AI war machines
As artificial intelligence (AI) becomes more powerful – even being used in warfare – there’s an urgent need for governments, tech companies and international bodies to ensure it’s safe. And a common thread in most agreements on AI safety is a need for human oversight of the technology.
In theory, humans can operate as safeguards against misuse and potential hallucinations (where AI generates incorrect information). This could involve, for example, a human reviewing content that the technology generates (its outputs).
However, as a growing body of research and several real-life examples of military use of AI demonstrate, there are inherent challenges to the idea of humans acting as an effective check on computer systems.
Many of the efforts thus far to regulate AI already contain language promoting human oversight and involvement. For instance, the EU’s AI Act stipulates that high-risk AI systems – for example, those already in use that automatically identify people using biometric technology such as a retina scanner – must be verified and confirmed separately by at least two humans who possess the necessary competence, training and authority.
In the military arena, the UK government recognized the importance of human oversight in its February 2024 response to a parliamentary report on AI in weapon systems. The report emphasizes “meaningful human control” through the provision of appropriate training for humans. It also stresses the notion of human accountability and says that decision-making in actions by, for instance, armed aerial drones cannot be shifted to machines.
This principle has largely been kept in place so far. Military drones are currently controlled by human operators and their chain of command.