
Bypassing human judgment

The Lavender system raises concerns regarding the violation of human dignity not only by depersonalizing the individuals it targets but also by circumventing human involvement in the targeting process. Because individuals are targeted on the basis of pre-set rules and abstract hypotheticals determined by algorithms, the nuanced consideration of individual circumstances is disregarded. This mechanized approach to decision-making fundamentally undermines human dignity by depriving individuals of the right to have their fate determined through a deliberative process involving human judgment.
Whatever the exigency of making quick decisions against combatants during armed conflict, it does not follow that such decisions may be made in an abstract or theoretical manner, with no human authorization (as defended by Ulgen, pp. 14–15). The introduction of the Lavender system all but rules out in advance any deliberative process further down the line in which a change of mind, and of fate, remains possible, since human control is sacrificed along the way. The investigation highlights a concerning reality in which human personnel serve merely as a “rubber stamp” for the decisions made by AI systems (para. 4), devoting minimal time to verifying targets before authorizing bombings.
Despite the evident margin of error in Lavender’s calculations, human oversight is limited to superficial checks, such as verifying the target’s gender, rather than a thorough assessment of the target’s legitimacy. As detailed in the investigation (paras. 45–47), the supervision protocol before striking suspected militants consists of confirming that the AI-selected target is male, on the assumption, according to an interviewed official, that female targets are erroneous and male targets appropriate.
“I would invest 20 seconds for each target at this stage and do dozens of them every day. I had zero added value as a human, apart from being a stamp of approval. It saved a lot of time. If [the operative] came up in the automated mechanism, and I checked that he was a man, there would be permission to bomb him, subject to an examination of collateral damage” (para. 47).
This reduction of human involvement to a cursory gender verification process reflects a systemic failure to incorporate human judgment.
As Asaro recalls, justice imposes a human duty to “consider the evidence, deliberate alternative interpretations, and reach an informed opinion”. Irreducible life sentences, for instance, have been found to violate human dignity because they “write off” the person, deciding on a merely abstract basis and leaving no space for change or hope. The structure of law and the processes of justice require the presence of a human legal agent; this absence of human control amounts to an inherent violation of the right to dignity. In no way can the Lavender system be said to comply with this standard.
Another pressing concern regarding Lavender lies in its opacity: the algorithms it employs are shielded from public scrutiny, whether through legal protection or simple inaccessibility. This lack of access obstructs comprehension and oversight of the targeting process, a critical shortcoming where international law mandates investigations into violations of its own rules (pp. 199–210). Furthermore, as AI increasingly relies on neural networks, and particularly on deep learning algorithms that are inherently opaque, users frequently struggle to grasp the reasoning behind any particular decision these systems make. While some scholars, such as Chehtman, argue that introducing human oversight may not fully address these challenges, it would at the very least provide more information on why and how a given target was selected.
Relatedly, algorithm-based decision-making is often touted as objective and impartial, but writing unbiased algorithms is a complex task, and programmers may, by mistake or even by intentional design, build in misinformation, racism, bias, and prejudice. The potential for discrimination is exacerbated by the opacity of these programs and by a social tendency to assume that a machine-made decision is more likely to be objective and efficient. While significant scholarly and, increasingly, policy-focused work has been directed towards creating fair algorithms, there are no firmly established international standards for audit, accountability, or transparency. I do not deny that there may be (limited) benefits to introducing AI and algorithmic targeting systems in modern warfare, such as those listed by Heller (pp. 31–49). Nevertheless, none of these advantages can plausibly justify the use of Lavender, a system in which all of the concerns mentioned here are significantly amplified.
