An investigation by an online publication has revealed that the Israeli military has been using artificial intelligence to help identify bombing targets in Gaza. The report cited six Israeli intelligence officials said to have been involved in a program built around an AI-based tool called 'Lavender', which reportedly had a roughly 10% error rate.
According to the officials, human review of the suggested targets was minimal, with personnel often serving as a 'rubber stamp' for the machine's decisions. Reviewers reportedly spent only about 20 seconds on each target, typically just to confirm the target was male, before authorizing a bombing.
When questioned about the report, the Israel Defence Forces (IDF) did not deny the tool's existence but disputed claims that AI was being used to identify suspected terrorists. The IDF stressed that its information systems are tools for analysts in the target-identification process and that it strives to minimize harm to civilians under the operational circumstances.
The IDF stated that analysts independently assess whether targets meet the relevant definitions according to international law and IDF directives. However, the allegations suggest that the AI tool played a significant role in target selection, raising concerns about the level of human oversight in the process.
This revelation has sparked debate about the ethical implications of using AI in military operations and the risks of relying heavily on automated systems for critical decision-making. The situation underscores the need for transparency and accountability in the use of advanced technologies in conflict zones to ensure compliance with international humanitarian law.