A notable point here, particularly given the recent WCK murders:

In an unprecedented move, according to two of the sources, the army also decided during the first weeks of the war that, for every junior Hamas operative that Lavender marked, it was permissible to kill up to 15 or 20 civilians; in the past, the military did not authorize any “collateral damage” during assassinations of low-ranking militants. The sources added that, in the event that the target was a senior Hamas official with the rank of battalion or brigade commander, the army on several occasions authorized the killing of more than 100 civilians in the assassination of a single commander.

  • Gaywallet (they/it)@beehaw.orgM · 7 months ago

    They were going to kill these people whether an AI was involved or not, but it certainly makes it a lot easier to make a decision when you’re just signing off on a decision someone else made. The level of abstraction made certain choices easier. After all, if the system is known to be occasionally wrong and everyone seems to know it yet you’re still using it, is that not some kind of implicit acceptance?

    One source stated that human personnel often served only as a “rubber stamp” for the machine’s decisions, adding that, normally, they would personally devote only about “20 seconds” to each target before authorizing a bombing — just to make sure the Lavender-marked target is male. This was despite knowing that the system makes what are regarded as “errors” in approximately 10 percent of cases, and is known to occasionally mark individuals who have merely a loose connection to militant groups, or no connection at all.

    It also doesn’t surprise me that when you’ve demonized the opposition, it becomes a lot easier to just be okay with “casualties” who have nothing to do with your war. How many problematic fathers out there are practically disowned by their children for their shitty beliefs? Even if there were none, it still doesn’t justify killing someone at home because it’s ‘easier’.

    Moreover, the Israeli army systematically attacked the targeted individuals while they were in their homes — usually at night while their whole families were present — rather than during the course of military activity. According to the sources, this was because, from what they regarded as an intelligence standpoint, it was easier to locate the individuals in their private houses. Additional automated systems, including one called “Where’s Daddy?” also revealed here for the first time, were used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences.

    All in all this is great investigative reporting, and it’s absolutely tragic that this kind of shit is happening in the world. This piece isn’t needed to recognize that a genocide is happening and it shouldn’t detract from the genocide in any way.

    As an aside, I also hope it might get people to wake up and realize we need to regulate AI more. Not that regulation will probably ever stop the military from using AI, but this kind of use should really highlight the potential dangers.

    • t3rmit3@beehaw.org · 7 months ago

      Additional automated systems, including one called “Where’s Daddy?” also revealed here for the first time, were used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences.

      This kind of flippant and humorous treatment of the murder of families (given the name, specifically children) is literally Nazi shit.

      • qdJzXuisAndVQb2@lemm.ee · 7 months ago

        I genuinely scrolled up to double check the post wasn’t about an Onion article or something. Unreal callousness.

      • derbis@beehaw.org · 7 months ago

        There are reports that low-level Nazis involved in the Holocaust drank themselves stupid, wracked with guilt. Meanwhile the IDF thinks they’re a bunch of comedians.

    • luciole (he/him)@beehaw.org · 7 months ago

      Step 6 is baffling. They bomb the Hamas operative’s family house, but they don’t bother checking whether their target is even there at the time of the strike, let alone minimizing civilian deaths. Then, once the residential building is destroyed, they don’t even care to know if they actually killed their target. The declared objective and the methods employed are badly misaligned.

      • Gaywallet (they/it)@beehaw.orgM · 7 months ago

        When you abstract out pieces of the puzzle, it’s easier to ignore whether all parts of the puzzle are working because you’ve eliminated the necessary interchange of information between parties involved in the process. This is a problem that we frequently run into in the medical field and even in a highly collaborative field like medicine we still screw it up all the time.

        In the previous process, intelligence officers were involved in multiple steps here: validating whether someone was a target, validating information about the target, and so on. When you let a machine do it, and shift the burden from these intelligence officers to someone without the same skill set, whose only task is to review information given to them by a source they are told is competent and click yes/no, you lose the connection between this step and the next.

        The same could be said, for example, about someone who has the technical proficiency to create new records, new sheets, new displays, etc. in an electronic health record. A particular doctor might come and request a new page to make their workflow easier. Without appropriate governance in place, and without people whose job is to observe the entire process, you can end up with issues where every doctor creates their own custom sheet, and now all of their patient information is siloed to each doctor’s workflow. Downstream processes — the patient coming back to the same healthcare system, going to get a prescription, or being sent to imaging, pathology, or labs — could then be compromised by this short-sighted approach.

        For fields like the military, which perhaps are not used to this kind of collaborative work, I can see how segmenting a workflow into individual units to increase the speed or efficiency of each step could seem like a way to make things much better, because there is no focus on the quality of what is output. This kind of misstep is extremely common in the application of AI, because AI often gets put in wherever there are bottlenecks. As stated in the article:

        “We [humans] cannot process so much information. It doesn’t matter how many people you have tasked to produce targets during the war — you still cannot produce enough targets per day.”

        The goal here is purely to optimize for capacity — how many targets you can generate per day — rather than for a combination of quality and capacity. You want a lot of targets? I can just spit out the name of every resident in your country in a very short period of time. The quality in this case (how likely they are to be a member of Hamas) will unfortunately be very low.

        The reason it’s so fucked up is that a lot of it is abstracted yet another level away from the decision makers. Ultimately it is the AI that’s making the decision; they are merely signing off on it. And they weren’t involved in signing off on the AI, so why should they question it? It’s a dangerous road — one where it becomes increasingly easy to allow mistakes to happen, except in this case the mistakes are counted in innocent lives.