Facewatch, a big biometric security company in the UK, is in hot water after its facial recognition system caused a major snafu: the system wrongly identified a 19-year-old woman as a shoplifter.

  • NeoNachtwaechter@lemmy.world · 13↑ · 3 months ago

    This raises questions about how ‘good’ this technology is.

    But it also raises the question of how well your police can deal with false suspicions and false accusations.

    • CeeBee@lemmy.world · 4↑ 13↓ · 3 months ago

      > This raises questions about how ‘good’ this technology is.

      No, it doesn’t. For every 10 million good detections you only hear about the 1 or 2 false detections. The issue here is the policies around detections and how to verify them. Some places are still taking a blind-faith approach to the detections.
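Whether the commenter's "1 or 2 per 10 million" figure is acceptable depends on scan volume, which the thread never states. A minimal sketch of that base-rate arithmetic, using the commenter's claimed rate and a purely hypothetical daily scan count:

```python
# Base-rate arithmetic for the commenter's figure of ~2 false
# detections per 10 million. The scan volume below is a hypothetical
# assumption for illustration, not a number from the thread.
false_positive_rate = 2 / 10_000_000   # commenter's claimed rate
scans_per_day = 500_000                # hypothetical chain-wide daily scans

false_flags_per_day = false_positive_rate * scans_per_day
false_flags_per_year = false_flags_per_day * 365

print(f"~{false_flags_per_day:.2f} wrongful flags per day")   # ~0.10
print(f"~{false_flags_per_year:.0f} wrongful flags per year")  # ~36
```

Even at that very low rate, a system scanning hundreds of thousands of faces a day would wrongly flag dozens of people a year, which is why the verification policy around each detection matters as much as the raw accuracy.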

      • NeoNachtwaechter@lemmy.world · 15↑ 1↓ · 3 months ago

        > For every 10 million good detections you only hear about the 1 or 2 false detections.

        Considering the impact of these faults, it is obviously not good enough.

        • CrayonMaster@midwest.social · 1↑ 2↓ · 3 months ago

          But that really says more about the user than the tech. The issue here isn’t that the tech has too many errors; it’s that stores use it, and it alone, to ban people despite its low but well-known error rate.

          • NeoNachtwaechter@lemmy.world · 2↑ · 3 months ago

            > says more about the user than the tech.

            “You need to have a suitable face for our face recognition?” ;-)

            > stores use it and it alone to ban people

            No. Read again. The stores did not use the technology themselves; they used the services of that tech company.

          • nyan@lemmy.cafe · 1↑ · 3 months ago

            > stores use it and it alone to ban people despite it having a low but well known error rate.

            And it is absolutely predictable that some stores would do that, because humans. At the very least, companies deploying this technology need to make certain that all store staff are properly trained on what it does and doesn’t mean, including new hires who arrive after the system is put in. Enforcing that is going to require that a law be passed.