• NutWrench@lemmy.ml · 25 days ago

    Ever wonder why these captchas are always cars, bicycles, motorcycles, traffic lights and crosswalks? Because YOU are doing the work of teaching the next generation of AI for self-driving cars.

    • Squorlple@lemmy.world · 24 days ago

      Can’t wait until we get trolley problem CAPTCHAs and we have to choose the square with the most expendable human lives

    • yetAnotherUser@discuss.tchncs.de · 24 days ago

      I don’t believe it, at least not anymore.

      Google has had more than enough data to train AI models from reCAPTCHA for many years. In 2010 it displayed 100 million captchas per day. You simply do not need hundreds of billions of solved captchas in your data set.

      I feel like its only purpose nowadays is stopping basic bots and annoying people who don’t let themselves be tracked as much as advertisers would like.
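      The arithmetic behind that claim, as a quick back-of-envelope sketch (the 100 million/day figure is from the comment above; the ten-year span and flat volume are my assumptions):

```python
# Back-of-envelope: how fast reCAPTCHA volume compounds.
per_day = 100_000_000          # ~100 million captchas per day (2010 figure)
years = 10                     # assumed span; real volume has grown since 2010
total = per_day * 365 * years
print(f"{total:,}")            # 365,000,000,000 — hundreds of billions
```

      Even holding the 2010 rate constant, a decade yields hundreds of billions of solved captchas, which is the commenter's point about the data set being long since saturated.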

    • TheOakTree@lemm.ee · 25 days ago

      My favorite is when it asks me to identify stairs. I just imagine a self-driving car mistaking a set of stairs for more road and deciding to try to climb them.

      • Daemon Silverstein@thelemmy.club · 24 days ago

        Actually, it’s training a self-driving humanoid robot that’s supposed to climb stairs in order to terminate any potential John Connor that’s inside a house upstairs.

    • whoisearth@lemmy.ca · 23 days ago

      I can’t believe I never put two and two together.

      It stresses how stupid AI still is, though. If it were a human, the question would be “is this a stop sign?” So it’s not even asking us to validate data. To me that means AI is still far from being intelligent: it requires our input to learn. That’s not how we operate. My kids don’t require me to show them images of a stop sign for them to know what one is.

      • gamermanh@lemmy.dbzer0.com · 24 days ago

        You and many other humans are doing verification work.

        It’s pretty sure it’s already right, but if enough people get the same image and “get it wrong” the same way, then something’s up: flag it.
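        That kind of consensus check can be sketched as follows (function name, threshold, and return labels are my assumptions for illustration, not Google’s actual pipeline):

```python
from collections import Counter

def consensus(model_label: str, human_labels: list[str],
              threshold: float = 0.7) -> str:
    """Compare the model's answer against crowd answers for one image tile."""
    # Most common human answer and how strongly the crowd agrees on it.
    top, count = Counter(human_labels).most_common(1)[0]
    agreement = count / len(human_labels)
    if agreement < threshold:
        return "inconclusive"        # humans disagree with each other
    if top != model_label:
        return "flag_for_relabel"    # crowd consistently contradicts the model
    return "confirmed"               # crowd confirms the model's label

print(consensus("crosswalk", ["crosswalk"] * 9 + ["road"]))        # confirmed
print(consensus("crosswalk", ["stairs"] * 8 + ["crosswalk"] * 2))  # flag_for_relabel
```

        The key design point is that no single answer decides anything; only a consistent majority disagreeing with the model triggers a relabel.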

          • gamermanh@lemmy.dbzer0.com · 24 days ago

            I took some compsci classes years ago when this tech was new, and that’s exactly how it was described as being handled.

            Once image recognition software got good enough to be right most of the time, they started this shit to help get it the rest of the way to being right all of the time.

            Do it any other way and you’d have to pay those people.

      • chatokun@lemmy.dbzer0.com · 24 days ago

        There’s a CGPGrey video that describes the older techniques. It’s not quite up to date on some of its predictions, but it does show how some machine learning works. Of course, it doesn’t discuss current proprietary techniques, because those are company secrets. Still, it’s as good a guess as we’re likely to get, unless something radically different has been invented:

        https://youtu.be/R9OHn5ZF4Uo

        There is also a second video about more modern stuff, but it’s more of a footnote:
        https://youtu.be/wvWpdrfoEv0