In particular, know how to identify the common and deadly species (e.g. much of the genus Amanita) yourself, and get multiple trustworthy field guides for your part of the world.

  • JohnEdwa@sopuli.xyz · 6 months ago

    You are lumping a whole lot of different things that work in completely different ways under the single label of AI, and while I can’t really blame you, since that is what the industry does as well, image recognition, image generation, and large language models like ChatGPT all work entirely differently.
    Image recognition in particular can be trained to be extremely accurate with a properly restricted scope and a good dataset. Even so, it would never be enough for identifying mushrooms, because no matter whether it’s done by a perfect AI or an organic meatbag, mushrooms simply cannot be accurately identified from a single picture: different species can look literally identical to one another.

    And parrots totally can learn what words mean. Just like how a dog can learn what “Sit”, “Paw” or “Let’s go for a walk” mean, parrots just also have the ability to “talk”.

    • spujb@lemmy.cafe · 6 months ago

      what’s wrong with lumping a lot of things with different substrates together if, as you admit yourself, there’s still no evidence any of them work well?

      • JohnEdwa@sopuli.xyz · 6 months ago (edited)
        LLMs are the current big buzzword and the main ones that “don’t work”, because people assume and expect them to be intelligent and to actually know and understand things, which they simply do not. Their purpose is to generate text the way a human would, and at that they actually work remarkably well: give a competent LLM and a human the same writing task, and you are very unlikely to spot which one is the machine unless you can catch it lying. Even then, it might just be a clueless human talking about things he kinda understands but isn’t an expert in. Like me.
        But they are constantly being used for all kinds of purposes they don’t yet fit well, because you can’t actually trust anything they say.

        Image generation mainly has issues with hands and fingers, so it isn’t bulletproof at making fake realistic imagery, but for many subjects and styles it can create images that are pretty much impossible to identify as generated. Civit.ai is full of examples. Most people think it doesn’t work yet because what they mostly see is someone throwing a simple prompt into Midjourney and taking the first thing it generates for an article thumbnail.

        And image identification definitely works, but it’s… quirky. I said it can’t be used to identify mushrooms, because nothing can tell apart two things that look exactly the same. But give a model enough photos of every single Hot Wheels car that exists, and it will perfectly recognize which one you have. It will also tell you that a shoe or a tree is one of them, though, because it only knows about Hot Wheels cars.
        A model that tries to identify absolutely everything from a photo, like Google Lens, will still misidentify some things, since the dataset is so enormous, but so would a human. The difference is that for an AI, “I don’t know” is never an option: it always gives the most likely answer it can come up with.
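        That last point is easy to see in code. Here is a minimal, hypothetical sketch (plain Python, softmax over made-up scores, invented car labels): a standard classifier always returns its top class via argmax, even when its scores are nearly uniform guesses, and it only gains the ability to abstain if you explicitly add something like a confidence threshold.

```python
import math

def softmax(logits):
    # Turn raw scores into probabilities that sum to 1.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, labels, threshold=None):
    """Return the most likely label. Without a threshold the model
    always answers, even for an input it was never trained on
    (a shoe shown to a Hot Wheels classifier). With a threshold
    it can abstain when no class is confident enough."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    if threshold is not None and probs[best] < threshold:
        return "I don't know"
    return labels[best]

labels = ["Twin Mill", "Bone Shaker", "Deora II"]

# Near-uniform scores: the model is effectively guessing on a shoe,
# yet argmax still picks a "most likely" car.
shoe_logits = [0.1, 0.2, 0.15]
print(classify(shoe_logits, labels))                 # always answers a car name
print(classify(shoe_logits, labels, threshold=0.5))  # now it can abstain
```

        The threshold value and labels here are placeholders; real systems use calibrated confidences or out-of-distribution detection, but the underlying issue is the same: argmax alone has no notion of “none of the above”.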

        • spujb@lemmy.cafe · 6 months ago

          okay? so i’m quite aware of all of this already; none of this info is new.

          my question is still, “what’s wrong with lumping all of these technologies together as ‘AI’ when all of them are ineffective at identifying mushrooms (and certain other tasks)?”