• Bipta@kbin.social
    10 months ago

    This is the dumbest take. Humans have a lot of needs, and the AI will likely have considerable control over them.

  • Alien Nathan Edward@lemm.ee
    10 months ago

    It doesn’t take a lot to imagine a scenario in which a lot of people die due to information manipulation or the purposeful disabling of safety systems. It doesn’t take a lot to imagine a scenario where a superintelligent AI manipulates people into being its arms and legs (babe, wake up, new conspiracy theory just dropped - Roko is an AI playing the long game and the basilisk is actually a recruiting tool). It doesn’t take a lot to imagine an AI that’s capable of seizing control of a lot of the world’s weapons and either guiding them itself or taking advantage of onboard guidance to turn them against their owners, or using targeted strikes to provoke a war (this is a sub-idea of manipulating people into being its arms and legs). It doesn’t take a lot to imagine an AI that’s capable of purposefully sabotaging the manufacture of food or medicine in such a way that it kills a lot of people before detection. And it doesn’t take a lot to imagine an AI capable of seizing and manipulating our traffic systems in such a way as to cause a bunch of accidental deaths and injuries.

    But overall my rebuttal is that this AI doom scenario has always hinged on a generalized AI, and that what people currently call “AI” is a long, long way from a generalized AI. So the article is right: ChatGPT can’t kill millions of us. Luckily, no one was ever proposing that ChatGPT could kill millions of us.

  • stravanasu@lemmy.ca
    10 months ago

    “Bayesian analysis”? What the heck has this got to do with Bayesian analysis? Does this guy have an intelligence, artificial or otherwise?

  • teft@startrek.website
    10 months ago

    What happens in the scenario where a superintelligence just uses social engineering and a human is its arms and legs?

  • just_another_person@lemmy.world
    10 months ago

    I think a sufficient “Doom Scenario” would be an AI that is widespread and capable enough to poison the well of knowledge we ask it to regurgitate back at us out of laziness.