• Sailor Sega Saturn@awful.systems · 1 year ago

    I remember role playing cops and robbers as a kid. I could point my finger and shout “bang bang I got you” but if my friend didn’t pretend to be mortally wounded and instead just kept running around there’s really nothing I could do.

  • gerikson@awful.systems · 1 year ago

    I didn’t read this, but I’m confident it can be summarized as “how many hostile AGIs can we confine to the head of a pin?”

    • skillissuer@lemmy.world · 1 year ago

      2 points for every statement that is clearly vacuous.

      3 points for every statement that is logically inconsistent.

      this could be enough

  • bitofhope@awful.systems · 1 year ago

    td;lr

    No control method exists to safely contain the global feedback effects of self-sufficient learning machinery. What if this control problem turns out to be an unsolvable problem?

    While I agree this article is TL and I DR it, this is not an abstract. This is a redundant lede and attempted clickbait at that.

    Oh wait I just noticed the L and D are swapped. Feel free not to tell me whether that’s a typo or some smarmy lesswrongism.

  • Evinceo@awful.systems · 1 year ago

    Nobody tell these guys that the control problem is just the halting problem and first year CS students already know the answer.
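    A minimal sketch (my illustration, not from the thread) of the diagonalization behind that first-year answer, in Python: any purported perfect predictor can be handed a program that consults the predictor about itself and then does the opposite, so the prediction is wrong by construction.

    ```python
    # Sketch of the halting-problem-style diagonal argument: no function
    # can perfectly predict the behavior of arbitrary programs, because a
    # "contrarian" program can invert whatever is predicted about it.
    # Names here (make_contrarian, naive_predictor) are illustrative only.

    def make_contrarian(predictor):
        """Build a program that asks the predictor about itself, then does the opposite."""
        def contrarian():
            return not predictor(contrarian)
        return contrarian

    def naive_predictor(program):
        # Stand-in for any claimed perfect predictor; its verdict doesn't matter,
        # since the contrarian inverts it either way.
        return True

    c = make_contrarian(naive_predictor)
    print(naive_predictor(c) == c())  # the prediction never matches the behavior
    ```

    The same construction goes through for any predictor you substitute, which is the point: the failure is structural, not a matter of insufficient cleverness.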

      • David Gerard@awful.systemsM · 1 year ago

      remembering how Thiel paid Buterin to drop out of his comp sci course so he spent all of 2018 trying to implement plans for Ethereum that only required that P=NP

      • kuna@awful.systems · 1 year ago

      On a similar note, Yud’s decision theory that hinges on an AI (presumably a Turing Machine) predicting what a human (Turing-Complete at the least) does with 100% accuracy.

        • self@awful.systemsM · 1 year ago

        …huh. somehow among all the many things wrong with TDT, I never cottoned to the fact that it just reduces to the halting problem

        are rats just convinced that Alan Turing never considered what if computer but more complex? cause there’s a whole branch of math dedicated to computability regardless of the complexity of the computation substrate, and Alan helped invent it. of course they don’t know about this because they ignore the parts of computer science that disagree with their stupid ideas

          • kuna@awful.systems · 1 year ago

          Actually I might have done goofed with that one; now that I think of it, if you assume some jackoff amount of computing power then a human brain (assuming nothing uncomputable happens there, so sad Penrose noises) could be simulated from first principles for a limited amount of time, no actual proof of possible future outcomes needed. This still leaves the problem of how exactly you get all the data for that (and I think any uncertainty would require an exponential increase in the paths you have to simulate), especially without killing the human in question.

  • Soy@masto.ai · 1 year ago

    @sue_me_please Don’t think this reply will properly show up on awful.systems, but I can’t resist sneering.

    It amuses me that for a while the LW people saw Musk as a great example, and he just went ‘I would solve the control problem by making them human friendly and making the robots have low grip strength. Easy peasy.’ Amazed that wasn’t a crack ping moment for a lot of them.