• swlabr@awful.systems
    11 months ago

    Raytheon: we’re developing a blueprint for evaluating the risk that a large laser-guided missile could aid in someone threatening biology with death

    (Ok I know you need to pretend I’m an AI doomer for this sneer but whatever)

  • Sailor Sega Saturn@awful.systems (OP)
    11 months ago

    While none of the above results were statistically significant, […] Overall, especially given the uncertainty here, our results indicate a clear and urgent need for more work in this domain.

    Heh

    • self@awful.systems (mod)
      11 months ago

      I keep flashing back to that idiot who claimed to be employed as an AI researcher and came here a few months back to debate us. They were convinced multimodal LLMs would be the turning point into AGI; that is, once your bullshit text generation model can also do visual recognition. They linked a bunch of papers to try to sound smart, and I looked at a couple and went “is that really it?” because all of the results looked exactly like the section you quoted. We now have multimodal LLMs, and needless to say, nothing really came of it. I assume the idiot in question is still convinced AGI is right around the corner, though.