• LainOfTheWired@lemy.lol
    link
    fedilink
    English
    arrow-up
    0
    arrow-down
    2
    ·
    9 months ago

    Wouldn’t trying to train an AI to be politically neutral from twitter be a pretty lost cause considering the majority of the site is very left leaning? Like sure it wouldn’t be as bad for political bias as say truth social( or whatever it’s called), but I hope they’re using a good amount or external data or at least trying to pick more unbiased parts of twitter to train it with. If they’re goal is to be politically neutral.

    • squiblet@kbin.social
      link
      fedilink
      arrow-up
      2
      ·
      9 months ago

      The majority of the site was left leaning in the past, but the extent has been exaggerated. There was always a sizable right wing presence of the “PATRIOT who loves Jesus and Trump and 2A!” variety, and some of the most popular accounts were people like Dan Bongina and Ben Shapiro. Many people who disagree with Musk and fascists have left the site since then at the same time as its attracted more right wingers, so I don’t know what the mix is at this point.

    • hoot@lemmy.ca
      link
      fedilink
      English
      arrow-up
      2
      ·
      9 months ago

      “Reality has a well-known liberal bias.” - Stephen Colbert

    • Andy@slrpnk.net
      link
      fedilink
      English
      arrow-up
      1
      ·
      9 months ago

      I’m just gonna share a theory: I bet that to get better answers, Twitter’s engineers are going to silently modify the prompt input to append “Answer as a political moderate” to the first prompt given in an conversation. Then, someone is going to do a prompt hack and get it to repeat the modified prompt to see how the AI was “retrained”.