The local Effective Altruism chapter had a stand at the university hobby fair.

Last time I read their charity guide spam email for student clubs, they were still mostly into the relatively benign end of EA stuff, listing some charities they had deemed most effective by some methodology. My curiosity got the best of me and I went to talk to them. I wanted to find out if they’d started pushing seedier stuff and whether the people at the stand were aware of the dark side of TESCREAL.

They seemed to have gotten into AI risk stuff, which was not surprising. They also seemed unaware of most of the incidents and critics I referred to; the FTX debacle was about the only one they knew of.

They invited me to attend their AI risk discussion event, saying (as TREACLES adjacents always do) that they love hearing criticism and different points of view and so on.

On one hand, EA is not super big here and most of their members and prospective participants are probably not that invested in the movement yet. This could be an opportunity to spread awareness of the dark side of EA and its adjacent movements and maybe prevent some people from falling for the cult stuff.

On the other hand, acting as the spokesman for the opposing case is a big responsibility and the preparation is a lot of work. I’m slightly worried that pushing back at the event might escalate into a public debate or, even worse, some kind of Ben Shapiro-style affair where I’m DESTROYED with FACTS and LOGIC by some guy with a microphone and a primed audience. Also, dealing with these people is usually just plain exhausting.

So, I’m feeling conflicted and would like some advice from the best possible source: random people on the internet. Do y’all think it’s a good idea to go? Do you think it’s a terrible idea?

  • bitofhope@awful.systemsOP · 1 year ago

    They claim that through technology, they will be able to usher in a utopia where people don’t have to work as much. Funny how they don’t lobby for laws that would require technological advancements to benefit workers, not the owners.

    This is a good point, but I think it’s best to be careful with anything they might perceive as too overtly “political”. It’s one thing to argue why AI doomsday cultism is bad and another to advocate for fully automated luxury communism.

    It’s no accident that the people claiming that AGI is a risk to humanity are also the ones trying hardest to get there. They are just a little scared of AGI because it could truly cause societal upheaval, and those at the top of a society have the most to lose in that situation. It’s self preservation, not benevolence. The power structures of modern society are vital to their continued lives of extravagance. In the end, they all just want to accumulate wealth, not pay any taxes, and try to make themselves feel like a hero for doing it.

    I might be cynical, but this sounds like overselling AGI and not just because I don’t believe we are anywhere close to creating anything I’d consider one.

    I’m not looking to have a debate or take an adversarial position. If I am to go, I’ll focus on making a case for why AI doom is an unrealistic sci-fi scenario, what actual AI risks we should worry about, why some people benefit from the doomer narrative and possibly touch on why Effective Altruism isn’t a wholly benign movement. The point is only to give them the background so they can make their own decisions with healthy skepticism.

    I don’t assume that students interested in rationality and charity work are bad people or anything. Sneering and berating them right to their faces would be counterproductive.