This linked interview of Brian Merchant by Adam Conover is great. I highly recommend watching the whole thing.
For example, here is Adam, describing the actual reasons why striking writers were concerned about AI, followed by Brian explaining how Sam Altman et al. hype up the very existential risk they themselves claim to be creating, just so they can sell themselves as the solution. Lots of really edifying stuff in this interview.
She really is insufferable. If you’ve ever listened to her Pivot podcast (not advised), you’ll be confronted by the superficiality and banality of her hot takes. Of course, this assumes you’re able to penetrate the word salad she regularly uses to convey whatever point she’s trying to make. She is not a good verbal communicator.
Her co-host, “Professor” [*] Scott Galloway, isn’t much better. While more verbally articulate, his dick joke-laden takes are often even more insufferable than Swisher’s. I’m pretty sure Kara sourced her “use AI or be run over by progress” opinion from him; it’s one of his most frequent hot takes. He’s also one of the biggest tech hype maniacs, so of course he’s bought a ticket on the AI hype express. Before the latest AI boom, he was a crypto booster, although he’s totally memory-holed that phase of his life now that the crypto hype train has run off a cliff.
[*] I put professor in quotes, because he’s one of those people who insist on using a title that is equal parts misleading and pretentious. He doesn’t have a doctorate in anything, and while he’s technically employed by NYU’s business school, he’s a non-tenured “clinical professor”, which is pretty much the same as an adjunct. Nothing against adjunct professors, but most adjuncts I’ve known don’t go around insisting that you call them “professor” in every social interaction. It’s kind of like when Ph.D.s insist you call them “doctor”.
I wonder what percentage of fraudulent AI-generated papers would be discovered simply by searching for sentences that begin with “Certainly, …”
I’m probably not saying anything you didn’t already know, but Vox’s “Future Perfect” section, of which this article is a part, was explicitly founded as a booster for effective altruism. They’ve also memory-holed the fact that it was funded in large part by FTX. Anything by one of its regular writers (particularly Dylan Matthews or Kelsey Piper) should be mentally filed into the rationalist propaganda folder. I mean, this article throws in an off-hand remark by Scott Alexander as if it’s just taken for granted that he’s some kind of visionary genius.
Non-paywall link.
Glowfic feels like a writing format designed in a lab to be the perfect channel for Eliezer’s literary diarrhea.
My P(harassment scandal) for EA is 0.98.
Exactly. It would be easier to take Scott’s argument seriously if it weren’t coming from the very same person who previously dismissed as unstable, and thereby non-credible, a woman who accused his rationalist buddies of sexual harassment – a woman who, by the way, went on to die by suicide.
So fuck him and his contrived rationalizations.
Imagine thinking there is actually some identifiable thing called “white culture”. As if a skin color defines a culture.
Yeah, sounds like a Nazi.
I think in their minds, there is this magical threshold below which all the brown and disabled people live, and once you get rid of all the people residing below that threshold all you have left is smart people who want to make the world better.
Only an EA could take seriously someone who approvingly cites journals like “Mankind Quarterly” and crackpots like Richard Lynn, Steven Hsu, Jonathan Anomaly, and Emil Kirkegaard.
The author considers himself a “rationalist of the right” and a libertarian who enjoys Richard Hanania and Scott Alexander. He describes ten tenets of right-wing rationalism, eight of which are simply rephrasings of various ideas promoted by scientific racists. It would be an understatement to say this guy is monomaniacally focused on a single topic.
(Oh, and he publishes his brain farts on Substack. Because of course he does.)
She seems to do this kind of thing a lot.
According to a comment, she apparently claimed on Facebook that, due to her post, “around 75% of people changed their minds based on the evidence!”
After someone questioned how she knew it was 75%:
Update: I changed the wording of the post to now state: 𝗔𝗿𝗼𝘂𝗻𝗱 𝟳𝟓% 𝗼𝗳 𝗽𝗲𝗼𝗽𝗹𝗲 𝘂𝗽𝘃𝗼𝘁𝗲𝗱 𝘁𝗵𝗲 𝗽𝗼𝘀𝘁, 𝘄𝗵𝗶𝗰𝗵 𝗶𝘀 𝗮 𝗿𝗲𝗮𝗹𝗹𝘆 𝗴𝗼𝗼𝗱 𝘀𝗶𝗴𝗻*
And the * at the bottom says: Did some napkin math guesstimates based on the vote count and karma. Wide error bars on the actual ratio. And of course this is not proof that everybody changed their mind. There’s a lot of reasons to upvote the post or down vote it. However, I do think it’s a good indicator.
She then goes on to explain that she made the Facebook post private because she didn’t think it should be reposted in places where it’s not appropriate to lie and make things up.
Clown. Car.
What a bunch of monochromatic, hyper-privileged, rich-kid grifters. It’s like a nonstop frat party for rich nerds. The photographs and captions make it obvious:
The gang going for a hiking adventure with AI safety leaders. Alice/Chloe were surrounded by a mix of uplifting, ambitious entrepreneurs and a steady influx of top people in the AI safety space.
The gang doing pool yoga. Later, we did pool karaoke. Iguanas everywhere.
Alice and Kat meeting in “The Nest” in our jungle Airbnb.
Alice using her surfboard as a desk, co-working with Chloe’s boyfriend.
The gang celebrating… something. I don’t know what. We celebrated everything.
Alice and Chloe working in a hot tub. Hot tub meetings are a thing at Nonlinear. We try to have meetings in the most exciting places. Kat’s favorite: a cave waterfall.
Alice’s “desk” even comes with a beach doggo friend!
Working by the villa pool. Watch for monkeys!
Sunset dinner with friends… every day!
These are not serious people. Effective altruism in a nutshell.
People who use the term “race realism” unironically are telling on themselves.
Reading his timeline since the revelation is weird and creepy. It’s full of SV investors robotically pledging their money (and fealty) to his future efforts. If anyone still needs evidence that SV is a hive mind of distorted and dangerous group-think, this is it.
All factual and counter-factual statements are evidence for my position. Heads I win, tails you lose.
“Fucking probabilities, how do they work?”
The first comment and Yud’s response.
Strange man posts strange thing.