red: i suppose i should at least give him credit for acting on his beliefs, but my god i am so tired of being sucked into the yudkowsky cinematic universe. no more of this shit for me. i am ready to break out of this stupid fucking simulation.
blue: what did he even think was going to happen after the time piece? this is the kind of shit that makes people either laugh at him or start hoarding GPUs. it’s not as if he’s been putting any skill points into being persuasive to normies and it shows. wasn’t he the one who taught us about consequentialism?
(i thought initially that blue was going to disagree with red but no, blue is just mad in a different way)
red: it’s just insane to me in retrospect how much this one man’s paranoid fantasies have completely derailed the trajectory of my life. i came across his writing when i was in college. i was a child. this man is in some infuriating way my father and i don’t even have words for how badly he fucked that job up. my entire 20s spent in the rationality community was just an endless succession of believing in and then being disappointed by men who acted like they knew what they were doing and eliezer fucking yudkowsky was the final boss of that whole fucking gauntlet.
blue: speaking of consequentialism the man dedicated his entire life to trying to warn people about the dangers of AI risk and, by his own admission, the main thing his efforts accomplished was to get a ton of people interested in AI, help both openAI and deepmind come into existence, and overall make the AI situation dramatically worse by his own standards. what a fucking clown show. openAI is his torment nexus.
yellow: i just want to point out that none of this is actually a counterargument to -
red: yellow, shut the FUCK up -
yellow: like i get it, i get it, okay, we need to come to terms with how we feel about this whole situation, but after we do that we also need to maybe, like, actually decide what we believe? which might require some actual thought and actual argument?
red: if i never have another thought about AI again it’ll be too soon. i would rather think about literally anything else. i would rather think about dung beetles.
yellow: heh remember that one tweet about dung beetles -
red, blue: NOT THE TIME.
yellow: it’s a good tweet though, you know i love a good tweet.
red: we all love a good tweet. now. as i was saying. the problem is eliezer fucking yudkowsky thinks he can save the world with fear and paranoia and despair. in his heart he’s already given up! the “death with dignity” post was a year ago! it’s so clear from looking at him and reading his writing that whatever spark he had 15 years ago when he was writing the sequences is gone now. i almost feel sorry for him.
blue: the thing that really gets my goat about the whole airstrikes-on-datacenters proposal is it requires such a bizarre mix of extremely high and extremely low trust to make any sense - on the one hand, that you trust people so little not to abuse access to GPUs that you can’t let a single one go rogue, and on the other hand, that you trust the political process so much to coordinate violence perfectly against rogue GPUs and nothing else. “shut down all the large GPU clusters,” “no exceptions for anyone, including governments and militaries” - none of the sentences here have a subject. who is supposed to be doing this, eliezer???
red: not that i should be surprised by this point but i think way too many people are being fooled by the fact that he still talks in the rationalist register, so people keep being drawn into engaging with his ideas intellectually at face value instead of paying attention to the underlying emotional tone, which is insane. there’s no reason to take the airstrikes-on-datacenters proposal at face value. all it does is communicate how much despair he feels, that this is the only scenario he can imagine that could possibly do anything to stop what he thinks is the end of the world.
blue: ugh i don’t even want to talk about this anymore, now i actually do feel sorry for him. if his inner circle had any capacity to stand up to him at all they’d be strong-arming him into a nice quiet retirement somewhere. his time in the spotlight is over. he’s making the same points in the same language now as he was 10 years ago. it’s clear he neither can nor wants to change or grow or adapt in any real way.
yellow: so what should everyone be doing instead? who should everyone be listening to if not eliezer?
red: i have no idea. that’s the point. eliezer’s fantasy for how this was gonna go was clearly explained in harry potter and the methods of rationality - a single uber-genius, either him or someone else he was gonna find, figuring out AI safety on their own, completely within the comfort of their gigantic brain, because he doesn’t trust other people. that’s not how any of this is gonna go. none of us are smart enough individually to figure out what to do. we do this collectively, in public, or not at all. all i can do is be a good node in the autistic peer-to-peer information network. beyond that it’s in god’s hands.
blue, yellow: amen.
I saw this when randomly checking my old email and unsubscribing from all the cult stuff I fell into as a mentally ill young person. "Yudkowsky was my father" hits home. His writing (and others') was there for me like my parents weren't -- but not there for me like parents need to be for a kid to grow up healthy and able to contribute to our collective well-being.
People like me are susceptible to what I might call his pompously grandiose paranoia. It feels like taking a red pill when you don't know any better. A lot of us have committed suicide because the perspective vortex is so punishing -- the promises of capital-R Rationality were so fantastical, and the reality of attempting to carry out the vision of the dojo was so banal and predictably like any other cult (or high-demand group). The story of Leverage is a perfect example of the movement's irony.
Yud has a lot to lose by admitting his life's work is essentially a new form of Scientology. But I don't. I'm happy to admit I succumbed to a cult of personality. I was a kid who liked Harry Potter and didn't have a real father figure to guide me in utilizing my relative surplus of potential.
I survived. I have another chance. Thanks for reading.
I'll concede that I think Eliezer's conclusions follow from his premises. What I really don't understand is his ur-premise, his epistemological premise, the conviction that he can reason this all out a priori without either formal proof or empirical evidence.
That's ~never worked before. It didn't work for Aristotle (how many teeth did your wife have, Aristotle? why didn't you check?) and I don't expect it to work this time.
What really gets me is that he thinks A(G)I will work the same way: just as he can figure out the inevitability of nanotech foom by thinking really hard, AGI will be able to build the diamondoid bacteria factories by thinking even harder.