red: i suppose i should at least give him credit for acting on his beliefs, but my god i am so tired of being sucked into the yudkowsky cinematic universe. no more of this shit for me. i am ready to break out of this stupid fucking simulation.

blue: what did he even think was going to happen after the time piece? this is the kind of thing that makes people either laugh at him or start hoarding GPUs. it’s not as if he’s been putting any skill points into being persuasive to normies and it shows. wasn’t he the one who taught us about
I saw this when randomly checking my old email and unsubscribing from all the cult stuff I fell into as a mentally ill young person. "Yudkowsky was my father" hits home. His writing (and others') was there for me like my parents weren't -- but not there for me like parents need to be for a kid to grow up healthy and able to contribute to our collective well-being.
People like me are susceptible to what I might call his pompously grandiose paranoia. It feels like taking a red pill when you don't know any better. A lot of us have committed suicide because the perspective vortex is so punishing -- the promises of capital-R Rationality were so fantastical, and the reality of attempting to carry out the vision of the dojo was so banal and predictably like any other cult (or high-demand group). The story of Leverage is a perfect example of the movement's irony.
Yud has a lot to lose by admitting his life's work is essentially a new form of Scientology. But I don't. I'm happy to admit I succumbed to a cult of personality. I was a kid who liked Harry Potter and didn't have a real father figure to guide me in utilizing my relative surplus of potential.
I survived. I have another chance. Thanks for reading.
sorry to hear that :/ i suspected there were people in roughly your position out there and that's partially who i wrote this for. glad to hear things are better.
I'll concede that I think Eliezer's conclusions follow from his premises. What I really don't understand is his ur-premise, his epistemological premise, the conviction that he can reason this all out a priori without either formal proof or empirical evidence.
That's ~never worked before. It didn't work for Aristotle (how many teeth did your wife have, Aristotle? why didn't you check?) and I don't expect it to work this time.
What really gets me is that he thinks A(G)I will work the same way: Just like how he can figure out the inevitability of nanotech foom by thinking really hard, AGI will be able to build the diamondoid bacteria factories by thinking even harder.
I think that this is essentially correct, and is one of the main reasons I don't agree with Eliezer (I think there are a couple of other ways that his conclusions don't quite follow from his premise but that's for another time).
The thing that gets me about this is that I've repeatedly seen him say things that make me think that he should know better. I don't really have any hate for the man, but I find a lot of what he says and does really infuriating, because it really seems like he deeply gets a lot of things that I haven't seen other people get, but then at the moment it matters most, about the thing he cares about most, it kind of comes apart.
The other thing is I think I know what he would say to your criticism, Liam. I think he would basically say that he knows what you're talking about but he's still right about this, because his detractors are saying something more specific than he is, where he's just generalizing over the unknown, and so the burden of proof is on everyone else to explain how they can reason out a priori that we're not going to die.
I think that's still begging the question though and making the mistake of assuming that his frame is the correct one.
I saw his recent comment about whether you should be able to "milk your uncertainty to expect good outcomes" or something to that effect, and as you can probably guess I thought it was question-begging.
Uncertainty doesn't mean a uniform distribution over future states of the world; that only follows if you have a high degree of certainty about how AI will develop, and specifically that it'll develop the way Eliezer says. How AI will go is exactly the thing we should be uncertain about.
Late to this conversation, but to make this even more specific for later readers: Yudkowsky explicitly lays out, as one of the a priori assumptions of his "Rationalism," that the physical universe is deterministic and calculable. In other words, he dismisses quantum physics and by sheer force of will returns himself to Newton. Amazing stuff.
Important post, yeah.
> "This is the only scenario he can imagine that could possibly do anything to stop what he thinks is the end of the world." Obvious point that many miss, especially outside the subculture. That's what he's trying to convey: this is what it takes - crazy dangerous counter-strategy. Throwing down the gauntlet of "please find a better one if you can, but for now, this is the kind of territory we sleepwalked into."
And since then, all the sneering takes out there since that piece don't really comfort me in the grand plan of "trusting other people to be smart."
Though:
> "we do this collectively, in public, or not at all."
is exactly what he's doing (sharing what he thinks in public, without sugarcoating or comfortable silent despair) and what he's advocating for (a full collective treaty).
Hey QC, I am sorry I never got around to sending you that DM I promised, because writing it brought up a lot of complex feelings that refused to let themselves be arranged in a way that a human being would find readable or understandable.
Having said that, reading this piece overlaps quite a bit with what I was going to write.
I am not sure if you intended this, but reading this made me sympathize a bit more with Eliezer, even as I sympathized with you and myself. If that was not your intent, I apologize.
Hope you are doing well.
no worries at all man, feelings gonna feel. i actually on some level do have a lot of respect for eliezer, at least the eliezer of 10 years ago, for so sincerely working towards the most important thing he could find. i was gonna write more about that here but the ending i came up with didn't leave a lot of room for it.
still interested in the DM fwiw and it's totally fine if it feels incoherent!
"eliezer’s fantasy for how this was gonna go was clearly explained in harry potter and the methods of rationality - a single uber-genius, either him or someone else he was gonna find, figuring out AI safety on their own, completely within the comfort of their gigantic brain, because he doesn’t trust other people. that’s not how any of this is gonna go. none of us are smart enough individually to figure out what to do. we do this collectively, in public, or not at all. all i can do is be a good node in the autistic peer-to-peer information network. beyond that it’s in god’s hands."
This is one of very few things I've read that can touch what I mean when I say "the best thing that ever happened for AI alignment was LLMs not being understood by their creators".
Good post.
Oof. I'd like to give you a hug for the obvious pain and confusion and frustration you're feeling, but at the same time...
"eliezer’s fantasy for how this was gonna go was clearly explained in harry potter and the methods of rationality - a single uber-genius, either him or someone else he was gonna find, figuring out AI safety on their own, completely within the comfort of their gigantic brain, because he doesn’t trust other people"
This seems not just like a bad read of HPMOR, given the obvious ways the story shows this going wrong, but also a very unfair view of what Eliezer has been trying to do for years now, which is to create a community of people who care enough about this and are capable of working on it.
I also feel inclined to point out that making assertions about how well or poorly someone else is emotionally handling something feels like... poor form coming from someone who is themselves clearly unable to do it? Your arbitration would be suspect even if you weren't so personally aggrieved by how your inner narrative treats and mythologizes him. From the outside, this inner dialogue reads to me not just as reactive, but also as moralizing in a way that I suspect you would find repellent if you saw someone else doing it to someone else.
damon i don't give a shit about being fair to eliezer
I have to admit that reading that made me feel more sad than I can remember being in a long while. I didn't think you were the type of person who could say that about anyone.
(Part of me is cringing at having said this, because it feels patronizing or frame-pushing. But also it's my honest feeling and reaction, so... *shrugs* Let me know if you'd rather I have kept it to myself)
The problem IMHO is that he's never seriously worked on the problem* and, more importantly, beyond his lengthy and overly complex thought experiments he has no evidence to support a doomsday scenario... What he is saying is the equivalent of "turning on CERN will destroy the world."
*Claiming you can't show the work isn't evidence of work.
If he were the only person making these claims, your comment might be pointing at something meaningful, but plenty of people who have worked in AI for decades, including well-established ones like Geoffrey Hinton, have also reached similar conclusions, so trying to dismiss Eliezer's arguments by pointing to his work is a bad argument.
You don't in fact need evidence to make a prediction. You need a good understanding of reality. Evidence can contradict predictions, but if you don't have evidence to contradict his model of reality, you need to be able to point to errors in it, and "lengthy and overly complex" are not errors.
I feel like we're in a very difficult situation, because there's a real problem, but the people who identified it, and jumped up and down about it first, were, well...
We have to move past that. It would be foolish to let our strategy for a human future be shaped by this. But how? This is easier said than done, and a lot of people understandably feel burnt.
We need a renewal, by people who are sensible and in a sense *normal* and, perhaps even more critically, have the appearance of being sensible and *normal*.
really good post
zowwee wowwee babaowee!!