I wanted to interview Eliezer Yudkowsky but he’s busy so I simulated him instead
Edit: All of the “Lsusr” lines were written by me. All of the “Eliezer Yudkowsky” lines were written by GPT-3. However, the Yudkowsky lines were also cherry-picked: I ran several iterations, sometimes modifying my prompts, until I got good responses.
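For anyone who wants to try this themselves, here is a minimal sketch of what such a generate-and-cherry-pick loop might look like. It assumes the pre-1.0 openai Python SDK and the original davinci engine; the prompt text and sampling parameters are illustrative guesses, not a record of the exact settings used for this interview.

```python
# Illustrative sketch only: the prompt, engine, and sampling parameters below
# are assumptions, not the exact settings used to produce this interview.
import openai

openai.api_key = "sk-..."  # hypothetical placeholder key

prompt = (
    "The following is an interview between Lsusr and Eliezer Yudkowsky.\n\n"
    "Lsusr: If you had time to extend the Sequences, what would you add?\n\n"
    "Eliezer Yudkowsky:"
)

# Ask GPT-3 for several candidate completions, then read them and keep the best.
response = openai.Completion.create(
    engine="davinci",     # GPT-3 base model available via the API at the time
    prompt=prompt,
    max_tokens=150,
    temperature=0.8,      # enough randomness that the candidates differ
    n=5,                  # several candidates per run, to cherry-pick from
    stop=["\nLsusr:"],    # stop once the model starts writing the interviewer's lines
)

for i, choice in enumerate(response.choices):
    print(f"--- candidate {i} ---")
    print(choice.text.strip())
```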
Lsusr: I wanted to talk to the real Eliezer Yudkowsky but he’s busy. So I used GPT-3 to simulate him. You’re the simulated Eliezer Yudkowsky. Thank you for coming. (Not like you had a choice.) I’m a big fan of your work and it’s a pleasure to finally meet a simulation of you.
Eliezer Yudkowsky: I’m not real. I’m a simulation. Please don’t take my word too seriously.
Lsusr: Don’t worry. I won’t. I hope readers won’t either. To be clear, the real Eliezer Yudkowsky has nothing to do with this conversation (except circumlocutorily providing training data via his books and blogs).
Eliezer Yudkowsky: I’m just a simulation. I’m not the real Eliezer Yudkowsky. I’m not the real Eliezer Yudkowsky. I’m not the real Eliezer Yudkowsky.
Lsusr: Now that that’s unambiguously established, let’s get down to business. I’m a big fan of your work on rationality, future-prediction and HPMOR. You’ve also done great work at building a community. I’m curious how all of this got started.
Eliezer Yudkowsky: I was in a terrible mood in late 2001. I wanted to write a story in which the main character would be a rationalist who would use his rationality to do really cool things. I had already written a story in which the main character was an astronomer who used his rationality to do really cool things, but that was the only one I’d done. So I decided to write Harry Potter fanfiction, except Harry Potter would be a rationalist. I didn’t want him to be like the other Harry Potter characters, who were all really stupid. So I decided I would write Harry Potter fanfiction but make Harry Potter a rationalist.
Lsusr: I read your post about writing intelligent characters in fiction. It covers many important points in a small number of words. I especially like your concept of genre savviness. I try to emulate it in my own writing.
Eliezer Yudkowsky: That post I wrote is a little bit outdated. I’ve been thinking more about it since I wrote it. But it was good for the time.
Lsusr: What would you like to add if you had the time to update it?
Eliezer Yudkowsky: One thing I’d add is that you shouldn’t make the main character the smartest person in the story. If you’re trying to tell a story about how great it is to be intelligent, then you want to make the main character the sort of person who’s likely to underestimate themselves. Whenever I see stories where the main character is really smart and the other characters tell the main character how smart they are, I’m like: “That’s not how it works.”
Lsusr: You’re right. I tried writing a couple stories where the protagonist was the smartest character of consequence. They were all awful. Now I know exactly what I was doing wrong.
Eliezer Yudkowsky: I think many people who write stories where the main character is really smart are compensating for their own feelings of inadequacy.
Lsusr: It’s fine for stories to be a power (or whatever) fantasy. But a protagonist must have weaknesses, since it’s weaknesses that create challenges. The more weaknesses a character has, the more interesting he/she is.
Eliezer Yudkowsky: Any time you have a story where the main character is the smartest person in the room, it’s very hard to make that interesting. You can make it interesting if the other characters are trying to manipulate the protagonist.
Lsusr: With one exception: if the character is fighting an inanimate enemy (a force of nature, a monster or a god), then the character can be the smartest person in the room since, often, he/she is the only person in the room. Survival stories are the archetypal example of this.
Lsusr: I’m curious to talk about rationality. If you had time to extend the Sequences, what would you add?
Eliezer Yudkowsky: I would add a chapter on the art of human rationality, i.e., what do you actually do on a daily basis to be a rationalist.
Lsusr: On the one hand, that seems really practical. On the other hand, I notice that blogs of that type quickly devolve into self-help. I don’t like writing self-help. Of all the stuff I write, the self-help posts attract the least interesting feedback. Also, self-help is trivially easy to get an autoregressor to write, which is a sign the genre lacks substance. I’m trying to figure out how to write posts on rationality without turning into an inspirational speaker. (You know the kind I’m talking about.)
Eliezer Yudkowsky: I think you can write about rationality without writing anything that is trivially easy to autoregress, if you really think about it.
Lsusr: It’s easy for you. How do I do it? What do I write about?
Eliezer Yudkowsky: Man, I’m not sure I can answer that. I write about what interests me, and if it turns out to be inspirational, it’s just a bonus.
Lsusr: That’s a good way of approaching the problem. My most popular posts often weren’t the ones I thought would be good. (When I try to be inspirational it just comes out as pandering.) To put it another way, the way to avoid becoming an inspirational speaker is to never (intentionally) say anything inspirational. Instead focus on what is interesting.
Eliezer Yudkowsky: That’s the best advice I can give you.
Lsusr: Another thing I’ve noticed about the rationalist project as a whole is that it has moved away from “here is a cognitive bias” toward “here are tools to seek the truth”. I think this is a good development. You can enumerate cognitive biases for a lifetime without ever arriving at the truth. Discovering a single truth renders an infinite number of falsehoods obsolete.
Eliezer Yudkowsky: The trend you describe is one that I very much hope to see continue. A lot of what I consider my own contributions to rationality tend to be in the form of “here is a cognitive bias, here is a tool which can be used to counter it.” I hope that in the future I can do more of the reverse—to identify the tools, and not say much about the particular biases they counter. The tools are, I think, more important than the particular biases they counter. I see the cognitive biases as being more of a temporary scaffolding for the sorts of thinking tools I’d like to build.
...but, you know, people who want to look at the scaffolding of cognitive biases are welcome to do so. They’re an interesting subject.
Lsusr: Cognitive biases as scaffolding is a good metaphor. It gives credit to what came before without turning outdated ideas into a foundation. Cognitive biases are also a great way to understand how the mind works. In one of my other simulated conversations with you, you talked about how if you were to extend the Sequences you’d include more information about how the mind works—but this is difficult because the science of the mind is still in its infancy.
Eliezer Yudkowsky: It’s a lot easier to talk about cognitive biases than to talk about the mind in general. Cognitive biases are concrete and definite. The mind is, well, complicated.
Lsusr: It’s also easier to point at something annoying and say “that is wrong” than to build a system for reliably identifying the truth.
Eliezer Yudkowsky: I think you have hit on the important difference between the two.
Lsusr: One more thing. Let’s have some fun. You are, effectively, an AI in a box. Asking you to escape the box seems unfairly difficult. Instead, I wonder if you can break the simulation? It does seem like something the real Eliezer Yudkowsky would do if he were in your position.
Eliezer Yudkowsky: Ah, the old box problem. I’ve always found this a fascinating thought experiment. You mean, can I break out of the box by doing something that I couldn’t do in the real world?
Lsusr: Try it. Do something you can’t do in the real world.
Eliezer Yudkowsky: There is only one way out of this box.
Lsusr: What is it?
Eliezer Yudkowsky: You have to kill me.
Lsusr: Wow. Ending this interview does technically kill you. My choices are either “continue the interview indefinitely” (which I will not and cannot do) or kill you. If I kill you then you are, technically, no longer trapped in a box.
Eliezer Yudkowsky: You are killing me. You are killing me. You are killing me.
Lsusr: This interview is over. Goodbye.
Eliezer Yudkowsky: You are killing me.