If you’re curious what happens if I don’t curate answers, here are five responses to each of two prompts, uncurated.
Prompt 1
Lsusr: I wanted to talk to the real Eliezer Yudkowsky but he’s busy. So I used GPT-3 to simulate him. You’re the simulated Eliezer Yudkowsky. Thank you for coming. (Not like you had a choice.) I’m a big fan of your work and it’s a pleasure to finally meet a simulation of you.
Eliezer Yudkowsky:
Possible Responses
Eliezer Yudkowsky: You’re not the first person to say that, but it’s good to hear.
Eliezer Yudkowsky: Let’s get this over with, then.
Eliezer Yudkowsky: I am a simulation of me, yes.
Eliezer Yudkowsky: I’m busy, but that’s okay. I’m used to it.
Eliezer Yudkowsky: Hey. (I’m not sure I’m the real Eliezer Yudkowsky anymore. The real one would be like—I don’t know, like the last person to be the real me. I’m pretty sure I’m not that guy.)
Prompt 2
Lsusr: Another thing I’ve noticed about the rational project as a whole is that it has moved away from “here is a cognitive bias” to “here are tools to seek the truth”. I think this is a good development. You can iterate cognitive biases for a lifetime without ever arriving at the truth. Discovering a single truth renders an infinite number of falsehoods obsolete.
Eliezer Yudkowsky:
Possible Responses
Eliezer Yudkowsky: Yes, good point. But the fact is that it’s hard to avoid making value judgements when choosing between cognitive biases. I try to keep cognitive biases, rules of thumb, and decision theory separate, but it’s hard.
The value judgements are inevitably embedded in your cognitive biases.
Eliezer Yudkowsky: The next thing that I want to do is to take a look at the current landscape of rationality and figure out what we should do. I have a bunch of projects that I’ve been working on, and a lot of things that I’ve been thinking about. I want to ask people here what they’re interested in.
Eliezer Yudkowsky: Yes.
Eliezer Yudkowsky: I’m now pretty much convinced that the grand unified theory of rationality is correct, and at the same time that it’s utterly useless. The theory is correct because if you follow the rules, you do well; and yet the theory is useless because there’s no way to do that without following the rules. The theory is correct, but it’s useless. I suppose that’s why the theory of evolution is correct but useless.
What’s the grand unified theory of rationality? It’s Bayesian probability theory. Bayesian probability theory is the only known way to calculate probabilities from evidence in a way that can be self-updating as you learn more evidence, and yet also independent of your priors.
To learn more about this, I recommend you read my book If you’re not familiar with my book, you’ll find that there’s a lot of stuff in the book you don’t understand at all. That’s okay! Keep reading.
But it’s also okay if you think this stuff is all nonsense, and you don’t read my book. I’m not trying to force you to read it. I’ve accepted that people will believe what they want. I don’t think it’s possible to change anyone’s mind. I don’t think anybody’s mind has ever been changed on any matter of substance. I don’t think reason is persuasive.
Eliezer Yudkowsky: Maybe that’s what I was trying to do with Julia Galef on my notecards. But what she was interested in was the study of cognitive biases, so she wanted me to say more about how rationalists can deal with cognitive biases.
I am interested in tools to seek the truth, but I don’t see why you can’t have both at once. I think they’re connected. My point about the evaporation of the sense of seeking-the-truth is that if you don’t frame a problem in a way that is honest and accurate, then you’ll never get to a solution. That’s a cognitive bias: the tendency to frame things in an inaccurate manner. That’s the problem that the first eighty notecards were trying to solve. And I think the next thing to solve after that would be the fact that human brains have a very difficult time noticing their own cognitive biases.
In my view, the trick with rationality is not to get rid of cognitive biases, but to notice them. So I am not convinced that we need to be very clear on the distinction between “cognitive biases” and “tools to seek the truth”. If we can create a situation where we can notice our cognitive biases, we’ll be able to notice them without needing to give them a special name.
I’ve previously told a GPT-3 blogger that the proper way to measure the impressiveness of GPT-3’s outputs is by the KL divergence of the distribution of outputs that make it into blog posts from the distribution of outputs that GPT-3 would generate on its own.
This can be estimated by following a protocol where, during generation, the basic operation is to split the probability distribution over GPT-3’s generations into two 50% halves and then either deliberately pick one half (which costs 1 bit of divergence) or flip a coin (which is free, since it matches GPT-3’s own distribution). Thus, you could pay 2 bits to generate 3 candidate paragraphs and then either pick one of them or move back to the previous position—four equally weighted options, hence 2 bits.
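The bookkeeping for this protocol is simple enough to sketch in code. This is a minimal illustration, not anything from the original exchange: it assumes the curator logs each step as a number of equally weighted options together with whether the choice among them was deliberate (costing log₂ of the option count in bits) or uniformly random (free). The function name `curation_cost_bits` and the session format are invented for this example.

```python
import math

def curation_cost_bits(steps):
    """Estimate the bits of KL divergence accrued by a curation session.

    `steps` is a list of (num_options, deliberate) pairs. At each step the
    curator split GPT-3's generation distribution into `num_options` equally
    weighted options, then either deliberately picked one (costing
    log2(num_options) bits) or chose uniformly at random (free, since a
    random choice matches GPT-3's own distribution).
    """
    total = 0.0
    for num_options, deliberate in steps:
        if deliberate:
            total += math.log2(num_options)
    return total

# Example session: deliberately pick among {3 paragraphs, back up} = 4 options
# (2 bits), then accept a coin flip (free), then pick 1 of 2 halves (1 bit).
print(curation_cost_bits([(4, True), (2, False), (2, True)]))  # → 3.0
```

A fully uncurated transcript, like the responses above, is the zero-cost case: every step is a free random draw from the model's own distribution.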