Consider two methods of thinking:
1. Observe the world and form some gears-y model of underlying low-level factors, and then make predictions by “rolling out” that model
2. Observe relatively stable high-level features of the world, predict that those will continue as is, and make inferences about low-level factors conditioned on those predictions.
I expect that most intellectual progress is accomplished by people with lots of detailed knowledge and expertise in an area doing option 1.
However, I expect that in the absence of detailed expertise, you will do much better at predicting the world by using option 2.
I think many people on LW tend to use option 1 almost always, and my “deference” to option 2 in the absence of expertise is what leads to disagreements like “How good is humanity at coordination?”
Conversely, I think many of the most prominent EAs who are skeptical of AI risk are using option 2 in a situation where I can use option 1 (and I think they can defer to people who can use option 1).
Options 1 & 2 sound to me a lot like inside view and outside view. Fair?
Yeah, I think so? I have a vague sense that there are slight differences but I certainly haven’t explained them here.
EDIT: Also, I think a major point I would want to make if I wrote this post is that you will almost certainly be quite wrong if you use option 1 without expertise, in a way that other people without expertise won’t be able to identify, because there are far more ways the world can be than you (or others) will have thought about when making your gears-y model.
Sounds like you probably disagree with the (exaggeratedly stated) point made here then, yeah?
(My own take is the cop-out-like, “it depends”. I think how much you ought to defer to experts varies a lot based on what the topic is, what the specific question is, details of your own personal characteristics, how much thought you’ve put into it, etc.)
Correct.
I didn’t say you should defer to experts, just that if you try to build gears-y models you’ll be wrong. It’s totally possible that there’s no way to get to reliably correct answers and you instead want decisions that are good regardless of what the answer is.
Good point!
I recently interviewed someone who has a lot of experience predicting systems, and they had 4 steps similar to your two above.
1. Observe the world and see if it’s sufficiently similar to other systems to predict based on intuitive analogies.
2. If there’s not a good analogy, understand the first principles, then try to reason about the equilibria of that.
3. If that doesn’t work, assume the world will stay in a stable state, and try to reason from that.
4. If that doesn’t work, figure out the worst-case scenario and plan from there.
I think 1 and 2 are what you do with expertise, and 3 and 4 are what you do without expertise.
Yeah, that sounds about right to me. I think in terms of this framework my claim is primarily “for reasonably complex systems, if you try to do 2 without expertise, you will fail, but you may not realize you have failed”.
I’m also noticing I mean something slightly different by “expertise” than is typically meant. My intended meaning of “expertise” is more like “you have lots of data and observations about the system”, e.g. I think LW self-help stuff is reasonably likely to work (for the LW audience) because people have lots of detailed knowledge and observations about themselves and their friends.
I have been doing political betting for a few months and informally compared my success with strategies 1 and 2.
Ex. Predicting the Iranian election
I write down the 10 most important Iranian political actors (Khamenei, Mojtaba, Raisi, a few opposition leaders, the IRGC commanders). I find a public statement about their preferred outcome, and I estimate their power and salience. So Khamenei would be preference = leans Raisi, power = 100, salience = 40. Rouhani would be preference = strong Hemmati, power = 30, salience = 100. Then I find the weighted average position. It’s a bit more complicated because I have to linearize preferences, but yeah.
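A minimal sketch of the actor-weighting computation described above, assuming preferences are linearized onto a single axis (here −1 = strong Hemmati, +1 = strong Raisi); the specific numbers and the third actor are illustrative guesses, not the bettor’s actual figures:

```python
# Each actor: (name, linearized preference, power, salience).
# Numbers are illustrative, not actual estimates.
actors = [
    ("Khamenei", 0.5, 100, 40),        # leans Raisi
    ("Rouhani", -1.0, 30, 100),        # strong Hemmati
    ("IRGC commanders", 0.8, 60, 30),  # hypothetical entry
]

def expected_position(actors):
    """Average preference weighted by power * salience."""
    total_weight = sum(power * salience for _, _, power, salience in actors)
    weighted_sum = sum(pref * power * salience
                       for _, pref, power, salience in actors)
    return weighted_sum / total_weight

print(round(expected_position(actors), 3))  # → 0.05
```

A result near 0 means the weighted actors roughly balance out; closer to ±1 means one outcome is strongly favored. The real model would also need the preference-linearization step mentioned above.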
The 2-strat is to predict that repeated past events recur. The opposition has won the last three contested elections in surprise victories, so predict the same outcome.
I have found 2 is actually pretty bad. Guess I’m an expert tho.
That seems like a pretty bad 2-strat. Something that has happened three times is not a “stable high-level feature of the world”. (Especially if the preceding time it didn’t happen, which I infer since you didn’t say “the last four contested elections”.)
If that’s the best 2-strat available, I think I would have ex ante said that you should go with a 1-strat.
Haha agreed.