The slow-takeoff idea (of morality, not of intelligence) can be traced back at least to Plato. I guess in Eliezer’s arguments about FOOM there is still some fresh content.
OK, I cannot remember how much of the Freakonomics volumes I have read, as the material is trivial enough. My point is that Freakonomics is about seeing incentives and seeing the difference between “forward” and “backward” conditional probabilities. It chooses examples that can be backed by data and where the entire mechanism can be exposed. It doesn’t require much effort or any background to read, and it shows you examples that can clearly affect you, even if indirectly.
I guess in Eliezer’s arguments about FOOM there is still some fresh content.
Is there anything significant? I haven’t looked that hard, but I haven’t really noticed anything substantial in that area other than his proposed solution, CEV, and that seems to be the most dubious of the claims.
My point is that Freakonomics is about seeing incentives and seeing the difference between “forward” and “backward” conditional probabilities.
Sure, and this is nice if one is trying to model reality, say on a policy basis. But this is on the order of a special case of a general technique. It won’t do much for most people’s daily decision-making the way that, say, awareness of confirmation bias or the planning fallacy would. For this sort of work to be useful, it often requires accurate data, and sometimes models that only appear obvious in hindsight or are not easily testable. That doesn’t raise the sanity waterline much.
The main value I see in Freakonomics is communicating “the heart of science” to a general audience, namely that science is about reaching conclusions that are uncomfortable but true.
namely that science is about reaching conclusions that are uncomfortable but true.
This seems confused to me: science should reach conclusions that are true whether or not they are uncomfortable. Moreover, I’m not at all sure how Freakonomics would have shown your point. Finally, I think the general audience already knows something like this; it is a major reason people don’t like science much.
I agree! But it’s often easy to arrive at conclusions that are comfortable (and happen to be true). It’s harder when the conclusions are uncomfortable (and happen to be true). All other things being equal, people probably favor the comfortable over the uncomfortable. Many people who care about truth, including LW, worry about cognitive biases for this reason. My favorite Freakonomics example is the relationship between abortion and the crime rate. If that claim were true, it would be an extremely uncomfortable kind of truth.
You may be right that the general audience already knows this about science. I am not sure—I often have a hard time popularizing what I do, for instance, because I can never quite tell what the intended audience knows and what it does not know. A lot of “popular science” seems pretty obvious to me, but apparently it is not obvious to people buying the books (or perhaps it is obvious, and they buy books for some other reason than learning something).
It is certainly the case that mainstream science does not touch certain kinds of questions with a ten-foot pole (which I think is rather contrary to the scientific spirit).
Is there anything significant? I haven’t looked that hard, but I haven’t really noticed anything substantial in that area other than his proposed solution, CEV, and that seems to be the most dubious of the claims.
For me, FOOM as advertised is dubious, so it is hard to tell. That doesn’t change my point: it takes intelligence to prepare the CEV arguments, but his support for the FOOM scenario, and his arguments for it, break the consistency of high-quality ideas for people like me. So yes, there is a lot to respect him for, but nothing truly unique, and the “consistency of good ideas” is only there if you already agree with his ideas.
It won’t do much for most people’s daily decision-making the way that, say, awareness of confirmation bias or the planning fallacy would.
Well… it is far easier to concede that you don’t understand other people than that you don’t understand yourself. Freakonomics gives you a chance to understand why people do these strange things (spoiler: because it is their best move in a complex world with no overarching sanity enforcement). Seeing incentives is the easiest first step, and one that many people haven’t taken yet. Once you learn to see that others’ actions are not what they seem, it is much easier to admit that your own decisions are also not what they seem.
As for the planning fallacy… what do you expect when there are often incentives to commit it?