I don’t care about someone who has had a single idea that happens to be smarter than Eliezer’s best: it’s easy to have a single outlier; it’s much harder to have consistently good ideas.
You are replying to someone who thinks the FOOM description is misguided, for example. And there is not much evidence for FOOM; the inferences there are quite shaky. Eliezer has promoted many ideas that dilute the “consistently good” description unless you agree with his priors.
They’re about teaching people who didn’t come up with this one on their own in 5th grade.
And it doesn’t look like it succeeds at this...
There is a range of intelligence + knowledge where you generally understand the underlying concepts and were quite close, but couldn’t put it all into shape. Those people would like the Sequences, unless a prior clash (or value clash...) makes them too uncomfortable with the shakier topics. These people are noticeably above the waterline, by the way.
For raising the sanity waterline, the Freakonomics books do more than the Sequences.
Minor note: the intelligence explosion/FOOM idea isn’t due to Eliezer. It seems to originate with I.J. Good. I don’t know whether Eliezer came up with it independently of Good, but I suspect he didn’t arrive at it on his own.
For raising the sanity waterline, the Freakonomics books do more than the Sequences.
This seems dubious to me. The original book might suggest some interesting patterns and teach one how to do Fermi calculations but not much else. The sequel book has quite a few problems. Can you expand on why you think this is the case?
The slow-takeoff idea (of morality, not of intelligence) can be traced back as far as Plato. I guess there is still some fresh content in Eliezer’s arguments about FOOM.
OK, I cannot remember how much of the Freakonomics volumes I have read, as they are trivial enough. My point is that Freakonomics is about seeing incentives and seeing the difference between “forward” and “backward” conditional probabilities. It chooses examples that can be backed by data and where the entire mechanism can be exposed. It doesn’t require much effort or any background to read, and it shows you examples that clearly can affect you, even if indirectly.
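To make the “forward”/“backward” distinction concrete, here is a minimal Python sketch; the medical-test numbers are illustrative assumptions of mine, not figures from the book:

```python
# "Forward" probability: P(evidence | hypothesis), e.g. what a test maker reports.
# "Backward" probability: P(hypothesis | evidence), what you actually want to know.
# Bayes' theorem connects them, and the two numbers can differ wildly.

p_disease = 0.01             # assumed base rate of the condition
p_pos_given_disease = 0.95   # forward: test sensitivity
p_pos_given_healthy = 0.05   # forward: false-positive rate

# Total probability of a positive result (law of total probability).
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Backward: probability of disease given a positive result (Bayes' theorem).
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(f"P(positive | disease) = {p_pos_given_disease:.2f}")  # 0.95
print(f"P(disease | positive) = {p_disease_given_pos:.2f}")  # ~0.16
```

Mistaking the 0.95 for the 0.16 is exactly the forward/backward mix-up this kind of example guards against.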
I guess there is still some fresh content in Eliezer’s arguments about FOOM.
Is there anything significant? I haven’t looked that hard, but I haven’t noticed anything substantial there other than his proposed solution of CEV, and that seems to be the most dubious of the claims.
My point is that Freakonomics is about seeing incentives and seeing the difference between “forward” and “backward” conditional probabilities.
Sure, and this is nice if one is trying to model reality on, say, a policy level. But this is on the order of, say, a subsequence of a general technique. This won’t affect most people’s daily decision-making the way that, say, awareness of confirmation bias or the planning fallacy would. For this sort of work to be useful, it often requires accurate data, and sometimes models that only appear obvious in hindsight or are not easily testable. That doesn’t raise the sanity waterline much.
The main value I see in Freakonomics is communicating “the heart of science” to a general audience, namely that science is about reaching conclusions that are uncomfortable but true.
namely that science is about reaching conclusions that are uncomfortable but true.
This seems confused to me: science should reach conclusions that are true whether or not they are uncomfortable. Moreover, I’m not at all sure how Freakonomics would have shown your point. Furthermore, I think the general audience already knows something like this; it is a major reason people don’t like science much.
I agree! But it’s often easy to arrive at conclusions that are comfortable (and happen to be true). It’s harder when conclusions are uncomfortable (and happen to be true). All other things being equal, folks probably favor the comfortable over the uncomfortable. Lots of folks who care about truth, including LW, worry about cognitive biases for this reason. My favorite Freakonomics example is the relationship between abortion and the crime rate. If that claim were true, it would be an extremely uncomfortable kind of truth.
You may be right that the general audience already knows this about science. I am not sure—I often have a hard time popularizing what I do, for instance, because I can never quite tell what the intended audience knows and what it does not know. A lot of “popular science” seems pretty obvious to me, but apparently it is not obvious to people buying the books (or perhaps it is obvious, and they buy books for some other reason than learning something).
It is certainly the case that mainstream science does not touch certain kinds of questions with a ten foot pole (which I think is rather not in the scientific spirit).
Is there anything significant? I haven’t looked that hard, but I haven’t noticed anything substantial there other than his proposed solution of CEV, and that seems to be the most dubious of the claims.
For me, FOOM as advertised is dubious, so it is hard to tell. That doesn’t change my point: it takes intelligence to prepare the CEV arguments, but his support for the FOOM scenario, and his arguments for it, break the consistent high quality of his ideas for people like me. So, yes, there is a lot to respect him for, but nothing truly unique, and the “consistency of good ideas” is only there if you already agree with his ideas.
This won’t affect most people’s daily decision-making the way that, say, awareness of confirmation bias or the planning fallacy would.
Well… It is much easier to concede that you don’t understand other people than that you don’t understand yourself. Freakonomics gives you a chance to understand why people do these strange things (spoiler: because it is their best move in a complex world with no overarching sanity enforcement). Seeing incentives is the easiest first step, and one many people haven’t made yet. After you learn to see that others’ actions are not what they seem, it is much easier to admit that your own decisions are also not what they seem.
As for the planning fallacy… what do you expect when there are often incentives to commit it?
For raising the sanity waterline, the Freakonomics books do more than the Sequences.
Hmmm, if I’m going to talk about “applied intelligence” and “practical results”, I really have to concede this point to you, even though I really don’t want to.
The Sequences feel like they demonstrate more intelligence because they appeal to my level of thinking, whereas Freakonomics feels like it is written for a more average-intelligence audience. But of course there’s plenty of stuff written above my level, so unless I privilege myself rather dramatically, I have to concede that Eliezer hasn’t really done anything special. Especially since a lot of his rationalist ideas are available from other sources, if not taken outright FROM other sources (Bayes, etc.).
I’d still argue that the Sequences are a clear sign that Eliezer is intelligent (“bright”) because clearly a stupid person could not have done this. But I mean that in the sense that probably most post-graduates are also smart—a stupid person couldn’t make it through college.
Um… thank you for breaking me out of a really stupid thought pattern :)
He is obviously PhD-level bright, and probably quite a bit above the average PhD holder. He writes well, he has learned quite a lot of cognitive science, and I think that writing a thesis would be an expenditure of diligence and time for him more than of effort.
On the other hand, some of his writings make me think that, due to relatively limited practice, he doesn’t have a feel for, say, what is and is not possible with programming. This also makes me heavily discount his position on FOOM when it clashes with the predictions of people in the field, with the predictions of, say, Jeff Hawkins, who studied both AI and neurology, and with Hanson’s economic arguments.
It feels to me that, when he taught himself, he skipped all the fundamentals and everything not immediately rewarding.
The AI position is kind of bizarre. I know that people who themselves have some sort of ability gap when it comes to innovation (similar to lacking mental visualization, but for innovation) assume that all innovation is done by a straightforward serial process (the kind that can be greatly sped up on a computer), much as people who can’t mentally visualize assume that tasks done using mental imagery are done without it. If you are like this and you come across something like Vinge’s “A Fire Upon the Deep”, then I can see how you might freak out about FOOM, “Novamente is going to kill us all” style. There are people who think AI will eventually make us obsolete, but very few of them believe in the same sort of FOOM.
As for computation theory, he didn’t skip all the fundamentals, only some parts of some of them. There are some red flags, though.
By the way, I wonder where the “So you want to become Seed AI programmer” article from http://acceleratingfuture.com/wiki (long broken) can be found. It would be useful to have it around, or to have it publicly disclaimed by Eliezer Yudkowsky: it helped me decide whether I saw any value in SIAI’s plans.
There’s an awful lot of fundamentals, though… I replied to a comment of his very recently. It’s not a question of what he skipped; it’s a question of the few things he didn’t skip. With 100 outputs of 10 values each, you get 10^100 possible actions (and that’s not even big for innovation). There is nothing mysterious about being unable to implement something that deals with that naively. And if you want better methods than brute-force maximizing, well, some functions are easier to find maxima of analytically; nothing mysterious about that either. Ultimately, you don’t find successful autodidacts among people who had the opportunity to get a normal education at a good university.
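For concreteness, a small Python sketch of the combinatorics and of the brute-force-versus-analytic contrast; the toy objective function is my own illustrative choice:

```python
# 100 outputs with 10 possible values each gives 10**100 candidate actions,
# far more than any brute-force enumeration could ever cover.
n_actions = 10 ** 100
print(f"{float(n_actions):.1e} candidate actions")  # 1.0e+100 candidate actions

# Brute force is only feasible on tiny spaces, e.g. 10 candidate values:
def f(x):
    return -(x - 3) ** 2 + 7

best = max(range(10), key=f)
print(best, f(best))  # 3 7

# By contrast, f has an analytic maximum: f'(x) = -2(x - 3) = 0 gives x = 3,
# with no search at all. Some objective functions admit this shortcut; most do not.
```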
At this point you are being somewhat mean. It does look like honest sloppy writing on his part. With a minimum of goodwill, I can accept that he meant “effectively maximizing the expectation of”. Also, it would still be somewhat interesting if only precisely one function could be maximized; at least some local value manipulations would be possible, after all. So it is not that obvious.
About autodidacts: the problem here is that even getting an education at some reputable place can still leave you with a lot of skipped fundamentals.
If he means “effectively maximizing the expectation of”, then there is nothing mysterious about different levels of “effectively” being available for different functions, and his rhetorical point about “mysteriously” falls apart.
I agree that education also allows for skipped fundamentals. Self-education can be good if one has good external critique, such as learning to program and having the computer tell you when you’re wrong. Blogging, not so much. Internal critique is possible but rarely works, and it doesn’t work at all for anything that is in the slightest bit non-rigorous.