For raising the sanity waterline, the Freakonomics books do more than the Sequences.
Hmmm, if I’m going to talk about “applied intelligence” and “practical results”, I really have to concede this point to you, even though I really don’t want to.
The Sequences feel like they demonstrate more intelligence, because they appeal to my level of thinking, whereas Freakonomics feels like it is written to a more average-intelligence audience. But, of course, there’s plenty of stuff written above my level, so unless I privilege myself rather dramatically, I have to concede that Eliezer hasn’t really done anything special. Especially since a lot of his rationalist ideas are available from other sources, if not outright FROM other sources (Bayes, etc.)
I’d still argue that the Sequences are a clear sign that Eliezer is intelligent (“bright”) because clearly a stupid person could not have done this. But I mean that in the sense that probably most post-graduates are also smart—a stupid person couldn’t make it through college.
Um… thank you for breaking me out of a really stupid thought pattern :)
He is obviously PhD-level bright, and probably quite a bit above the average PhD-holder's level. He writes well, he has learned quite a lot of cognitive science, and I think that writing a thesis would be more an expenditure of diligence and time than of effort for him.
On the other hand, some of his writings make me think that, due to relatively limited practice, he doesn't have a feel for, say, what is and isn't possible with programming. This also makes me heavily discount his position on FOOM when it clashes with the predictions of people in the field, with the predictions of, say, Jeff Hawkins (who has studied both AI and neuroscience), and with Hanson's economic arguments, all at the same time.
It feels to me that, when he taught himself, he skipped the fundamentals and everything that wasn't immediately rewarding.
The AI position is kind of bizarre. I know that people who themselves have some sort of ability gap when it comes to innovation (similar to a lack of mental-visualization capability, but for innovation) assume that all innovation is done by a straightforward serial process (the kind that can be sped up enormously on a computer), much as people who can't mentally visualize assume that tasks done using mental imagery are done without mental imagery. If you are like this and you come across something like Vinge's "A Fire Upon the Deep", then I can see how you might freak out about foom, 'Novamente is going to kill us all' style. There are people who think AI will eventually make us obsolete, but very few of them believe in the same sort of foom.
As for computation theory, he didn’t skip all the fundamentals, only some parts of some of them. There are some red flags, though.
By the way, I wonder where the "So you want to become Seed AI programmer" article from http://acceleratingfuture.com/wiki (long broken) can be found. It would be useful to have it around, or to have it publicly disclaimed by Eliezer Yudkowsky: it did help me decide whether I see any value in SIAI's plans or not.
There's an awful lot of fundamentals, though… I replied to a comment of his very recently. It's not a question of what he skipped; it's a question of the few things he didn't skip. If you've got 100 outputs with 10 possible values each, you get 10^100 possible actions (and that's not even big for innovation). There is nothing mysterious about being unable to implement something that deals with that in the naive way. And if you use better methods than brute-force maximization, well, some functions are easier to find maxima of analytically; nothing mysterious about that either. Ultimately, you don't find successful autodidacts among people who had the opportunity to obtain an education the normal way at a good university.
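To make the combinatorics concrete, here is a minimal Python sketch (the sizes, the particular utility function, and the variable names are purely illustrative, not anything from the discussion): with 100 outputs of 10 values each, brute-force search is hopeless, but a utility that happens to decompose into independent per-output terms can be maximized coordinate by coordinate.

```python
from itertools import product

# Toy setting: an "action" is a vector of N outputs, each taking one of K values,
# so the naive search space has K**N actions. With N = 100, K = 10 that is 10**100,
# far beyond brute force; here N is shrunk so the naive version actually runs.
N, K = 6, 4

def utility(action):
    # Illustrative separable utility: a sum of independent per-output terms.
    return sum((x - 1.5) ** 2 for x in action)

# Naive method: enumerate all K**N = 4096 actions and keep the best one.
best_brute = max(product(range(K), repeat=N), key=utility)

# Structured method: because the utility is a sum of per-output terms, each
# coordinate can be maximized on its own -- N*K evaluations instead of K**N.
best_coord = max(range(K), key=lambda x: (x - 1.5) ** 2)
best_structured = (best_coord,) * N

assert utility(best_brute) == utility(best_structured)
print(utility(best_brute))  # 13.5 either way
```

The toy numbers don't matter; the point is that how "effectively" a function can be maximized depends entirely on its structure, which is also the issue with the 'mysteriously' rhetoric discussed below.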
At this point you are being somewhat mean. It does look like honest sloppy writing on his part. With a minimum of goodwill I can accept that he meant "effectively maximizing the expectation of". Also, it would still be somewhat interesting if only precisely one function could be maximized; at least some local value manipulations could be possible, after all. So it is not that obvious.
About autodidacts: the problem here is that even getting an education at some reputable place can still leave you with a lot of skipped fundamentals.
If he means "effectively maximizing the expectation of", then there is nothing mysterious about different levels of 'effectively' being available for different functions, and his rhetorical point about 'mysteriously' falls apart.
I agree that formal education also allows for skipped fundamentals. Self-education can be good if one has good external critique, such as learning to program and having the computer tell you when you're wrong. Blogging, not so much. Internal critique is possible but rarely works, and doesn't work for things that are in the slightest bit non-rigorous.