For reference, I think a lot of the people at LW love ethereal etherealness so much that the community would marry it if we could. That would be my first guess as to why your article’s substantive point is not a wild success.
Could you elaborate on what you mean by “ethereal etherealness”? What Eliezer is talking about in that post looks to me like what most philosophers would call abstract Platonic entities. And I get the sense (though I may be projecting) that most people here are pretty uncomfortable with those. LWers seem to think it worthwhile to eliminate any reference to anything other than concrete physical referents.
I find the discussions of decision theories a little ethereal sometimes. There is a base assumption Eliezer has made that the manner of making the decision doesn’t matter, so questions of energy efficiency or computational resources used in making a decision don’t come into the discussion. I personally cannot justify that assumption looking at the evolutionary history of brains, i.e. stuff that has worked in the real world. It matters how big the brain is: the smaller the better, if you can get away with it, and likewise the simpler the better.
Quote from a Newcomb’s Problem article:

“We can choose whatever reasoning algorithm we like, and will be rewarded or punished only according to that algorithm’s choices, with no other dependency—Omega just cares where we go, not how we got there.

It is precisely the notion that Nature does not care about our algorithm, which frees us up to pursue the winning Way—without attachment to any particular ritual of cognition, apart from our belief that it wins. Every rule is up for grabs, except the rule of winning.”
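To make the quoted point concrete, here is a minimal sketch (my own illustration, not from the article; every name in it is hypothetical, and it assumes a perfect predictor) of how Omega’s payoff depends only on the choice an agent outputs, never on the procedure that produced it:

```python
# A deliberately minimal sketch (my own, not from the article; names are
# hypothetical) of the quoted point. Assumes a perfect predictor, so the
# payoff is a function of the agent's choice alone.

def omega_payoff(choice):
    """Omega fills the opaque box iff it predicts one-boxing; with a
    perfect predictor that means: iff the agent one-boxes."""
    return 1_000_000 if choice == "one-box" else 1_000

def cheap_one_boxer():
    # Constant-time rule of thumb.
    return "one-box"

def expensive_one_boxer():
    # A wasteful deliberator that reaches the same answer.
    votes = sum(1 for _ in range(10**6))  # burn cycles "thinking"
    return "one-box" if votes > 0 else "two-box"

# Same destination, different roads, identical reward:
assert omega_payoff(cheap_one_boxer()) == omega_payoff(expensive_one_boxer())
```

The two deciders burn very different amounts of computation to reach the same answer, and Omega cannot tell them apart. That is also where the grandparent’s worry bites: Nature, unlike Omega, does charge for the cycles.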
I’ve spent a few days mulling this over and tried writing a reply with a lot of text that needed to be boiled down, with a summary at the top… and then I waited a while, read it again, and it didn’t hang together the way I was hoping it would.
I stand by my general assertion as a useful working hypothesis for guiding behavior relative to this community, but I think I may be incapable of backing it up in a way that is vivid and succinct and comprehensive all at the same time.
I think it is useful to point out that your worthwhile link contains a link to “belief in the implied invisible”, which explains why we should believe in the “existence” of the necessarily unobservable by arguments based on the incomputable.
Which is not to say I think Solomonoff induction isn’t totally sweet, but I think it’s cool the way I think spherical cows and classical economic assumptions are cool—they are inspiring and offer a nice first-draft estimate of the “upper bound” of how things could work.
At the same time I think Jaron Lanier (who coined the term “cybernetic totalism” in order to criticize an over-hyped and over-politicized version of the computer-inspired zeitgeist) is very cool… but he would have to speak with a measure of “delicacy” around here if he wanted upvotes...
You should post your thoughts anyway :). Even if they don’t “hang together”, I bet that they would be an illuminating expression of the impression that this community gives you. And maybe comprehensiveness and vividness would follow from a dialogue about your impressions. (Succinctness is harder to promise ;))
But do people here like that it’s incomputable? Or do they just tolerate that it’s incomputable, because they think that they can make adequate computable approximations? I think that most people here wish that Solomonoff induction were computable (except for those who worry that it would make building an unFriendly AI too easy).
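For what it’s worth, here is a toy sketch of what one of those computable approximations can look like. Everything below (the mini-language, its opcodes, the resource bounds) is my own hypothetical construction for illustration, not a scheme from the literature: enumerate short programs, cap their runtime, and weight each program that reproduces the data by 2^-(description length):

```python
from itertools import product

# Toy language: a program is a sequence of 2-bit opcodes.
# 0b00 -> emit 0, 0b01 -> emit 1, 0b10 -> double the output so far, 0b11 -> halt.
def run(program, max_len=32):
    out = []
    for op in program:
        if op == 0b00:
            out.append(0)
        elif op == 0b01:
            out.append(1)
        elif op == 0b10:
            out = out + out      # "repeat everything emitted so far"
        else:                    # 0b11: halt
            break
        if len(out) >= max_len:  # hard space bound: the hack that buys
            break                # back computability
    return out

def predict_next(prefix, max_ops=6):
    """Approximate P(next bit = 1 | prefix) by summing 2^-(program length)
    over every short program whose output extends the prefix."""
    prefix = list(prefix)
    weight = {0: 0.0, 1: 0.0}
    for n in range(1, max_ops + 1):
        for program in product((0b00, 0b01, 0b10, 0b11), repeat=n):
            out = run(program)
            if len(out) > len(prefix) and out[:len(prefix)] == prefix:
                weight[out[len(prefix)]] += 2.0 ** (-2 * n)  # 2 bits/opcode
    total = weight[0] + weight[1]
    return weight[1] / total if total else 0.5  # no evidence -> uniform

# An alternating prefix makes the toy prior fairly confident the
# continuation is 0 (i.e., that the alternation continues):
print(predict_next([0, 1, 0, 1, 0, 1]))
```

The length and step caps are exactly what buy back computability; the price is that the resulting “prior” depends on arbitrary resource bounds rather than on description length alone, which is one reason people here might wish the real thing were computable.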
You might find this Bloggingheads.tv conversation between Eliezer and Jaron Lanier interesting. (Here’s the corresponding Overcoming Bias thread.)
Other than that BHtv diavlog, I haven’t looked at Lanier’s stuff much. I’ll check out your YouTube link.
ETA: This comment thread from February’s Open Thread did not leave me expecting to find much insight in Lanier’s work.