Right now, it is fashionable to criticize economic models. Reading a thesis on LW that I can see in many other places is not much fun. Furthermore, the thesis as presented is much too strong. For many purposes classical economics works. For example, classical economics does a decent job of predicting how different sorts of goods will influence each other, such as how the price of one good affects demand for closely related substitutes and complements.
I was surprised to see such a negative response to something I found so interesting. This comment and its support suggest either that I’ve written this poorly, or that using economics as an example has gotten people sidetracked from my main point (which is, I suppose, a more specific way of saying I have written it poorly). I shall have to attempt to make the point more clearly and concisely in the near future. This is not intended as a criticism of economics; it is about a particular error in our manner of thinking. The fact that economics gets a lot right is actually beside the point, and since I did not comment on how closely modern economic models conform to reality, nothing I wrote was intended to say economics is useless.
I assume the point you were trying to make was about the general phenomenon of theorists assuming that reality is more like their theories than it really is. I agree that this is very common and saying that people who are falling into this general trap are “assuming nails” seems like a nice shorthand. I think you missed a few things though…
For example, scientific theories are generally not eliminated from practice by demonstrating that they are ill-founded or incorrect. To “win”, what you need to do is offer people who know the existing jargon and have the relevant skill sets something better to do than what they are already doing. Otherwise they may as well keep proving theorems and explaining that they are working out what “the ideal case” looks like, with a nod to physics (where friction is usually neglected until it is explicitly factored in as a correction term). The other thing you could do is attack their funding, but good luck with that :-P
The other issue I see is that your general point falls within a substantial and very long-running “stylistic disagreement” between experimentalists and theorists. My impression is that historically, theorists win the money and fame while experimentalists are mostly remembered as manual laborers. Tesla would be a more recent famous example of this, but a nice non-famous example can be found in the docs for Python’s difflib module:
This is a flexible class for comparing pairs of sequences of any type, so long as the sequence elements are hashable. The basic algorithm predates, and is a little fancier than, an algorithm published in the late 1980’s by Ratcliff and Obershelp under the hyperbolic name “gestalt pattern matching.” The idea is to find the longest contiguous matching subsequence that contains no “junk” elements (the Ratcliff and Obershelp algorithm doesn’t address junk). The same idea is then applied recursively to the pieces of the sequences to the left and to the right of the matching subsequence. This does not yield minimal edit sequences, but does tend to yield matches that “look right” to people.
Timing: The basic Ratcliff-Obershelp algorithm is cubic time in the worst case and quadratic time in the expected case. SequenceMatcher is quadratic time for the worst case and has expected-case behavior dependent in a complicated way on how many elements the sequences have in common; best case time is linear.
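The behavior the quoted docs describe is easy to see directly. Here is a minimal sketch using the standard library’s SequenceMatcher (the example strings are my own):

```python
import difflib

# isjunk=None: no elements are treated as junk when matching.
sm = difflib.SequenceMatcher(None, "abcd", "bcde")

# Longest contiguous match between a[0:4] and b[0:4] is "bcd",
# starting at index 1 in the first string and index 0 in the second.
match = sm.find_longest_match(0, 4, 0, 4)
print(match.a, match.b, match.size)  # 1 0 3

# ratio() = 2 * (total matched elements) / (total elements) = 2 * 3 / 8
print(sm.ratio())  # 0.75
```

The same recursive matching drives difflib’s higher-level helpers such as `get_close_matches` and `unified_diff`.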
Can’t you just hear the gritting of teeth here? The resentment at unacknowledged brilliance defeated by clever marketing? (Also: XKCD!)
Most of the arguments in this very broad “debate” over theory versus practice are not concrete enough to falsify and I know of no way to change someone’s stylistic approach in this dimension of human variation using nothing but reasoned discussion. This is one of those “limits of reason” things that I’ve never found a way to deal with other than by taking an audience’s temperature and avoiding arguments that would feel too hot or too cold to them, given their existing and functionally immutable prejudices on the subject.
For reference, I think a lot of the people at LW love ethereal etherealness so much that the community would marry it if we could. That would be my first guess as to why your article’s substantive point is not a wild success.
Could you elaborate on what you mean by “ethereal etherealness”? What Eliezer is talking about in that post looks to me like what most philosophers would call abstract Platonic entities. And I get the sense (though I may be projecting) that most people here are pretty uncomfortable with those. LWers seem to think it worthwhile to eliminate any reference to anything other than concrete physical referents.
I find the discussions of decision theories a little ethereal sometimes. There is a base assumption Eliezer has made that the manner of making the decision doesn’t matter, so questions of energy efficiency or computational resources used when making a decision don’t come into the discussion. I personally cannot justify that assumption looking at the evolutionary history of brains, i.e. stuff that has worked in the real world. It matters how big the brain is: the smaller the better, if you can get away with it. The simpler the better, if you can get away with it, as well.
Quote from a Newcomb’s Problem article:

We can choose whatever reasoning algorithm we like, and will be rewarded or punished only according to that algorithm’s choices, with no other dependency—Omega just cares where we go, not how we got there.
It is precisely the notion that Nature does not care about our algorithm, which frees us up to pursue the winning Way—without attachment to any particular ritual of cognition, apart from our belief that it wins. Every rule is up for grabs, except the rule of winning.
I’ve spent a few days mulling over a response to this, and tried writing one with a lot of text that needed to be boiled down, with a summary at the top… and then I waited a while, read it again, and it didn’t hang together the way I was hoping it would.
I stand by my general assertion as being a useful working hypothesis for guiding behavior relative to this community, but I think I may be incapable of backing it up in a way that is vivid and succinct and comprehensive all at the same time.
I think it is useful to point out that your worthwhile link itself contains a link to “belief in the implied invisible”, which explains why we should believe in the “existence” of the necessarily unobservable via arguments based on the incomputable.
Which is not to say I think Solomonoff induction isn’t totally sweet, but I think it’s cool the way I think spherical cows and classical economic assumptions are cool—they are inspiring and offer a nice first-draft estimate of the “upper bound” of how things could work.
At the same time I think Jaron Lanier (who coined the term “cybernetic totalism” in order to criticize an over-hyped and over-politicized version of the computer-inspired zeitgeist) is very cool… but he would have to speak with a measure of “delicacy” around here if he wanted upvotes...
You should post your thoughts anyway :). Even if they don’t “hang together”, I bet that they would be an illuminating expression of the impression that this community gives you. And maybe comprehensiveness and vividness would follow from a dialogue about your impressions. (Succinctness is harder to promise ;))
But do people here like that it’s incomputable? Or do they just tolerate that it’s incomputable, because they think that they can make adequate computable approximations? I think that most people here wish that Solomonoff induction were computable (except for those who worry that it would make building an unFriendly AI too easy).
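One common way to get a computable stand-in is to approximate description length with an off-the-shelf compressor. The following toy sketch is my own illustration, not actual Solomonoff induction (the function name and example data are invented); it only shows the direction the prior points:

```python
import os
import zlib

def description_length(data: bytes) -> int:
    # Crude, computable proxy for Kolmogorov complexity:
    # the length of the data's zlib-compressed form.
    return len(zlib.compress(data, 9))

regular = b"ab" * 500          # highly patterned: very short "description"
irregular = os.urandom(1000)   # patternless: compression buys almost nothing

# A Solomonoff-style prior would weight the regular sequence far more heavily,
# since it admits a much shorter program/description.
assert description_length(regular) < description_length(irregular)
```

The real definition quantifies over all programs for a universal Turing machine, which is what makes it incomputable; a fixed compressor only ever captures a narrow slice of that.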
I would suspect it is the latter combined with a third factor: your points, so far as I can determine, are (a) models only predict reality if their assumptions are valid, and (b) it’s easy to think that your model is good even when the assumptions aren’t valid.
Point (a) would be interesting if it weren’t trivial.
Point (b) would be interesting if you showed it convincingly.
The ideal post to make these points would, instead of continuing from “If this concept doesn’t make perfect sense [...]”, demonstrate this phenomenon in several examples detailed enough to eliminate other reasonable hypotheses.
I agree completely. I love “ethereal etherealness”, and I think (a) is a good point, which was terribly uninteresting to read because I’ve heard it before both on LW and elsewhere.
You might find this Bloggingheads.tv conversation between Eliezer and Jaron Lanier interesting. (Here’s the corresponding Overcoming Bias thread.)
Other than that BHtv diavlog, I haven’t looked at Lanier’s stuff much. I’ll check out your YouTube link.
ETA: This comment thread from February’s Open Thread did not leave me expecting to find much insight in Lanier’s work.
Not only that, the thesis is presented in a really annoying font. I actually suspect this has at least as much influence on the reception of the post.