You are really going to take a concept worked out in dozens of academic papers and declare it meaningless because you have trouble figuring out how to apply it in one particular context?
“A major limitation” != “meaningless”
The near-far view is interesting and useful, but it seems very susceptible to just-so stories, and it doesn’t appear to have a great deal of consistent predictive power. I’ve seen few cases where someone said, “OK, we have X problem. The near view would suggest Y, the far view would suggest Z, let’s look at how people actually think.” Rather, you get “People think both Y and Z about X. Therefore, people who think Y must use the near view, and people who think Z must use the far view.” This is problematic, but it doesn’t make the system meaningless.
Also, the issue isn’t that I have trouble applying it in one particular context, but, rather, that there appears to be a problem in formalizing how it should be applied. Now, perhaps there is a clear methodology that would make near/far falsifiable, but I’ve certainly never seen it used in your writings or anyone else’s.
I also find it interesting that the existence of academic papers on the subject is the gold standard of evidence you apply, particularly because I am not contesting facts, but rather arguing that a particular explanatory framework has less practical use than its proponents seem to think. There were dozens of academic papers on many now-discredited explanatory frameworks (phrenology? Freud? Pre-20th-century racial theories?). This is, indeed, a major problem with explanatory frameworks: they’re general enough that they often stick around long after they should have died, like most of what Freud wrote. Citing academic papers to support an ideological framework is simply nothing like citing them to support empirical facts, particularly in an area as fuzzy as estimating the thought processes of large numbers of people. I find it telling that your defense is not, say, an efficient summary of the idea or its usefulness or evidence supporting its predictive value, but rather a vague appeal to authority.
No, I wasn’t declaring it meaningless.
My (perhaps trivial) points were that all hypothetical thought experiments are necessarily conducted in Far mode, even when the thought experiment is about simulating Near modes of thinking. Does that undermine it a little?
And
while all Thought Experiments are Far
Actual Experiments are Near.
I was illustrating that with what I hoped was an amusing anecdote: the bizarre experience I had last week of discussing the trolley problem with the fat man actually personified and present in the room, sitting next to me, and how that nudged the thought experiment into something just slightly closer to a real experiment.
It’s easy to talk about sacrificing one person’s life to save five others, but hurting his feelings by appearing rude or unkind in order to get to a logical truth was harder. This is somewhat relevant to the subject of the talk: decisions may be made emotionally and then rationalised afterwards.
Look, I wasn’t hoping to provoke one of Eliezer’s ‘clicks’, just to raise a weekend smile and to discuss a scenario where lesswrong readers had no cached thought to fall back on.