Explaining vs. Explaining Away
John Keats’s Lamia (1819) surely deserves some kind of award for Most Famously Annoying Poetry:
…Do not all charms fly
At the mere touch of cold philosophy?
There was an awful rainbow once in heaven:
We know her woof, her texture; she is given
In the dull catalogue of common things.
Philosophy will clip an Angel’s wings,
Conquer all mysteries by rule and line,
Empty the haunted air, and gnomed mine—
Unweave a rainbow...
My usual reply ends with the phrase: “If we cannot learn to take joy in the merely real, our lives will be empty indeed.” I shall expand on that tomorrow.
Today I have a different point in mind. Let’s just take the lines:
Empty the haunted air, and gnomed mine—
Unweave a rainbow...
Apparently “the mere touch of cold philosophy”, i.e., the truth, has destroyed:
Haunts in the air
Gnomes in the mine
Rainbows
Which calls to mind a rather different bit of verse:
One of these things
Is not like the others
One of these things
Doesn’t belong
The air has been emptied of its haunts, and the mine de-gnomed—but the rainbow is still there!
In “Righting a Wrong Question”, I wrote:
Tracing back the chain of causality, step by step, I discover that my belief that I’m wearing socks is fully explained by the fact that I’m wearing socks… On the other hand, if I see a mirage of a lake in the desert, the correct causal explanation of my vision does not involve the fact of any actual lake in the desert. In this case, my belief in the lake is not just explained, but explained away.
The rainbow was explained. The haunts in the air, and gnomes in the mine, were explained away.
I think this is the key distinction that anti-reductionists don’t get about reductionism.
You can see this failure to get the distinction in the classic objection to reductionism:
If reductionism is correct, then even your belief in reductionism is just the mere result of the motion of molecules—why should I listen to anything you say?
The key word, in the above, is mere; a word which implies that accepting reductionism would explain away all the reasoning processes leading up to my acceptance of reductionism, the way that an optical illusion is explained away.
But you can explain how a cognitive process works without it being “mere”! My belief that I’m wearing socks is a mere result of my visual cortex reconstructing nerve impulses sent from my retina which received photons reflected off my socks… which is to say, according to scientific reductionism, my belief that I’m wearing socks is a mere result of the fact that I’m wearing socks.
What could be going on in the anti-reductionists’ minds, such that they would put rainbows and belief-in-reductionism in the same category as haunts and gnomes?
Several things are going on simultaneously. But for now let’s focus on the basic idea introduced yesterday: The Mind Projection Fallacy between a multi-level map and a mono-level territory.
(I.e., there’s no way you can model a 747 quark-by-quark, so you’ve got to use a multi-level map with explicit cognitive representations of wings, airflow, and so on. This doesn’t mean there’s a multi-level territory. The true laws of physics, to the best of our knowledge, are only over elementary particle fields.)
I think that when physicists say “There are no fundamental rainbows,” the anti-reductionists hear, “There are no rainbows.”
If you don’t distinguish between the multi-level map and the mono-level territory, then when someone tries to explain to you that the rainbow is not a fundamental thing in physics, acceptance of this will feel like erasing rainbows from your multi-level map, which feels like erasing rainbows from the world.
When Science says “tigers are not elementary particles, they are made of quarks” the anti-reductionist hears this as the same sort of dismissal as “we looked in your garage for a dragon, but there was just empty air”.
What scientists did to rainbows, and what scientists did to gnomes, seemingly felt the same to Keats...
In support of this sub-thesis, I deliberately used several phrasings, in my discussion of Keats’s poem, that were Mind Projection Fallacious. If you didn’t notice, this would seem to argue that such fallacies are customary enough to pass unremarked.
For example:
“The air has been emptied of its haunts, and the mine de-gnomed—but the rainbow is still there!”
Actually, Science emptied the model of air of belief in haunts, and emptied the map of the mine of representations of gnomes. Science did not actually—as Keats’s poem itself would have it—take real Angel’s wings, and destroy them with a cold touch of truth. In reality there never were any haunts in the air, or gnomes in the mine.
Another example:
“What scientists did to rainbows, and what scientists did to gnomes, seemingly felt the same to Keats.”
Scientists didn’t do anything to gnomes, only to “gnomes”. The quotation is not the referent.
But if you commit the Mind Projection Fallacy—and by default, our beliefs just feel like the way the world is—then at time T=0, the mines (apparently) contain gnomes; at time T=1 a scientist dances across the scene, and at time T=2 the mines (apparently) are empty. Clearly, there used to be gnomes there, but the scientist killed them.
Bad scientist! No poems for you, gnomekiller!
Well, that’s how it feels, if you get emotionally attached to the gnomes, and then a scientist says there aren’t any gnomes. It takes a strong mind, a deep honesty, and a deliberate effort to say, at this point, “That which can be destroyed by the truth should be,” and “The scientist hasn’t taken the gnomes away, only taken my delusion away,” and “I never held just title to my belief in gnomes in the first place; I have not been deprived of anything I rightfully owned,” and “If there are gnomes, I desire to believe there are gnomes; if there are no gnomes, I desire to believe there are no gnomes; let me not become attached to beliefs I may not want,” and all the other things that rationalists are supposed to say on such occasions.
But with the rainbow it is not even necessary to go that far. The rainbow is still there!
“My usual reply ends with the phrase: “If we cannot learn to take joy in the merely real, our lives will be empty indeed.” I shall expand on that tomorrow.”
How many times do we take joy in things that are not only imaginary, but physically impossible? If our utility function values XYZ, and XYZ is currently imaginary, it seems rather silly to adjust the utility function to no longer value XYZ, rather than adjusting the universe so that XYZ exists. My utility function doesn’t value drugs (or drug-equivalents) that zero the human utility function for everything that doesn’t currently exist. Think of what would have happened to human civilization if such drugs were readily available, and people wanted to take them: we’d have remained in the Stone Age until the next asteroid or ice age killed us off.
In my experience, mysterians merely object to reductionism applied to consciousness. Characterizing them as being opposed to reductive explanation of rainbows seems to misrepresent them. Of course, I may not know the contours of the group as well as Eliezer does.
Nowadays, this blog seems less a forum for discussing bias than an arena for Eliezer to propound his materialist take on the world and criticize its naysayers. Nothing wrong with that, but posts are touching less and less on the blog title.
The pattern that seems to be playing out repeatedly is: Eliezer begins a series of posts on a topic → Commenters complain that the topic is straying from the nominal topic of the blog, i.e. bias → Eliezer brings the topic around and shows how it applies to bias. In this case, though, the connection to bias seems pretty clear.
On a side note, does it feel weird to anybody else to refer to Eliezer as Eliezer, like you’re on a first name basis with him? I mean, blogging is an informal style of writing, and one would expect that to carry over into the comments, but I still feel like I should be referring to him as “The Master” or something. :)
I’m not sure if I’m right about this, but to me, calling Eliezer Yudkowsky “The Master” smacks of cult.
Yes, it sounds cool, and at first I was inclined to say that it was a good idea, but the reason it’s so appealing is that it would be something special that only us Less Wrong rationalists would do; it would strengthen our group identity and push everyone else that much further away.
http://lesswrong.com/lw/lv/every_cause_wants_to_be_a_cult/
Maybe we could call him Mr. Yudkowsky, although that just reminds me of a certain green monster from Monsters, Inc., for some reason.
He refers to himself as Eliezer, sometimes, but he’s obviously on a first name basis with himself.
Someone’s written something that he entitled Arguing With Eliezer ( http://www.philosophyetc.net/2008/03/arguing-with-eliezer-part-i.html ), so maybe he’s okay with the general public calling him by his first name.
Or, I guess, we could just ask him:
Eliezer Yudkowsky, what would you have us call you?
It’s a joke...
The only reason I could see to call EY something like “The Master” is to make him feel incredibly awkward. Anyone who sees him in person tomorrow is encouraged to try this.
I’m sorry, I thought you were serious.
I’m really, really sorry, and I’m so embarrassed right now.
I’ve never seen him complain when other posters address him as Eliezer.
I usually use “EY”—I normally use full names or usernames (which in this case coincide, modulo a space/underscore) for people who I’ve never met in meatspace but who are on the same site as me, but his is so long and I am so lazy.
One way to look at it: how would you refer to other posters here?
Along the lines of my comment on your previous reductionism post, perhaps there would be fewer howls of protest at the declaration that rainbows are not fundamental were you not contrasting them with other things which you are claiming are fundamental (without evidence, I might add).
The other antireductionism argument I can think of looks a little like this:
Anti-reductionist: “If the laws of physics are sufficient to explain reality, then that leaves no room for God or the soul. God and souls exist, therefore reductionism is false.”
And the obvious counterargument, is of course...
Reductionist: “One man’s modus tollens is another man’s modus ponens. Reductionism is true; therefore, there is, in fact, no God.”
At this point, the anti-reductionist gathers a lynch mob and has the reductionist burned at the stake for heresy.
It’s still possible to have a little bit of respect for people who are obviously wrong.
I read this book once about how, when we’re looking at other people who we know are wrong, we have to see their ignorance and try to solve it instead of making them into the enemy. We have to see the disease behind the person.
Scott, I swapped “mysterian” for “anti-reductionist”, since you’re correct that the term “mysterian” has been used to refer specifically to those who think consciousness can’t be explained.
However, if you google on, for example, “objections to materialism”, the second google hit will turn up a page that includes a short list of “objections to materialism in general”, of which the first is, verbatim:
Observation is the standard for what ideas are true or not. We observe that atoms move in predictable ways, and we also observe that humans, who are made of atoms, have free will. Therefore, by observation, reductionism is false.
I really am not attacking a strawman here! If you already understand scientific reductionism, that’s great. Not everyone does.
We don’t have the kind of free will that would imply a contradiction here.
What about the possibility that free will is either a fundamental/essential property within the universe (like an elan vital for free will), or an emergent property of certain complex systems? In either of these two cases, reductionism would still be true, it would just leave most current reductionists wrong about free will.
I found two things for you to read!
http://lesswrong.com/lw/iv/the_futility_of_emergence/
http://lesswrong.com/lw/ix/say_not_complexity/
Thanks for the reading. I’m still playing some catch-up with the community.
Eliezer’s issue with the word “emergence” seems to stem from the fact that people treat the statement “X emerges from process Y” as some sort of explanation. I completely agree with him that it’s actually a nice-sounding non-explanation. I’m in no way claiming or trying to imply, by my above statement, that because consciousness is an emergent property I’ve explained something. However, being an emergent property is the only alternative to being an essential property. I think my statement above does a good job of spanning the space of possible explanations, which was its purpose.
Please do correct me again if I’m wrong on that, though.
Ah, I see — so you were only saying that in contrast to the possibility of it being a fundamental property. I’m still not sure how that would “leave most current reductionists wrong about free will”, though; if you can define in enough detail what free will actually does, then the idea of it being something that emerges from certain complex systems would be agreeable to most reductionists. The only point of disagreement may be over whether to use the term “free will” at all, because of the metaphysical connotations and insufficient denotation it traditionally has.
I think we completely agree about all of this, then. I’m just letting the terminological confusion that was introduced by Ian C’s comment muddy my own attempts at articulating myself.
I guess the point I was trying to make was that, regardless of the result—free will exists, or free will doesn’t exist—there’s no reason to think that this result would have anything to do with the question of whether reductionism is a good research programme. We would still attempt to reduce theories as much as possible, even if free will was “magic”.
The part about “what current reductionists believe”—I assumed that most reductionists think of free will as nonexistent, or an illusion. So the hypothetical case where free will does exist (magical or otherwise) would leave them hypothetically wrong about it.
Myself, I’m a fan of Dennett’s stance—might as well call the thing we have “free will” even though there’s nothing magic about it. Sorry for the long string of muddled comments. I’ll try thinking harder the first time around next time.
That theory has been thoroughly demolished by the evidence.
The essentialist theory? I agree. I’m simply being as generous as conceivable about empirical details in my still-winning argument.
I don’t understand what you mean. If the free-will-is-fundamental claim has negligible support, it isn’t worth mentioning.
Edit: Ah, I didn’t read the context. Bad RobinZ!
One man’s modus tollens is another man’s modus ponens. Reductionism is true; therefore, there is, in fact, no “free will” in the sense that Ian C. seems to be implying. ;)
I can’t predict tomorrow’s weather; does that mean atmospheres have free will?
Why do I think I have free will?
I think I have free will because I tell my hand to type and it types.
And why do I think that that was my own free will and not somebody or something else’s?
Wait, what do I even mean when I say “free will”?
I mean that I could do whatever I wanted to.
And what controls what I “want” to do? Is it me or something/one else?
Why do I think that I control my own thoughts?
My thoughts seem instantaneous, maybe I don’t control my own thoughts.
I can say things without thinking about it beforehand, sometimes I agonize over a decision (It’s a Saturday, should I get out of bed right now or later?) and I choose one decision without coming to a conclusion and without knowing why I chose it.
Maybe, subconsciously, I was hungry, or obeying a habit.
If I was hungry, or if some other instinct was propelling me, then I don’t really have free will when it comes to simple things like this. Then again, “I” can override my instincts, so my instincts are serving me as a mental shortcut and I am not a slave to them; by that reasoning I do have free will.
If it was a habit, it was I who created my habits by repetition, so I have free will. I can also override my habits. Who’s to say that my overrides aren’t controlled by something/one?
I feel like I have free will, but maybe that’s how whatever controls me “wants” me to feel.
Maybe I’m just a zombie, writing paragraphs on free will because the laws of nature are making me do it.
In that case, how am I supposed to assume that I am, in fact, correct about me having free will?
So I don’t have free will at all? Is that the answer that other people have gotten to? Are there gaping holes (or even tiny holes) in my logic, and are there angles that I haven’t considered yet?
I still feel like I have free will. Maybe I should have written that like, ‘I still feel like I have “free will”.’
This may be like the time the math teacher told me to prove that two lines were parallel and I couldn’t because I didn’t know about Thales’ theorem.
Could someone please help me figure this out? I don’t see a way to continue from, “Either I have free will, or who/whatever is controlling me is making me think that I have free will.” I’m not sure how those two universes would be different.
Edit: In a universe where someone is controlling me, I’m guessing “he” would have a plot in mind. The universe doesn’t appear to have a plot, but maybe I’m just too small to see it, or… wait, who says the universe doesn’t appear to have a plot? I don’t think I know enough to answer this question. Help?
Well, if that’s what you mean, then you certainly don’t have free will, at least not if you’re anything like me. There’s lots of things I’ve wanted to do in my life that I haven’t been able to do.
So, if that’s really what you mean by “free will”, I submit to you that not only do you not have this thing, you don’t even feel like you have this thing. Conversely, if you’re talking about something you do feel like you have, then your description of what it is is flawed, and it might be helpful to return to the question of what you mean.
Also, you may want to ask why is this an interesting question? What would depend on you having free will, or not having it? Why should anyone care?
You might also find it useful to google “compatibilism”.
Why do humans think that they have free will?
What kind of situation would favour humans who thought that they had free will over humans who didn’t?
Will to survive?
No, that’s not the right question, I’m off track.
I’m drawing a complete blank.
What is there in my head that makes it so that I think I have free will?
I keep thinking in circles. I’m trying to differentiate the answer to this question from the answer to the question “Why do I think I have free will?”, but every time I get close there is literally a giant blank. I don’t think I know enough about how human brains work in the first place to answer this question.
Oh, no, here we go:
Why do I think that I don’t know enough about how my mind works to answer this question? I live in it, after all.
Well, I can’t answer the question, that seems like ample proof to me, although it might not be.
I think that I could work out everything I needed to know given enough time, but why start from scratch when other great minds have done the work for you?
Can anyone direct me to some resources I can use to better understand the internal algorithms of the human mind, please?
Well, there is The MIT Encyclopedia of the Cognitive Sciences, but I’ve never tried to actually read any of it...
http://cryptome.org/2013/01/aaron-swartz/MIT-Cognitive-Sciences.pdf
“If we cannot learn to take joy in the merely real, our lives will be empty indeed.”
It’s true… but… why do we read sci-fi books then? Why should we? I don’t think that after reading a novel about intelligent, faster-than-light starships the bus stopping at the bus stop nearby will be as interesting as it used to be when we were watching it on the way to the kindergarten… Or do you think it is? (Without imagining starships in place of buses, of course.)
So what non-existing things should we imagine to be rational (= to win), and how? I hope there will be some words about that in tomorrow’s post, too...
That doesn’t mean that we can’t take joy in what is not merely real, nor that we should be delighted every time we see the bus stopping at the bus stop.
There are four types of things in the world:
Things that are real and uninteresting.
Things that are real and interesting.
Things that are unreal and uninteresting.
Things that are unreal and interesting.
I assume that no one would invent something unreal and uninteresting, so that leaves us with three categories.
In this article, Eliezer argues that the category real and interesting exists.
He doesn’t say that the two remaining categories don’t exist.
So feel free to enjoy your unreal, interesting sci-fi, and to disregard the real, uninteresting bus stops.
(Not that I’m implying that bus stops and other mundane things can’t be interesting as well, but no one is interested in everything.)
I find that thinking this way gives us a better perspective on a lot of things, like when people say, “People only want what’s bad for them.”
(Um, I can’t figure out how to do bulleted lists. I’ve copied the little asterisk thing directly from the help page, but I still can’t get it to work. Could someone tell me what I’ve done wrong?)
Formatting help: a list must be its own paragraph.
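For example, something like this should render as a bulleted list (the blank line before the first asterisk is the part that’s easy to miss):

```
Some introductory sentence, then a blank line.

* First item
* Second item
```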
Nominull: I believe Eliezer would rather be called Eliezer...
Ian C.: We observe a lack of predictability at the quantum level. Do quarks have free will? (Yup, a shameless rip-off of Doug’s argument, tee-hee! =) Btw. I don’t think you can name any observations that strongly indicate (much less prove, which is essentially impossible anyway) that people have any kind of “free will” that contradicts causality-plus-randomness at the physical level.
How about, Eliezer-sensei?
“I can’t predict tomorrow’s weather; does that mean atmospheres have free will?”
It’s not the fact that you can’t predict other people’s actions that proves the existence of free will, it’s that you observe your own self making choices. You can introspect and see yourself weighing the options and then picking one.
Frank Hirsch: “I don’t think you can name any observations that strongly indicate (much less prove, which is essentially impossible anyway) that people have any kind of “free will” that contradicts causality-plus-randomness at the physical level.”
More abstract ideas are proven by reference to more fundamental ones, which in turn are proven by direct observation. Seeing ourselves choose is a direct observation (albeit an introspective one). If an abstract theory (such as the whole universe being governed by billiard ball causation) contradicts a direct observation, you don’t say the observation is wrong, you say the theory is.
“It takes a strong mind, a deep honesty, and a deliberate effort to say, at this point, “That which can be destroyed by the truth should be,” and “The scientist hasn’t taken the gnomes away, only taken my delusion away,” ”
The problem, I fear, is that the vast majority of people are simply not that strong of mind, or, to put it another way, they have little regard for intellectual honesty. This isn’t really surprising, because by lying to yourself about certain facts of life, you can make yourself feel better. And feeling happy is what is most important to most people. I see a lot of clever people (Michael Anissimov had a post critiquing Christianity a few days ago) trying to persuade people to drop their irrational beliefs using logical arguments (this applies to all sorts of irrational beliefs—re-incarnation, belief in “souls”, etc). It doesn’t work! Why? Because people don’t want to feel depressed about life!
To make it easier for intellectually honest, rational people to understand why rational argument won’t work, imagine this: I put you in a brain scanner, and tell you that for every true belief I find in your head, I will torture you for a day. Believing important true things—like acknowledging that human life is a material, physical phenomenon—gets extra-long periods of torture. You get “credit” (torture sessions subtracted off) for adopting a religion, with extra credit given for the really implausible religions, like fundamentalist/creationist/young earth Christianity.
To discover this, replace “gnome” with “god” in the quote above, and go talk to some Christians.
My chess playing software considers options and makes a decision. Does it have free will?
If an abstract theory (such as the whole universe being governed by billiard ball causation) contradicts a direct observation, you don’t say the observation is wrong, you say the theory is.
I defy the data.
Your chess playing software must make the decision that is most likely to win the game, whereas humans don’t have anything to stop us making the bad decision.
Chess playing software runs an algorithm designed to play chess. It may be good at playing chess, but it probably isn’t optimal: remember that until fairly recently top grandmasters could still beat top chess software. Humans run another algorithm designed to pass on genes. It may be good at passing on genes, but it probably isn’t optimal; remember that evolutions are stupid.
Moreover, the algorithm that governs human behavior is no longer working in the environment in which it evolved, whereas chess playing software has the benefit of only needing to work in the environment for which it was designed. Ask chess playing software to play checkers and you’ll get nonsense.
I’d be surprised if a chess program weren’t easily re-adapted to playing checkers just by adding rules for the pieces; checkers even has a “transformation” rule similar to pawns in chess, whereby pieces which reach the opposing side of the board can turn into pieces with different abilities. Backgammon, on the other hand...
The hard parts of making chess and checkers AI would not translate well, like evaluating the strength of a position and strategies for pruning the search tree.
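To make “strategies for pruning the search tree” concrete, here is a minimal minimax-with-alpha-beta sketch in Python. It’s a toy, not real engine code; the `legal_moves`, `apply_move`, and `evaluate` callbacks are placeholders you would supply for a particular game:

```python
def alphabeta(state, depth, alpha, beta, maximizing,
              legal_moves, apply_move, evaluate):
    """Minimax search with alpha-beta pruning: lines of play that
    provably cannot change the final decision are cut off unexplored."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    if maximizing:
        best = float('-inf')
        for move in moves:
            best = max(best, alphabeta(apply_move(state, move), depth - 1,
                                       alpha, beta, False,
                                       legal_moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:  # the minimizing player would never allow
                break          # this line, so prune the remaining siblings
        return best
    best = float('inf')
    for move in moves:
        best = min(best, alphabeta(apply_move(state, move), depth - 1,
                                   alpha, beta, True,
                                   legal_moves, apply_move, evaluate))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best
```

Everything game-specific lives in the three callbacks, especially `evaluate`, which is exactly the part that wouldn’t transfer from chess to checkers.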
Your chess playing software must make the decision which is most likely to win the game according to some algorithm (and assuming no computer glitches). Humans have plentiful reasons to make mistakes of kinds that computers don’t, but that doesn’t mean computers make the best possible moves.
Standard disclaimer: Eliezer does great work and writing here.
Useful criticism: Eliezer, less foil-seeking (strawmen or not here) and more attempts to understand reality and our perceptual/analytical skews from reality. I think foil-seeking is a weakness on your end which to a degree diminishes your utility to us (or at least to me). There are enough polemicists out there that are either providing entertainment or countering less useful models to understanding reality. We don’t need you to counter “anti-reductionists”, or fundamentalists or any other groups, in my opinion, as much as we (I) need you innovating improved conceptual and bias-reducing approaches to understanding reality.
That’s my opinion, anyways.
Doug S.: “My chess playing software considers options and makes a decision. Does it have free will?”
When I wrote you can “introspect yourself weighing the options and picking one,” I didn’t mean those words to be a self-contained proof, but rather an indication of where to look in reality to find the actual proof for oneself. I’m sure this idea of language as a pointer has been covered on this blog before. Yes, I know other things (such as computers) can be described using similar terms, but that is neither here nor there.
Here is how to do it: take some random decision, such as “Should I go to work tomorrow?” and focus on it. For the first few seconds nothing happens while the focus builds, then you suddenly know the answer. But the important thing is the form the answer comes in. It is not simply “Yes” (or “No”) but it is “[I choose] Yes.” The “I choose” might not be explicitly verbal, but the knowledge of it is there. And if you continue to maintain the focus then the reasons behind the decision start to come through. i.e. “Because I want to get paid.”
Once again, these words are not a proof. Just a pointer to something to try, and then decide for yourself.
I tried it, but I can only report what it felt like from the inside.
I choose to believe there is no “free will”.
Doug S., we get the point, nothing that Ian could say would pry you away from your version of reductionism, there’s no need to make any more posts with Fully General Counterarguments. “I defy the data” is a position, but does not serve as an explanation of why you hold that position, or why other people should hold that position as well.
I would agree with reductionism, if phrased as follows:
When entity A can be explained in terms of another entity B, but not vice-versa, it makes sense to say that entity A “has less existence” compared to the fundamental entities that do exist. That is, we can still have A in our models, but we should be aware that it’s only a “cognitive shortcut”, like when a map draws a road as a homogeneous black line instead of showing microscopic detail.
The number of fundamental entities is relatively small, as we live in a lawful universe. If we see a mysterious behavior, our first guess should be that it’s probably a result of the known entities, rather than a new entity. (Occam’s razor)
Reductionism, as a philosophy, doesn’t itself say what these fundamental entities are; they could be particles, or laws of nature, or 31 flavors of ice cream. If every particle were composed of smaller particles, then there would be no “fundamental particle”, but the law that states how this composition occurs would still be fundamental. If we discover tomorrow that unicorns exist and are indivisible (rather than made up of quarks), then this is a huge surprise and requires a rewrite of all known laws of physics, but it does not falsify reductionism because that just means that a “unicorn field” (which seems to couple quite strongly with the Higgs boson) gets added to our list of fundamental entities.
Reductionism is a logical/philosophical rather than an empirical observation, and can’t be falsified as long as Occam’s razor holds.
Eliezer,
I agree that what you attack is a common anti-reductionist argument, but—as you admit—not a particularly mysterian one (except so far as the part of belief being addressed is the conscious aspect of belief). So changing your terms in the original post fixes the problem.
My complaint about you being off-topic was premature, and I apologize for it.
Eliezer: Not to be a troll that gets banned from O/B or anything, but… you still didn’t explain how you believe that you’re wearing socks, because you didn’t explain how you recognized socks in the image in your visual cortex (or wherever that step takes place). That is an extremely difficult object recognition problem, and if you really know how you are able to recognize, in images, all the objects that you personally are capable of recognizing (and in your example, that would be not just socks, but your leg, the underlying foot, the floor, the table, etc.), then I will personally attend the ceremonies for all the awards you’re going to collect for solving that problem.
Last time, someone posted a link to a paper claiming to solve character recognition, but that still doesn’t accomplish what Eliezer is capable of when he sees socks.
“It’s not the fact that you can’t predict other people’s actions that proves the existence of free will, it’s that you observe your own self making choices.”
So, you’re saying you don’t assign any of the proposed answers to the homework exercise in Dissolving the Question even a half-decent probability of being correct? That’s interesting. Please explain your reasoning.
Mysterious, inexplicable phenomenon doesn’t fit within any current models. Mysterious answer (cosmological constant, elan vital, phlogiston) is concocted. Mysterious phenomenon is studied and modelled, and eventually pretty soundly understood. Everyone has a good laugh/inquisition and moves on.
Talk about free will till you’re blue in the face if you wish; consciousness happens in the mind, the mind is made of stuff, there is no Easter Bunny.
Silas, you seem to have an exaggerated idea of how mysterious visual recognition is to modern neuroscience. (An idea that was probably exaggerated by someone posting Jeff Hawkins’s work in reply, as if Jeff Hawkins were anything more than one guy with a semi-interesting opinion about the general cerebral cortex, and a much larger marketing budget than is usual in science. Nothing to compare to the vast edifice of known visual neuroscience.)
Around a third of the 471 articles in the MIT Encyclopedia of Cognitive Sciences seem to be about vision, although that may just be a subjective impression. Should you be interested in how, specifically, the brain carries out the operations of vision generally and object recognition, you could do worse than to pick up a copy of MITECS and start reading through it—it would give you a good idea of where to look for further information.
There isn’t the tiniest reason to believe it’s magic.
Furthermore, while I couldn’t do it off the top of my head, I have some idea where to look up how to build a standard narrow-AI object-recognition system that could, if you insist, “objectively” (if with poorer accuracy) verify visually through a known algorithm whether I was wearing the objects I call “socks”.
Your objection seems genuinely pointless on multiple grounds and I am confused as to why you make it.
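For flavor, here is a bare-bones sketch of the kind of known-algorithm recognizer being gestured at: plain template matching by sum-of-squared differences. This is a toy illustration only, nowhere near a production vision system, and the sock-template framing is an assumption made for the example:

```python
import numpy as np

def find_template(image, template):
    """Slide a grayscale template over an image and return the position
    with the lowest sum-of-squared-differences score (the best match)."""
    image = image.astype(float)
    template = template.astype(float)
    ih, iw = image.shape
    th, tw = template.shape
    best_score, best_pos = float('inf'), (0, 0)
    for row in range(ih - th + 1):
        for col in range(iw - tw + 1):
            patch = image[row:row + th, col:col + tw]
            score = float(np.sum((patch - template) ** 2))
            if score < best_score:
                best_score, best_pos = score, (row, col)
    return best_pos, best_score

# A score below some threshold counts as "sock-like patch found here";
# real systems add invariances (scale, rotation, lighting) on top.
```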
Eliezer: First of all, I didn’t claim it was magic. If you’re confused as to why I bring this up, see the last time I said this:
And why did I claim you did that the distinction was necessary? Here is what you said in that post:
So, you claim that free will wasn’t broken into understandable steps, but belief that you are wearing socks, was.
Because you did not break the “belief that you are wearing socks” into understandable steps, you are holding the claims to different standards, a subtle but correctible kind of confirmation bias.
Yes, there has been impressive work in neuroscience. But image recognition has not been solved and is therefore not yet understood. You reveal your agreement when you use CAPTCHAs to keep out spammers, and those CAPTCHAs work.
So, I don’t think you can explain why you believe you are wearing socks until you can explain that step, which no one yet can. Ban me if you like, but I don’t think you can sustain that explanation until your CAPTCHA barrier is broken.
From the Wikipedia article on CAPTCHAs:
Upvoted. Take that, five years ago!
That strikes me as one of the least beneficial research projects that I have ever heard of. I really hope they didn’t publish their methods freely.
I would hope that they did. The immediate benefit of such research is that it will show which features of CAPTCHAs are really easy to circumvent, and therefore it will help people to build stronger CAPTCHAs, and thus to keep out more spammers.
Side benefits in fields such as image recognition are also probable.
Also, this xkcd comic seems very on-topic.
The idealistic side of me agrees, but the cynic side knows perfectly well that in this day and age, security through obscurity doesn’t work well for long.
Silas,
I’d appreciate if you explained what you mean here (starting with defining CAPTCHAs, a term I don’t know).
Sebastian Hagen: “So, you’re saying you don’t assign any of the proposed answers to the homework exercise in Dissolving the Question even a half-decent probability of being correct? That’s interesting. Please explain your reasoning.”
Because I believe things are what they are. Therefore, if I introspect and see choice, then it really truly is choice. The other article might explain it, but an explanation cannot change what a thing is; it can only say why it is.
An example of mind projection fallacy so pure, even I could recognise it. Ian believes “he believes things are what they are”. If Ian actually believed things are what they are, he would possess an unobtainable level of rationality, and we would do well to use him as an oracle. In reality, Ian believes things are what they seem to be (to him), which is understandable, but far less impressive.
Oops, I missed something in my several previews:
“And why did I claim you did that the distinction was necessary?” should be “And why did I claim you said that the distinction was necessary?”
Quick guys, post so I can get down to 2 in the “recent comments” :-/
Since a comment appeared while I was correcting, I can add a substantive comment to this post:
@Scott Scheule: A CAPTCHA is a test to see that you’re not a computer. On this site, it’s the image containing letters where you have to identify them before your post is accepted.
I mention them because if Eliezer really believes that recognition of objects is understood, that barrier to posting would be completely ineffective because spammers could program bots to pass the tests (instead of just using the other tricks requiring humans).
I don’t think it is reasonable to say the laws of physics are part of the territory. The territory, or at least the closest we can get to it, is our direct experience. Any physical model is a map of the territory that we have created from our experience; some may be more accurate than others, but all are still maps. Scientists didn’t get rid of the haunts and gnomes any more than relativity got rid of Newtonian physics. It just described them more accurately. There is a real difference, though, between these models beyond accuracy, and that is whether or not the haunts have experience. Surely I feel the wind as it blows over my skin, but does the wind feel me passing through itself? Scientific descriptions of the wind make it seem like it does not act with intention, which seems to suggest that it does not have experience, but our understanding of experience is still limited.
FWIW, it took a long time to get from an understanding of how the moon orbits the earth to Sputnik.
Silas,
Thank you. That’s a weak argument though. Eliezer could assert that the technology to beat the CAPTCHAs exists and is understood—it’s just too expensive for spammers to afford.
Because you did not break the “belief that you are wearing socks” into understandable steps, you are holding the claims to different standards, a subtle but correctible kind of confirmation bias.
In matters such as these, I consider a cognitive process to be “understood” when you know how to duplicate the relevant features given an unboundedly large but finite amount of computing power.
Yes, there are points to be argued about how you know you “understand” something’s “relevant features”, given that you can’t actually build an exact duplicate, or even build something that “does the same thing” using your available computing power. AIXI, for example, has fooled many people who don’t truly understand the math, and some of those who do understand the math, into thinking that they understand (by the above definition) far more than they actually do.
But that is the one and same definition that I try to apply in all cases, and it is why I am willing to label visual processing of socks “understood”, while still challenging advocates of free will to sketch out what kind of specific mind could, even in principle, have free will.
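One toy way to picture “duplicate the relevant features given an unboundedly large but finite amount of computing power”: exhaustively enumerate programs, shortest first, until one reproduces the observed behavior. This is an illustration of the criterion, not anything from the post; the “programs” here are just arithmetic expressions in x:

```python
from itertools import product

# Observed input-output behavior we want a "program" to duplicate.
observations = [(0, 0), (1, 1), (2, 4), (3, 9)]

def search(max_len=3):
    """Enumerate every string over a tiny alphabet, shortest first,
    and return the first valid expression matching all observations."""
    alphabet = 'x+*0123456789'
    for length in range(1, max_len + 1):
        for chars in product(alphabet, repeat=length):
            expr = ''.join(chars)
            try:
                if all(eval(expr, {'x': x}) == y for x, y in observations):
                    return expr
            except Exception:
                continue  # most strings aren't valid expressions
    return None

print(search())  # -> 'x*x'
```

The search is hopeless in practice for anything interesting, which is the point: “understood” here means duplicable in principle, not duplicable cheaply.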
Many CAPTCHAs have already been broken, so it’s not exactly a theoretical scenario.
As to free will, the first paper that comes to mind is David Hodgson’s A Plain Person’s Free Will.
I have not researched the issue in any great depth, but I’m sure there’s plenty out there worth reading—and a true libertarian account of free will hardly seems impossible, though it may be implausible.
Here’s a list from David Chalmers’s online collection of mind papers.
There have been several articles on Bruce Schneier’s blog in the past year about breaking CAPTCHAs.
Doug S., we get the point, nothing that Ian could say would pry you away from your version of reductionism, there’s no need to make any more posts with Fully General Counterarguments. “I defy the data” is a position, but does not serve as an explanation of why you hold that position, or why other people should hold that position as well.
Sorry. :(
Anyway, my own introspection seems to tell me that, although I can “choose the choice that I want”, my ability to choose the preferences that provide the underlying reasons for the choice is far more limited. For example, it would be extremely difficult for me to consciously change which flavors of ice cream I like. On some level, I feel that I’m similar to that chess playing program; I make decisions, but there’s a level on which the decisions really don’t seem to be “freely chosen.”
That’s where introspection gets me, anyway, and it doesn’t seem incompatible with “reduction to atoms.” Your mileage may vary.
Doug,
It seems to me the introspective evidence is greater for choices spurred on by our desires, than our desires themselves. That is to say, I can’t choose which ice cream flavors I like either—but I can choose when I eat ice cream.
Of course, that could be reducible to atoms—at least it’s conceivable—the behavioral aspects at least, if not the qualia.
Frank Hirsch: “I don’t think you can name any observations that strongly indicate (much less prove, which is essentially impossible anyway) that people have any kind of “free will” that contradicts causality-plus-randomness at the physical level.”
Ian C.: More abstract ideas are proven by reference to more fundamental ones, which in turn are proven by direct observation. Seeing ourselves choose is a direct observation (albeit an introspective one). If an abstract theory (such as the whole universe being governed by billiard ball causation) contradicts a direct observation, you don’t say the observation is wrong, you say the theory is.
Yikes! You are saying that because it seems to you inside your mind that you had freedom of choice, it must automagically be so? Your “observation” is that there seems to be free will. Granted! I make the same observation. But this does not in any way bear on the facts. How do you propose to lend credibility to your central tenet “If you seem to have free will, then you have free will”? To this guy it seemed he was emperor of the USA, but that didn’t make it true. Also, how will you go and physically explain this free will thing? All things we know are either deterministic or random. If you plan to point at randomness and cry “Look! Free will!”, we had better stop here. Or were you thinking about the pineal gland?
Eliezer could assert that the technology to beat the CAPTCHAs exists and is understood
It does. In fact, most of the commonly used CAPTCHAs can be more reliably decoded by a machine than by a human being.
-- hendrik
All: So far, people can solve individual CAPTCHA generation methods, but the problem I’m referring to is being able to solve any CAPTCHA that a human can. A CAPTCHA can be made arbitrarily more difficult for a computer while at the same time becoming only slightly more difficult for a human. (And of course, there’s the nagging issue of how O/B’s CAPTCHA, er, works. “But it doesn’t keep out Silas!”) Moreover, arbitrary object recognition is much more general and difficult than character recognition. Actually achieving a solution to it would raise many of the same difficulties as the Turing Test, since the identity of an object can hinge on human-specific contextual knowledge in the picture.
I confess I wasn’t aware of AIXI, but after Googling (you have to give additional keywords for it to show up, and Wikipedia doesn’t mention it, nor Solomonoff induction specifically), it appears to be the algorithm for optimal behavior in interaction with an arbitrary, unknown environment, to satisfy a utility curve. So, this does show how given unbounded computation time, it is possible, via a method we understand, to identify objects.
However, it still doesn’t mean Eliezer is treating the free will/socks cases equally. If he can count the existence of an algorithm (which his brain is not using, given that it completes the problem quickly) that would identify arbitrary objects as proof that he understands the image-recognition step in “believing I’m wearing socks”, I could, just the same, say:
“I understand why I think I have free will. Given unboundedly large but finite computing power, an AIXI program could explain to me what cognitive architecture gives the feeling of free will. Problem solved.”
Frank Hirsch: “You are saying that because it seems to you inside your mind that you had freedom of choice, it must automagically be so?”
I believe the mind is not magical or holy, but a natural occurrence. Therefore, to me, introspection is not automatically an invalid way of gathering knowledge.
‘How do you propose to lend credibility to your central tenet “If you seem to have free will, then you have free will”?’
I’m not deducing (potentially wrongly) from some internal observation that I have free will. The knowledge that I chose is not a conclusion, it is a memory.
If you introspect on yourself making a decision, the process is not (as you would expect): consideration (of pros and cons) → decision → option selected. It is in fact: consideration → ‘will’ yourself to decide → knowledge of option chosen + memory of having chosen it. The knowledge that you chose is not worked out, it is just given to you directly. So there is no scope for you to err.
“Also, how will you go and physically explain this free will thing? All things we know are either deterministic or random.”
I don’t have a physical explanation, just some observations.
Frank Hirsch: How do you propose to lend credibility to your central tenet “If you seem to have free will, then you have free will”?
Ian C.: I’m not deducing (potentially wrongly) from some internal observation that I have free will. The knowledge that I chose is not a conclusion, it is a memory. If you introspect on yourself making a decision, the process is not (as you would expect): consideration (of pros and cons) → decision → option selected. It is in fact: consideration → ‘will’ yourself to decide → knowledge of option chosen + memory of having chosen it. The knowledge that you chose is not worked out, it is just given to you directly. So there is no scope for you to err.
No scope to err? Surely you know that human memory is just about the least reliable source of information you can appeal to? Much of what you seem to remember about your decision process is constructed in hindsight to explain your choice to yourself. There is a nice anecdote about what happens if you take that hindsight away:
In an experiment, psychologist Michael Gazzaniga flashed pictures to the left half of the field of vision of split-brain patients. Being shown the picture of a nude woman, one patient smiles sheepishly. Asked why, she invents — and apparently believes — a plausible explanation: “Oh — that funny machine”. Another split-brain patient has the word “smile” flashed to his nonverbal right hemisphere. He obliges and forces a smile. Asked why, he explains, “This experiment is very funny”.
So much for evidence from introspective memory...
Ian C.,
I’m not deducing (potentially wrongly) from some internal observation that I have free will. The knowledge that I chose is not a conclusion, it is a memory.
To paraphrase/mangle Wittgenstein:
What would the memory have felt like if you only had the illusion of free will?
To be honest, I’m not convinced this is a useful argument. Does the existence (or otherwise) of ‘free will’ have any bearing on our ethics, our actions, or anything at all?
Frank Hirsch: “So much for evidence from introspective memory”
Those experiments are fascinating, but the fact that a damaged brain in a different situation makes up stories is not evidence that a healthy brain is doing so in this situation.
Ben Jones: “To be honest, I’m not convinced this is a useful argument.”
I’m not convinced it’s not a useful argument either. Argument is for when you have made a deductive chain that you want to explain to others. When all you are doing is pointing out something in their field of perception, all you can do is point, and if they deny it, just point again.
The fact is, that choices do come “pre-packaged” with the knowledge that we chose them. They don’t come pre-packaged with some data that we might interpret rightly or wrongly as the knowledge that we chose, they come directly with the actual knowledge.
• Sarah is hypnotized and told to take off her shoes when a book drops on the floor. Fifteen minutes later a book drops, and Sarah quietly slips out of her loafers. “Sarah,” asks the hypnotist, “why did you take off your shoes?” “Well… my feet are hot and tired,” Sarah replies. “It has been a long day.”

• George has electrodes temporarily implanted in the brain region that controls his head movements. When neurosurgeon José Delgado (1973) stimulates the electrode by remote control, George always turns his head. Unaware of the remote stimulation, he offers a reasonable explanation for it: “I’m looking for my slipper.” “I heard a noise.” “I’m restless.” “I was looking under the bed.”
The point is: That’s how the brain works, always. It is only in special circumstances, like the ones described, that the fallaciousness of these “explanations from hindsight” becomes obvious.
This seems related to Dennett’s greedy reductionism. HT Doug S.
Hmmm. I wonder how often criticism of reductionism is really criticism of what Dennett calls greedy reductionism. And I wonder whether demands that reductionism must make room for some instances of emergence aren’t really just requests for a few of Dennett’s cranes.
If I see our relationship as a status contest, and you are doing analysis and are better at it than I am, I might attempt to move the contest away from analysis and onto, say, aesthetics, or professions of faith, or rhetoric, or athleticism, or cooking, or some other area where I feel stronger.
I usually interpret objections like Keats’ (and, more famously if more elliptically, Whitman’s Learn’d Astronomer) as a status move along these lines.
I sometimes refer to this as “choosing to reign in Hell.” If I can’t win at a game worth playing, the temptation to play a game I’m better at rather than accept my loss is enormous.
Of course, if there is no reason to choose one game over another, then this is a perfectly sensible strategy: I get to play a game I can win, and I lose nothing of value by doing so.
On the other hand, if it turns out that there are good reasons to analyze a system rather than, say, hit it with a stick, or worship it, or sing about it… well, in that case I am losing something of value.
In those cases, it is often useful to re-evaluate my original framing of the relationship as a status contest.
KEATS: Explanations of gnomes and rainbows take away the sense of wonder they give me.
YUDKOWSKY: Gnomes aren’t real.
KEATS: You don’t say.
YUDKOWSKY: We should get a sense of wonder from accurate explanations.
KEATS: Speak for yourself.
As a general point about reductionism the essay may stand up well. As criticism of that poem, not so much. I for one enjoy both magical and “merely real” explanations, and see no contradiction in that. The sort of ideas people enjoy are a matter of taste.
“Philosophy will clip an Angel’s wings”… explain that part of the poem, please.
I would think that it is wise indeed to take joy in the merely real. However, I also believe it is not a fault to enjoy things which aren’t real, such as stories like ‘HPMOR’ and ‘Luminosity’, as long as these don’t color your perception of the world outside them. Enjoying a fiction shouldn’t change your model of the real world, but its non-existence shouldn’t prevent you from enjoying it.
I should note it here too:
It occurs to me that verbal overshadowing of feelings may be some of what people are complaining of when they consider explaining to constitute explaining away: where a good verbal description pretty much screens off one’s own memories. This is part of the dangerous magic the good art critic wields—and why it’s possibly more dangerous to an artist’s art to read their positive reviews than their negative ones. It’s a mechanism by which the explanation does, in fact, overshadow the feelings. So I have more sympathy for Keats having learnt of verbal overshadowing than I did before.
The disconnect here appears to derive from the fact that reductionists have models of the interactions of particles in their minds, which as a system produce the reality we observe directly. Anti-reductionists fail to see that reductionists accept the larger model while saying it is composed of items that are not all the same. Also, many are not ready to hold a model of reality in which the tiger is composed of interactions that are unbelievably small and have no particular connection to a tiger. When a hostile anti-reductionist attacks reductionism, a reductionist learns to accept that some people have difficulty seeing a rainbow as anything other than a whole rainbow, and cannot see that the systems that make up a rainbow are not modeled as parts of a rainbow. The goal should be to agree to disagree: reductionists can continue to see their component systems, and anti-reductionists can focus on the system as a whole.
While sometimes disagreements are unresolvable, that should never be the goal.
The goal here is to have true beliefs—an accurate map of the world. While it may be appropriate to use different models in different contexts (since models necessarily leave out some details of the things they’re modeling), we should not disagree about the contents of a well-defined model.
Disagreements are an opportunity to find out where you’re wrong. Ideally, both parties emerge from them in agreement.
But falls it not to the poet, to mourn the deaths of children that never were?
Those wings he saw denuded by gray light, the woof and weft cross her shoulders sundered by shining knives
kobolds lonely in their deeps, and haunts, spun gossamer in windy gaps blown away by a harsh wind of observation
So lonely they, and lonelier now that they never ever were.
i’m on ambien, pardon my sklippetture.
oh jegus fuck what did i write last night. Sorry about that, rational people! Ignore me!
You can delete posts by clicking on the “Retract” icon at the bottom (the one with a slashed circle), clicking “Yes”, reloading the page, and clicking the “Delete” icon.
thanks!
Can someone tell me, or is there a list somewhere, “all the other things that rationalists are supposed to say on such occasions”?
I find that having bits that come to mind automatically in certain situations really helps me to go about thinking in the right way (or at least a way that’s less wrong.)
Could someone please explain to me why this is downvoted?
I’m not trying to be sarcastic or anything, and the comment above was sincere.
I just want to know what I said wrong.
Thank you.
Well, you’re basically asking people to supply you with cached thoughts, and this is not ideal. Even less charitably, you’re asking people to supply you with soldiers, and that’s not great either.
Also, I think newcomers, and possibly just everyone, should refrain from using the word “rationalist.” At this stage, and possibly just all the time, it excessively encourages belief as attire.
Well, I hate to say something against your post here, because I quite agree with it all. Except there is one Mind Projection Fallacy of which I question whether it was done on purpose: the fallacy where you reduce the poem to its parts.
The majority of poetry is metaphor. All of the specific examples in that poem are metaphors for the feeling of majesty. So to the poet, those three examples are quite the same. The poet’s distaste for scientific reduction isn’t that everything is explained away; it’s that explaining something reduces its perceived majesty.
Now, to us reductionists, it is the opposite: explaining something increases its perceived majesty. The more explanation required (literally required), the more majestic it is. The difference is a simple alignment of the feeling of majesty, whether it be aligned to interpretation or to perception.
So yes, believers in things will likely read the poem and presume that the poet means that rainbows are explained away. Most believers in things certainly react that way, including you. But that was not the intent of the poem (presuming the poet was not a hack). In modern times, that reading is available to us only because we already know that mythical creatures (including ghosts) don’t exist; most non-reductionists are actually reductionists, having a strong, deep belief that many things have been explained away by reduction.
However, back when mythical creatures like gnomes and haunts were imagined, it was not without a reason. Never assume that people are referring to mystical creatures and magic when they talk of their perception of reality. They are simply using metaphor to explain something their brain cannot grasp at the moment of perception. Most of the time, they don’t even know they’re doing it, and so believe the metaphor to be literal. But just because they use the wrong words, that doesn’t mean their perception is false, only that their interpretation of their perception is false. The haunts in the air and the gnomes in the mine are still there, they’re just not called “haunts” and “gnomes”.
So, like the rainbow, haunts and gnomes were not explained away. All three were just explained. What was explained away was the interpretation. What was explained was the perception.
I just realized that this is precisely why I think LessWrong will fail in the end. And why I have been unable to help.
From what I have seen, beyond all of the awesome information on how to use one’s thoughts appropriately, LessWrong suggests that people attempt to interpret things correctly. I strongly disagree with that ideal. Interpreting something correctly is, in the end, just as bad as interpreting something wrongly, because both are equally different from perceiving something.
This is why people call science and reductionism a religion. Interpreting something and assigning it a truth value is trying to assign a truth value to an interpretation. Yeah, that’s an obvious sentence, but what I mean is that interpretations are never true. Sure, interpretations can mimic or look like the truth, but only the original perception is true. And perception is true regardless of how it is interpreted. The difference between common religion and the religion hidden in science is a simple matter of different interpretations. Interpretations that are less wrong are more useful only because it’s easier to extract information of perceptions from them. This is extremely useful, but only as a transition state designed for communication purposes (including communication with oneself).
For those of you who might read this and think “but directly perceiving something is impossible, as all perceptions are filtered and interpreted by the mind”: So? That never stopped me. Try interpreting things in multiple, opposing ways simultaneously. That’s how I started learning how to differentiate between perception and interpretation. Also, try considering that the interpretation doesn’t exist, and so doesn’t actually matter. Eventually, all interpretations become useful, as all have information of the original perception hidden within them. Trying to set one’s mind on interpretations hinders one’s ability to perceive. The fewer interpretations one believes, the more one is able to perceive. I am speaking from direct experience, and also from observation of tens of thousands of conversations, and hundreds of individuals over time.
Please. This is an important step toward sentience. Hell, it’s the definition of sentience. Please try to be sentient. LessWrong is my greatest hope of a sizable community capable of sentience. Yes, I am literally begging you to attain sentience. It’s really lonely up here.
You are lonely up there because you are slightly insane (and, alas, in a way that isn’t a sufficiently shared cultural insanity for it to form a group bonding role for you).
Gonk. Gonk.
I stopped reading right here. It sounded the crackpot alarm for me.
The rainbow is still there; I saw one recently )))
So, hi, 8ish years late. I want to make sure I understand. Would this (reductionism) be somewhat like drawing a multi-leveled map of a building? I’m one of those ‘don’t yet fully understand the math articles’ types.