Rationality Quotes June 2013
Another month has passed and here is a new rationality quotes thread. The usual rules are:
Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself.
Do not quote from Less Wrong itself, Overcoming Bias, or HPMoR.
No more than 5 quotes per person per monthly thread, please.
Hofstadter on the necessary strangeness of scientific explanations:
— from the postscript to Heisenberg’s Uncertainty Principle, in Metamagical Themas: Questing for the Essence of Mind and Pattern (his lovely book of essays from his column in Scientific American)
Molière, Le Malade imaginaire (1673), Act III, sc. iii.
A lesson here is that if you ask “Why X?” then any answer of the form “Because [a synonym for X]” is not actually progress toward understanding.
Synonyms are not good for explaining… because there is no explanatory power in them.
I found your post funny… because it amused me.
I upvoted your comment, because I wished for it to have more upvotes.
Sometimes a downvote will lead to more overall upvotes than an upvote would have. Just like you can increase the probability of a sentence being quoted by including a typo, on purpose (try it!). Mind games!
OK, I’m trying it on your comment.
Unfortunately, even if the effect is real, hanging a lantern on it probably neutralizes it.
Shooting the messenger! :-(
Exeunt.
(In all earnestness, it works better with comments for which no downvotes would be expected, unlike mine; the counter-voting will in my experience often overcompensate for the initial downvote. So downvote your friends, but only the high-status ones on their best comments! It’s a bit like upvoting by proxy, except the proxy is a fellow LWer you’re secretly puppeteering!)
Is this even possible? How would someone know that a comment has been downvoted once it had been voted back up to 0 points?
Hover your mouse over the “n points” text.
“greeness” → “greenness”
Does this imply that there’s no bottom level, just layer after layer of explanations with each layer being very different from the ones above? If there is a bottom level below which no further explanation is possible, can you tell whether you’ve reached it?
I want to point out that in this post, you were quoting sediment quoting Hofstadter who was referencing Hanson’s quoting of Heisenberg. Pretty sure even Inception didn’t go that deep.
The principle here is that an attribute x of an entity A is not explained by reference to a constituent entity B that has the same property. The strength of an arch is a property of arches, for example, not of the things from which arches are constituted.
That doesn’t imply that there must be a B in the first place, merely that whether there is or not, referring to B.x in order to explain A.x leaves x unexplained. (Of course, if there is no B, referring to B.x has other problems as well.)
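To make that pattern concrete, here is a minimal sketch in Python (the Stone/Arch classes are hypothetical, just restating the arch example): a “because” that merely delegates the same attribute one level down answers “how strong?” without ever touching “why strong?”.

    class Stone:
        def __init__(self, strength):
            self.strength = strength  # taken as a primitive; itself unexplained

    class Arch:
        def __init__(self, stones):
            self.stones = stones

        @property
        def strength(self):
            # Pseudo-explanation: A.x defined in terms of B.x, the same
            # attribute pushed down one level. A real explanation would
            # derive strength from *different* lower-level facts (geometry,
            # compression along the curve), not from a same-named
            # 'strength' attribute of the stones.
            return min(stone.strength for stone in self.stones)

    arch = Arch([Stone(5), Stone(7)])
    print(arch.strength)  # 5 -- a number, but no new understanding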
I suspect the “top”/”bottom”/”level” analogy is misleading here. I would be surprised if there were a coherent “bottom level,” actually. But if there is, I suppose the sign that I’ve reached it is that all the observable attributes it has are fully explainable without reference to other “levels,” and all the observable attributes of other “levels” are fully (if impractically) explainable in terms of it.
At any level of description, there are observable attributes of entities that are best explained by reference to other levels of description, but I’m not sure there’s always a clear rank-ordering of those levels.
David Wong, The 5 Ugly Lessons Hiding in Every Superhero Movie
Ah, David Wong. A few movies in the post-9/11 era begin using terrorism and asymmetric warfare as a plot point? Proof that Hollywood no longer favors the underdog. Meanwhile he ignores… Daredevil, Elektra, V for Vendetta, X-Men, Kick-Ass, Punisher, and Captain America, just to name the superhero movies I’ve seen which buck the trend he references, and within the movies he himself mentions, he intentionally glosses over 90% of the plots in order to make his point “stick.” In some cases (James Bond, Sherlock Holmes) he treats the fact that the protagonists win as the proof that they weren’t the underdog at all (something which would hold in reality but not in fiction, and a standard which he -doesn’t- apply when it suits his purpose, à la his comments about the first three Die Hard movies being about an underdog whereas the most recent movie isn’t).
Yeah. Not all that impressed with David Wong. His articles always come across as propaganda, carefully and deliberately choosing what evidence to showcase. And in this case he’s deliberately treating the MST3K Mantra as some kind of propaganda-hiding tool? Really?
These movies don’t get made because Hollywood billionaires don’t want to make movies about underdogs, as he implies—Google “underdog movie”, this trope is still a mainstay of movies. They get made because they sell. To the same people consuming movies like The Chronicles of Riddick or The Matrix Trilogy. Movies which revolve around badass underdogs.
(Not that this directly relates to your quote, but I find David Wong to be consistently so deliberate about producing propaganda out of nothing that I cannot take him seriously as a champion of rationality.)
It is worth pointing out that this page is about quotes, not people, or even articles. I thought the quote was worth upvoting for:
I think it’s because enjoying fiction involves being in a trance, and analyzing the fiction breaks the trance. I suspect that analysis is also a trance, but it’s a different sort of trance.
The term for that is suspension of disbelief.
Any chance you could expand on “analysis is also a trance”?
I don’t know about anyone else, but if I’m analyzing, my internal monologue is the main thing in my consciousness.
Your what?
No, I’m not letting it go this time. I’ve heard people talking about internal monologues before, but I’ve never been quite sure what those are—I’m pretty sure I don’t have one. Could you try to define the term?
Gosh. New item added to my list of “Not everyone does that.”
...I have difficulty imagining what it would be like to be someone who isn’t the little voice in their own head, though. Seriously, who’s posting that comment?
I may be in a somewhat unique position to address this question, as one of the many many many weird transient neurological things that happened to me after my stroke was a period I can best describe as my internal monologue going away.
So I know what it’s like to be the voice in my head, and what it’s like not to be.
And it’s still godawful difficult to describe the difference in words.
One way I can try is this: have you ever experienced the difference between “I know what I’m going to say, and here I am saying it” and “words are coming out of my mouth, and I’m kind of surprised by what I’m hearing myself say”?
If so, I think I can say that losing my “little voice” is similar to that difference.
If not, I suspect the explanation will be just as inaccessible as the phenomenon it purported to explain, but I can try again.
...no, I haven’t. I’m always in the state of “I know what I’m going to say, and here I am saying it” (sometimes modified very soon afterwards by “on second thoughts, that was a very poor way to phrase it and I’ve probably been misunderstood”).
...what? Wow!
I’m dying to know whether we’re stumbling on a difference in the way we think or the way we describe what we think, here. To me, the first state sounds like rehearsing what I’m going to say in my head before I say it, which I only do when I’m racking my brains on, e.g., how to put something tactfully, whereas the latter sounds like what I do in conversation all the time, which is simply to let the words fall out of my mouth and find out what I’ve said.
My internal monologue is a lot faster than the words can get out of my mouth (when I was younger, I tried to speak as fast as I think, with the result that no-one could understand me; of course, to speak that fast, I needed to drop significant parts of most of the words, which didn’t help). I don’t always plan out every sentence in advance; but thinking about it, I think I do plan out every phrase in advance, relying on the speed of my internal monologue to produce the next phrase before or at worst very shortly after I complete the current phrase. (It often helps to include a brief pause at the end of a phrase in any case). It’s very much a just-in-time thing.
If I’m making a special effort to be tactful, then I’ll produce and consider a full sentence inside my head before saying it out loud.
Incidentally, I’m also a member of Toastmasters, and one thing that Toastmasters has is impromptu speaking, when a person is asked to give a one-to-two minute speech and is told the topic just before stepping up to give the speech. The topic could be anything (I’ve had “common sense”, “stick”, and “nail”, among others). Most people seem to be scared of this, apparently seeing it as an opportunity to stand up and be embarrassed; I find that I enjoy it. I often start an impromptu speech with very little idea of how it’s going to end; I usually make some sort of pun about the topic (I changed ‘common sense’ into a very snooty, upper-crust type of person complaining about commoners with money: ‘common cents’), and often talk more-or-less total nonsense.
But, through the whole speech, I always know what I am saying. I am not surprised by my own words (no matter how surprised other people may be by the idea of ‘common cents’). I don’t think I know how to be surprised at what I am saying. (Of course, my words are not always well-considered, in hindsight; and sometimes I will be surprised at someone else’s interpretation of my words, and be forced to explain that that’s not what I meant)
I’m the same—except occasionally, when I’m ‘flowing’ in conversation, I’ll find that my inner monologue fails to produce what I think it can, and my mouth just halts without input from it.
I find that happens to me sometimes when I talk in Afrikaans; my Afrikaans vocabulary is poor enough that I often get halfway through a sentence and find that I can’t remember the word for what I want to say.
It occasionally happens to me in any language. I usually manage to rephrase the sentence on the fly or to replace the word with something generic like “thing” and let the listener figure it out from the context, without much trouble.
Something that occurred to me on this topic: reading has a lot to do with the inner monologue. Writing is, in my view, a code of symbols on a piece of paper (or a screen) which tells the reader what their inner monologue should say. Reading, therefore, is the voluntary (and temporary) replacement of the reader’s internal monologue with an internal monologue supplied and encoded by the author.
At least, that’s what happens when I read. Do other people have the same experience?
Inner monologue test:
I. like. how. when. you. read. this. the. little. voice. in. your. head. takes. pauses..
Does anyone find that the periods don’t make the sentence sound different?
Let’s make it a poll:
When you read NancyLebovitz’s sentence (quoted above) do the periods make it sound different?
[pollid:470]
(If anyone picks any option except ‘Yes’ or ‘No’, could you please elaborate?)
Hypothesis: Since I am more used to reading sentences without a full stop after each word than sentences like that, of course I will read the former more quickly—because it takes less effort.
Experiment to test this hypothesis: Ilikehowwhenyoureadthisthelittlevoiceinyourheadspeaksveryquickly.
Result of the experiment: at least for me, my hypothesis is wrong. YMMV.
As far as I can tell, I started reading the test phrase more slowly than normal, then “shifted gears” and sped up, perhaps to faster than normal.
Same here, for both test sentences.
The little voice in my head speaks quickly for that experimental phrase, yes. It should be taking slightly longer to decode—since the information on word borders is missing—which suggests that the voice in my head is doing special effects. I think that that is becausewordslikethis can be used in fiction as the voice of someone who is speaking quickly; so if the voice in my head speeds up when reading it, then that makes the story more immersive.
Hypothesisconfirmedforme.Perhapstoomanyhourslisteningtoaudiobooksatfivetimesspeed. Normalspeedheadvoicejustseemssoslow.
That sounds in my head like the voice in Italian TV ads for medicines reading the disclaimers required (I guess) by law (ultra-fast words, but pauses between sentences of nearly normal length).
I can parse it both ways. Actually, on further experimentation, it appears to be tied directly to my eye-scanning speed! If I force my eyes to scan over the line quickly from left-to-right, I read it without pause; if I read the way I normally do (by staring at the ‘When’ to take a “snapshot” of I, like, how, when, you, and read all at once; then staring at the space between “little” and “voice” to take a snapshot of this, the, little, voice, in, and your all at once, then staring at the “pauses” to take a snapshot of head, takes, and pauses), then the pauses get inserted—but not as normal sentence stops; more like… a clipped robot.
Huh. You read in a different way to what I do; I normally scan the line left-to-right. And I insert the pauses when I do so.
It sounds like a clipped robot to me too.
Yeah, something clicked while I was reading an old encyclopedia sometime around age 7; I remember it quite vividly. My brain started being able to process chunks of text at a time instead of single words, so I could sort of focus on the middle of a short sentence or phrase and read the whole thing at once. I went from reading at about one-quarter conversation speed, to about ten times conversation speed, over the course of a few minutes. I still don’t quite understand what the process was that enabled the change; I just sort of experienced it happening.
One trade-off is that I don’t have full conscious recall of each word when I read things that quickly—but I do tend to be able to pull up a reasonable paraphrasing of the information later if I need to.
I can see both pros and cons to this talent. The pro is obvious: faster reading. The con is that it may cause trouble parsing subtly-worded legal contracts; the sort where one misplaced word may potentially end up with both parties arguing the matter in court. Or anything else where exact wording is important, like preparing a wish for a genie.
Of course, since it seems that you can choose when to use this, um, snapshot reading and when not to, you can gain the full benefit of the pros most of the time while carefully removing the cons in any situation where they become important.
I call that “skimming”, but maybe that’s something else?
Assuming you’re literally talking about subvocalization, it depends on what I’m reading (I do it more with poetry than with academic papers), on how quickly I’m reading (I don’t do that as much when skimming), on whether I know what the author’s voice sounds like (in which case I subvocalize in their voice—which slows me down a great deal if I’m reading stuff by someone who speaks slowly and with a strong foreign accent e.g. Benedict XVI), and possibly on something else I’m overlooking at the moment.
I do not notice that I am subvocalising when I read, even when I am looking for it (I tested this on the wiki page that you linked to). I do notice, however, that it mentions that subvocalising is often not detectable by the person doing the subvocalising.
More specifically, if I place my hand lightly on my throat while reading, I feel no movement of the muscles; and I am able to continue reading while swallowing.
So, no, I don’t think I’m talking about subvocalising. I’m talking about an imaginary voice in my head that narrates my thought processes.
Hmmm… my inner monologue does not tend to speak in the voice of someone whose voice I know. I can get it to speak in other people’s voices, or in what I imagine other people’s voices to sound like, if I try to, but it defaults to a sort of neutral gear which, now that I think about it, sounds like a voice but not quite like my (external) voice. Similar, but not the same. (And, of course, the way that I hear my voice when I speak differs from how I hear it when recorded on tape—my inner monologue sounds more like the way I hear my voice, but still somewhat different)
...this is strange. I don’t know who my inner monologue sounds like, if anyone.
Mine usually sounds more or less like I’m whispering.
My inner monologue definitely doesn’t sound like whispering; it’s a voice, speaking normally.
I think I can best describe it by saying that it sounds more like I imagine myself sounding than like I actually sound to myself; but I suspect that’s recursive, i.e. I imagine myself sounding like that because that’s what my inner monologue sounds like.
Does your inner voice sound different depending on your mood or emotional state?
Yes. If my mood or emotional state is sufficiently severe, then my inner voice will sound different; both in choice of phrasing and in tone of voice.
It’s not an audible voice, as such; I think the best way that I can describe it is to say that it’s very much like a memory of a voice, except that it’s generated on the fly instead of being, well, remembered. As such, it has most of the properties of an audible voice (except actual audibility), including such markers as ‘tone of voice’. This tone changes with my emotional state in reasonable ways; that is, if I am sufficiently angry, then my inner voice may take on an angry, menacing tone.
If my emotional state is not sufficiently severe, then I am unable to notice any change in my inner-voice tone. I also note that my spoken voice shows a noticeable change of tone at significantly lower emotional severity than my inner voice does.
I was about to say that it’s the same for me, but then I remembered that at least for me actual memories of voices can be very vivid (especially in a hypnagogic state or when I’m reading stuff written by that person), whereas my inner voice seldom is. (And memories of voices can also be generated on the fly—I can pick a sentence and imagine a bunch of people I know each saying it, even if I can’t remember hearing any of them actually ever saying that sentence.)
Huh. Either my memories of voices are less vivid than yours, or my inner monologue is more vivid. Quite possibly both.
Of course, when I remember someone saying something, it can include information aside from the voice (e.g. where it happened, the surroundings at the time) which is never included in my inner monologue. I consider these details to be separate from the voice-memory; the voice-memory is merely a part of the whole “what-he-said” memory.
BTW, I think I have one kind of memory for people’s timbre, rate of speech, volume, accent, etc., and one for sequences of phonemes, and when recalling what a person sounded like when saying a given sentence I combine the two on the fly.
My experience is that I generally have some kind of fuzzy idea of what I’m going to say before I say it. When I actually speak, sometimes it comes out as a coherent and streamlined sentence whose contents I figure out as I speak it. At other times—particularly if I’m feeling nervous, or trying to communicate a complicated concept that I haven’t expressed in speech before—my fuzzy idea seems to disintegrate at the moment I start talking, and even if I had carefully rehearsed a line many times in my mind, I forget most of it. Out comes either what feels to me like an incoherent jumble, or a lot of “umm, no, wait”.
Writing feels a lot easier, possibly because I have the stuff-that-I’ve-already-written right in front of me and I only need to keep the stuff that I’m about to say in memory, instead of also needing to constantly remind myself about what I’ve said so far.
ETA: Here’s an earlier explanation of what writing sometimes feels like to me.
The parts of your brain that generate speech and the parts that generate your internal sense-of-self are less integrated than CCC’s. An interesting experiment might be to stop ascribing ownership to your words when you find yourself surprised by them—i.e., instead of framing the phenomenon as “I said that”, frame it as “my brain generated those words”.
Learn to recognize that the parts of your brain that handle text generation and output are no more “you” than the parts of your brain that handle motor reflex control.
EDIT: Is there a problem with this post?
No! The parts of my brain that handle text generation are the only parts that… *slap*… Ow. Nevermind. It seems we have reached an ‘understanding’.
Right!
I mean, I do realize you’re being funny, but pretty much exactly this.
I don’t recommend aphasia as a way of shock-treating this presumption, but I will admit it’s effective. At some point I had the epiphany that my language-generating systems were offline but I was still there; I was still thinking the way I always did, I just wasn’t using language to do it.
Which sounds almost reasonable expressed that way, but it was just about as creepy as the experience of moving my arm around normally while the flesh and bone of my arm lay immobile on the bed.
A good way I’ve found to reach this state is to start to describe a concept in your internal monologue but “cancel” the monologue right at the start—the concept will probably have been already synthesized and will just be hanging around in your mind, undescribed and unspoken but still recognizable.
[edit] Afaict the key step is noticing that you’ve started a monologue, and sort of interrupting yourself mentally.
So, FWIW, after about 20 minutes spent trying to do this I wasn’t in a recognizably different state than I was when I started. I can kind of see what you’re getting at, though.
Right, I mean as a way of realizing that there’s something noticeable going on in your head that precedes the internal monologue. I wrote that comment wrong. Sorry for wasting your time.
Ah! I get you now. (nods) Yeah, that makes sense.
That’s… hm.
I’m not sure I know what you mean.
I’ll experiment with behaving as if I did when I’m not in an airport waiting lounge and see what happens.
I’ve had this happen to me semi-accidentally, the resulting state is extremely unpleasant.
A smash equilibrium.
It’s a bit rude to try to change others’ definition of themselves unasked.
Where does that intersect with “that which can be destroyed by the truth, should be”?
“I’m dying to know whether we’re stumbling on a difference in the way we think or the way we describe what we think, here.” wasn’t asking?
The problem is that “what is part of you” at the interconnectedness-level of the brain is largely a matter of preference, imo; that is, treating it as truth implies taking a more authoritative position than is reasonable. Same goes for 2): there’s a difference between telling somebody what you think and outright stating that their subjective self-image is factually incorrect.
I appear to be confused.
Are you implying that subjective self-image is something that we should respect rather than analyze?
I think there’s a difference between analysis and authoritative-sounding statements like “X is not actually a part of you, you are wrong about this”, especially when it comes to personal attributes like selfness, especially in a thread demonstrating the folly of the typical-mind assumption.
Interesting. It was not my intent to sound any more authoritative than typical. Are there particular signals that indicate abnormally authoritarian-sounding statements that I should watch out for? And are there protocols that I should be aware of here that determine who is allowed to sound more or less authoritarian than whom, and under what circumstances?
I should have mentioned this earlier, but I did not downvote you so this is somewhat conjectured. In my opinion it’s not a question of who but of topic—specifically, and this holds in a more general sense, you might want to be cautious when correcting people about beliefs that are part of their self-image. Couch it in terms like “I don’t think”, “I believe”, “in my opinion”, “personally speaking”. That’ll make it sound less like you think you know their minds better than they do.
FWIW, I understood you in the first place to be saying that this was a choice, and it was good to be aware of it as a choice, rather than making authoritarian statements about what choice to make.
I’d certainly call them much more significant to my identity than, e.g., my deltoid muscle, or some motor-function parts of my brain.
It may be useful to recognize that this is a choice, rather than an innate principle of identity. The parts that speak are just modules, just like the parts that handle motor control. They can (and often do) run autonomously, and then the module that handles generating a coherent narrative stitches together an explanation of why you “decided” to cause whatever they happened to generate.
This sounds like a theory of identity as epiphenomenal homunculus. A module whose job is to sit there weaving a narrative, but which has no effect on anything outside itself (except to make the speech module utter its narrative from time to time). “Mr Volition”, as Greg Egan calls it in one of his stories. Is that your view?
More or less, yes. It does have some effect on things outside itself, of course, in that its ‘narrative’ tends to influence our emotional investment in situations, which in turn influences our reactions.
It seems to me that the Mr. Volition theory suffers from the same logical flaw as p-zombies. How would a non-conscious entity, a p-zombie, come to talk about consciousness? And how does an epiphenomenon come to think it’s in charge, how does it even arrive at the very idea of “being in charge”, if it was never in charge of anything?
An illusion has to be an illusion of something real. Fake gold can exist only because there is such a thing as real gold. There is no such thing as fake mithril, because there is no such thing as real mithril.
By that analogy, then, fake gods can exist only because there is such a thing as real gods; fake ghosts can only exist because there is such a thing as real ghosts; fake magic can only exist because there is such a thing as real magic.
It’s perfectly possible to be ontologically mistaken about the nature of one’s world.
Indeed. There is real agency, so people have imagined really big agents that created and rule the world. People’s consciousness persists, even after the interruptions of sleep, and they imagine it persists even after death. People’s actions appear to happen purely by their intention, and they imagine doing arbitrary things purely by intention. These are the real things that the fakes, pretences, or errors are based on.
But how do the p-zombie and the homunculus even get to the point of having their mistaken ontology?
The p-zombie doesn’t, because the p-zombie is not a logically consistent concept. Imagine if there was a word that meant “four-sided triangle”—that’s the level of absurdity that the ‘p-zombie’ idea represents.
On the other hand, the epiphenomenal consciousness (for which I’ll accept the appellation ‘homunculus’ until a more consistent and accurate one occurs to me) is simply mistaken in that it is drawing too large a boundary in some respects, and too small a boundary in others. It’s drawing a line around certain phenomena and ascribing a causal relationship between those and its own so-called ‘agency’, while excluding others. The algorithm that draws those lines doesn’t have a particularly strong map-territory correlation; it just happens to be one of those evo-psych things that developed and self-reinforced because it worked in the ancestral environment.
Note that I never claimed that “agency” and “volition” are nonexistent on the whole; merely that the vast majority of what people internally consider “agency” and “volition”, aren’t.
EDIT: And I see that you’ve added some to the comment I’m replying to, here. In particular, this stood out:
I don’t believe that “my” consciousness persists after sleep. I believe that a new consciousness generates itself upon waking, and pieces itself together using the memories it has access to as a consequence of being generated by “my” brain; but I don’t think that the creature that will wake up tomorrow is “me” in the same way that I am. I continue to use words like “me” and “I” for two reasons:
Social convenience—it’s damn hard to get along with other hominids without at least pretending to share their cultural assumptions
It is, admittedly, an incredibly persistent illusion. However, it is a logically incoherent illusion, and I have upon occasion pierced it and seen others pierce it, so I’m not entirely inclined to give it ontological reality with p=1.0 anymore.
Do you believe that the creature you are now (as you read this parenthetical expression) is “you” in the same way as the creature you are now (as you read this parenthetical expression)?
If so, on what basis?
Yes(ish), on the basis that the change between me(expr1) and me(expr2) is small enough that assigning them a single consistent identity is more convenient than acknowledging the differences.
But if I’m operating in a more rigorous context, then no; under most circumstances that appear to require epistemological rigor, it seems better to taboo concepts like “I” and “is” altogether.
(nods) Fair enough.
I share something like this attitude, but in normal non-rigorous contexts I treat me-before-sleep and me-after-sleep as equally me in much the same way as you do me(expr1) and me(expr2).
More generally, my non-rigorous standard for “me” is such that all of my remembered states when I wasn’t sleeping, delirious, or younger than 16 or so unambiguously qualify for “me”dom, despite varying rather broadly amongst themselves. This is mostly because the maximum variation along salient parameters among that set of states seems significantly smaller than the minimum variations between that set and the various other sets of states I observe others demonstrating. (If I lived in a community seeded by copies of myself-as-of-five-minutes ago who could transfer memories among one another, I can imagine my notion of “I” changing radically.)
Nice! I like that reasoning.
I personally experience a somewhat less coherent sense of self, and what sense of self I do experience seems particularly maladaptive to my environment, so we definitely seem to have different epistemological and pragmatic goals—but I think we’re applying very similar reasoning to arrive at our premises.
So in the following sentence...
“I am a construction worker”
Can you taboo ‘I’ and ‘am’ for me?
This body works construction.
Jobs are a particularly egregious case where tabooing “is” seems like a good idea—do you find the idea that people “are” their jobs a particularly useful encapsulation of the human experience? Do you, personally, find yourself fully encapsulated by the ritualized economic actions you perform?
But if ‘I’ differ day to day, then doesn’t this body differ day to day too?
I am fully and happily encapsulated by my job, though I think I may have the only job where this is really possible.
Certainly. How far do you want to go? Maps are not territories, but some maps provide useful representations of territories for certain contexts and purposes.
The danger represented by “I” and “is” comes from their tendency to blow away the map-territory relation, and convince the reader that an identity exists between a particular concept and a particular phenomenon.
Is the camel’s nose the same thing as his tail? Are the nose and the tail parts of the same thing? What needs tabooing is “same” and “thing”.
I have also found that process useful (although like ‘I’, there are contexts where it is very cumbersome to get around using them).
Suppose I am standing next to a wall so high that I am left with the subjective impression that it just goes on forever and ever, with no upper bound. Or next to a chasm so deep that I am left with the subjective impression that it’s bottomless.
Would you say these subjective impressions are impossible?
If possible, would you say they aren’t illusory?
My own answer would be that such subjective impressions are both illusory and possible, but that this is not evidence of the existence of such things as real bottomless pits and infinitely tall walls. Rather, they are indications that my imagination is capable of creating synthetic/composite data structures.
Mesh mail “mithril” vest, $335.
Setting aside the question of whether this is fake iron man armor, or a real costume of the fake iron man, or a fake costume designed after the fake iron man portrayed by special effects artists in the movies, I think an illusion can be anything that triggers a category recognition by matching some of the features strongly enough to trigger the recognition, while failing to match on a significant amount of the other features that are harder to detect at first.
That’s not fake mithril, it’s pretend mithril.
To have the recognition, there must already have been a category to recognise.
A tape recorder is a non-conscious entity. I can get a tape recorder to talk about consciousness quite easily.
Or are you asking how it would decide to talk about consciousness? It’s a bit ambiguous.
I think it’s not an epiphenomenon, it’s just wired in more circuitously than people believe. It has effects; it just doesn’t have some effects that we tend to ascribe to it, like decisionmaking and highlevel thought.
“How would a non-conscious entity, a p-zombie, come to talk about consciousness?”
By functional equivalence. A zombie Chalmers will utter sentences asserting its possession of qualia; a zombie Dennett will utter sentences denying the same.
The only get-out is to claim that it is not really talking at all.
The epiphenomenal homunculus theory claims that there’s nothing but p-zombies, so there are no conscious beings for them to be functionally equivalent to. After all, as the alien that has just materialised on my monitor has pointed out to me, no humans have zardlequeep (approximate transcription), and they don’t go around insisting that they do. They don’t even have the concept to talk about.
The theory that there is nothing but zombies runs into the difficulty of explaining why many of them would believe they are non-zombies. The standard p-zombie argument, that you can have qualia-less functional duplicates of non-zombies does not have that problem.
The theory that there is nothing but zombies runs into the much bigger difficulty of explaining to myself why I’m a zombie. When I poke myself with a needle, I sure as hell have the quale of pain.
And don’t tell me it’s an illusion—any illusion is a quale by itself.
Don’t tell me, tell Dennett.
The standard p-zombie argument still has a problem explaining why p-zombies claim to be conscious. It leaves no role for consciousness in explaining why conscious humans talk of being conscious. It’s a short road (for a philosopher) to then argue that consciousness plays no role, and we’re back with consciousness as either an epiphenomenon or non-existent, and the problem of why—especially when consciousness is conceded to exist but to cause nothing—the non-conscious system claims to be conscious.
Even worse, the question of how the word “conscious” can possibly even refer to this thing that is claimed to be epiphenomenal, since the word can’t have been invented in response to the existence or observations of consciousness (since there aren’t any observations). And in fact there is nothing to allow a human to distinguish between this thing, and every other thing that has never been observed, so in a way the claim that a person is “conscious” is perfectly empty.
ETA: Well, of course one can argue that it is defined intensionally, like “a unicorn is a horse with a single horn extending from its head, and [various magical properties]” which does define a meaningful predicate even if a unicorn has never been seen. But in that case any human’s claim to have a consciousness is perfectly evidence-free, since there are no observations of it with which to verify that it (to the extent that you can even refer to a particular unobservable thing) has the relevant properties.
Yes. That’s the standard epiphenomenalism objection.
Often a bit too short.
I scrawl on a rock “I am conscious.” Is the rock talking about consciousness?
No, you are.
I run a program that randomly outputs strings. One day it outputs the string “I am conscious.” Is the program talking about consciousness? Am I?
No, see nsheppard’s comment.
Maybe I’m being unnecessarily cryptic. My point is that when you say that something is “talking about consciousness,” you’re assigning meaning to what is ultimately a particular sequence of vibrations of the air (or a particular pattern of pigment on a rock, or a particular sequence of ASCII characters on a screen). I don’t need a soul to “talk about souls,” and I don’t need to be conscious to “talk about consciousness”: it just needs to happen to be the case that my mouth emits a particular sequence of vibrations in the air that you’re inclined to interpret in a particular way (but that interpretation is in your map, not the territory).
In other words, I’m trying to dissolve the question you’re asking. Am I making sense?
Not yet. I really think you need to read the GLUT post that nsheppard linked to.
You do need to have those concepts, though, and concepts cannot arise without there being something that gave rise to them. That something may not have all the properties one ascribes to it (e.g. magical powers), but discovering that that one was mistaken about some aspects does not allow one to conclude that there is no such thing. One still has to discover what the right account of it is.
If consciousness is an illusion, what experiences the illusion?
This falls foul of the GAZP v. GLUT thing. It cannot “just happen to be the case”. When you pull out for attention the case where a random process generates something that appears to be about consciousness, out of all the other random strings, you’ve used your own concept of consciousness to do that.
I’ve read GLUT. Have you read The Zombie Preacher of Somerset?
I think so; at least, I have now. (I don’t know why someone would downvote your comment, it wasn’t me.) So, something went wrong in his head, to the point that asking “was he, or was he not, conscious” is too abstract a question to ask. Nowadays, we’d want to do science to someone like that, to try to find out what was physically going on.
Sure, I’m happy with that interpretation.
That is not obvious. You do need to be a language-user to use language, you do need to know English to communicate in English, and so on. If consciousness involves things like self-reflection and volition, you do need to be conscious to intentionally use language to express your reflections on your own consciousness.
In the same way that a philosophy paper does… yes. Of course, the rock is just a medium for your attempt at communication.
I write a computer program that outputs every possible sequence of 16 characters to a different monitor. Is the monitor which outputs ‘I am conscious’ talking about consciousness in the same way the rock is? Whose attempt at communication is it a medium for?
Your decision to point out the particular monitor displaying this message as an example of something imparts information about your mental state in exactly the same way that your decision to pick a particular sequence of 16 characters out of platonia to engrave on a rock does.
See also: on GLUTs.
The reader’s. Pareidolia is a signal-processing system’s attempt to find a signal.
On a long enough timeline, all random noise generators become hidden word puzzles.
Why would we have these modules that seem quite complex, and likely to negatively affect fitness (thinking’s expensive), if they don’t do anything? What are the odds of this becoming prevalent without a favourable selection pressure?
High, if they happen to be foundational.
Sometimes you get spandrels, and sometimes you get systems built on foundations that are no longer what we would call “adaptive”, but that can’t be removed without crashing systems that are adaptive.
Evo-psych just-so stories are cheap.
Here’s one: it turns out that ascribing consistent identity to nominal entities is a side-effect of one of the most easily constructed implementations of “predict the behavior of my environment.” Predicting the behavior of my environment is enormously useful, so the first mutant to construct this implementation had a huge advantage. Pretty soon everyone was doing it, and competing for who could do it best, and we had foreclosed the evolutionary paths that allowed environmental prediction without identity-ascribing. So the selection pressure for environmental prediction also produced (as an incidental side-effect) selection pressure for identity-ascribing, despite the identity-ascribing itself being basically useless, and here we are.
I have no idea if that story is true or not; I’m not sure what I’d expect to see differentially were it true or false. My point is more that I’m skeptical of “why would our brains do this if it weren’t a useful thing to do?” as a reason for believing that everything my brain does is useful.
(nods) Yeah, OK. Take 2.
It’s also broadly similar to the difference between explicit and implicit knowledge. Have you ever practiced a skill enough that it goes from being something where you hold the “outline” of the skill in explicit memory as you perform it, to being something where you simply perform it without that “outline”? For example, driving to an unfamiliar location and thinking “ok, turn right here, turn left here” vs. just turning in the correct direction at each intersection, or something similar to that?
Yes, I have. Driving is such a skill; when I was first learning to drive, I had to think about driving (”...need to change gear, which was the clutch again? Ordered CBA, so on the left...”). Now that I am more practiced, I can just think about changing gear and change gear, without having to examine my actions in so much detail. Which allows my internal monologue to wander off in other directions.
On a couple of occasions, as a result of this thread, I’ve tried just quietening down my internal monologue—just saying nothing for a bit—and observing my own thought processes. I find that the result is that I pay a lot more attention to audio cues—if I hear a bird in the distance, I picture a bird. There are associations going on inside my head that I’d never paid much attention to before.
Is this still true under significant influence of alcohol?
I wouldn’t know, I don’t drink alcohol.
Well, if you ever did want to experience what TheOtherDave describes, that might be a good way to induce it.
I’ve found I can quiet my internal monologue if I try. (It’s tricky, though; the monologue starts up again at the slightest provocation—I try to observe my own thought processes without the monologue, and as soon as something odd happens, the internal monologue says “That’s odd… ooops.”)
I’m not sure if I can talk without the monologue automatically starting up again, but I’ll try that first.
I wanted to add another data point, but I’m not sure the one I got can even be called that: I have no consistent memory on this subject. I am notoriously horrible at luminosity and introspection. When I do try to ask my brain, I receive a model/metaphor based on what I already know of neuroscience, which may or may not contain data I couldn’t access otherwise, and which is presented as a machine I can manipulate in the hopes of trying to manipulate the states of distant brains. The machine is clearly based on whatever concepts happen to be primed, and the results would probably be completely different in every way if I tried this an hour later. Note that the usage of the word “I” here is inconsistent and ill-defined. This might be related to the fact that this brain is self-diagnosed with possible ego-death (in the good way).
Edit: it is also noticed that, as seems to be the case with most attempts at introspection, the act of observation strongly and adversely influences the functioning of the relevant circuitry, in this case heavily altering my speech patterns.
Huh. The way you describe attempting introspection is exactly the way our brain behaves when we try to access any personal memories outside of working memory. This doesn’t seem to be as effective as whatever the typical way is, as our personal memory is notoriously atrocious compared with others’.
I don’t seem to have any sort of ego death. Vigil might have something similar, though.
Hmm, this seems related to another datapoint: reportedly, when I’m asked about my current mood while distracted, I answer “I can’t remember”.
A more tenuously related datapoint is that in fiction, I try to design BMIs around emulating having memorized GLUTs.
And some other thing come to think of it: I do have abnormal memory function in a bunch of various ways.
Basically; maybe a much larger chunk of my cognition passes through memory machinery for some reason?
What are GLUTs? I’m guessing you’re not talking about Glucose Transporters.
This seems like a plausible hypothesis. Alternatively, perhaps your working memory is less differentiated from your long-term memory.
Hm. I have the same reaction if I’m asked what I’m thinking about, but I don’t think it’s because my thoughts are running through my long-term memory, so much as my train of thought usually gets flushed out of working memory when other people are talking.
GLUT = Giant Look-Up Table. Basically, implementing multiplication by memorizing the multiplication tables up to 2 147 483 647.
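A toy sketch in Python, with the bound shrunk to something buildable (the full range up to 2 147 483 647 would need far more entries than any real memory could hold):

    N = 1000  # toy bound; the full 2 147 483 647 range is hopelessly infeasible
    # memorize every answer up front...
    TABLE = {(a, b): a * b for a in range(N) for b in range(N)}

    def glut_multiply(a, b):
        # ...so that no arithmetic happens at call time, only retrieval
        return TABLE[(a, b)]

    assert glut_multiply(7, 12) == 84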
Hmm, that’s an interesting theory. They are not necessarily mutually exclusive.
And no, I’m not talking about trying to remember what happened a few seconds ago. I mean direct sensory experiences; as in, someone holds up 3 fingers in the darkness and asks “how many fingers am I holding up right now” and I answer “I can’t remember” instead of “I can’t see”.
Giant Look-Up Table
What are BMIs? I’m guessing you’re not talking about body mass indexes.
:-)
Brain-Machine Interface.
BTW, my internal monologue usually sounds quite different from what I actually say in most casual situations: for example, it uses less dialectal/non-standard language and more technical terms. (IOW, it resembles the way I write more than the way I speak. So, “I know what I’m going to say, and here I am saying it” is my default state when writing, and “words are coming out of my mouth, and I’m kind of surprised by what I’m hearing myself say” is the state I’m most often in when speaking.) Anyone else find the same?
That’s pretty close to how I operate, except the words are more like the skeletons of the thoughts than the thoughts themselves, stripped of all the internal connotation and imagery that provided 99% of the internal meaning.
Well, which one do you prefer?
Oh, that’s hard. The latter was awful, but of course most of that was due to all the other crap that was going on at the time. If I take my best shot at adjusting for that… well, I am most comfortable being the voice in my head. But not-being the voice in my head has an uncomfortable gloriousness associated with it. I doubt the latter is sustainable, though.
When you’re playing a sport… wait, maybe you don’t… okay, when you’re playing an instrum—hm. Surely there is a kinesthetic skill you occasionally perform, during which your locus of identity is not in your articulatory loop? (If not, fixing that might be high value?) And you can imagine being in states similar to that much of the time? I would imagine intense computer programming sessions would be more kinesthetic than verbal. Comment linked to hints at what my default thinking process is like.
When I’m playing music or martial arts, and I’m doing it well, I’m usually in a state of flow—not exactly self-aware in the way I usually think of it.
When I’m working inside a computer or motorcycle, I think I’m less self-aware, and what I’m aware of is my manipulating actuators, and the objects that I need to manipulate, and what I need to do to them.
When I’m sitting in my armchair, thinking “who am I?” this is almost entirely symbolic, and I feel more self-aware than at the other times.
So, I think having my locus of identity in my articulatory loop is correlated with having a strong sense of identity.
I’m not sure whether my sense of identity would be weaker there, and stronger in a state of kinesthetic flow, if I spent more time sparring than sitting.
I wouldn’t want to identify with the voice in my head. It can only think one thought at a time; it’s slow.
How many things can you think of at once? I’m curious now.
I’m not sure how to answer that question. But when I think verbally I often lose track of the bigger picture of what I’m doing and get bogged down on details or tangents.
I play other people’s voices through my head as I imagine what they would say (or are saying, when I interpret text,) but I don’t have my own voice in my head as an internal monologue, and I think of “myself” as the conductor, which directs all the voices.
What happens when you are not thinking about what anyone else is saying or would say?
I think in terms of ideas and impulses, not voices. I can describe an impulse as if it had been expressed in words, but when it’s going through my head, it’s not.
I’d be kind of surprised if people who have internal monologues need an inner voice telling them “I’m so angry, I feel like throwing something!” in order to recognize that they feel angry and have an urge to throw something. I just recognize urges directly, including ones which are more subtle and don’t need to be expressed externally, without needing to mediate them through language.
It definitely hasn’t been my experience that not thinking in terms of a distinct inner “voice” makes it hard for me to pin down my thoughts; I have a much easier time following my own thought processes than most people I know.
In our case at least, you are correct that we don’t need to vocalize impulses. Emotions and urges seem to run on a different, concurrent modality.
Do ideas and impulses both use the same modality for you?
Maybe not quite the same, but the difference feels smaller than that between impulse and language.
To me, words are what I need to communicate with other people, not something I need to represent complex ideas within my own head.
I can represent a voice in my head if I choose to, but I don’t find much use for it.
Not quite the same thing, but I’ve discovered that “I feel ragged around the edges” is my internal code for “I need B12”.
One part of therapy for some people is giving them a vocabulary for their emotions.
I can recognise that I’m angry without the voice. When I’m angry, the inner voice will often be saying unflattering things about the object of my anger; something along the lines of “Aaaaaargh, this is so frustrating! I wish it would just work like it’s supposed to!” Wordless internal angry growls may also happen.
It’s something like watching a movie. You can see hands typing and words appearing on the screen, but you aren’t precisely thinking them. You can feel lips moving and hear words forming in the air, but you aren’t precisely thinking them. They’re just things your body is doing, like walking. When you walk, you don’t consciously think of each muscle to move, do you? Most of the time you don’t even think about putting one foot in front of the other; you just think about where you’re going (if that) and your motor control does the rest.
For some people, verbal articulation works the same way. Words get formed, maybe even in response to other peoples’ words, but it’s not something you’re consciously acting on; those processes are running on their own without conscious input.
I find this very strange.
When I walk, yes, I don’t consciously think of every muscle; but I do decide to walk. I decide my destination, I decide my route. (I may, if distracted, fall by force of habit into a default route; on noticing this, I can immediately override).
So… for someone without the internal monologue… how much do you decide about what you say? Do you just decide what subject to speak about, what opinions to express, and leave the exact phrasing up to the autopilot? Or do you not even decide that—do you sit there and enjoy the taste of ice cream while letting the conversation run entirely by itself?
Didn’t think this was going to be my first contribution to LessWrong, but here goes (hi, everybody, I’m Phil!)
I came to what I like to think was a realisation useful to my psychological health a few months ago when I was invited to realise that there is more to me than my inner monologue. That is, I came to understand that identifying myself as only the little voice in my head was not good for me in any sense. For one thing, my body is not part of my inner monologue, ergo I was a fat guy, because I didn’t identify with it and therefore didn’t care what I fed it on. For another, one of the things I explicitly excluded from my identity was the subprocess that talks to people. I had (and still have) an internal monologue, but it was at best only advisory to the talking process, so you can count me as one of the people for whom conversation is not something I’m consciously acting on. Result: I didn’t consider the person people meet and talk to to be “me”, but (as I came to understand), nevertheless I am held responsible for everything he says and does.
My approach to this was somewhat luminous avant (my reading of) la lettre: I now construe my identity as consisting of at least two sub-personalities. There is one for my inner monologue, and one for the version of me that people get to meet and talk to. I call them Al and Greg, respectively, so that by giving them names I hopefully remember that neither alone is Phil. So, to answer CCC’s question: Al is Greg’s lawyer, and Greg is Al’s PR man. When I’m alone, I’m mostly Al, cogitating and opining and whatnot to the wall, with the occasional burst of non-verbal input from Greg that amounts to “That’s not going to play in (Peoria|the office|LessWrong comment threads)”. On the other hand, when other people are around, I’m mostly Greg, conversating in ways that Al would never have thought of, and getting closer and closer to an impersonation of Robin Williams depending on prettiness and proximity of the ladies in the room. Al could in theory sit back and let Greg do his thing, but he’s usually too busy facepalming or yelling “SHUT UP SHUT UP SHUT UP SHUT UP” in a way that I can’t hear until I get alone again.
The problem I used to have was that I was all on Al’s side. I’d berate myself (that is, I’d identify with Al berating Greg) incessantly for paranoid interpretations of the way people reacted to what I said, without ever noticing that, y’know what, people do generally seem to like Greg, and Greg is also me.
Single data point, but: I can alternate between inner monologue (heard [in somebody else’s voice, not mine(!)]) and no monologue (mainly social activity—say stuff, then catch myself saying it and keep going); stuff just happens. When inner monologue is present, it seems I’m in real time constructing what I imagine the future to be and then adapting to that. I can feel as if my body moved without moving it, but don’t use it for thinking (mainly kinesthetic imagination or whatever). I can force myself to see images, and, at the fringe, close to sleep, can make up symphonies in my mind, but don’t use them to think.
Who’s speaking the voice in your head? Seems like another layer of abstraction.
Obviously the speaker is the homunculus that makes Eliezer conscious rather than a p-zombie.
A collective of neural hardware collectively calling itself “Baughn”. Everyone gets some input.
I have an internal monologue. It’s a bit like a narrator in my head, narrating my thoughts.
I think—and this is highly speculative on my part—that it’s a sign of thinking mainly with the part of the brain that handles language. Whenever I take one of those questionnaires designed to tell whether I use mainly the left or right side of my brain, I land very heavily on the left side—analytical, linguistic, mathematical. I can use the other side if I want to; but I find it surprisingly easy to become almost a caricature of a left-brain thinker.
My internal monologue quite probably restricts me to (mainly) ideas that are easily expressed in English. Up until now, I could see this as a weakness, but I couldn’t see any easy way around it. (One advantage of the internal monologue, on the other hand, is that I usually find it easy to speak my thoughts out loud; because they’re already in word form)
But now, you tell me that you don’t seem to have an internal monologue. Does this mean that you can easily think of things that are not easily expressed in English?
Well.. I can easily think of things I subsequently have serious trouble expressing in any language, sure. Occasionally through reflection via visuals (or kinesthetics, or..), but more often not using such modalities at all.
(See sibling post)
Richard Feynman tells the story of how he learned that thinking isn’t only internal monologue.
Okay, visual I can understand. I don’t use it often, but I do use it on occasion. Kinesthetic, I use even less often, but again I can more-or-less imagine how that works. (Incidentally, I also have a lot of trouble catching a thrown object. This may be related.)
But this ‘no modalities at all’… this intrigues me. How does it work?
All I know is some ways in which it doesn’t work.
I can’t speak for Baughn, but as for myself: sometimes it feels like I know ahead of time what I’m going to say as my inner voice, and sometimes this results in me not actually bothering to say it.
I went on vacation during this discussion, and completely lost track of it in the process—oops. It’s an interesting question, though. Let me try to answer.
First off, using a sensory modality for the purpose of thinking. That’s something I do, sure enough; for instance, right now I’m “hearing” what I’m saying at the same time as I’m writing it. Occasionally, if I’m unsure of how to phrase something, I’ll quickly loop through a few options; more often, I’ll do that without bothering with the “hearing” part.
When thinking about physical objects, sometimes I’ll imagine them visually. Sometimes I won’t bother.
For planning, etc. I never bother—there’s no modality that seems useful.
That’s not to say I don’t have an experience of thinking. I’m going to explain this in terms of a model of thought[1] that’s been handy for me (because it seems to fit me internally, and also because it’s handy for models in fiction-writing where I’m modifying human minds), but keep in mind that there is a very good chance it’s completely wrong. You might still be able to translate it to something that makes sense to you.
..basically, the workspace model of consciousness combined with a semi-modular brain architecture. That is to say, where the human mind consists of a large number of semi-independent modules, and consciousness is what happens when those modules are all talking to each other using a central workspace. They can also go off and do their own thing, in which case they’re subconscious.
Now, some of the major modules here are sensory. For good reason; being aware of your environment is important. It’s not terribly surprising, then, that the ability to loop information back—feeding internal data into the sensory modules, using their (massive) computational power to massage it—is useful, though it also involves what would be hallucinations if I wasn’t fully aware it’s not real. It’s sufficiently useful that, well, it seems like a lot of people don’t notice there’s anything else going on.
Non-sensory modes of thought, now… sensory modes are frequently useful, but not always. When they aren’t, they’re noise. In that case—and I didn’t quite realise that was going on until now—I’m not just not hallucinating an internal monologue, but in fact entirely disconnecting my senses from my conscious experience. It’s a bit hard to tell, since they’re naturally right there if I check, but I can be extremely easy to surprise at times.
Instead, I have an experience of… everything else. All the modules normally involved with thinking, except the sensory ones. Well, probably not all of them at once, but missing the sensory modules appears to be a sufficiently large outlier that the normal churn becomes insignificant...
Did that help? Hm. Maybe if you think about said “churn”; it’s not like you always use every possible method of thought you’re capable of, at the same time. I’m just including sensory modalities in the list of hot-swappable ones?
...
This is hard.
One more example, I suppose. I mentioned that, while I was writing this, I hallucinated my voice reading it; this appears to be necessary to actually writing. Not for deciding on the meaning I’m trying to get across, but in order to serialise it as English. Not quite sure what’s going on there, since I don’t seem to be doing it ahead of time—I’m doing it word by word.
1: https://docs.google.com/document/d/1yArXzSQUqkSr_eBd6JhIECdUKQoWyUaPHh_qz7S9n54/edit#heading=h.ug167zx6z472 may or may not be useful in figuring out what I’m talking about; it’s a somewhat more long-winded use of the model. It also has enormous macroplot spoilers for the Death Game SAO fanfic, which.. you probably don’t care about.
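For what it’s worth, here is a minimal toy sketch of the workspace-plus-modules picture described above. Every name and number in it is invented for illustration; it’s a cartoon of the model, not a claim about how brains work.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    source: str
    content: str
    salience: float

class Module:
    """A semi-independent processor; 'attached' means it can reach the workspace."""
    def __init__(self, name, salience, attached=True):
        self.name, self.salience, self.attached = name, salience, attached

    def react(self, broadcast):
        # Toy behaviour: every module just offers its own take on the broadcast.
        return Bid(self.name, f"{self.name}'s take on {broadcast!r}", self.salience)

def conscious_step(modules, broadcast):
    """One cycle: attached modules bid; the most salient bid becomes the next
    'conscious' content that gets broadcast back to everyone."""
    bids = [m.react(broadcast) for m in modules if m.attached]
    return max(bids, key=lambda b: b.salience, default=None)

modules = [Module("hearing", 0.9), Module("planning", 0.7), Module("vision", 0.8)]
modules[0].attached = False  # "disconnecting my senses from my conscious experience"
print(conscious_step(modules, "plan the day"))  # planning and vision still compete
```

Detaching a module mirrors the “disconnecting my senses” report: the module still exists and can still compute, but its output never reaches the shared workspace.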
Okay, let me summarise your statement so as to ensure that I understand it correctly.
In short, you have a number of internal functional modules in the brain; each module has a speciality. There will be, for example, a module for sight; a module for hearing; a module for language, and so on. Your thoughts consist—almost entirely—of these modules exchanging information in some sort of central space.
The modules are, in effect, having a chat.
Now, you can swap these modules out quite a bit. When you’re planning what to type, for example, it seems you run that through your ‘hearing’ module, in order to check that the word choice is correct; you know that this is not something which you are actually hearing, and thus are in no danger of treating it as a hallucination, but as a side effect of this your hearing module isn’t running through the actual input from your ears, and you may be missing something that someone else is saying to you. (I imagine that sufficiently loud or out-of-place noises are still wired directly to your survival subsystem, though, and will get your attention as normal).
But you don’t have to use your hearing module to think with. Or your sight module. You have other modules which can do the thinking, even when those modules have nothing to do. When your sensory modules have nothing to add, you can and do shut them out of the main circuit, ignoring any non-urgent input from those modules.
Your modules communicate by some means which are somehow independent of language, and your thoughts must be translated through your hearing module (which seems to have your language module buried inside it) in order to be described in English.
This is very different to how I think. I have one major module—the language module (not the hearing module; there’s no audio component to this, just a direct language model)—which does almost all my thinking. Other modules can be used, but it’s like an occasional illustration in a book—very much not the main medium. (And also like an illustration in that it’s usually visual, though not necessarily limited to two dimensions).
When it comes to my internal thoughts, all modules that are not my language module are unimportant in comparison. I suspect that some modules may be so neglected as to be near nonexistent, and I wonder which modules those could be.
My sensory modules appear to be input-only. I can ignore them, but I can’t seem to consciously run other information into them. (I still dream, which I imagine indicates that I can subconsciously run other information through my sensory modules)
This leaves me with three questions:
Aside from your sensory modules, what other module(s) do you have?
Am I correct in thinking that you still require at least one module in order to think (but that can be any one module)?
When your modules share information, what form does that information take?
I imagine these will be difficult to translate to language, but I am very curious as to what your answers will be.
Your analysis is pretty much spot on.
It’s interesting to me that you say your hearing and language modules are independent. I mean, it’s reasonably obvious that this has to be possible—deaf people do have language—but it’s absolutely impossible for me to separate the two, at least in one direction; I can’t deal with language without ‘hearing’ it.
And I just checked; it doesn’t appear I can multitask and examine non-language sounds while I’m using language, either. For comparison, I absolutely can (re)use e.g. visual modules while I’m writing this, although it gets really messy if I try to do so while remaining conscious of what they’re doing—that’s not actually required, though.
Well… my introspection isn’t really good enough to tell, and it’s really more of a zeroth-approximation model than something I have a lot of confidence in. That said, I suspect the question doesn’t have an answer even in principle; that there’s no clear border between two adjacent subsystems, so it depends on where you want to draw the line. It doesn’t help that some elements of my thinking almost certainly only exist as a property of the communication between other systems, not as a physical piece of meat in itself, and I can’t really tell which is which.
I think if it was just one, I wouldn’t really be conscious of it. But that’s not what you asked, so the answer is “Probably yes”.
I’m very tempted to say “conscious experience”, here, but I have no real basis for that other than a hunch. I’m not sure I can give you a better answer, though. Feelings, visual input (or “hallucinations”), predictions of how people or physical systems will behave, plans—not embedded in any kind of visualization, just raw plans—etc. etc. And before you ask what that’s like, it’s a bit like asking what a Python dictionary feels like.. though emotions aren’t much involved, at that level; those are separate.
The one common theme is that there’s always at least one meta-level of thought associated. Not just “Here’s a plan”, but “Here’s a plan, and oh by the way, here’s what everyone else in the tightly knit community you like to call a brain thinks of the plan. In particular, “memory” here just pattern-matched it to something you read in a novel, which didn’t work, but then again a different segment is pointing out that fictional evidence is fictional.”
...without the words, of course.
So the various ideas get bounced back and forth between various segments of my mind, and that bouncing is what I’m aware of. Never the base idea, but all the thinking about the idea… well, it wouldn’t really make sense to be “aware of the base idea” if I wasn’t thinking about it.
Sight is something else again. It certainly feels like I’m aware of my entire visual field, but I’m at least half convinced that’s an illusion. I’m in a prime position to fool myself about that.
This may be related to the fact that I learnt to read at a very young age; when I read, I run my visual input through my language module. The visual module pre-processes the input to extract the words, which are then fed to the language module directly.
At least, that’s what I think is happening.
Running the language module without the hearing module a lot, and from a young age, probably helped quite a bit to separate the two.
Hmph. Disappointing, but thanks for answering the question.
I think I was hoping for more clearly defined modules than appears to be the case. Still, what’s there is there.
Now, this is interesting. I’m really going to have to go and think about this for a while. You have a kind of continual meta-commentary in your mind, thinking about what you’re thinking, cross-referencing with other stuff… that seems like a useful talent to have.
It also seems that, by concentrating more on the individual modules and less on the inter-module communication, I pretty much entirely missed where most of your thinking happens.
One question comes to mind; you mention ‘raw plans’. You’ve correctly predicted my obvious question—what raw plans feel like—but I still don’t really have much of a sense of it, so I’d like to poke at that a bit if you don’t mind.
So; how are these raw plans organised?
Let us say, for example, that you need to plan… oh, say, to travel to a library, return one set of books, and take out another. Would the plan be a series of steps arranged in order of completion, or a set of subgoals that need to be accomplished in order (subgoal one: find the car keys); or would the plan be simply a label saying ‘LIBRARY PLAN’ that connects to the memory of the last time you went on a similar errand?
As for me, I have a few different ways that I can formulate plans. For a routine errand, my plan consists of the goal (e.g. “I need to go and buy bread”) and a number of habits (which, now that I think about it, hardly impinge on my conscious mind at all; if I think about it, I know where I plan to go to get bread, but the answer’s routine enough that I don’t usually bother). When driving, there are points at which I run a quick self-check (“do I need to buy bread today? Yes? Then I must turn into the shopping centre...”)
For a less routine errand, my plan will consist of a number of steps to follow. These will be arranged in the order I expect to complete them, and I will (barring unexpected developments or the failure of any step) follow the steps in order as specified. If I were to write down the steps on paper, they would appear horrendously under-specified to a neutral observer; but in the privacy of my own head, I know exactly which shop I mean when I simply specify ‘the shop’; both the denotations and connotations intended by every word in my head are there as part of the word.
If the plan is one that I particularly look forward to fulfilling, I may run through it repeatedly, particularly the desirable parts (”...that icecream is going to taste so good...”). This all runs through my language system, of course.
I have a vague memory of having read something that suggested that humans are not aware of their entire visual field, but that there is a common illusion that people are, agreeing with your hypothesis here. I vaguely suspect that it might have been in one of the ‘Science of the Discworld’ books, but I am uncertain.
Obligatory link to Yvain’s article on the topic.
A very high proportion of what I call thinking is me talking to myself. I have some ability to imagine sounds and images, but it’s pretty limited. I’m better with kinesthesia, but that’s mostly for thinking about movement.
What’s your internal experience composed of?
That varies.. quite a lot.
While I’m writing fiction there’ll be dialogue, the characters’ emotions and feelings, visuals of the scenery, point-of-view visuals (often multiple angles at the same time), motor actions, etc. It’s a lot like lucid dreaming, only without the dreaming. Occasionally monologues, yes, but those don’t really count; they’re not mine.
While I’m writing this there is, yes, a monologue. One that’s just-in-time, however; I don’t normally bother to listen to a speech in my head before writing it down. Not for this kind of thing; more often for said fiction, where I’ll do that to better understand how it reads.
Mostly I’m not writing anything, though.
Most of the time, I don’t seem to have any particular internal experience at all. I just do whatever it is I’m doing, and experience that, but unless it’s relatively complex there doesn’t seem to be much call for pre-action reflections. (Well, of course I still feel emotions and such, but.. nothing monologue-like, in any modality. Hope that makes sense.)
A lot of the time I have (am conscious of) thoughts that don’t correspond to any sensory modality whatsoever. I have no idea how I’d explain those.
If I’m working on a computer program.. anything goes, but I’ll typically borrow visual capacity to model graph structures and such. A lot of the modalities I’d use there, I don’t really have words for, and it doesn’t seem worthwhile to try inventing them; doing so usefully would turn this into a novel.
That’s the internal monologue. Mine is also often just-in-time (not always, of course). I can listen to it in my head a whole lot faster than I can talk, type, or write, so sometimes I’ll start out just-in-time at the start of the sentence and then my internal monologue has to regularly wait for the typing/writing/speaking to catch up before I can continue.
For example, in this post, when I clicked the ‘reply’ button I had already planned out the first two sentences of the above post (before the first bracket). The contents of the first bracket were added when I got to the end of the second sentence, and then edited to add the ‘of course’. The next sentence was added in sections, built up and then put down and occasionally re-edited as I went along (things like replacing ‘on occasion’ with ‘sometimes’).
Hmmm. Living in the moment. I’m curious; how would you go about (say) planning for a camping trip? Not so much ‘what would you do’, but ‘how would you think about it’?
Can’t speak for Nancy, but I think I know what she refers to.
Different people have different thought… processes, I guess is the word. My brother’s thought process is, by his description, functional; he assigns parts of his mind tasks, and gets the results back in a stack. (He’s pretty good at multi-tasking, as a result.) My own thought process is, as Nancy specifies, an internal monologue; I’m literally talking to myself. (Although the conversation is only partially English. It’s kind of like… 4Chan. Each “line” of dialogue is associated with an “image” (in some cases each word, depending on the complexity of the concept encoded in it), which is an abstract conceptualization. If you’ve ever read a flow-of-consciousness book, that’s kind of like a low-resolution version of what’s going on in my brain, and, I presume, hers.)
I’ve actually discovered at least one other “mode” I can switch my brain into—I call it Visual Mode. Whereas normally my attention is very tunnel vision-ish (I can track only one object reliably), I can expand my consciousness (at the cost of eliminating the flow-of-consciousness that is usually my mind) and be capable of tracking multiple objects in my field of vision. (I cannot, for some reason, actually move my eyes while in this state; it breaks my concentration and returns me to a “normal” mode of thought.) I’m capable of thinking in this state, but oddly, incapable of tracking or remembering what those thoughts are; I can sustain a full conversation which I will not remember, at all, later.
Hm, the obvious question there is: “How do you know you can sustain a full conversation, if you don’t remember it at all later?” (..edit: With other people? Er, right. Somehow I was assuming it was an internal conversation.)
I’ve got some idea what you’re talking about, though—focusing my consciousness entirely on sensory input. More useful outside of cities, and I don’t have any kind of associated amnesia, but it seems similar to how I’d describe the state otherwise.
Neither your brother’s nor your own thought processes otherwise seem to be any kind of match for mine. It’s interesting that there’s this much variation, really.
Otherwise.. see sibling post for more details.
I can do a weaker version of this—basically, by telling my brain to “focus on the entire field of your perception” as if it was a single object. As far as I am aware, it doesn’t do any of the mental effects you describe for me. It’s very relaxing though.
Add one to the sample size. My thought process is also mostly lacking in sensory modality. My thoughts do have a large verbal component, but they are almost exclusively for planning things that I could potentially say or write.
Rather than trying to justify how this works to the others, I will instead ask my own questions: How can words help in creating thoughts? In order to generate a sentence in your head, surely you must already know what you want to say. And if you already know what you have to say, what’s the point of saying it? I presume you cannot jump to the next thought without saying the previous one in full. With my own ability to generate sentences, that would be a crippling handicap.
My thoughts are largely made up of words. Although some internal experimentation has shown that my brain can still work when the internal monologue is silent, I still associate ‘thoughts’ very, very strongly with ‘internal monologue’.
I think that, while thoughts can exist without words, the words make the thoughts easier to remember; thus, the internal monologue is used as part of a ‘write-to-long-term-storage’ function. (I can write images and feelings as well, but words seem to be my default write-mode.)
Also, the words—how shall I put this—the words solidify the thought. They turn the thought into something that I can then take and inspect for internal integrity. Something that I can check for errors; something that I can think about, instead of something that I can just think. Images can do the same, but take more working-memory space to hold and are thus harder to inspect as a whole.
I don’t think I’ve ever tried. I can generate sentences fast enough that it’s not a significant delay, though. I suspect that this is simply due to long practice in sentence construction. (Also, if I’m not going to actually say it out loud, I don’t generally bother to correct it if it’s not grammatically correct).
Personally, I can do this to degrees. I can skip verbalizing a concept completely, but it feels like inserting a hiccup into my train of thought (pardon the mixed analogy). I can usually safely skip verbalizing all of it; that is, it feels like I have a mental monologue but upon reflection it went by too fast to actually be spoken language so I assume it was actually some precursor that did not require full auditory representation. I usually only use full monologues when planning conversations in advance or thinking about a hard problem.
As far as I can tell, the process helps me ensure consistency in my thoughts by making my train of thought easier to hold on to and recall, and also enables coherence checking by explicitly feeding my brain’s output back into itself.
Now I’m worrying that I might have been exaggerating. Although you are implicitly describing your thoughts as being verbal, they seem to work in a way similar to mine.
ETA: More information: I still believe I am less verbal than you. In particular, I believe my thoughts become less verbal when thinking about hard problems, rather than more verbal as in your case. However, my statement about my verbal thoughts being “almost exclusively for planning things that I could potentially say or write” is a half-truth; a lot of it is more that sometimes, when I have an interesting thought, I imagine explaining it to someone else. Some confounding factors:
There is a continuum here from completely nonverbal to having connotations of various words and grammatical structures to being completely verbal. I’m not sure when it should count as having an internal monologue.
Asking myself whether a thought was verbal naturally leads me to create a verbalization of it, while not asking myself this creates a danger of not noticing a verbal thought.
I’m basing this a lot on introspection done while thinking about this discussion, which would make my thoughts more verbal.
Wikipedia article. I’m really curious how you would describe your thoughts if you don’t describe them as an internal monologue. Are you more of a visual thinker?
When I think about stuff, often I imagine a voice speaking some of the thoughts. This seems to me to be a common, if not nearly universal, experience.
I only really think using voices. Whenever I read, if I’m not ‘hearing’ the words in my head, nothing stays in.
Do you actually hear the voice? I often have words in my head when I think about things, but there isn’t really an auditory component. It’s just words in a more abstract form.
I wouldn’t say I literally hear the voice; I can easily distinguish it from sounds I’m actually hearing. But the experience is definitely auditory, at least some of the time; I could tell you whether the voice is male or female, what accent they’re speaking in (usually my own), how high or low the voice is, and so on.
I definitely also have non-auditory thoughts as well. Sometimes they’re visual, sometimes they’re spatial, and sometimes they don’t seem to have any sensory-like component at all. (For what it’s worth, visual and spatial thoughts are essential to the way I think about math.)
If you want to poke at this a bit, one way could be to test what sort of interferences disrupt different activities for you, compared to a friend.
I’m thinking of the bit in “Surely you’re joking” where Feynman finds that he can’t talk and maintain a mental counter at the same time, while a friend of his can—because his friend’s mental counter is visual.
Neat. I can do it both ways… actually, I can name at least four different ways of counting:
“Raw” counting, without any sensory component; really just a sense of magnitude. Seems to be a floating-point, with a really small number of bits; I usually lose track of the exact number by, oh, six.
Verbally. Interferes with talking, as you’d expect.
Visually, using actual 2/3D models of whatever I’m counting. No interference, but a strict upper limit, and interferes with seeing—well, usually the other way around. The upper limit still seems to be five-six picture elements, but I can arrange them in various ways to count higher; binary, for starters, but also geometrically or.. various ways.
Visually, using pictures of decimal numbers. That interferes with speaking when updating the number, but otherwise sticks around without any active maintenance, at least so long as I have my eyes closed. I’m still limited to five-six digits, though… either decimal or hexadecimal works. I could probably figure out a more efficient encoding if I worked at it.
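As an aside, the arithmetic behind that binary trick (my gloss, not the commenter’s): with the same five-or-six-element limit, arranging the elements as binary digits gives

$$n \text{ elements} \;\Rightarrow\; 2^n - 1 \text{ countable values}, \qquad 2^6 - 1 = 63,$$

versus just six by direct enumeration.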
I, for one, actually hear the voice. It’s quite clear. Not loud like an actual voice, but a “so loud I can’t hear myself think” moment has never literally happened to me, since the voice seems to be on its own track, parallel to my actual hearing. I would never get it confused with actual sounds, though I can’t really separate hearing it from making it, to be sure of that.
That’s interesting! Because I have definitely had “so loud I can’t hear myself think” moments (even though I don’t literally hear thoughts) - just two days ago, I had to ask somebody to stop talking for a while so that I could focus.
Being distracted is one thing—I mean literally not being able to hear my thoughts in the manner that I might not be able to hear what you said if a jet was taking off nearby. This was to emphasize that even though I perceive them as sounds there is ‘something’ different about them than sounds-from-ears that seems to prevent them from audibly mingling. Loud noises can still make me lose track of what I was thinking and break focus.
Hmm. Now that I think of it, I’m not sure to what extent it was just distraction and to what extent a literal inability to hear my thoughts. Could’ve been exclusively one, or parts of both.
I added more detail in a sibling post, but it can’t be that universal; I practically never do that at all, basically only for thoughts that are destined to actually be spoken. (Or written, etc.)
Actually, I believe I used to do so most of the time (..about twenty years ago, before the age of ten), but then made a concerted effort to stop doing so on the basis that pretending to speak aloud takes more time. Those memories may be inaccurate, though.
It’s very universal, but some people shut down their awareness of the process. It’s like people who say they don’t dream: they just don’t remember it. Most people can’t perceive their own heartbeat. It can take some effort to build awareness.
What’s your internal reaction when someone insults you?
You’re claiming that you understand his thoughts better than he does. That is a severe accusation, and it is not epistemologically justified. Also, I can’t recall off the top of my head any time somebody insulted me; I think my reaction would depend on the context, but I don’t see why it would involve imagined words.
How do you know that there’s no epistemological justification?
So, how do I know? Empirical experience at NLP seminars. At the beginning, plenty of people say that they don’t have an internal dialogue, that they can’t view mental images, or that they can’t perceive emotions within their own body.
It’s something that usually gets fixed in a short amount of time.
Around two months ago I was chatting with a girl who had two voices in her head: one that did big-picture thinking and another that did analytic thinking. She herself wasn’t consciously aware that one of the voices came from the left and the other from the right.
After I told her which voice came from which direction, she checked, and I was right. I can’t diagnose what Baughn does with internal dialogue in the same depth through online conversation, but there’s nothing stopping me from putting forward general observations about people who believe they have no internal dialogue until they are taught to perceive it.
Yes, you don’t see imagined words. That’s kind of the point of words. You either hear them or don’t hear them. If you try to see them you will fail. If you try to perceive your internal dialog that way you won’t see any internal dialog.
But why did I pick that example? It’s emotional. Being insulted frequently gets people to reflect on themselves and on the other person. They might ask themselves “Why did he do that?” or answer themselves “No, he has no basis for making that claim.” In addition, judgement is usually done via words.
I’m not sure, however, whether I can build up enough awareness in Baughn via text-based online conversation for him to pick up his mental dialogue.
If you don’t have strong internal dialog it doesn’t surprise me that you aren’t good at recalling a type of event that usually goes with strong internal dialog.
Hm~
Those are interesting claims, but I think you misunderstood a little. I do have an internal monologue, sometimes; I just don’t bother to use it, a lot of the time. It depends on circumstances.
You moved in the span of half a year from “I’m pretty sure I don’t have an internal monologue; I don’t know what the term is supposed to mean” to “I do have an internal monologue, sometimes”.
That’s basically my point. With a bit of direction, there was something in your mind that you could recognize as an internal monologue.
Of course, once you recognize it, you aren’t in the state anymore where you would say “I’m pretty sure I don’t have an internal monologue.” That’s typical for these kinds of issues.
I was basically right with my claim that “I’m pretty sure I don’t have an internal monologue” is wrong, and did what it took for you to recognize it. itaibn0 claimed that the claim was epistemologically unsubstantiated. It was substantiated, and turned out to be right.
Actually, I would have made the same claim half a year ago. The only difference is that I have a different model of what the words “internal monologue” mean—that, and I’ve done some extra modelling and introspection for a novel.
Yes, now you have a mental model that allows you to believe “I do have an internal monologue, sometimes”; back then you didn’t. What I wrote was intended to create that model in your mind.
To me it seems like it worked. It’s also typical for people to backport their mental models into the past when they remember what happened.
How does it get fixed?
First, different people use different systems with different underlying strengths. Some people, like Tesla, can visualize a chair, and the chair they visualize gets perceived by them the same way as a real chair. You don’t get someone who doesn’t think he can visualize pictures to that level of visualization ability by doing a few tricks.
In general, you do something that triggers a reaction in someone. You observe the person, and when she has the image or the dialogue, you stop and tell her to focus her attention on it. There are cases where that’s enough.
There are also cases where a person has a real reason to repress a certain mode of perception. A person with a strong emotional trauma might have completely stopped relating to emotions within their body to escape the emotional pain. Then it’s necessary for the person to get into a state where they are resourceful enough to face the pain, so that they can process it.
A third layer would consist of various suggestions that it’s possible to perceive something new, both at a conscious level and at a deep metaphoric level.
I feel like I’m floating. Adrenaline rush, the same feeling I used to get when fights were imminent as a kid.
How do you know how you want to respond to the insult? What mental strategy did you use the last time you were insulted?
I just do what I feel like. And my feelings are generally in line with my previous experiences with the other person. If I feel like they’re a reasonable person and generally nice then I feel like giving them the benefit of the doubt, if I feel like they’re a total toss-pot then I’m liable to fire back at them. There’s so much cached thought that’s felt rather than verbalised at that point that it’s pretty much a reaction.
Hijacking this thread to ask if anybody else experiences this—when I watch a movie told from the perspective of a single character or with a strong narrator, my internal monologue/narrative will be in that character’s/narrator’s tone of voice and expression for the next hour or two. Anybody else?
http://fc02.deviantart.net/fs71/i/2010/110/9/8/Good_News_Everyone_by_martynasx.jpg
Did it work for you?
I find that sometimes, after reading for a long time, the verbal components of my thoughts resemble the writing style of what I read.
Sometimes, after reading something with a strong narrative voice, I’ll want to think in the same style, but realize I can’t match it.
Not exactly what you are asking for, but I’ve found that if I spend an extended period of time (usually around a week) heavily interacting with a single person or group of people, I’ll start mentally reading things in their voice(s).
While reading books. Always particular voices for every character. So much so, I can barely sit through adaptations of books I’ve read. And my opinion of a writer always drops a little bit when I meet hjm/her, and the voice in my head just makes more sense for that style.
Sure. Or after listening to a charismatic person for some time.
Maybe the social signaling sensitive unconsciously translate it into “I thought up this unobvious thing about this thing because I am smarter than you”, and then file it off as being an asshole about stuff that’s supposed to be communal fun?
It is not healthy to believe that every curtain hides an Evil Genius (I speak here as a person who lived in the USSR). Given the high failure rate of EVERY human work, I’d say that most secrets in the movie industry have to do with saving bad writing and poor execution with clever marketing, and with setting up other conflicts people could watch besides the pretty explosions. It’s not about selling Imperialism and Decadence to a country that’s been accused of both practically since its formation (sorry if you’re American and only now, in the 21st century, noticed that these accusations exist), or trying to force people into some new-world-order-style government where a dictator takes care of every need. Though, I must admit, I wonder about Michael Bay’s agenda sometimes...
Tony Stark isn’t JUST a rich guy with a WMD. He messes up. He fails his friends and loved ones. He is in some way the lowest point in each of our lives, given some nobility. In spite of all those troubles, the fellow stands up and goes on with his life, gets better and tries to improve the world. David Wong seems to have missed the POINT of a couple of movies (how about the message of empowerment-through-determination in Captain America? The fellow must still earn his power as a “runt”), and even worse tries to raise conspiracy theory thinking up as rationality.
So, maybe, the knee-jerk reaction is wise, because overanalyzing something made to entertain tends to be somewhat similar to seeing shapes in the clouds. Sometimes, Iron Man is just Iron Man.
You don’t need to believe there was an intent to spread negative values in order to conclude that spreading negative values is bad.
Hopefully, the positive values are greater in number than the negative ones, if one is not certain which ones are which—and I see quite a few positive values in recent superhero movies.
Seems to me that the problem is, well, precisely as stated: overthinking. It’s the same problem as with close reading: look too close at a sample of one and you’ll start getting noise, things the author didn’t intend and were ultimately caused by, oh, what he had for breakfast on a Tuesday ten months ago and not some ominous plan.
On the other hand, where do you draw the line between reasonable analysis and overthinking? I mean, you can read into a text things which only your own biases put there in the first place, but on the other hand, the director of Birth of a Nation allegedly didn’t intend to produce a racist film. I’ve argued plenty of times myself that you can clearly go too far, and critics often do, but on the other hand, while the creator determines everything that goes into their work, their intent, as far as they can describe it, is just the rider on the elephant, and the elephant leaves tracks where it pleases.
Well, this is hardly unique to literary critique. If/When we solve the general problem of finding signal in noise we’ll have a rigorous answer; until then we get to guess.
If someone intends to draw an object with three sides, but they don’t know that an object with three sides is a triangle, have they intended to draw a triangle? Whether the answer is yes or no is purely a matter of semantics.
Yes, but the question “should we censure this movie/book because it causes harm to (demographic)” is not a question of semantics.
Well, I really enjoy music, but I made the deliberate choice to not learn about music (in terms of notes, chords, etc.). The reason being that what I get from music is a profound experience, and I was worried that knowledge of music in terms of reductionist structure might change the way I experience hearing music. (Of course some knowledge inevitably seeps in.)
Akin’s Laws of Spacecraft Design are full of amazing quotes. My personal favourite:
(See also an interesting note from HN’s btilly on this law)
http://scienceblogs.com/insolence/2013/06/04/stanislaw-burzynski-versus-the-bbc/#comment-262541
Edward Snowden, the NSA surveillance whistle-blower.
You Are Not So Smart by David McRaney, pp. 55, 56, and 58.
Considering the probability that I will encounter such a high-impact, fast-acting disaster, and the expected benefit of acting on a shallowly-thought-out gut reaction, I feel no need to rid myself of this bias.
Since you have taken the time to make a comment on this website I presume you get some pleasure from thinking about biases. The next time you are on an airplane perhaps you would find it interesting to work through how you should respond if the plane starts to burn.
Interestingly enough there is some evidence—or at least assertions by people who’ve studied this sort of thing—that doing this sort of problem solving ahead of time tends to reduce the paralysis.
When you get on a plane, go into a restaurant, when you’re wandering down the street or when you go someplace new think about a few common emergencies and just think about how you might respond to them.
Yes, you’re right. In fact, I did think about this situation. I think the best strategy is to enter the brace position recommended in the safety guide and to stay still, while gathering as much information as possible and obeying any person who takes on a leadership role. This sort of reasoning can be useful because it is fun to think about, because it makes for interesting conversation, or because it might reveal an abstract principle that is useful somewhere else. My point is to demonstrate a VOI calculation, and to show that although this behavior seems irrational on its own, in the broader context the strategy of being completely unprepared for disaster is a good one. Still, the fact that people act in this particular maladaptive way is interesting, and so I got something out of your quote.
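To make the implied VOI calculation concrete, here is a back-of-the-envelope sketch. Every number in it is an assumption chosen for illustration; only the shape of the comparison matters, not the figures.

```python
# Back-of-the-envelope value-of-information sketch (all numbers assumed).
p_evacuation = 1e-7      # assumed chance a given flight ends in an evacuation
survival_gain = 0.10     # assumed survival-probability boost from rehearsing exits
value_of_life = 1e7      # assumed dollar-equivalent value placed on one's life
cost_of_rehearsal = 1.0  # assumed dollar-equivalent of the attention spent

expected_benefit = p_evacuation * survival_gain * value_of_life
print(f"expected benefit per flight: ${expected_benefit:.4f}")  # $0.1000
print("worth preparing?", expected_benefit > cost_of_rehearsal)  # False here
```

Under these made-up numbers the rehearsal loses; very frequent flyers, or people who enjoy the exercise enough that its cost is near zero, flip the sign.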
“When two planes collided just above a runway in Tenerife in 1977, a man was stuck, with his wife, in a plane that was slowly being engulfed in flames. He remembered making a special note of the exits, grabbed his wife’s hand, and ran towards one of them. As it happened, he didn’t need to use it, since a portion of the plane had been sheared away. He jumped out, along with his wife and the few people who survived. Many more people should have made it out. Fleeing survivors ran past living, uninjured people who sat in seats literally watching for the minute it took for the flames to reach them.”—http://io9.com/the-frozen-calm-of-normalcy-bias-486764924
Speaking as someone who’s been through that, I don’t think the article gives a complete picture. Part of the problem in such instances appears to be (particularly per reports from newer generations) the feeling of unreality: the only times we tend to see such situations is when we’re sitting comfortably, so a lot of us are essentially conditioned to sit comfortably during such events.
However, this does tend to get better with some experience of such situations.
See, I thought the plane was still in the air. Now I understand that the brace position is useless. This is why “gathering as much information as possible” is part of my plan. Unfortunately, with such a preliminary plan, there’s a good chance I won’t realise this quickly enough and become one of the passive casualties. As I stated earlier, I don’t mind this.
As things one could not mind go, literally dying in a fire seems unlikely to be a good choice.
So does leaving a box with $1,000 in it on the table.
What’s involved here is dying in a fire in a hypothetical situation.
No. Please, just no. This is the worst possible form of fighting the hypothetical. If you’re going to just say “it’s all hypothetical, who cares!” then please do everyone a favor and just don’t even bother to respond. It’s a waste of everyone’s time, and incredibly rude to everyone else who was trying to have a serious discussion with you. If you make a claim, and your reasoning is shown to be inconsistent, the correct response is never to pretend it was all just a big joke the whole time. Either own up to having made a mistake (note: having made a mistake in the past is way higher status than making a mistake now. Saying “I was wrong” is just another way to say “but now I’m right”. You will gain extra respect on this site from noticing your own mistakes as well.) or refute the arguments against your claim (or ask for clarification or things along those lines). If you can’t handle doing either of those then tap out of the conversation. But seriously, taking up everyone’s time with a counter-intuitive claim and then laughing it off when people try to engage you seriously is extremely rude and a waste of everyone’s time, including yours.
You’re completely right. I retract my remark.
And then sometimes I’m reminded why I love this site. Only on LessWrong does a (well-founded) rant about bad form or habits actually end up accomplishing the original goal.
Only on LessWrong would I hope to never see a statement that begins, ‘Only on LessWrong’.
Actually, freezing up is precisely what I-here-in-my-room imagine I-on-a-plane-in-flames would do.
I find this confusing. Ambiguity is paralysing (though in what circumstances the freeze response isn’t stupid, I have no idea), but it’s hard to see what response other than “RUN” this would cause. It’s not like having to find words that’ll placate a hostile human, or reinvent first aid on the fly.
When you’re hoping the saber-tooth tiger won’t notice you.
Against a Dark Background by Iain M. Banks.
I read this as a poetic invocation against utilitarian sacrifices. It seems to me simultaneously wise on a practical level and bankrupt on a theoretical level.
What about the special case of people prepared to be maimed and killed in order to get in someone’s way? I guess it depends whether you share goals with the latter someone.
If I don’t share goals with someone, or more strongly, if I consider their goals evil… then I will see their meta actions differently, because at the end, the meta actions are just a tool for something else. If some people build a perfect superintelligent paperclip maximizer, I will hate the fact that they were able to overcome procrastination, that they succeeded in overcoming their internal conflicts, that they made good strategical decisions about getting money and smart people for their project, etc.
So perhaps the quote could be understood as a complaint against people in the valley of bad rationality: smart enough to put their plans successfully into action, yet too stupid to understand that their plans will end up hurting people; smart enough to later realize they made a mistake and feel sorry, yet too stupid to realize they shouldn’t make a similar kind of plan, with similar kinds of mistakes, again.
C.S. Lewis (emphasis my own)
This is because a speaker’s attitude towards an object is not formed by the speaker’s perception of the object; it is entirely arbitrary. Wait, no, that’s not right.
And anyway, the previous use of the term “gentleman” was, in some sense, worse. Because while it had a neutral denotation (“A gentleman is any person who possesses these two qualities”), it had a non-neutral connotation.
That would be true if the word “gentle” meant the same thing then as it does now. Which it didn’t.
The word originally comes from the ancient (not modern) meaning of Hebrew goy: nation.
EDIT: the last statement is incorrect, see replies.
From your link: Sense of “gracious, kind” (now obsolete) first recorded late 13c.; that of “mild, tender” is 1550s.
This is, of course, exactly what the halo effect would predict; a word that means “good” in some context should come to mean “good” in other contexts. The same effect explains the euphemism treadmill, as a word that refers to a disfavored group is treated as an insult.
“Gentleman,” “gentle” etc do not come from Hebrew.
Maybe you are thinking about the fact that “gentile” comes from the sense “someone from one of the nations (other than Israel),” just as Hebrew goy originally meant “nation” (including the nation of Israel or any other), and came to mean “someone from one of the (other) nations.”
“Gentile” was formed as a calque from Hebrew.
But none of these come from a Hebrew root. Rather, they all come from the Latin gens, gentis “clan, tribe, people,” thence “nation.” Same root as gene, for that matter.
Right, my bad, it was translated from Hebrew, but does not come directly from it:
You can make it correct but still informative by replacing “originally comes from” with “was originally a calque of”.
So Lewis grants that people really can be brave, honorable, and courteous, but then denies that calling someone so is descriptive?
This passage doesn’t make any sense.
I suspect his attitude is more along the lines of ‘noise to signal ratio too high.’
The Baroque Cycle by Neal Stephenson proves to be a very good, intelligent book series.
Daniel Waterhouse and Colonel Barnes in Solomon’s Gold
Yes, because saying ‘gravity’ in fact means the Newtonian gravitational law. Aristotle had no idea that, e.g., the product of the two masses is involved here.
Does Colonel Barnes? If not, he is just repeating a word he has learned to say. Rather like some people today who have learned to say “entanglement”, or “signalling”, or “evolution”, or...
Except in this case he’s actually saying ‘gravity’ in the right context, and besides, it’s not expected of people in general to know Newton’s laws (or general relativity, etc) to know basically how gravity works.
Although I’d like to know what his answer was to the last question…
I will gladly post the rest of the conversation, because it reminds me of a question I pondered for a while.
After that they start to discuss the differences between Newton’s and Leibniz’s theories. Newton is unable to explain why gravity can go through the earth, like light through a pane of glass. Leibniz takes a more fundamental approach (roughly speaking, he claims that everything consists of cellular automata).
I find this theme of Baroque Cycle fascinating.
I was somewhat haunted by a similar question: in the strict Bayesian sense, the notions of “explain” and “predict” are equivalent, but what about Alfred Wegener, father of plate tectonics? His theory of continental drift (in some sense) explained the shapes of continents and the paleontological data, but was rejected by mainstream science because of the lack of a mechanism for the drift.
In some sense, Wegener was able to predict, but unable to explain.
One can easily imagine some weird data easily described (and predicted) by a very simple mathematical formula, yet I don’t consider this an explanation. Something is missing here; my curiosity just doesn’t accept bare formulas as answers.
I suspect that this situation arises because of the very small prior probability of the formula being true. But is it really that small?
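One minimal way to formalize that suspicion, in standard Bayesian terms (my gloss, not anything from the comment):

$$\Pr(H \mid D) \;\propto\; \Pr(D \mid H)\,\Pr(H).$$

A bare curve-fit hypothesis H can have likelihood Pr(D | H) near 1 on the weird data D while still carrying a tiny prior Pr(H) under any complexity penalty, so a low posterior, and the accompanying dissatisfaction, is consistent with the formula predicting perfectly.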
Stanislaw Lem wrote a short story about this. (I don’t remember its name.)
In the story, English detectives are trying to solve a series of cases where bodies are stolen from morgues and are later discovered abandoned at some distance. There are no further useful clues.
They bring in a scientist, who determines that there is a simple mathematical relationship that relates the times and locations of these incidents. He can predict the next incident. And he says, therefore, that he has “solved” or “explained” the mystery. When asked what actually happens—how the bodies are moved, and why—he simply doesn’t care: perhaps, he suggests, the dead bodies move by themselves—but the important thing, the original question, has been answered. If someone doesn’t understand that a simple equation that makes predictions is a complete answer to a question, that someone simply doesn’t understand science!
Lem does not, of course, intend to give this as his own opinion. The story never answers the “real” mystery of how or why the bodies move; the equation happens to predict that the sequence will soon end anyway.
Amusingly, I read this story, but completely forgot about it. The example here is perfect. Probably I should re-read it.
For those interested: http://en.wikipedia.org/wiki/The_Investigation
I think the situation happens because of bias. Demonstrating an empirical effect to be real takes work. Finding an explanation of an effect also takes work. It’s very seldom in science that both happen at exactly the same time.
There are a lot of drugs that are designed around the theory that the drug works by binding to specific receptors. Those explanations aren’t very predictive for telling you whether a prospective drug works. And once it’s shown that a drug actually works, it’s often the case that we don’t fully understand why it works.
Interesting.
I imagined a world where Wegener appeared, out of the blue, with all that data about geological strata and fossils (nobody had noticed any of it before), and declared that it’s all because of continental drift. That was anticlimactic and unsatisfactory.
Then I imagined a world with a great unsolved mystery: all that data about geological strata and fossils, which nobody has been able to explain for a century. Then Wegener appears, points out that the shapes of the continents are similar, and suggests that perhaps it’s all because of continental drift. That was more satisfactory, and I suspect that most traces of the remaining disappointment are due to hindsight bias.
I think that there are several factors causing that:
1) Story-mode thinking
2) Suspicions concerning the unknown person claiming to solve the problem nobody has ever heard of.
3) (now it’s my working hypothesis) The idea that some phenomena are ‘hard’ to reduce, and some are ‘easy’:
I know that the fall of an apple can be explained in terms of atoms, reduced to the fundamental interactions. Most things can. I know that we are unable to explain the fundamental interactions yet, so there equations-without-understanding are justified.
So, if I learn about some strange phenomenon, I believe that it can be easily explained in terms of atoms. Now suppose it turns out to be a very hard problem, and nobody manages to reduce it to something more fundamental. Now I feel that I should be satisfied with bare equations, because making anything more of them is hard. Maybe a century later.
This isn’t a complete explanation, but it feels like a step in the right direction.
“For whatever reason, X” seems like it should be a legitimate hypothesis, as much as “Y, therefore X”. The former is technically the disjunction of all variations of the latter with possible reasons substituted in.
But, then again, at the point when we are saying “for whatever reason, X”, we are saying it because we haven’t been able to think of the correct explanation yet—that is, because we haven’t been creative enough, a bounded-rationality issue. So we’re perhaps not really in a position to evaluate a disjunction of all possible reasons.
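Spelling the disjunction reading out (my formalization; the R_i are whatever candidate reasons one can enumerate, assumed mutually exclusive and exhaustive):

$$\Pr(\text{“for whatever reason, } X\text{”}) \;=\; \Pr\Big(\bigvee_i (R_i \wedge X)\Big) \;=\; \sum_i \Pr(R_i \wedge X) \;=\; \Pr(X).$$

Under exhaustiveness the sum collapses to Pr(X): the phrase is a legitimate hypothesis precisely because it asserts nothing beyond X itself, which is also why it explains nothing.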
“Indeed, Sire, Monsieur Lagrange has, with his usual sagacity, put his finger on the precise difficulty with the hypothesis [of a Creator of the Universe]: it explains everything, but predicts nothing.”
It strikes me that his understanding of gravity is on the same level as saying that everything attracts everything else, which is, after all, not much of a step up from saying that it’s in the nature of water to be attracted to the moon—just a more general phrasing.
You can make more specific predictions if you know that everything attracts everything and you also know the laws of planetary motion, the gravitational constant, the decay rate, and so on; but the basic knowledge of gravity by itself doesn’t let you do those things. If your predictions coming out are the same as your predictions going in, can you really claim to understand something better?
Seems to me you need to network ideas together and start linking them up to data before you can really start to claim to understand stuff better.
Probably I should’ve added some context to this conversation. One of the themes of Baroque Cycle is that Newton described his gravitational law, but said nothing about why the reality is the way it is. This bugs Daniel, and he rests his hopes upon Leibniz who tries to explain reality on the more fundamental level (monads).
This conversation is “Explain/Worship/Ignore” thing as well as “Teacher’s password” thing.
The reason Newton’s laws are an improvement over Aristotelian “the nature of water is etc.” is that Newton lets you make predictions, while Aristotle does not. You could ask “but WHY does gravity work like so-and-so?”, but that doesn’t change the fact that Newton’s laws let you predict orbits of celestial objects, etc., in advance of seeing them.
That’s certainly the conventional wisdom, but I think the conventional wisdom sells Aristotle and his contemporaries a little short. Sure, speaking in terms of water and air and fire and dirt might look a little silly to us now, but that’s rather superficial: when you get down to the experiments available at the time, Aristotelian physics ran on properties that genuinely were pretty well correlated, and you could in fact use them to make reasonably accurate predictions about behavior you hadn’t seen from the known properties of an object. All kosher from a scientific perspective so far.
There are two big differences I see, though neither implies that Aristotle was telling just-so stories. The first is that Aristotelian physics was mainly a qualitative undertaking, not a quantitative one—the Greeks knew that the properties of objects varied in a mathematically regular way (witness Eratosthenes’ clever method of calculating Earth’s circumference), but this wasn’t integrated closely into physical theory. The other has to do with generality: science since Galileo has applied as universally as possible, though some branches reduced faster than others, but the Greeks and their medieval followers were much more willing to ascribe irreducible properties to narrow categories of object. Both end up placing limits on the kinds of inferences you’ll end up making.
That 70s Show
Single bad things happen to you at random. Iterated bad things happen to you because you’re a dumbass. Related: “You are the only common denominator in all of your failed relationships.”
Corollaries: The more of a dumbass you are, the less well you can recognize common features in iterated bad things. So dumbasses are, subjectively speaking, just unlucky.
The corollary is more useful than the theorem :-) If I wish to be less of a dumbass, it helps to know what it looks like from the inside. It looks like bad luck, so my first job is to learn to distinguish bad luck from enemy action. In Eliezer’s specific example that is going to be hard, because I need to include myself in my list of potential enemies.
(That’s fair.)
Also, oxygen. (Edit: “You are the only common denominator in all of your failed relationships.” is misleading, hiding all the other common elements.)
What we want to find is the denominator common to all of your failed relationships, but absent from the successful relationships that other people have (the presumed question being “why do all my relationships fail, but Alice, Bob, Carol, etc. have successful ones?”). Oxygen doesn’t fit the bill.
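(In toy set-algebra terms, with factors invented purely for illustration: intersect the failures, subtract what the successful baseline also contains, and oxygen drops out:)
```python
# Toy sketch with invented factors: find what is common to all of my
# failed relationships but absent from other people's successful ones.
my_failed = [
    {"me", "oxygen", "poor communication"},
    {"me", "oxygen", "poor communication", "long distance"},
]
others_successful = [
    {"alice", "oxygen", "good communication"},
    {"bob", "oxygen", "shared hobbies"},
]

baseline = set.union(*others_successful)             # present in successes too
suspects = set.intersection(*my_failed) - baseline   # the interesting residue

print(suspects)  # {'me', 'poor communication'} -- oxygen doesn't fit the bill
```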
It could also be that Alice, Bob, and Carol’s relationships appear more successful than they are. We do tend to hide our failures when we can.
I’ve heard the failed-relationships quote before, but hadn’t seen it generalized to bad things in general. I like that one. Useful corollary: “Iterated bad things are evidence of a pattern of errors that you need to identify and fix.”
Of course, “bad things”, and even more so “iterated bad things”, have to be viewed relative to expectations, and at the proper level of abstraction. Explanation:
Right level of abstraction
“I punched myself in the face six times in a row, and each time, it hurt. But this is not mere bad luck! I conclude that I am bad at self-face-punching! I must work on my technique, such that I may be able to punch myself in the face without ill effect.” This is the wrong conclusion. The right conclusion is “abstain from self-face-punching”.
Substitute any of the following for “punching self in face”:
Extreme sports
Motorcycle riding
Fad diets
Prayer
Right expectations
“I’ve tried five brands of water, and none of them tasted like chocolate candy! My water-brand-selection algorithm must be flawed. I will have to be even more careful about picking only the fanciest brands of water.” Again this is the wrong conclusion. The right conclusion is “This water is just fine and there was nothing wrong with my choice of brand. I simply shouldn’t have such ridiculous expectations.”
Substitute any of the following for “brands of water” / “taste like chocolate candy”:
Sex partners / knew all the ways to satisfy my needs without me telling them
Computer repair shops / fixed my computer for free after I spilled beer on it, and also retrieved all my data [full disclosure: deep-seated personal gripe]
Diets / enabled me to lose all requisite weight and keep it off forever
Ah, I’ve been in that job. My favorite in the stupid-expectations department was a customer who expected us to lie about the cause of a failure on the work order, so that his insurance company would cover the repair. When we refused, he made his own edits to his copy of the work order… and a few days later brought the machine back (I forget why) and handed us the edited order.
We photocopied it (without telling him) and filed it with our own copy. That was entertaining when the insurance company called.
This can easily be generalized into an algorithm (a toy coded version follows below).
Something repeatedly goes wrong
Correctly identify your prior hypothesis
Identify the variables involved
Check/change the variables
Observe the result (apply Bayes when needed)
Repeat if necessary
The scientific method applied to everyday life, if you want :)
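(A toy, fully concrete version of that loop in Python—the lamp, the candidate variables, and the culprit are all invented for illustration:)
```python
# Something repeatedly goes wrong: a lamp that won't turn on.
CULPRIT = "outlet"  # unknown to the troubleshooter

def lamp_works(replaced):
    """The lamp works once the true culprit has been replaced."""
    return CULPRIT in replaced

variables = ["bulb", "switch", "outlet"]  # identify the variables involved
replaced = set()

for var in variables:          # check/change the variables, one at a time
    replaced.add(var)
    if lamp_works(replaced):   # observe the result
        print(f"Fixed after replacing the {var}.")
        break
    print(f"Replacing the {var} didn't help; ruling it out and repeating.")
```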
The thing is, some of the steps are very vague. If you have a bad case of insufficient clue, what’s the cure?
I’m not sure I understand what you mean, but I guess you’re thinking about cases where you can’t have a “perfect experimental setup” to collect information. Well, in that case one should do the best with the information one has (though information can also be collected from external sources, of course). Sometimes there’s simply not enough information to identify the best course of action with sufficient certainty, so you have to go with your best guess (after a risk/reward evaluation, if you want).
Sorry, I wasn’t very clear.
I meant that if you have a deep misunderstanding of what’s going on, as here, what do you do about it?
Well, it’s somewhat hidden in steps 2 and 3. You have to be able to correctly state your hypothesis and to identify all the possible variables. Consider chocolate water: your hypothesis is “There exist some brands of water that taste like chocolate candy”. Let’s say for whatever reason you start with a prior probability p for this hypothesis. You then try some brands, find that none tastes like chocolate candy, and should therefore apply Bayes and emerge with a lower posterior (a toy numeric sketch follows after the list below). What’s much more effective, though, is evaluating the evidence you already have that induced you to believe the original hypothesis. What made you think that water could taste like chocolate? A friend told you? Did it appear in the news? In the more concrete cases:
Sex partners : Why did you expect them to be able to satisfy you without your input? What is your source? Porn movies?
Computer repair shops : Why did you expect people to work for free?
Diets : Have you talked to a professional? Gathered massive anecdotal evidence?
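(The toy numeric sketch promised above—the prior and likelihoods are made-up numbers, and “miss” means “this brand didn’t taste like chocolate”:)
```python
# Toy Bayes update for "some brand of water tastes like chocolate candy".
prior = 0.05              # made-up initial credence in the hypothesis H

p_miss_given_h = 0.8      # even if H is true, most brands would still miss
p_miss_given_not_h = 1.0  # if H is false, every brand is a miss

posterior = prior
for _ in range(5):        # five brands tried, none tasted like chocolate
    p_miss = p_miss_given_h * posterior + p_miss_given_not_h * (1 - posterior)
    posterior = p_miss_given_h * posterior / p_miss

print(round(posterior, 3))  # ~0.017: credence drifts down with each miss
```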
“You are the only common denominator in all of your failed relationships.” != “Why do all my relationships fail?”
Both you and others have relationships, both “failed” and “not-failed” (for some value of failed). The statement “You are the only common denominator in all of your failed relationships” is clearly false, even if comparing to others who have successful ones in search of differentiating factors. The “only” is the problem even then.
The intended formulation, I should think, is “You are the only denominator guaranteed to be common to all of your failed relationships” (which is to say that it might be a contingent fact about your particular set of failed relationships that it has some more common denominators, but for any set of all of any particular person’s failed relationships, that person will always, by definition, be common to them all).
Even this might be false when taken literally… so perhaps we need to qualify it just a bit more:
“You are the only interesting denominator guaranteed to be common to all of your failed relationships.” (i.e. if we consider only those factors along which relationships-in-general differ from each other, i.e. those dimensions in relationship space which we can’t just ignore).
That, I think, is a reasonable, charitable reading of the original quote.
It’s not nitpicking on my side; there are plenty of people who tend to blame themselves for anything going wrong, even when it was outside their control. Maybe they lived in a neighborhood that was a poor fit for them, especially pre-social media. Think of ‘nerds’ stranded in classes without peers. Sure, their behavior was involved in the success or failure of their relationships (how could it not have been?). However, a mindset built on pseudo-wise aphorisms such as “you are the only common denominator in all of your failed relationships” would fuel an already destructive fire of gnawing self-doubt with more gasoline.
I agree. This sort of thing...
can be viewed as a case of “wrong level of abstraction” as I alluded to here.
I think what we have here is two possible sources of error, diametrically opposed to each other. Some people refuse to take responsibility for their failures, and it is at them that “you are the only common denominator …” is aimed. Other people blame themselves even when they shouldn’t, as you say. Let us not let one sort of error blind us to the existence of the other.
When it comes to constructing or selecting rationality quotes, we should keep in mind that what we’re often doing is attempting to point out and correct some bias, which means that the relevance of the quote is obviously constrained by whether we have that bias at all, or perhaps have the opposite bias instead.
There is such a thing as bad luck, though perhaps it’s less in play in relationships than in most areas of life.
I think that if you keep having relationships that keep failing in the same way, it’s a stronger signal than if they just fail.
Alternatively, iterated bad things happen because someone is out to get you and messes constantly with what you are trying to do.
Leo Tolstoy, Anna Karenina
The personal is political!
Stepan is a smart chap. He has realized (perhaps unconsciously)
that one’s political views are largely inconsequential,
that it’s nonetheless socially necessary to have some,
that developing popular and coherent political views oneself is expensive,
and so has outsourced them to a liberal paper.
One might compare it to hiring a fashion consultant… except it’s cheap to boot!
-- Terry Pratchett, Going Postal
--Soren Kierkegaard, The Sickness Unto Death
That’s an interesting opening comment on regretting choosing to speak more than choosing not to speak. In particular, it brings to mind studies of the elderly’s regrets in life and how most of those are not-having-done’s versus having-done’s. These two aren’t incompatible: if we remain silent 20 times for every time we speak, then we still regret remaining silent more than we regret speaking even if we regret each having-spoken 10 times as much as a not-having-spoken. Still, though, there seems to be some disagreement.
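(The arithmetic, using the made-up 20:1 frequency and 10:1 intensity figures above:)
```latex
% 20 regretted silences per regretted utterance; each silence weighted 1,
% each utterance weighted 10:
\underbrace{20 \times 1}_{\text{total silence regret}} \;=\; 20
\;>\;
10 \;=\; \underbrace{1 \times 10}_{\text{total speech regret}}
```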
Obviously the fact that it’s translated complicates things, and I don’t know anything about Danish. But I think the first sentence is meant to be a piece of folk wisdom akin to “Better to remain silent and be thought a fool, than to open your mouth and remove all doubt.” That is, he’s not really concerned with the relative proportions of regret, but with the idea that it’s better (safer, shrewder) to keep your counsel than to stake out a position that might be contradicted. In light of the rest of the text, this is the reading of the line that makes the most sense to me: equivocation and bet-hedging in the name of worldly safety are a symptom of the sin of despair. Compare:
Reminds me of standards processes and project proposals that produce ever more elaborate specifications that no-one gets round to implementing.
Norbert Wiener
Razib Khan
Similar thought:
-- Akin’s Laws of Spacecraft Design
“It’s actually hard to see when you’ve fucked up, because you chose all your actions in a good-faith effort and if you were to run through it again you’ll just get the same results. I mean, errors-of-fact you can see when you learn more facts, but errors-of-judgement are judged using the same brain that made the judgement in the first place.”—Collin Street
--SMBC
I don’t get the punchline...
I wouldn’t call it a punchline, exactly… I mean, it’s not a joke. But in the comic it’s likely a parent and child talking, and the subtext I infer is that parenting is a process of giving one’s children the tools with which to construct superior solutions to life problems.
How I would love to quote you next month. This is pretty much my approach in a sentence.
Thanks!
For me, the real punchline is in the ‘votey image’ you get by hovering over the red dot at the bottom.
Nick Bostrom
It is perhaps worth noting that a similar comment was made by Dennett:
...in 1991 or so.
I remember this as a famous proverb, it may predate Dennett.
Apparently it does… a few minutes of googling turned up a cite to Rodolfo Llinas (1987), who referred to it as “a process paralleled by some human academics upon obtaining university tenure.”
Has the life cycle of the sea squirt ever been notably used to describe something other than the reaction of an academic to tenure?
Hah! Um… hm. A quick perusal of Google results for “sea squirt -tenure” gets me some moderately interesting stuff about their role as high-sensitivity harbingers for certain pollutants, and something about invasive sea-squirt species in harbors. But nothing about their life-cycle per se. I give a tentative “no.”
From the remarkable opening chapter of Consciousness Explained:
--Daniel Dennett
While I agree with the general point that it’s important to consider impossibilities in fact, I’m not quite sure I agree where he’s drawing the line between fact and principle. Does the compressive strength of stainless steel, and the implied limit on the height of a ladder constructed of it, not count as a restriction in principle?
It just takes some imagination. Hollow out both the Earth and the Moon to reduce their gravitational pull; support the ladder with carbon nanotube filaments; stave off collapse by pushing it around with high-efficiency ion impulse engines; etc.
I agree, though, that philosophers often make too much of the distinction between “logically impossible” and “physically impossible.” There’s probably no way, even in principle, to hollow out the Earth significantly while retaining its structure; etc.
So basically, build a second ladder out of some other material that’s feasible (unlike steel), and then just tie the steel ladder to it so it doesn’t have to bear any weight.
I think that often “logically possible” means “possible if you don’t think too hard about it”. Which is exactly Dennett’s point in context: the idea that you are a brain in a vat is only conceivable if you don’t think about the computing power that would be necessary for a convincing simulation.
Dreams can be quite convincing simulations that don’t need that much computing power.
The worlds that people who do astral traveling perceive can be quite complex. Complex enough to convince people who engage in that practice that they really are on an astral plane. Does that mean that the people are really on an astral plane and aren’t just imagining it?
The way I like to think about it is that convincingness is a 2-place function—a simulation is convincing to a particular mind/brain. If there’s a reasonably well-defined interface between the mind and the simulation (e.g. the 5 senses and maybe a couple more) then it’s cheating to bypass that interface and make the brain more gullible than normal, for example by introducing chemicals into the vat for that purpose.
From that perspective, dreams are not especially convincing compared to experience while awake; rather, dreamers are especially convincible.
Dennett’s point seems to be that a lot of computing power would be needed to make a convincing simulation for a mind as clear-thinking as a reader who was awake. Later in the chapter he talks about other types of hallucinations.
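(A toy way to render that two-place point in Python; the scores and threshold are invented for the sketch, not anything Dennett says:)
```python
# Convincingness as a 2-place function: it relates a simulation to a mind.
# Fidelity and gullibility are invented 0-1 scores for illustration only.
def is_convincing(simulation_fidelity: float, gullibility: float) -> bool:
    return simulation_fidelity + gullibility > 1.0

print(is_convincing(0.2, 0.9))  # cheap dream, sleeping dreamer: True
print(is_convincing(0.9, 0.2))  # costly vat-feed, clear-thinking waker: True
print(is_convincing(0.2, 0.2))  # cheap dream, clear-thinking waker: False
```
The point being that “convincing” isn’t a property of the simulation alone: cheapening the simulation and raising the mind’s gullibility are two routes to the same output.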
The 5 senses are brain events. There aren’t clean input channels to the brain. Take taste: how many different tastes of food can you perceive through your taste sense? More than 5. Why? Your brain takes data from your nose, your tongue, and your memory and fits them together into something that you then perceive through your taste sense.
You have no direct access, through your conscious perception of qualia, to the raw data that your nose or tongue sends to your brain.
If someone is open to receiving suggestions and you give him the hypnotic suggestion that an apple tastes like an orange, you can then wake him. If he eats it, he will tell you that the apple is an orange. He might even get angry when someone tells him that the thing isn’t an orange, because it obviously tastes like one.
You don’t need to introduce any chemicals. Millions of years of evolution have trained brains to have an extremely high prior for thinking that they aren’t “brains in a vat”.
Doubting your own perception is an incredibly hard cognitive task.
There are experiments where an experimenter uses a single electrode to trigger a subject to perform a particular task, like raising his arm. If the experimenter afterwards asks the subject why he raised the arm, the subject makes up a story and believes it. It takes effort for the leader of the experiment to convince the subject that he made up the story and that there was no reason he raised his arm.
I suggest you read the opening chapter of Consciousness Explained. Someone’s posted it online here.
Dennett doesn’t cite any actual scientific paper in the paragraph, or otherwise show that he really knows what the brain does.
You don’t need to provide detailed feedback to the brain. Dennett should be well aware that humans have a blind spot in their eyes, and that the brain makes up information to fill the blind spot.
It’s the same with suggesting to a brain in a vat that it’s acting in the real world. The brain makes up the missing information to provide an experience of being in the real world.
To produce a strong hallucination (as I understand Dennett, he equates strong hallucination with complex hallucination) you might need a channel through which you can insert information into the brain, but you don’t need to provide every detail. Missing details get made up by the brain.
No, Dennett explicitly denies that the brain makes up information to fill the blind spot. This is central to his thesis. He creates a whole concept called ‘figment’ to mock this notion.
His position is that nothing within the brain’s narrative generators expects, requires, or needs data from the blind spot; hence, in consciousness, the blind spot doesn’t exist. No gaps need to be filled in, any more than HJPEV can be aware that Eliezer has removed a line that he might, counterfactually, have spoken.
For a hallucination to be strong, does not require the hallucination to have great internal complexity. It suffices that the brain happen to not ask too many questions.
That’s a question of how “strong” is defined. But it seems that I read Dennett too charitably for that purpose. He defines it as:
Given that definition, Dennett just seems wrong.
He continues saying:
I know multiple people in real life who report hallucinations of that strength. If you want an online source, the Tulpa forum has plenty of people who manage to have strong hallucinations of Tulpas.
The Tulpa way seems to take months or a year. With a strongly hypnotically suggestible person, a good hypnotist can create such a hallucination in less than an hour.
I think I must be misreading you. I’m puzzled that you believe this about hallucinations—that it’s possible for the brain to devote enough processing power to create a “strong” hallucination in the Dennettian sense—but upthread, you seemed to be saying that dreams did not require such processing power. Dreams are surely the canonical example, for people who believe that whole swaths of world-geometry are actually being modelled, rendered and lit inside of their heads? After all, there is nothing else to be occupying the brain’s horsepower; no conflicting signal source.
If I may share with you my own anecdote; when asleep, I often believe myself to be experiencing a fully sensory, qualia-rich environment. But often as I wake, there is an interim moment when I realise—it seems to be revealed—that there never was a dream. There was only a little voice making language-like statements to itself—“now I am over here now I am talking to Bob the scenery is so beautiful how rich my qualia are”.
I think Dennett’s position is just this; that there never was a dream, only a series of answers to spurious questions, which don’t have to be consistent because nothing was awake to demand consistency.
Do you think he’s wrong about dreams, too, or are you saying that waking hallucinations are importantly different? I had a quick look at the Tulpa forum and am unimpressed so far. Could you point to any examples you find particularly compelling?
Ok, so I flat out don’t believe that. If waking consciousness was that unstable, a couple of hours of immersive video gaming would leave me psychotic; and all it would take to see angels would be a mildly-well-delivered Latin Mass, rather than weeks of fasting and self-flagellation.
I’ll go read about it, though.
I don’t think I’ve ever had an experience quite like that. I’ve perhaps had experiences that are transitional between images and propositions—I’m thinking by visualizing a little story to myself, and the images themselves are seamlessly semantic, like I’m on the inside of a novel and the narration is a deep component of the concrete flow of events. But to my knowledge I’ve never felt a sudden revelation that my mental images were ‘only a little voice making language-like statements to itself’, à la Dennett’s suggestion that all experiences are just judgments.
Perhaps we’re conceptualizing the same experience after-the-fact in different ways. Or perhaps we just have different phenomenologies. A lot of people have suggested (sometimes tongue-in-cheek) that Dennett finds his own wilder hypotheses credible because he has an unusually linguistic, abstract, qualitatively impoverished phenomenology. (Personally, I wouldn’t be surprised if that’s a little bit true, but I think it’s a small factor compared to Dennett’s philosophical commitments.)
He is known to be a wine connoisseur. Sidney Shoemaker once asked him why he doesn’t just read the label.
I’ve occasionally had dreams where elements have backstories—I just know something about something in my dream, without having any way of having found it out.
This is common, I think, or at least I’ve seen other people discuss it before ( http://adamcadre.livejournal.com/172934.html ), and it fits my own experience as well. From which I had the rather obvious-in-hindsight insight that the experience of knowledge is itself just another sort of experience, just another type of qualia, just like color or sound.
In dreams knowledge doesn’t need to have an origin-via-discovery, same way that dream images don’t need to originate in our eyes, and dream sounds don’t need to originate in vibrations of our ear drums...
Is this any different from how it feels to know something in waking life, in cases where you’ve forgotten where you learned it?
Probably way too late to this thread, but I’ve had multiple experiences relevant to it.
Once I had a dream and then, in the dream, I remembered I had dreamt this exact thing before, and wondered if I was dreaming now, and everything looked so real and vivid that I concluded I was not.
I can create a kind of half-dream, where I see random images and moving sequences, each at most 3 seconds or so long, in succession. I am quite drowsy but not asleep, and I am aware in the back of my head that they are only schematic and vague.
I would say the backstories in dreams are different in that they can be clearly nonsensical. E.g., I hold and look at a glass relief; there is no movement at all, and yet I know it to be a movie. I know nothing of its content, and I don’t believe the image of the relief to be in the movie.
It’s hard to be sure, but I think dream elements have less of a feeling of context for me. On the other hand, is the feeling of context the side effect of having more connections to my web of memories, or is it just another tag?
(nods) Me too. I’ve also had the RPG-esque variation where I’ve had a split awareness of the dream… I am aware of the broader narrative context, but I am also experiencing being a character in the narrative who is not aware. E.g., I know that there’s something interesting behind that door, and I’m walking around the room, but I can’t just go and open that door because I don’t actually know that in my walking-around-the-room capacity.
It is perfectly consistent to both believe that (some people) can have fully realistic mental imagery, and that (most people’s) dreams tend to exhibit sub-realistic mental imagery.
I have one friend who claims to have eidetic mental imagery, and I have no reason to doubt her. Thomas Metzinger discusses in Being No-One the notion of whether the brain can generate fully realistic imagery, and holds that it usually cannot, but notes the existence of eidetic imaginers as an exception to the rule.
Thanks for the cite: sadly, on clicking through, I get a menacing error message in a terrifying language, so evidently you can’t share it that way? You are quite right that it’s consistent. It’s just that it surprised my model, which was saying “if realistic mental imagery is going to happen anywhere, surely it’s going to be dreams, that seems obviously the time-of-least-contention-for-visual-workspace.”
I’m beginning to wonder whether any useful phenomenology at all survives the Typical Mind Fallacy. Right now, if somebody turned up claiming that their inner monologue was made of butterscotch and unaccountably lapsed into Klingon from three to five PM on weekdays, I’d be all “cool story bro”.
Hmmm. Well, I don’t speak Klingon, but I am bilingual (English/Afrikaans); my inner monologue runs in English all the time in general but, after reading this, I decided to try running it in Afrikaans for a bit. Just to see what happens. Now, my Afrikaans is substantially poorer than my English (largely, I suspect, due to lack of practice).
My inner monologue switches languages very quickly on command; however, there are some other interesting differences that happen. First of all, my inner monologue is rather drastically slowed down. I have a definite sense of having to wait for my brain to look up the right word to describe the concept I mean; that is, there is a definite sense that I know what I am thinking before I wrap it in the monologue. (This is absent when my internal monologue is in the default English; possibly because my English monologue is fast enough that I don’t notice the delay). I think that that delay is the first time that I’ve noticed anticipatory thinking in my own head without the monologue.
There are also grammatical differences between the two languages; an English sentence translated to Afrikaans will come out with a different word order (most of the time). This has its effect on my internal monologue as well; there’s a definite sense of the meanings being delivered to my language centres (or at least to the word-looking-up part thereof) in the order that would be correct for an English sentence, and of the language centre having to hold certain meanings in a temporary holding space (or something) until I get to the right part of the sentence.
I also notice that my brain slips easily back into the English monologue; that’s no doubt due mainly to force of habit, and did not come as a surprise.
That’s odd, it works on three different browsers and two different machines for me. I guess there’s some geographical restriction. Here’s a PDF instead then, I was citing what’s page 45 by the book’s page numbering and page 60 by the PDF’s.
Curiously, the first time I clicked the Google Books link, I got the “Yksi sormus hallitsemaan niitä kaikkia...” message (not an exact transcription—Finnish for, roughly, “One ring to rule them all...”), but the second time, it let me in.
Agreed
My tulpa, which belongs to a Kardashev 3b civilization (but has its own penpal tulpas higher up) disagrees.
For example, you can construct a gravitational shell around the earth to guard against collapse by compensating the gravity. Use superglue so the wabbits and stones don’t start floating. Edit: This is incorrect, stupid Tulpa. More like Kardashev F!
I think your tulpa is playing tricks on you. A shell around the Earth will have no effect on the interactions of bodies within it, or their interactions with everything outside the shell.
It could counteract the gravitational pull which would cause the surface of a hollow Earth to collapse otherwise. Edit: It would not :-(
A spherically symmetric shell has no effect on the gravitational field inside. It will not pull the surface of a hollow Earth outwards.
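(This is Newton’s shell theorem. For a thin spherical shell of mass M and radius R:)
```latex
g(r) =
\begin{cases}
0, & r < R \quad \text{(no net pull anywhere inside the shell)} \\
\dfrac{GM}{r^{2}}, & r > R \quad \text{(as if all the mass sat at the center)}
\end{cases}
```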
You’re correct. There are other ways to guard against the collapse of an empty shell; it’s a similar scenario to guarding against the collapse of a Dyson sphere.
Hey, that’s a great idea—lots of little black hole-fueled satellites in low-earth orbit, suspending the crust so it doesn’t collapse in on itself. I think we can build this ladder, after all!
edit: I think this falls prey to the shell theorem if they’re in a geodesic orbit, but not if they’re using constant acceleration to maintain their altitude, and vectoring their exhaust so it doesn’t touch the Earth.
I’m someone who still finds subjective experience mysterious, and I’d like to fix that. Does that book provide a good, gut-level, question-dissolving explanation?
I’ve had that conversation with a few people over the years, and I conclude that it does for some people and not others. The ones for whom it doesn’t generally seem to think of it as a piece of misdirection, in which Dennett answers in great detail a different question than the one that was being asked. (It’s not entirely clear to me what question they think he answers instead.)
That said, it’s a pretty fun read. If the subject interests you, I’d recommend sitting down and writing out as clearly as you can what it is you find mysterious about subjective experience, and then reading the book and seeing if it answers, or at least addresses, that question.
He seems to answer the question of why humans feel and report that they are conscious; why, in fact, they are conscious. But I don’t know how to translate that into an explanation of why I am conscious.
The problem that many people (including myself) feel to be mysterious is qualia. I know indisputably that I have qualia, or subjective experience. But I have no idea why that is, or what that means, or even what it would really mean for things to be otherwise (other than a total lack of experience, as in death).
A perfect and complete explanation of the behavior of humans still doesn’t seem to bridge the gap from “objective” to “subjective” experience.
I don’t claim to understand the question. Understanding it would mean having some idea over what possible answers or explanations might be like, and how to judge if they are right or wrong. And I have no idea. But what Dennett writes doesn’t seem to answer the question or dissolve it.
Here’s how I got rid of my gut feeling that qualia are both real and ineffable.
First, phrasing the problem:
Even David Chalmers thinks there are some things about qualia that are effable. Some of the structural properties of experience—for example, why colour qualia can be represented in a 3-dimensional space (hue, saturation, and brightness)—might be explained by structural properties of light and the brain, and might be susceptible to third-party investigation.
What he would call ineffable is the intrinsic properties of experience. With regards to colour-space, think of spectrum inversion. When we look at a firetruck, the quale I see is the one you would call “green” if you could access it, but since I learned my colour words by looking at firetrucks, I still call it “red”.
If you think this is coherent, you believe in ineffable qualia: even though our colour-spaces are structurally identical, the “atoms” of experience additionally have intrinsic natures (I’ll call these eg. RED and GREEN) which are non-causal and cannot be objectively discovered.
You can show that ineffable qualia (experiential intrinsic natures, independent of experiential structure) aren’t real by showing that spectrum inversion (changing the intrinsic natures, keeping the structure) is incoherent.
An attempt at a solution:
Take another experiential “spectrum”: pleasure vs. displeasure. Spectrum inversion is harder, I’d say impossible, to take seriously in this case. If someone seeks out P, tells everyone P is wonderful, laughs and smiles when P happens, and even herself believes (by means of mental representations or whatever) that P is pleasant, then it makes no sense to me to imagine P really “ultimately” being UNPLEASANT for her.
Anyway, if pleasure-displeasure can’t be noncausally inverted, then neither can colour-qualia. The three colour-space dimensions aren’t really all you need to represent colour experience. Colour experience doesn’t, and can’t, ever occur isolated from other cognition.
For example: seeing a lot of red puts monkeys on edge. So imagine putting a spectrum-inverted monkey in a (to us) red room, and another in a (to us) green room.
If the monkey in the green (to it, RED’) room gets antsy, or the monkey in the red (to it, GREEN’) room doesn’t, then that means the spectrum-inversion was causal and ineffable qualia don’t exist.
But if the monkey in the green room doesn’t get antsy, or the monkey in the red room does, then it hasn’t been a full spectrum inversion. RED’ without antsiness is not the same quale as RED with antsiness. If all the other experiential spectra remain uninverted, it might even look surprisingly like GREEN. But to make the inversion successfully, you’d have to flip all the other experiential spectra that connect with colour, including antsiness vs. serenity, and through that, pleasure vs. displeasure.
This isn’t knockdown, but it convinced me.
I’m not sure pleasure/pain is that useful, because 1) they have such an intuitive link to reaction/function 2) they might be meta-qualities: a similar sensation of pain can be strongly unpleasant, entirely tolerable or even enjoyable depending on other factors.
What you’ve done with colours is take what feels like a somewhat arbitrary/ineffable quale and declare it inextricably associated with one that has direct behavioural terms involved. Your talk of what’s required to “make the inversion successfully” is misleading: what if the monkey has GREEN and antsiness rather than RED and antsiness?
It seems intuitive to assume ‘red’ and ‘green’ remain the same in normal conditions, but I’m left totally lost as to what ‘red’ would look like to a creature that could see a far wider or narrower spectrum than the one we can see. Or, for that matter, to someone with limited colour-blindness. There seems to me to be the Nagel ‘what is it like to be a bat’ problem here, and I’ve never understood how that dissolves.
It’s been a long time since I read Dennett, but I was in the camp of ‘not answering the question, while being fascinating around the edges and giving people who think qualia are straightforward pause for thought’. No-one’s ever been able to clearly explain how his arguments work to me, to the point that I suggest that either I or they are fundamentally missing something.
If the hard problem of consciousness has really been solved I’d really like to know!
Consider the following dialog:
A: “Why do containers contain their contents?”
B: “Well, because they are made out of impermeable materials arranged in such a fashion that there is no path between their contents and the rest of the universe.”
A: “Yes, of course, I know that, but why does that lead to containment?”
B: “I don’t quite understand. Are you asking what properties of materials make them impermeable, or what properties of shapes preclude paths between inside and outside? That can get a little technical, but basically it works like this—”
A: “No, no, I understand that stuff. I’ve been studying containment for years; I understand the simple problem of containment quite well. I’m asking about the hard problem of containment: how does containment arise from those merely mechanical things?”
B: “Huh? Those ‘merely mechanical things’ are just what containment is. If there’s no path X can take from inside Y to outside Y, X is contained by Y. What is left to explain?”
A: “That’s an admirable formulation of the hard problem of containment, but it doesn’t solve it.”
How would you reply to A?
There’s nothing left to explain about containment. There’s something left to explain about consciousness.
Would you expect that reply to convince A?
Or would you just accept that A might go on believing that there’s something important and ineffable left to explain about containment, and there’s not much you can do about it?
Or something else?
If you were a container, you would understand the wonderful feeling of containment, the insatiable longing to contain, the sweet anticipation of the content being loaded, the ultimate reason for containing and other incomparable wonderful and tortuous qualia no non-container can enjoy. Not being one, all you can understand is the mechanics of containment, a pale shadow of the rich and true containing experience.
OK, maybe I’m getting a bit NSFW here...
It is for A to state what the remaining problem actually is. And qualiphiles can do that:
D: I can explain how conscious entities respond to their environments, process information and behave. What more is there?
C: How it all looks from the inside—the qualia.
That’s funny, David again and the other David arguing about the hard versus the “soft” problem of consciousness. Have you two lost your original?
I think A and B are sticking different terminology on a similar thing. A laments that the “real” problem hasn’t been solved, B points out that it has to the extent that it can be solved. Yet in a way they treat common ground:
A believes there are aspects of the problem of con(tainment|sciousness) that didn’t get explained away by a “mechanistic” model.
B believes that a (probably reductionist) model suffices, “this configuration of matter/energy can be called ‘conscious’” is not fundamentally different from “this configuration of matter/energy can be called ‘a particle’”. If you’re content with such an explanation for the latter, why not the former? …
However, with many Bs I find that even accepting a matter-of-fact workable definition of “these states correspond to consciousness” is used as a stop sign more so than as a starting point.
Just as A insists that further questions exist, so should B, and many of those questions would be quite similar, to the point of practically dissolving the initial difference.
Off the top of my head: if the experience of qualia is a potential side-effect of physical objects, is it configuration-dependent, or does everything have it in some raw, unprocessed form? Is it just that the qualia we experience are modulated and processed by virtue of the relevant matter (the brain) being in a state which can organize memories, reflect on its experiences, etc.?
Anthropic considerations apply: Even if anything had a “value” for “subjective experience”, we would know only about our own, and probably only ascribe that property to similar ‘things’ (other humans or highly developed mammals). But is it just because those can reflect upon that property? Are waterfalls conscious, even if not sentient? “What an algorithm feels like on the inside”—any natural phenomenon is executing algorithms just the same as our neurons and glial cells do. Is it because we can ascribe correspondences between structure in our brain and external structures, i.e. models? We can find the same models within a waterfall, simply by finding another mapping function.
So is it the difference between us and a waterfall that enables the capacity for qualia, something to do with communication, memory, planning? It’s not clear why qualia should depend on “only things that can communicate can experience qualia”, for example. That sounds more like an anthropic concern: Of course we can understand another human relate its qualia experience better than a waterfall could—if it did experience it. Occam’s Razor may prefer “everything can experience” to “only very special configurations of matter can experience”, keeping in mind that the internal structure of a waterfall is just as complex as a human brain.
It seems to me that A is better in tune with the many questions that remain, while B has more of an engineer mindset, a la “I can work with that, what more do I want?”. “Here be dragons” is what follows even the most dissolve-y explanation of qualia, and trying to stay out of those murky waters isn’t a reason to deny their existence.
I can no longer remember if there was actually an active David when I joined, or if I just picked the name on a lark. I frequently introduce myself in real life as “Dave—no, not that Dave, the other one.”
I always assumed that the name was originally to distinguish you from David Gerard.
Sure, I agree that there may be systems that have subjective experience but do not manifest that subjective experience in any way we recognize or understand.
Or, there may not.
In the absence of any suggestion of what might be evidence one way or the other, in the absence of any notion of what I would differentially expect to observe in one condition over the other, I don’t see any value to asking the question. If it makes you feel better if I don’t deny their existence, well, OK, I don’t deny their existence, but I really can’t see why anyone should care one way or the other.
In any case, I don’t agree that the B’s studying conscious experience fail to explore further questions. Quite the contrary, they’ve made some pretty impressive progress in the last five or six decades towards understanding just how the neurobiological substrate of conscious systems actually works. They simply don’t explore the particular questions you’re talking about here.
And it’s not clear to me that the A’s exploring those questions are accomplishing anything.
So, A asks “If containment is a potential side-effect of physical objects, is it configuration-dependent or does everything have it in some raw, unprocessed form?”
How would you reply to A?
My response is something like “We know that certain configurations of physical objects give rise to containment. Sure, it’s not impossible that “unprocessed containment” exists in other systems, and we just haven’t ever noticed it, but why are you even asking that question?”
But I don’t think conscious experience (qualia, if you like) has been explained. I think we have some pretty good explanations of how people act, but I don’t see how they pierce through to consciousness as experienced, or to linked questions such as ‘what is it like to be a bat?’ or ‘how do I know my green isn’t your red?’
It would help if you could sum up the merely mechanical things that are ‘just what consciousness is’ in Dennett’s (or your!) sense. I’ve never been clear on what confident materialists are saying on this: I’m sometimes left with the impression that they’re denying that we have subjective experience, sometimes that they’re saying it’s somehow an inherent quality of other things, sometimes that it’s an incidental byproduct. All of these seem problematic to me.
I don’t think it would, actually.
The merely mechanical things that are ‘just what consciousness is’ in Dennett’s sense are the “soft problem of consciousness” in Chalmers’ sense; I don’t expect any amount of summarizing or detailing the former to help anyone feel like the “hard problem of consciousness” has been addressed, any more than I expect any amount of explanation of materials science or topology to help A feel like the “hard problem of containment” has been addressed.
But, since you asked: I’m not denying that we have subjective experiences (nor do I believe Dennett is), and I am saying that those experiences are a consequence of our neurobiology (as I believe Dennett does). If you’re looking for more details of things like how certain patterns of photons trigger increased activation levels of certain neural structures, there are better people to ask than me, but I don’t think that’s what you’re looking for.
As for whether they are an inherent quality or an incidental byproduct of that neurobiology, I’m not sure I even understand the question. Is being a container an inherent quality of being composed of certain materials and having certain shape, or an incidental byproduct? How would I tell?
And: how would you reply to A?
That’s such a broad statement, it could cover some forms of dualism.
Agreed.
I may not remember Chalmer’s soft problem well enough either for reference of that to help!
If experiences are a consequence of our neurobiology, fine. Presumably a consequence that itself has consequences: experiences can be used in causal explanations? But it seems to me that we could explain how a bat uses echolocation without knowing what echolocation looks like (sounds like? feels like?) to a bat. And that we could distinguish how well people distinguish wavelengths of light etc. without knowing what the colour looks like to them.
It seems subjective experience is just being ignored: we could identify that an AI could carry out all sorts of tasks that we associate with consciousness, but I have no idea when we’d say ‘it now has conscious experiences’. Or whether we’d talk about degrees of conscious experience, or whatever. This is obviously ethically quite important, if not that directly pertinent to me, and it bothers me that I can’t respond to it.
With a container, you describe various qualities and that leaves the question ‘can it contain things’: do things stay in it when put there. You’re adding a sort of purpose-based functional classification to a physical object. When we ask ‘is something conscious’, we’re not asking about a function that it can perform. On a similar note, I don’t think we’re trying to reify something (as with the case where we have a sense of objects having ongoing identity, which we then treat as a fundamental thing and end up asking if a ship is the same after you replace every component of it one by one). We’re not chasing some over-abstracted ideal of consciousness, we’re trying to explain an experienced reality.
So to answer A, I’d say ‘there is no fundamental property of ‘containment’. It’s just a word we use to describe one thing surrounded by another in circumstances X and Y. You’re over-idealising a useful functional concept’. The same is not true of consciousness, because it’s not (just) a function.
It might help if you could identify what, in light of a Dennett-type approach, we can identify as conscious or not. I.e. plants, animals, simple computers, top-level computers, theoretical super-computers of various kinds, theoretical complex networks divided across large areas so that each signal from one part to another takes weeks…
I’m splitting up my response to this into several pieces because it got long. Some other stuff:
The process isn’t anything special, but OK, since you ask.
Let’s assert for simplicity that “I” has a relatively straightforward and consistent referent, just to get us off the ground. Given that, I conclude that I am at least sometimes capable of subjective experience, because I’ve observed myself subjectively experiencing.
I further observe that my subjective experiences reliably and differentially predict certain behaviors. I do certain things when I experience pain, for example, and different things when I experience pleasure. When I observe other entities (E2) performing those behaviors, that’s evidence that they, too, experience pain and pleasure. Similar reasoning applies to other kinds of subjective experience.
I look for commonalities among E2 and I generalize across those commonalities. I notice certain biological structures are common to E2 and that when I manipulate those structures, I reliably and differentially get changes in the above-referenced behavior. Later, I observe additional entities (E3) that have similar structures; that’s evidence that E3 also demonstrates subjective experience, even though E3 doesn’t behave the way I do.
Later, I build an artificial structure (E4) and I observe that there are certain properties (P1) of E2 which, when I reproduce them in E4 without reproducing other properties (P2), reproduce the behavior of E2. I conclude that P1 is an important part of that behavior, and P2 is not.
I continue this process of observation and inference and continue to draw conclusions based on it. And at some point someone asks “is X conscious?” for various Xes:
If I interpret “conscious” as meaning having subjective experience, then for each X I observe it carefully and look for the kinds of attributes I’ve attributed to subjective experience… behaviors, anatomical structures, formal structures, etc… and compare it to my accumulated knowledge to make a decision.
Isn’t that how you answer such questions as well?
If not, then I’ll ask you the same question: what, in light of whatever non-Dennett-type approach you prefer, can we identify as conscious or not?
OK, well, given that responses to pain/pleasure can equally be explained by more direct evolutionary reasons, I’m not sure that the inference from action to experience is very useful. Why would you ever connect these things with experience rather than with other, more directly measurable things?
But the point is definitely not that I have a magic bullet or easy solution: it’s that I think there’s a real and urgent question—are they conscious—which I don’t see how information about responses etc. can answer. Compare to the cases of containment, or heat, or life—all the urgent questions are already resolved before those issues are even raised.
As I say, the best way I know of to answer “is it conscious?” about X is to compare X to other systems about which I have confidence-levels about its consciousness and look for commonalities and distinctions.
If there are alternative approaches that you think give us more reliable answers, I’d love to hear about them.
I have no reliable answers! And I have low meta-confidence levels (in that it seems clear to me that people and most other creatures are conscious but I have no confidence in why I think this)
If the Dennett position still sees this as a complete bafflement but thinks it will be resolved along with the so-called ‘soft’ problem, I have less of an issue than I thought I did. Though I’d still regard the view that the issue will become clear as one of hope rather than evidence.
I’m splitting up my response to this into several pieces because it got long. Some other stuff:
I expect so, sure. For example, I report having experiences; one explanation of that (though hardly the only possible one) starts with my actually having experiences and progresses forward in a causal fashion.
Sure, there are many causal explanations of many phenomena, including but not limited to how bats use echolocation, that don’t posit subjective experience as part of their causal chain. For example, humans do all kinds of things without the subjective experience of doing them.
Certainly.
In the examples you give, yes, it is being ignored. So? Lots of things are being ignored in those examples… mass, electrical conductivity, street address, level of fluency in Russian, etc. If these things aren’t necessary to explain the examples, there’s nothing wrong with ignoring these things.
On the other hand, if we look at an example for which experience ought to be part of the causal chain (for example, as I note above, reporting having those experiences), subjective experience is not ignored. X happens, as a consequence of X a subjective experience Y arises, as a consequence of Y a report Z arises, and so forth. (Of course, for some reports we do have explanations that don’t presume Y… e.g., confabulation, automatic writing, etc. But that needn’t be true for all reports. Indeed, it would be surprising if it were.)
“But we don’t know what Xes give rise to the Y of subjective experience, so we don’t fully understand subjective experience!” Well, yes, that’s true. We don’t fully understand fluency in Russian, either. But we don’t go around as a consequence positing some mysterious essence of Russian fluency that resists neurobiological explanation… though two centuries ago, we might have done so. Nor should we. Neither should we posit some mysterious essence of subjective experience.
“But subjective experience is different! I can imagine what a mechanical explanation of Russian fluency would be like, but I can’t imagine what a mechanical explanation of subjective experience would be like.” Sure, I understand that. Two centuries ago, the notion of a mechanical explanation of Russian fluency would raise similar incredulity… how could a machine speak Russian? I’m not sure how I could go about answering such incredulity convincingly, but I don’t thereby conclude that machines can’t speak Russian. My incredulity may be resistant to my reason, but it doesn’t therefore compel or override my reason.
I have a lot of sympathy for this. The most plausible position of reductive materialism is simply that at some future scientific point this will become clear. But this is inevitably a statement of faith rather than an acknowledgment of current achievement. It’s very hard to compare current apparent mysteries to solved mysteries—I do get that. Having said that, I can’t even see what the steps on the way to explaining consciousness would be, and claiming there is no such thing seems not to be an option (unlike with ‘life’, ‘free will’, etc.), whereas in most other cases the doubt rests only on not seeing how the full extent could be achieved: a machine might already speak crap Russian in some circumstances, etc.
Also, if a machine can speak Russian, you can check that. I don’t know how we’d check a machine was conscious.
BTW, when I said ‘it seems subjective experience is just being ignored’, I meant ignored in your and Dennett’s arguments, not in specific explanations. I have nothing against analysing things in ways that ignore consciousness, if they work.
I don’t know what the mechanical explanation would look like, either. But I’m sufficiently aware of how ignorant my counterparts two centuries ago would have been of what a mechanical explanation for speaking Russian would look like that I don’t place too much significance on my ignorance.
I agree that testing whether a system is conscious or not is a tricky problem. (This doesn’t just apply to artificial systems.)
Indeed: though artificial systems are more intuitively difficult as we don’t have as clear an intuitive expectation.
You can take an outside view and say ‘this will dissolve like the other mysteries’. I just genuinely find this implausible, if only because you can take steps towards the other mysteries (speaking bad Russian occasionally) and because you have a clear empirical standard (Russians). Whereas for consciousness I don’t have any standard for identifying another’s consciousness: I do it only by analogy with myself, and by the implausibility of my having an apparently causal element that others who act similarly to me lack.
I agree that the “consciousness-detector” problem is a hard problem. I just can’t think of a better answer than the generalizing-from-commonalities strategy I discussed previously, so that’s the approach I go with. It seems capable of making progress for now.
And I understand that you find it implausible. That said, I suspect that if we solve the “soft” problem of consciousness well enough that a typical human is inclined to treat an artificial system as though it were conscious, it will start to seem more plausible.
Perhaps it will be plausible and incorrect, and we will happily go along treating computers as conscious when they are no such thing. Perhaps we’re currently going along treating dogs and monkeys and 90% of humans as conscious when they are no such thing.
Perhaps not.
Either way, plausibility (or the absence of it) doesn’t really tell us much.
Yes. This is what worries me: I can see more advances making everyone sure that computers are conscious, but my suspicion is that this will not be logical. Take the same processor and I suspect the chances of it being seen as conscious will rise sharply if it’s put in a moving machine, rise sharply again for a humanoid, again for face/voice and again for physically indistinguishable.
The problem with generalising from commonalities is that I have precisely one direct example of consciousness. Although having said that, I don’t find epiphenomenal accounts convincing, so it’s reasonable for me to think that as my statements about qualia seem to follow causally from experiencing said qualia, that other people don’t have a totally separate framework for their statements about qualia. I wouldn’t be that confident though, and it gets harder with artificial consciousness.
Sure. By the same token, if you take me, remove my ability to communicate, and encase me in an opaque cylinder, nobody will recognize me as a being with subjective experience. Or, for that matter, as a being with the ability to construct English sentences.
We are bounded intellects reasoning under uncertainty in a noisy environment. We will get stuff wrong. Sometimes it will be important stuff.
I agree. And, as I said initially, I apply the same reasoning not only to the statements I make in English, but to all manner of behaviors that “seem to rise from my qualia,” as you put it… all of it is evidence in favor of other organisms also having subjective experience, even organisms that don’t speak English.
How confident are you that I possess subjective experience?
Would that confidence rise significantly if we met in person and you verified that I have a typical human body?
Agreed.
Consciousness does seem different in that we can have a better and better understanding of all the various functional elements but that we’re 1) left with a sort of argument from analogy for others having qualia 2) even if we can resolve(1), I can’t see how we can start to know whether my green is your red etc. etc.
I can’t think of many comparable cases: certainly I don’t think containership is comparable. You and I could end up looking at the AI in the moment before it destroys/idealises/both the world and say ‘gosh, I wonder if it’s conscious’. This is nothing like the casuistic ‘but what about this container gives it its containerness’. I think we’re on the same point here, though?
I’m intuitively very confident you’re conscious: and yes, seeing you were human would help (in that one of the easiest ways I can imagine you weren’t conscious is that you’re actually a computer designed to post about things on Less Wrong. This would also explain why you like Dennett—I’ve always suspected he’s a qualia-less robot too! ;-)
Yes, I agree that we’re much more confused about subjective experience than we are about containership.
We’re also more confused about subjective experience than we are about natural language, about solving math problems, about several other aspects of cognition. We’re not un-confused about those things, but we’re less confused than we used to be. I expect us to grow still less confused over time.
I disagree about the lack of comparable cases. I agree about containers; that’s just an intuition pump. But the issues that concern you here arise for any theoretical construct for which we have only indirect evidence. The history of science is full of such things. Electrons. Black holes. Many worlds. Fibromyalgia. Phlogiston. Etc.
What makes subjective experience different is not that we lack the ability to perceive it directly; that’s pretty common. What makes it different is that we can perceive it directly in one case, as opposed to the other stuff where we perceive it directly in zero cases.
Of course, it’s also different from many of them in that it matters to our moral reasoning in many cases. I can’t think of a moral decision that depends on whether phlogiston exists, but I can easily think of a moral decision that depends on whether cows have subjective experiences. OTOH, it still isn’t unique; some people make moral decisions that depend on the actuality of theoretical constructs like many worlds and PTSD.
Fair enough. As an intuition pump, for me at least, it’s unhelpful: it gave the impression that you thought that consciousness was merely a label being mistaken for a thing (like ‘life’ as something beyond its parts).
Only having indirect evidence isn’t the problem. For a black hole, I care about the observable functional parts. I wouldn’t be being sucked towards it and being crushed while going ‘but is it really a black hole?’ A black hole is like a container here: what matter are the functional bits that make it up. For consciousness, I care if a robot can reason and can display conscious-type behaviour, but I also care if it can experience and feel.
Many worlds could be comparable if there is evidence that implies that there are ‘many worlds’ but people are vague about whether these worlds actually exist. And you’re right, this is also a potentially morally relevant point.
Insofar as people infer from the fact of subjective experience that there is some essence of subjective experience that is, as you say, “beyond its parts” (and their patterns of interaction), I do in fact think they are mistaking a label for a thing.
I dunno about essences. The point is that you can observe lots of interactions of neurons and behaviours and be left with an argument from analogy to say “they must be conscious because I am and they are really similar, and the idea that my consciousness is divorced from what I do is just wacky”.
You can observe all the externally observable, measurable things that a black hole or container can do, and then if someone argues about essences you wonder if they’re actually referring to anything: it’s a purely semantic debate. But you can observe all the things a fish, or tree, or advanced computer can do, predict it for all useful purposes, and still not know if it’s conscious. This is bothersome. But it’s not to do with essences, necessarily.
Insofar as people don’t infer something else, beyond the parts of (for example) my body and their pattern of interactions, to account for (for example) my subjective experience, I don’t think they are mistaking a label for a thing.
Well, until we know how to identify if something/someone is conscious, it’s all a bit of a mystery: I couldn’t rule out consciousness being some additional thing. I have an inclination to do so because it seems unparsimonious, but that’s it.
I’m splitting up my response to this into several pieces because it got long.
The key bit, IMHO:
And I would agree with you.
“No,” replies A, “you miss the point completely. I don’t ask whether a container can contain things; clearly it can, I observe it doing so. I ask how it contains things. What is the explanation for its demonstrated ability to contain things? Containership is not just a function,” A insists, “though I understand you want to treat it as one. No, containership is a fundamental essence. You can’t simply ignore the hard question of ‘is X a container?’ in favor of thinking about simpler, merely functional questions like ‘can X contain Y?’. And, while we’re at it,” A continues, “what makes you think that an artificial container, such as we build all the time, is actually containing anything rather than merely emulating containership? Sure, perhaps we can’t tell the difference, but that doesn’t mean there isn’t a difference.”
I take it you don’t find A’s argument convincing, and neither do I, but it’s not clear to me what either of us could say to A that A would find at all compelling.
Maybe we couldn’t, but A is simply asserting that containership is a concept beyond its parts, whereas I’m appealing directly to experience: the relevance of this is that whether something has experience matters. Ultimately, for any case, if others just express bewilderment at your concepts and apparently don’t get what you’re talking about, you can’t prove there’s an issue. But at any rate, most people seem to have subjective experience.
Being conscious isn’t a label I apply to certain conscious-type systems that I deem ‘valuable’ or ‘true’ in some way. Rather, I want to know which systems should be associated with the clearly relevant and important category of ‘conscious’.
My thoughts about how I go about associating systems with the expectation of subjective experience are elsewhere and I have nothing new to add to it here.
As regards you and A… I realize that you are appealing directly to experience, whereas A is merely appealing to containment, and I accept that it’s obvious to you that experience is importantly different from containment in a way that makes your position importantly non-analogous to A’s.
I have no response to A that I expect A to find compelling… they simply don’t believe that containership is fully explained by the permeability and topology of containers. And, you know, maybe they’re right… maybe some day someone will come up with a superior explanation of containerhood that depends on some previously unsuspected property of containers and we’ll all be amazed at the realization that containers aren’t what we thought they were. I don’t find it likely, though.
I also have no response to you that I expect you to find compelling. And maybe someday someone will come up with a superior explanation of consciousness that depends on some previously unsuspected property of conscious systems, and I’ll be amazed at the realization that such systems aren’t what I thought they were, and that you were right all along.
Are you saying you don’t experience qualia and find them a bit surprising (in a way you don’t for containerness)? I find it really hard not to see arguments of this kind as a little disingenuous: is the issue genuinely not difficult for some people, or is this a rhetorical stance intended to provoke better arguments, or awareness of the weakness of current arguments?
I have subjective experiences. If that’s the same thing as experiencing qualia, then I experience qualia.
I’m not quite sure what you mean by “surprising” here… no, it does not surprise me that I have subjective experiences, I’ve become rather accustomed to it over the years. I frequently find the idea that my subjective experiences are a function of the formal processes my neurobiology implements a challenging idea… is that what you’re asking?
Then again, I frequently find the idea that my memories of my dead father are a function of the formal processes my neurobiology implements a challenging idea as well. What, on your view, am I entitled to infer from that?
Yes, I meant surprising in light of other discoveries/beliefs.
On memory: is it the conscious experience that’s challenging (in which case it’s just a sub-set of the same issue) or do you find the functional aspects of memory challenging? Even though I know almost nothing about how memory works, I can see plausible models and how it could work, unlike consciousness.
Isn’t our objection to A’s position that it doesn’t pay rent in anticipated experience? If one thinks there is a “hard problem of consciousness” such that different answers would cause one to behave differently, then one must take up the burden of identifying what the difference would look like, even if we can’t create a measuring device to find it just now.
If A means that we cannot determine the difference in principle, then there’s nothing we should do differently. If A means that a measuring device does not currently exist, he needs to identify the range of possible outputs of the device.
This may be a situation where that’s a messy question. After all, qualia are experience. I keep expecting experiences, and I keep having experiences. Do experiences have to be publicly verifiable?
If two theories both lead me to anticipate the same experience, the fact that I have that experience isn’t grounds for choosing among them.
So, sure, the fact that I keep having experiences is grounds for preferring a theory of subjective-experience-explaining-but-otherwise-mysterious qualia over a theory that predicts no subjective experience at all, but not necessarily grounds for preferring it to a theory of subjective-experience-explaining-neural-activity.
They don’t necessarily once you start talking about uploads, or the afterlife for that matter.
Different answers to the HP would undoubtedly change our behaviour, because they would indicate that different classes of entity have feelings, which bears on morality. Indeed, it is pretty hard to think of anything with more impact.
The measuring device for conscious experience is consciousness, which is the whole problem.
Sure. But in this sense, false believed answers to the HP are no different from true believed answers... that is, they would both potentially change our behavior the way you describe.
I suspect that’s not what TimS meant.
That is the case for almost any belief you hold (unless you mean “in the exact same way”, not just “change behavior”). You may believe there’s a burglar in your house, and that will impact your actions, whether the belief is false or true. If you believe it’s more likely that there is a burglar, you are correct to act upon that belief even if it turns out to be incorrect. It’s not AIXI’s fault if it believes the wrong thing for the right reasons.
In that sense, you can choose an answer based, for example, on complexity considerations. In the burglar example, the answer you choose (based on data such as crime rate, cat population, etc.) can potentially be further experimentally “verified” as true or false (its probability raised or lowered), but even before such verification, your belief can be strong enough to act upon.
After all, you do act upon your belief that “I am not living in a simulation which will eventually judge and reward me only for the amount of Cheerios I’ve eaten”. It doesn’t lead to different expected experiences at the present time, yet you choose to act as if it were true. A prior based on complexity considerations alone, yet strong enough to act upon. The same goes when thinking about whether the sun has qualia (“hot hot hot hot hot”).
(Bit of a hybrid fusion answer also meant to refer to our neighboring discussion branch.)
Cheerio!
Yes, I agree with all of this.
Well, in the case of “do landslides have qualia”, Occam’s Razor could be used to assign probabilities just the same as we assign probabilities in the “cheerio simulation” example. So we’ve got methodology, we’ve got impact, enough to adopt a stance on the “psychic unity of the cosmos”, no?
I’m having trouble following you, to be honest.
My best guess is that you’re suggesting that, with respect to systems that do not manifest subjective experience in any way we recognize or understand, Occam’s Razor provides grounds to be more confident that they have subjective experience than that they don’t.
If that’s what you mean, I don’t see why that should be.
If that’s not what you mean, can you rephrase the question?
I think it’s conceivable if not likely that Occam’s Razor would favor or disfavor qualia as a property of more systems than just those that seem to show or communicate them in terms we’re used to. I’m not sure which, but it is a question worth pondering, with an impact on how we view the world, and accessible through established methodology, to a degree.
I’m not advocating assigning a high probability to “landslides have raw experience”, I’m advocating that it’s an important question, the probability of which can be argued. I’m an advocate of the question, not the answer, so to speak. And as such opposed to “I really can’t see why anyone should care one way or the other”.
Ah, I see.
So, I stand by my assertion that in the absence of evidence one way or the other, I really can’t see why anyone should care.
But I agree that to the extent that Occam’s Razor type reasoning provides evidence, that’s a reason to care.
And if it provided strong evidence one way or another (which I don’t think it does, and I’m not sure you do either) that would provide a strong reason to care.
I have evidence in the form of my personal experience of qualia. Granted, I have no way of showing you that evidence, but that doesn’t mean I don’t have it.
Agreed that the ability to share evidence with others is not a necessary condition of having evidence. And to the extent that I consider you a reliable evaluator of (and reporter of) evidence, your report is evidence, and to that extent I have a reason to care.
The point has been made that we should care because qualia have moral implications.
Moral implications of a proposition in the absence of evidence one way or another for that proposition are insufficient to justify caring.
If I actually care about the experiences of minds capable of experiences, I do best to look for evidence for the presence or absence of such experiences.
Failing such evidence, I do best to concentrate my attention elsewhere.
It’s possible to have both a strong reason to care and weak evidence, i.e. due to the moral hazard dependent on some doubtful proposition. People often adopt precautionary principles in such scenarios.
I don’t think that’s the situation here, though. That sounds like a description of this situation: (imagine) we have weak evidence that 1) snakes are sapient, and we grant that 2) sapience is morally significant. Therefore (perhaps) we should avoid wanton harm to snakes.
Part of why this argument might make sense is that (1) and (2) are independent. Our confidence in (2) is not contingent on the small probability that (1) is true: whether or not snakes are sapient, we’re all agreed (let’s say) that sapience is morally significant.
On the other hand, the situation with qualia is one where we have weak evidence (suppose) that A) qualia are real, and we grant that B) qualia are morally significant.
The difference here is that (B) is false if (A) is false. So the fact that we have weak evidence for (A) means that we can have no stronger (and likely, we must have yet weaker) evidence for (B).
Does the situation change significantly if “the situation with qualia” is instead framed as A) snakes have qualia and B) qualia are morally significant?
Yes, if the implication of (A) is that we’re agreed on the reality of qualia but are now wondering whether or not snakes have them. No, if (A) is just a specific case of the general question ‘are qualia real?’. My point was probably put in a confusing way: all I meant to say was that Juno seemed to be arguing as if it were possible to be very confident about the moral significance of qualia while being only marginally confident about their reality.
(nods) Makes sense.
What? Are you saying we have weak evidence for qualia even in ourselves?
What I think of the case for qualia is beside the point, I was just commenting on your ‘moral hazard’ argument. There you said that even if we assume that we have only weak evidence for the reality of qualia, we should take the possibility seriously, since we can be confident that qualia are morally significant. I was just pointing out that this argument is made problematic by the fact that our confidence in the moral significance of qualia can be no stronger than our confidence in their reality, and therefore by assumption must be weak.
But of course it can. I can be much more confident in (P → Q) than I am in P. For instance, I can be highly confident that if I won the lottery, I could buy a yacht.
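A worked check of that point, reading the conditional materially (a minimal sketch; the probability numbers are my own illustrative assumptions, not anything claimed in the thread):

```python
# Confidence in a material conditional (P -> Q) versus confidence in P.
# Illustrative, assumed numbers: P = "I win the lottery", Q = "I could buy a yacht".
p_win = 1e-8              # P is very improbable
p_yacht_given_win = 0.99  # but Q given P is very probable

# A material conditional P -> Q fails only when P holds and Q does not.
p_conditional = (1 - p_win) + p_win * p_yacht_given_win

print(p_win)          # 1e-08
print(p_conditional)  # ~0.9999999999: far higher than confidence in P
```

The general point: a material conditional is automatically near-certain whenever its antecedent is improbable, so confidence in (P → Q) can vastly exceed confidence in P.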
I am guessing that Juno_Watt means that strong evidence for our own perception of qualia makes them real enough to seriously consider their moral significance, whether or not they are “objectively real”.
Yes, they often do.
On your view, is there a threshold of doubtfulness of a proposition below which it is justifiable to not devote resources to avoiding the potential moral hazard of that proposition being true, regardless of the magnitude of that moral hazard?
I don’t think it’s likely my house will catch fire, but I take out fire insurance. OTOH, if I don’t set a lower bound I will be susceptible to Pascal’s muggings.
He may have meant something like “Qualiaphobia implies we would have no experiences at all”. However, that all depends on what you mean by experience. I don’t think the Expected Experience criterion is useful here (or anywhere else).
I realize that non-materialistic “intrinsic qualities” of qualia, which we perceive but which aren’t causes of our behavior, are incoherent. What I don’t fully understand is why have I any qualia at all. Please see my sibling comment.
Tentatively:
If it’s accepted that GREEN and RED are structurally identical, and that in virtue of this they are phenomenologically identical, why think that phenomenology involves anything, beyond structure, which needs explaining?
I think this is the gist of Dennett’s dissolution attempts. Once you’ve explained why your brain is in a seeing-red brain-state, why this causes a believing-that-there-is-red mental representation, and so on to a meta-reflection-about-believing-there-is-red functional process, etc., why think there’s anything else?
Phenomenology doesn’t involve anything beyond structure. But my experience seems to.
(nods) Yes, that’s consistent with what I’ve heard others say.
Like you, I don’t understand the question and have no idea of what an answer to it might look like, which is why I say I’m not entirely clear what question you/they claim is being answered. Perhaps it would be more correct to say I’m not clear how it differs from the question you/they want answered.
Mostly I suspect that the belief that there is a second question to be answered that hasn’t been is a strong, pervasive, sincere, compelling confusion, akin to “where does the bread go?”. But I can’t prove it.
Relatedly: I remember, many years ago, attending a seminar where a philosophy student protested to Dennett that he didn’t feel like the sort of process Dennett described. Dennett replied “How can you tell? Maybe this is exactly what the sort of process I’m describing feels like!”
I recognize that the traditional reply to this is “No! The sort of process Dennett describes doesn’t feel like anything at all! It has no qualia, it has no subjective experience!”
To which my response is mostly “Why should I believe that?” An acceptable alternative seems to be that subjective experience (“qualia”, if you like) is simply a property of certain kinds of computation, just as the ability to predict the future location of a falling object (“prescience”, if you like) is a property of certain kinds of computation.
To which one is of course free to reply “but how could prescience—er, I mean qualia—possibly be an aspect of computation??? It just doesn’t make any sense!!!” And I shrug.
Sure, if I say in English “prescience is an aspect of computation,” that sounds like a really weird thing to say, because “prescience” and “computation” are highly charged words with opposite framings. But if I throw out the English words and think about computing the state of the world at some future time, it doesn’t seem mysterious at all, and such computations have become so standard a part of our lives we no longer give it much thought.
When computations that report their subjective experience become ubiquitous, we will take the computational nature of qualia for granted in much the same way.
Thanks for your reply and engagement.
I agree. We already know what we feel like. Once we know empirically what kind of process we are, we can indeed conclude that “that’s what that kind of process feels like”.
What I don’t understand is why being some kind of process feels like anything at all. Why it seems to myself that I have qualia in the first place.
I do understand why it makes sense for an evolved human to have such beliefs. I don’t know if there is a further question beyond that. As I said, I don’t know what an answer would even look like.
Perhaps I should just accept this and move on. Maybe it’s just the case that “being mystified about qualia” is what the kind of process that humans are is supposed to feel like! As an analogy, humans have religious feelings with apparently dedicated neurological underpinnings. Some humans feel the numinous strongly, and they ask for an answer to the Mystery of God, which to them appears as obvious as any qualia.
However, an answer that would be more satisfactory (if possible) would be an exploration and an explanation of mind-space and its accompanying qualia. Perhaps if I understood the actual causal link from which kind of process I am, to which qualia I have, part of the apparent mystery would go away.
Does being like some other kind of process “feel like” anything? Like what? Would it be meaningful for me to experience something else without becoming something else? Are the qualia of a cat separate from being a cat? Or would I have to have a cat-mind and forget all about being human and verbal and DanArmak to experience the qualia of a cat, at which point I’d be no different than any existing cat, and which I wouldn’t remember on becoming human again?
I agree. To clarify, I believe all of these propositions:
Full materialism
Humans are physical systems that have self-awareness (“consciousness”) and talk about it
That isn’t a separate fact that could be otherwise (p-zombies); it’s highly entangled with how human brains operate
Other beings, completely different physically, would still behave the same if they instantiated the same computation (this is pretty much tautological)
If the computation that is myself is instantiated differently (as in an upload or em), it would still be conscious and report subjective experience (if it didn’t, it would be a very poor emulation!)
If I am precisely cloned, I should anticipate either clone’s experience with 50% probability; but after finding out which clone I am, I would not expect to suddenly “switch” to experiencing being the other clone. I also would not expect to somehow experience being both clones, or anything else. (I’m less sure about this because it’s never happened yet. And I don’t understand quantum mechanics, so I can’t properly appreciate the arguments that say we’re already being split all the time anyway. Nevertheless, I see no sensible alternative, so I still accept this.)
Shouldn’t you anticipate being either clone with 100% probability, since both clones will make that claim and neither can be considered wrong?
What I meant is that some time after the cloning, the clones’ lives would become distinguishable. One of them would experience X, while the other would experience ~X. Then I would anticipate experiencing X with 50% probability.
If they live identical lives forever, then I can anticipate “being either clone” or as I would call it, “not being able to tell which clone I am”.
My first instinctive response is “be wary of theories of personal identity where your future depends on a coin flip”. You’re essentially saying “one of the clones believes that it is your current ‘I’ experiencing ‘X’, and it has a 50% chance of being wrong”. That seems off.
I think to be consistent, you have to anticipate experiencing both X and ~X with 100% probability. The problem is that the way anticipation works with probability depends implicitly on there only being one future self that things can happen to.
No, I’m not saying that.
I’m saying: first both clones believe “anticipate X with 50% probability”. Then one clone experiences X, and the other ~X. After that they know what they experienced, so of course one updates to believe “I experienced X with ~1 probability” and the other “I experienced ~X with ~1 probability”.
I think we need to unpack “experiencing” here.
I anticipate there will be a future state of me, which has experienced X (= remembers experiencing X), with 50% probability.
If X takes nontrivial time, such that one can experience “X is going on now”, then I anticipate ever experiencing that with 50% probability.
But that means there is always (100%) a future state of you that has experienced X, and always (100%) a separate future state that has experienced ~X. I think there’s some similarity here to the problem of probability in a many-worlds universe, except in this case both versions can still interact. I’m not sure how that affects things myself.
You’re right, there’s a contradiction in what I said. Here’s how to resolve it.
At time T=1 there is one of me, and I go to sleep. While I sleep, a clone of me is made and placed in an identical room. At T=2 both clones wake up. At T=3 one clone experiences X. The other doesn’t (and knows that he doesn’t).
So, what should my expected probability for experiencing X be?
At T=3 I know for sure, so it goes to 1 for one clone and 0 for the other.
At T=2, the clones have woken up, but each doesn’t know which he is yet. Therefore each expects X with 50% probability.
At T=1, before going to sleep, there isn’t a single number that is the correct expectation. This isn’t because probability breaks down, but because the concept of “my future experience” breaks down in the presence of clones. Neither 50% nor 100% is right.
50% is wrong for the reason you point out. 100% is also wrong, because X and ~X are symmetrical. Assigning 100% to X means 0% to ~X.
So in the presence of expected future clones, we shouldn’t speak of “what I expect to experience” but “what I expect a clone of mine to experience”—or “all clones”, or “p proportion of clones”.
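To make the two bookkeeping conventions concrete, here is a minimal simulation sketch (the trial count and the coin-flip choice of which clone gets X are my own illustrative assumptions):

```python
import random

# Toy model of the cloning scenario above: at T=2 two indistinguishable
# clones wake up; at T=3 exactly one of them experiences X. We count the
# two different things "I will experience X" could refer to.

N_TRIALS = 100_000  # illustrative sample size

trials_where_some_clone_got_x = 0  # "some future self experiences X"
observers = 0                      # every clone-observer, across trials
observers_with_x = 0               # clone-observers who personally got X

for _ in range(N_TRIALS):
    x_holder = random.choice([0, 1])    # which clone gets X is arbitrary
    trials_where_some_clone_got_x += 1  # by construction, one always does
    for clone in (0, 1):
        observers += 1
        observers_with_x += int(clone == x_holder)

print(trials_where_some_clone_got_x / N_TRIALS)  # 1.0: "a future self gets X"
print(observers_with_x / observers)              # 0.5: "the clone I wake as gets X"
```

On the first reading the answer is always 100%; on the second it is 50%. The disagreement in this thread is over which denominator the word “I” picks out, not over any real-world prediction.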
Suppose I’m ~100% confident that, while we sleep tonight, someone will paint a blue dot on either my forehead or my husband’s but not both. In that case, I am ~50% confident that I will see a blue dot, I am ~100% confident that one of us will see a blue dot, I am ~100% confident that one of us will not see a blue dot.
If someone said that seeing a blue dot and not-seeing a blue dot are symmetrical, so assigning ~100% confidence to “one of us will see a blue dot” means assigning ~0% to “one of us will not see a blue dot”, I would reply that they are deeply confused. The noun phrase “one of us” simply doesn’t behave that way.
In the scenario you describe, the noun phrase “I” doesn’t behave that way either.
I’m ~100% confident that I will experience X, and I’m ~100% confident that I will not experience X.
I really find that subscripts help here.
In your example, you anticipate your own experiences, but not your husband’s experiences. I don’t see how this is analogous to a case of cloning, where you equally anticipate both.
I’m not saying that “[exactly] one of us will see a blue dot” and “[neither] one of us will not see a blue dot” are symmetrical; that would be wrong. What I was saying was that “I will see a blue dot” and “I will not see a blue dot” are symmetrical.
All the terminologies that have been proposed here—by me, and you, and FeepingCreature—are just disagreeing over names, not real-world predictions.
I think the quoted statement is at the very least misleading because it’s semantically different from other grammatically similar constructions. Normally you can’t say “I am ~1 confident that [Y] and also ~1 confident that [~Y]”. So “I” isn’t behaving like an ordinary object. That’s why I think it’s better to be explicit and not talk about “I expect” at all in the presence of clones.
My comment about “symmetrical” was intended to mean the same thing: that when I read the statement “expect X with 100% probability”, I normally parse it as equivalent to “expect ~X with 0% probability”, which would be wrong here. And X and ~X are symmetrical by construction in the sense that every person, at every point in time, should expect X and ~X with the same probability (whether you call it “both 50%” like I do, or “both 100%” like FeepingCreature prefers), until of course a person actually observes either X or ~X.
In my example, my husband and I are two people, anticipating the experience of two people. In your example, I am one person, anticipating the experience of two people. It seems to me that what my husband and I anticipate in my example is analogous to what I anticipate in your example.
But, regardless, I agree that we’re just disagreeing about names, and if you prefer the approach of not talking about “I expect” in such cases, that’s OK with me.
One thing you seem to know but keep forgetting is the distinction between your current state and recorded memories. Memories use extreme amounts of lossy and biased compression, and some of your confusion seems to come from looking at your current experience while explicitly thinking about this stuff, and then generalizing it as something continuous over time and applicable to a wider range of mental states than it actually is.
Sure, that makes sense.
As far as I know, current understanding of neuroanatomy hasn’t identified the particular circuits responsible for that experience, let alone the mechanism whereby the latter cause the former. (Of course, the same could be said for speaking English.)
But I can certainly see how having such an explanation handy might help if I were experiencing the kind of insistent sense of mysteriousness you describe (for subjective experience or for speaking English).
Hmm, to your knowledge, has the science of neuroanatomy ever discovered any circuits responsible for any experience?
Quick clarifying question: How small does something need to be for you to consider it a “circuit”?
It’s more a matter of discreteness than smallness: I would say I need to be able to identify the loop.
Second clarifying question, then: Can you describe what ‘identifying the loop’ would look like?
Well, I’m not sure. I’m not confident there are any neural circuits, strictly speaking. But I suppose I don’t have anything much more specific than ‘loop’ in mind: it would have to be something like a path that returns to an origin.
In the sense of the experience not happening if that circuit doesn’t work, yes.
In the sense of being able to give a soup-to-nuts story of how events in the world result in a subjective experience that has that specific character, no.
I guess I mean: has the science of neuroanatomy discovered any circuits whatsoever?
I am having trouble knowing how to answer your question, because I’m not sure what you’re asking.
We have identified neural structures that are implicated in various specific things that brains do.
Does that answer your question?
I’m not very up to date on neurobiology, so when I saw your comment that we had not found the specific circuits for some experience, I was surprised by the implication that we had found that there are neural circuits at all. To my knowledge, all we’ve got is fMRI captures showing changes in blood flow, which we assume to be correlated in some way with synaptic activity. I wondered if you were using ‘circuit’ literally, or if you intended a reference to the oft-used brain-computer metaphor. I’m quite interested to know how appropriate that metaphor is.
Ah! Thanks for the clarification. No, I’m using “circuit” entirely metaphorically.
I think it does. It really is a virtuoso work of philosophy, and Dennett helpfully front-loaded it by putting his most astonishing argument in the first chapter. Anecdotally, I was always suspicious of arguments against qualia until I read what Dennett had to say on the subject. He brings in plenty of examples from philosophy, from psychological and scientific experiments, and even from literature to make things nice and concrete, and he really seems to understand the exact ways in which his position is counter-intuitive and makes sure to address the average person’s intuitive objections in a fair and understanding way.
I’ve read some of Dennett’s essays on the subject (though not the book in question), and I found that, for me, his ideas did help to make consciousness a good deal less mysterious. What actually did it for me was doing some of my own reasoning about how a ‘noisy quorum’ model of conscious experience might be structured, and realizing that, when you get right down to it, the fact that I feel as though I have subjective experience isn’t actually that surprising. It’d be hard to design a human-style system that didn’t have a similar internal behavior that it could talk about.
— W. V. Quine, An Intermittently Philosophical Dictionary (a whimsical and fun read)
Usually I find myself deploying nefarious rhetoric when I believe something on good evidence but have temporarily forgotten the evidence (this is very embarrassing and happens to me a lot).
Scientists are people too. It’s folly to imagine that scientists are less prone to bias and pride than non-scientists.
It’s folly to suppose that they’re not prone at all, but not so foolish to suppose either that their training makes them less biased, or that being less biased makes people more likely to become scientists.
Ever heard the phrase “Science progresses one funeral at a time”? Who do you think coined that phrase? Hint: It wasn’t trash collectors.
If scientists were really as open-minded and ego free as you claim, they wouldn’t spend their lives defending work from their youth.
I think the quote is alluding to the capital-‘S’ Scientist rather than to a particular group of humans. In theory, a Scientist’s cause is to be correct, while human scientists want to be right.
Yeah, the only thing I don’t like about the quote is that it has an unappealing us-vs.-them quality to it, as if the divide between rational people and irrational ones were totally clean-cut. Posted it regardless because of the nice turn of phrase at the end.
Of course, when you are trying to get more of “them” to be “us”, it’s worth pointing out what “they” are doing wrong. It’s not like anyone without brain damage is born and destined to be an “unscientific man” for life.
T.K.V. Desikachar
-Akin’s Laws of Spacecraft Design
Robert Cialdini at the blog Bakadesuyo explaining the difference between ethical persuasion and the dark arts
Graffiti on the wall of an Austrian public housing block:
(German original: “Weiße Wände — hohe Mieten”, i.e. “White walls — high rents”. I’m not actually sure it’s true, but my understanding is that rent in public housing does vary somewhat with quality, and it seems plausible that graffiti could enter into it. And to make the implicit explicit, the reason it seems worth posting here is how it challenges the tenants’ — and my — preconceptions: you may think that from a purely selfish POV you should not want graffiti on your house, but it’s quite possible that the benefits to you are higher than the costs.)
This makes sense as helping with a price discrimination scheme which is probably made very complicated legally (if the landlord is a monopolist, then both you and them might prefer that they have a crappy product to offer at low cost, but often it is hard to offer a crappier product for legal reasons) or as a costly signal of poverty (if you are poor you are willing to make your house dirtier in exchange for money—of course most of the costs can also be signaling, since having white walls is a costly signal of wealth). My guess would be these kinds of models are too expressive to have predictive power, but this at least seems like a clean case.
Signaling explanations often seem to have this vaguely counter-intuitive form, e.g. you might think that from a selfish point of view you would want your classes to be more easily graded. But alas...
Well, this is public housing, so the landlord is the government, and thus is likely both to have monopoly power and not to be subject to the same laws as a private landlord.
If I guess correctly the reasons why a government would pass a law against renting excessively crappy houses, I don’t think it would exempt itself from it.
Er… Why? The only reasons for that I can think of are aesthetics (but you can’t ‘should’ that), economic value (but that only applies to landlords, not tenants) and signalling (but people who know what building I live in already know me well, so I can afford to countersignal to them).
Broken Windows? (If you live in an aesthetically unpleasing area, then people are more likely to trash the place.)
I often see really ingenious graffiti in Vienna. My favorite was somewhere in the 9th district: someone wrote “peace to the huts, war to the palaces” and then someone corrected it to “peace to the huts, and to the palaces”. I found it amusing because it sounded like a graffiti battle between anarchists and Catholics.
Wow.
I wonder if the graffiti artist is part of the housing community, or someone with a special interest in political art targeting rent-seekers.
The deleted account that has posted below makes a concise and informative contribution, if anyone’s interested in checking it out. I wonder why it’s deleted...
A better house in a better neighbourhood costs more. How is this news?
I believe the implication is “I am doing you a favor by spraying graffiti on your apartment building, because that will cause your rent to decrease.”
I don’t know if this is actually true, but that’s what I take to be the intent.
So it’s utility-maximising for renters to deface their property, or others in their area, so that the property is less attractive to other renters, assuming they don’t get caught, that the time penalty isn’t significant, and that they don’t value the original aesthetics of the property relative to its defaced state more than the price differential?
That is a lot to squeeze from four words. FWIW, they struck me as a snarl of rage against people who have more money than the perpetrators.
As a tenant in such an apartment building I would reply that nice white walls and a nice neighbourhood is the entire point of paying that rent, and that anyone who wants to live in a slum should go and find one, preferably a long way from me.
It’s not just the words you’re squeezing, it’s the medium — the fact that the words are written in graffiti.
I agree that a nice neighborhood is the point of paying rent, and your comment about people who want to live in slums, etc. I’m not sure that graffiti by itself constitutes neighborhood-not-niceness, but of course it’s correlated with lots of other things, and there’s the broken windows theory, etc.
To a poor person, having walls at all is more important than having white walls.
To a poor person, having a car at all is more important than having one with no dents in the panels. I don’t see that as justifying vandalising the cars at a second-hand dealer by night so as to pick one up cheap the next day.
But we’re working from just four words of graffiti here, from an unknown author, and the site where Google led me from the original German text is dead.
This happens in the context of gentrification. In a city like Berlin rents in the cool neighborhoods rise and some people have to leave their neighborhood because of the rising rents.
Putting graffiti on walls is a way to counteract this trend.
At least it is from the point of view of people who want to justify that they are being moral when they illegally spray graffiti on other people’s houses.
-- Shamus Young
Thanks for the link.
Here’s another good quote:
It depends on why you were walking there in the first place.
I think you were downvoted too hastily. Seriously, imagine that instead of driving very carefully last week, I phoned my destination to say “Sorry I’m going to miss your wedding, but the only route to the venue is next to a cliff!” Would “Great solution!” be an expected or an accurate response?
I don’t think a single downvote is that significant. I’ve seen so many inexplicable downvotes that I would suspect there’s a bot that, every time a new comment is posted, generates a random number between 0 and 1 and downvotes the comment if the random number is less than 0.05 -- if I could see any reason why anyone would do that.
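For concreteness, the hypothetical bot would be only a few lines (purely illustrative; the `downvote` function is a made-up stub standing in for a voting action, not any real LessWrong API):

```python
import random

DOWNVOTE_PROBABILITY = 0.05  # the 5% rate from the comment above

def downvote(comment_id):
    # Stub: just records what the hypothetical bot would do.
    print(f"downvoted {comment_id}")

def maybe_downvote(comment_id):
    """On each newly posted comment, downvote with probability 0.05."""
    if random.random() < DOWNVOTE_PROBABILITY:
        downvote(comment_id)

# Running it over 100 hypothetical new comments downvotes ~5 on average.
for i in range(100):
    maybe_downvote(f"comment-{i}")
```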
Now that you’ve suggested it...
It actually depends on how many total votes the post in question has accumulated, and how much karma the user in question has accumulated.
A completely new user doesn’t need to have to worry about anomalous downvotes, because they’re too new to have a reputation. Likewise, a well-established user doesn’t need to worry about anomalous downvotes, because they get lost in the underflow. I’d say somewhere around the 200 mark it can become problematic to one’s reputation; and, of course, one or two consecutive anomalous downvotes that act as the first votes on a given post can easily set a trend to bury that post before anyone has a chance to usefully comment on it.
(If much more than 5% of the comments in a thread/by a user are downvoted, then it’s probably not my hypothetical bot’s fault.)
(OTOH, depending on what kind of connection is there between you and the bride and groom and what kind of people they are, a half-joking “how the hell did you choose to get married here of all places” might have been in order.) :-)
John W. Holt (previously quoted here, but not in a Rationality Quotes thread)
-- Paul Rosenberg
I don’t know if there are short words for this, but seems to me that some people generally assume that “things, left alone, naturally improve” and some people assume that “things, left alone, naturally deteriorate”.
The first option seems like optimism, and the second option seems like pessimism. But there is a catch! In real life, many things have good aspects and bad aspects. Now the person who is “optimistic about the future of things left alone” must find a reason why things are worse than expected. (And vice versa, the person who is “pessimistic about the future of things left alone” must find a reason why things are better.) In both cases, a typical explanation is human intervention. Which means that this kind of optimism is prone to conspiracy theories. (And this kind of pessimism is prone to overestimating the benefits of human actions.)
For example, in education: for a “pessimist about the spontaneous future” things are easy—people are born stupid, and schools do a decent job at making them smarter; of course, the process is not perfect. For an “optimist about the spontaneous future”, children should be left alone to become geniuses (some quote by Rousseau can be used to support this statement). Now the question is: why do we have a school system whose only apparent consequence is converting these spontaneous geniuses into ordinary people? And here you go: society needs sheep, etc.
Analogously, in politics: for some people, human nature is scary, and the fact that we can have thousands or even millions of people in the same city without a genocide happening every night is a miracle of civilization. For other people, everything bad in the world is caused by some evil conspirators who either don’t care about or secretly enjoy human suffering.
This does not mean that there are no conspiracies ever, no evil people, no systems made worse by human tampering. I just wanted to point out that if you expect things to improve spontaneously (which seems like a usual optimism, which is supposedly a good thing), the consequences of your expectations alone, when confronted with reality, can drive you to conspiracy theories.
I don’t think that accurately describes a position of someone like Alex Jones.
You can care about people and still push the fat man over the bridge, but then try to keep the fact that you pushed him secret, because you live in a country where the prevailing Christian values dictate that it’s a sin to push the fat man over the bridge.
There are a bunch of conspiracy theories where there is an actual conflict of values and present elites are just evil according to the moral standards that the person who started the conspiracy theory has.
Take education. If you look at EU educational reform after the Bologna Process, there are powerful political forces who want to optimize education so that universities teach skills that are valuable to employers. On the other hand, you have people on the left who think that universities should teach critical thinking and create a society of individuals who follow the ideals of the Enlightenment.
There’s a real conflict of values.
In this specific conflict, I would prefer having two kinds of school—universities and polytechnics—each optimized for one of the purposes, and let the students decide.
Seems to me that conflicts of values are worse when a unified decision has to be made for everyone. (Imagine that people would start insisting that only one subject can be ever taught at schools, and then we would have a conflict of values whether the subject should be English or Math. But that would be just a consequence of a bad decision at meta level.)
But yeah, I can imagine a situation with a conflict of values that cannot be solved by letting everyone pick their choice. And then the powerful people can push their choice, without being open about it.
You do have this in a case like teaching the theory of evolution.
You have plenty of people who are quite passionate about making a unified decision to teach everyone the theory of evolution, including the children of parents who don’t believe in it.
Germany has compulsory schooling. Some fundamentalist Christians don’t want their children in public schools. If you discuss the issue with people who have political power, you find that those people don’t want those children to be taught some strange fundamentalist worldview that includes things like young-earth creationism. They want the children to learn the basic paradigm that people in German society follow.
On the other hand, I’m not sure whether you can get a motivation like that from reading the newspaper. Everyone who’s involved in the newspaper believes that it’s worthwhile to teach children the theory of evolution, so it’s not worth writing a newspaper article about it.
Is it a secret persecution of fundamentalist Christians? The fundamentalist Christians from whom the government takes away the children for “child abuse”, because the children don’t go to school, certainly feel persecuted. On the other hand, the politicians in question don’t really feel like they are persecuting fundamentalist Christians.
The ironic thing about it is that compulsory schooling was introduced in Germany for the stated purpose of turning children into “good Christians”.
In a case like evolution, do you sincerely believe that the intellectual elite should use their power to push a Texan public school to teach evolution even if the parents of the children and the local board of education don’t want it?
Yeah, when people in power create tools to help them maintain the power, if those tools are universal enough, they will be reused by the people who get the power later.
The trade-offs need to be discussed rationally. The answer would probably be “yes”, but there are some negative side effects. For example, you create a precedent for other elites to push their agenda. (Just like those Christians did with compulsory education.) Maybe a third option could be found. (Something like: don’t dictate what schools have to teach, but make the exams independent of schools. Make knowledge of evolution necessary to pass a biology exam. Make it public when students or schools or cities are “failing in biology”.)
Why have governments control exams at all? Have different certifying authorities, and let employers be free to decide which authorities’ diplomas they accept.
That could work! On the other hand, it may set up a situation where a person who is only guilty of being raised in the wrong place may never get a decent job. Wonder what can be done to prevent that as much as possible?
And this differs from the status quo, how?
I was under the impression you wanted to improve things significantly. Hence why I mentioned that issue—and it IS an issue.
My point is that a child’s parents are more likely to make good decisions for the child than education bureaucrats are.
That depends on the parents. Yes, many parents (including mine and, presumably, yours) have the best interests of the child at heart, and have the knowledge and ability to be able to serve those interests quite well.
This is not, however, true of all parents. There’s no entrance exam for parenthood. Thus:
Some parents are directly abusive to their children (including: many parents who abuse alcohol and/or drugs)
Some parents are total idiots; even if they have the best interests of the child at heart, they have no idea what to do about it
Some parents are simply too mired in poverty; they can’t afford food for their children, never mind schooling
Some parents are, usually through no fault of their own, dead while their children are still young
Some parents are absent for some reason (possibly an acrimonious divorce? Possibly in order to find employment?)
An education bureaucrat, on the other hand, is a person hand-picked to make decisions for a vast number of children. Ideally, he is picked for his ability to do so; that is, he is not a total idiot, directly abusive, dead, or missing, and he has a reasonable budget to work with. He also has less time to devote to making a decision per child.
That’s like claiming that bicycling is better than driving cars, as long as “driving cars” includes cases where the cars are missing or broken.
If the parents are missing, dead, abusive, or total idiots (depending on how severe the “total” idiocy is), they can be replaced by adoptive or foster parents. You would need to compare bureaucrats to parents-with-replacement, not to parents-without-replacement, to get a meaningful comparison.
A question: How many people are so attached to being experts at parenting that they would rather see children jobless, unhappy, or dead than educated by experts in a particular field (whether biology or social studies)? Those are the people I worry about, when I imagine a system in which parents/government could decide all the time what their children learn and from what institution. For every parent or official that changes their religion just to get children into the best schools, willing to give up every alliance just to get the tribe’s offspring a better chance at life, and happy to give up their own authority in the name of a growing child’s happiness, there are many, many more who are not so caring and fair, I fear.
Experts in a field are far more likely to want to educate children better BECAUSE the above attachment to beliefs, politics, and authority is not, in their minds, in competition with their care for the children (or, at least, shouldn’t be, if those same things depend upon their knowledge). So, rather than saying we trust business, government, or one’s genetic donors, shouldn’t we be trying to make it so that the best teachers are trusted, period? Or, am I missing the point?
That’s a very odd question because you’re phrasing it as a hypothetical, thus forcing the logical answer to be “yes, being taught by an expert is better than having the child dead”, but you’re giving no real reason to believe the hypothetical is relevant to the real world. If experts could teleport to the moon, should we replace astronauts with them?
If you seriously believe what that is implying, that argument wouldn’t just apply to education. Why shouldn’t we just take away all children at birth (or grow them in the wombs of paid volunteers and prohibit all other childbearing) to have them completely raised by experts, not just educated by them?
Would it benefit the children more than being raised by the parents? Then the answer would be “yes.” Many people throughout history attempted to have their children raised by experts alone, so it is not without precedent, for all its strangeness. Nobles in particular entrusted their children to servants, tutors, and warriors, rather than seek to provide everything needed for a healthy (by their standards) childhood themselves. Caring about one’s offspring may include realizing that one needs lots of help.
By the way, I did not intend to cut off an avenue of exploration, here—merely to point out that the selection processes for business, government, and mating do not have anything to do with getting a better teacher or a person good at deciding what should be taught. If that does destroy some potential solution, I hope you forgive me, and would love to hear of that solution so I may change.
You have an extremely over-idealistic view of how the education bureaucracy (or any bureaucracy for that matter) works.
For evolutionary reasons, parents have a strong desire to do what’s best for their child; bureaucracies, on the other hand, have all kinds of motivations (especially perpetuating the bureaucracy).
You haven’t dealt with bureaucracy much, have you?
There are a lot of failing school systems with large budgets. Throwing money at a broken system doesn’t give you a working system, it gives you a broken system that wastes even more money.
Evolution is satisfied if at least some of the children live to breed. There are several possible strategies that parents can follow here; having many children and encouraging promiscuity would satisfy evolutionary reasons and likely do so better than having few children and ensuring that they are properly educated. Evolutionary reasons are insufficient to ensure that what happens is good for the children; evolutionary reasons are satisfied by the presence of grandchildren.
Yes. That means that the problems in those systems are not money; the problems in those systems lie elsewhere, and need to be dealt with separately.
...not that much, no. I would kind of expect that, when dealing with someone who will be making decisions that affect vast numbers of children, people will make some effort to consider the long-term effects of such choices. (I realise that, in some cases, this will involve words like ‘indoctrination’; there can be a dark side to long-term planning).
This may be over-idealistic on my part. The way I see it, though, it is not the bureaucrat’s job to be better at making decisions for children than the best parent, or even than the average parent. It is the bureaucrat’s job to create a floor; to ensure that no child is treated worse than a certain level.
It doesn’t (and can’t) work this way in practice. In practice, what happens is that there is a disagreement between the bureaucracy and the parents. In that case, whose views should prevail? If you answer “the bureaucracy’s”, your floor is now also a ceiling; if you answer “the parents’ ”, you’ve just gutted your floor. If you want to answer “the parents’ if they’re average or better, and the bureaucracy’s otherwise”, then the question becomes whose job it is to make that judgement, and we’re back to the previous two cases.
I am not sure why exactly it does not work this way, but as a matter of fact, it does not. Specifically, I am thinking about the department of education in Slovakia. As far as I know, it works approximately like this: there are two kinds of people there, elected and unelected.
The elected people (not sure if only the minister, or more people) only care about short-term impression on their voters. They usually promise to “reform the school system” without being more specific, which is always popular, because everyone knows the system is horrible. There is no system behind the changes, it is usually a random drift of “we need one less hour of math, and one more hour of English, because languages are important” and “we need one less hour of English and one more hour of math, because former students can’t do any useful stuff”; plus some new paperwork for teachers.
The unelected people don't give a shit about anything. They just sit there, take their money, and expect to sit there for decades to come. They have zero experience with teaching, and they don't care. They just invent more paperwork for teachers, because the existing paperwork then explains why their jobs are necessary (someone must collect all the data, retype it into Excel, and create reports). The minister usually has no time, or does not care enough, to understand their work, optimize it, and fire those who are not needed. It is very easy for a bureaucrat to create work for themselves, because paperwork recursively creates more paperwork. These people are not elected, so they don't fear the voters; and the minister is dependent on their cooperation, so they don't fear the minister.
Maybe elites that push their agenda have a much better chance of keeping their power than those that don't? I'm not sure how much setting precedents limits future elites.
Basically, you try to make the system more complicated so that you still get what you want, but people feel less manipulated.
Complicated and opaque systems lead to conspiracy theories.
Pessimists can also believe that education started out decent and has deteriorated to the point where it’s worse than nothing.
In addition to Armok’s alternatives, there’s also those who believe the tendency is a reversion to the mean (the mean being the mean because it’s a natural equilibrium, perhaps).
And what about those that tend to assume things stay the same/revert to only changing on geological timescales, or those that assume it keeps moving in a linear way?
Matt Taibbi's opening paragraph in [Everything Is Rigged: The Biggest Price-Fixing Scandal Ever](http://www.rollingstone.com/politics/news/everything-is-rigged-the-biggest-financial-scandal-yet-20130425#ixzz2W8WJ4Vix)
I smell a rat. Googling Matt Talibi (actually Taibbi) does not suggest that he was ever one of these “skeptics”. It’s a rhetorical flourish, nothing more.
Matt isn’t a mainstream journalist. On the other hand he writes about stuff that you can easily document instead of writing about Rothschilds, Masons and the Illuminati.
He isn’t the kind of person who cares about symbolic issues such as whether the members of the Bohemian grove do mock human sacrifices.
In the post I link to he makes his case by arguing facts.
He may even be right, and the Paul Rosenberg article is lightweight and appears on what looks like a kook web site. But it seems to me that there’s no real difference between their respective conclusions.
Rosenberg writes:
Plenty of big banks did make money by betting on the crisis. There were a lot of cases where banks sold their clients products that the banks knew would go south.
Realising that there are important political things that don’t happen in the open is a meaningful conclusion. Matt isn’t in a position where he can make claims for which he doesn’t have to provide evidence.
In 2011 Julian Assange told the press that the US government has an API it can use to query whatever data it likes from Facebook. On the Skeptics StackExchange website there's a question asking whether there's evidence for Assange's claim or whether he made it up. It doesn't mention the possibility that Assange was simply referring to nonpublic information. The orthodox skeptic just rejects claims without public proof.
Two years later, we now have decent evidence via PRISM that the US has that capability. In 2011 Julian knew it was happening because he has the kind of connections you need to get that knowledge, but he had no way of proving it.
If you know Italian or German, Leoluca Orlando racconta la mafia / Leoluca Orlando erzählt die Mafia is a great book that provides a paradigm of how to operate in a political system with conspiracies.
Leoluca Orlando was mayor of Palermo, the capital of Sicily, and fought the Mafia there. That gives him good credentials for saying something about how to deal with it.
He starts his book with the sentence:
Throughout the book he says things about the Sicilian Mafia that he can’t prove but that he knows. In his world in which he had to take care to avoid getting murdered by the Mafia and at the same time fight it, that’s just how the game works.
He also makes the point that it's very important to have politicians who follow a moral code.
The book ends by saying that the new Mafia now consists of people in high finance.
On this site, it’s probably worth clarifying that “evidence” here refers to legally admissible evidence, lest we go down an unnecessary rabbit hole.
From the comments on the article about jobs for the good-looking.
This is a nice calculation with a fairly simple causal diagram. The basic point is that if you think people are repeatedly hired either for their looks or for being a good worker, then among the pool of people who are repeatedly hired, looks and good work are negatively correlated.
That’s called Berkson’s paradox.
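A quick toy simulation (my own illustration; the thresholds and variable names are made up) shows the effect: looks and work quality are independent in the full population, but conditioning on "hired because at least one trait is high" makes them negatively correlated within the selected pool.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Looks and work quality: independent standard normals in the population.
looks = rng.normal(size=n)
work = rng.normal(size=n)

# Toy hiring rule: you keep getting hired if either trait is high.
hired = (looks > 1.0) | (work > 1.0)

print(np.corrcoef(looks, work)[0, 1])                # ~0: independent overall
print(np.corrcoef(looks[hired], work[hired])[0, 1])  # clearly negative: Berkson
```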
“Those who will not reason, are bigots, those who cannot, are fools, and those who dare not, are slaves.” —Lord Byron.
All too often, those who are least rational in their best moments are the greatest supporters of using one's head, if only to avoid too early a demise. I wonder how many years Lord Byron gained from rational thought, and which of his risks he took because he was good at betting...
--Nick Szabo, Falsifiable design: A methodology for evaluating theoretical technologies
If something is purely theoretical you can’t test it in the lab. You need to move beyond theory to start testing how a technology really works.
There are cases where a technology might be dangerous and you want to stay with the theoretical analysis of the problem for some time. In other cases you don't want to do falsifiable design, but instead put the technology into reality as soon as possible.
-- R. Hanson
I’m under the impression that all EY / RH quotes are discouraged, as described in this comment tree, which suggests the following rule should be explicitly amended to be broader:
Maimonides
Source? It’s pithy, yet not on the usual quote compilations that I checked.
Sounds like Takamachi Nanoha to me.
That’s more along the lines of, “I will convert my enemies to friends by STARLIGHT BREAKER TO THE FACE”.
Offhand I can’t think of a single well-recorded real-life historical instance where this has ever worked.
Substitute “friends” with “trading partners” and the outlook improves though.
Fair, the British were totally befriending their way through history for a while.
“Befriending” by force? Well, post-WWII Japan worked out pretty well for the United States. As for dealing with would-be enemies by actually befriending them, Alexander Nevsky sucked up to the Mongols and ended up getting a much better deal for Russia than many of the other places the Mongols invaded.
That’s what her reputation turned out like, and what TSAB propaganda likes to claim. It’s not what she actually did. Let me count the befriendings:
Alisa Bannings. The sole “Nanoha-style befriending”: Nanoha punched her to make her stop bothering Suzuka, after which they somehow became friends. No starlight breaker, though.
Alicia. Mostly Alicia was the one beating up Nanoha. It’s true that Nanoha eventually defeated her in a climactic battle, after first sort-of-befriending her along more normal lines; however, Nanoha’s victory in that battle isn’t what finally turned Alicia. That’s down to the actions of her insane, brain-damaged mother.
Vita. Neither motivation nor loyalty ever wavered.
Reinforce. Decided to work with Nanoha after Hayate asked her to. Nanoha’s starlight breaker was helpful for temporarily weakening the defence program, but was not instrumental in the actual motivation change.
Vivio. …do I really need to go there?
Her reputation for converting enemies is not undeserved, but she’s not converting them by defeating them; she’s converting and defeating them. Amusingly, the movies (which are officially TSAB propaganda) show marginal causation where there’s only correlation.
Oh, and explicitly because people have asked me not to, you’re hereby invited to the rizon/#nanoha irc channel. I’m relatively confident you won’t show up, which is good—it has a tendency to distract authors when I do this. :P
Did you confuse Alicia with Fate?
No.
I’m just opinionated on the subject.
MAHOU SHOUJO TRANSHUMANIST NANOHA
“Girl,” whispered Precia. The little golden-haired girl’s eyes were fluttering open, amid the crystal cables connecting the girl’s head to the corpse within its stasis field. “Girl, do you remember me?”
It took the girl some time to speak, and when she did, her voice was weak. “Momma...?”
The memories were there.
The brain pattern was there.
Her daughter was there.
“Momma...?” repeated Alicia, her voice a little stronger. “Why are you crying, Momma? Did something happen? Where are we?”
Precia collapsed across her daughter, weeping, as some part of her began to believe that the long, long task was finally over.
So, in case anyone is still confused about the point of the Quantum Physics Sequence, it was to help future mad scientists love their reconstructed daughters properly :)
An Idiot Plot is any plot that goes away if the characters stop being idiots. A Muggle Plot is any plot which dissolves in the presence of transhumanism and polyamory. That exact form is surprisingly common; e.g. from what I’ve heard, canon!Twilight has two major sources of conflict, Edward’s belief that turning Bella into a vampire will remove her soul, and Bella waffling between Edward and Jacob. I didn’t realize it until Baughn pointed it out, but S1 Nanoha—not that I’ve watched it, but I’ve read fanfictions—counts as a Muggle Plot because the entire story goes away if Precia accepts the pattern theory of identity.
I would find it unhelpful to describe as a “Muggle Plot” any plot that depends on believing one side of an issue where there is serious, legitimate, disagreement.
(Of course, you may argue that there is no serious, legitimate disagreement on theories of identity, if you wish.)
I also find it odd that polyamory counts but not, for instance, plots that fail when you assume other rare preferences. Why isn't a plot that assumes that the main characters are heterosexual considered a Muggle Plot just as much as one which assumes they are monogamous? What about a plot that fails if incest is permitted? (Star Wars could certainly have gone very differently.) If a plot assumes that the protagonist likes strawberry ice cream, and it turned out that the same percentage of the population hates strawberry ice cream as is polyamorous, would that now be a Muggle Plot too?
I think the idea is not so much "rare preference" as "constrained preference," where that constraint is not relevant / interesting to the reader. Looking at gay fiction, there are lots of works in settings where homosexuality is forbidden, and lots of works in settings where homosexuality is accepted. A plot that would disappear if you moved it to a setting where homosexuality is accepted seems too local; I've actually mostly grown tired of reading those, because I want them to move on and get to something interesting. I imagine that's how it feels for a polyamorist to read Bella's indecision.
To use the ice cream example, imagine trying to read twenty pages on someone in an ice cream shop, agonizing over whether to get chocolate or strawberry. “Just get two scoops already!”
Excellent reply. I’m pretty sure I’d feel the same way if I was reading a story where A wants to be with only B, B wants to be with only A, neither of them want to be with C, but it’s just never occurred to them that monogamy is an option.
Better to say “B wishes A would not sleep with others, A wishes B would not sleep with others, but..”. Monogamy is the state of disallowing other partners, not just not having them.
I’ll accept this definition, but would like a word to describe my marriage in that case.
I’m quite confident that if we ever wanted to open the relationship up to romantic/sexual relationships with third parties, we would have that conversation and negotiate the terms of it, so I’m reluctant to describe us as disallowing other partners. But I currently describe us as monogamous, because, well, we are.
Describing us as polyamorous when neither of us is interested in romantic/sexual relationships with third parties seems as ridiculous as describing a gay man as bisexual because he’s not forbidden to have sex with women.
So how ought I refer to relationships like ours, on your view?
I’d describe that as monogamous. You’re saying that you think you’d be able to negotiate a new rule if circumstances arose, but the current rule is monogamy.
Mm. OK, with that connotation of “disallowing”, I would agree. It’s not the connotation I would expect to ordinarily come to mind in conversation, and in particular your statements about “B wishes A would not sleep with others” emphasized a different understanding of “disallowing” in my mind.
Have you (implicitly or explicitly) promised each other not to have sex with anyone else for the time being (even though the promise is renegotiable)? For example, would it be OK with you if your husband went to (say) a conference abroad and had a one-night stand with someone there without telling you until afterwards? That sounds like a stronger condition than "B wishes A would not sleep with others"—I wish my grandma didn't smoke, but given that she's never promised me not to smoke...
If he had sex with someone without telling me until afterwards, I would be very surprised, and it would suggest that our relationship doesn’t work the way I thought it did. I wouldn’t be OK with that change/revelation, and would need to adjust until I was OK with it.
If he bought a minivan without telling me, all of the above would be true as well.
But it simply isn’t true that I wish he wouldn’t buy a minivan, nor is it true that I wish he wouldn’t sleep with others.
And if he came to me today and said “I want to sleep with so-and-so,” that would be a completely different situation. (Whether I would be OK with it would depend a lot on so-and-so.)
It’s possible that, somewhere in the last 20 years, he promised me he wouldn’t sleep with anyone else. Or, for that matter, buy a minivan. If so, I’ve forgotten (if it was an implicit promise, I might not even have noticed), and it doesn’t matter to me very much either way.
If so, I wouldn’t consider it much of a stretch to call it monogamous.
Nor would I, as I said initially.
What I considered a stretch was accepting ciphergoth’s definition of monogamy, given that my marriage is monogamous, because “We disallow other partners” didn’t seem to accurately describe my monogamous marriage. (Similarly, “We disallow the purchase of minivans” seems equally inaccurate.)
Then came ciphergoth’s clarification that he simply meant by “disallow” that right this moment it isn’t allowed, even though if we expressed interest in changing the rule the rule would change and at that time it would be allowed. That seems like a weird usage of “disallow” to me (consider a dialog like “You aren’t allowed to do X.” “Oh, OK. Can I do X?” “Yeah, sure.”, which is permitted under that usage, for example), but I agreed that under that usage it’s true that we’re not allowed other partners.
I hope that clears things up.
Right, but those are the obvious circumstances where a couple who were not monogamous might become so.
(The more plausible reason being that C is just coercing them both.)
Explaining it as a complaint about a constrained preference does negate the heterosexual example, but I could easily tweak the example a bit: I could still ask why “Muggle Plots” doesn’t include plots that assume a character isn’t bisexual. And my incest example applies without even any tweaks—I’m not pointing out that Star Wars would be different if characters accepted incestuous relationships and no other kind, I’m pointing out that Star Wars would be different if characters accepted incestuous relationships in addition to the ones they do now—that is, if their preference was less constrained. So why is it that a plot that depends on the unacceptability of incest doesn’t count as a Muggle Plot?
Having read the rest of the conversation… I'd say that yes, I have a mild "dammit, weren't condoms invented in this universe long enough ago for these issues to have gone away?!" reaction to Star Wars, but only after reconsidering it in the light of Homestuck. Which, by the way, provides an excellent example in the alien Trolls, who consider both heterosexuality and the kids' incest taboos to be trite annoyances.
I’m going out on a limb here, and saying that Muggle Plot is not a property of a plot, or even a plot-reader pair, but rather an emotion that can be felt in response to a plot, and which is scalar, with a rough heuristic being that it’s stronger the more salient the option that’d make the plot go away is in whatever communities you participate in.
Why? Remember: adaptation executors, not fitness maximizers. And if condoms have been around long enough for people to adapt to them, the first adaptation would be to no longer find condomed sex pleasurable or fulfilling.
I suspect the constraint against incest seems relevant to Eliezer. (The concept as I outlined it is subjective, and I suspect the association with “transhumanism + polyamory” is difficult to pin down without a reference to Eliezer or clusters he’s strongly associated with.)
Because poly evangelism? It certainly seems like something people decide is a good idea rather than some sort of innate preference difference.
But if that were true, I would have to admit that monogamy is probably a bad idea, and that would be sad :(
(shrug) My husband and I live in a largely poly-normative social environment, and are monogamous. We don’t object, we simply aren’t interested. It still makes “oh noes! which lover do I choose! I want them both!” plots seem stupid, though. (“if you want them both, date them both… what’s the difficulty here?”)
So, no, acknowledging that polyamory is something some people decide is a good idea doesn’t force me to “admit” that monogamy is a bad idea.
Admittedly, I’m also not sure why it would be sad if it did.
Because social norms, of course.
Actually, I was pretty tired when I wrote that, but that's what I think I meant.
(I'll note that most monogamous people whose opinions I hear on this think polyamory is almost always a bad idea, although possibly OK for a rare minority. But if relationships are usually a good idea, and polyamory isn't usually actively bad, then polyamory = more relationships = good, goes the 1:00 AM logic.)
Re pattern identity theory:
Scott Aaronson in The Ghost in the Quantum Turing Machine.
The first paragraph of the quote is about pattern identity theory. Unfortunately the second paragraph is actually something of a muddling of pattern identity with the separate issue of basing moral/ethical/legal considerations only on the externalities experienced by the survivors. Specifically, making it about ‘depriving the rest of society’ distracts from the (hopefully) primary point that it is the pattern that matters more so than spooky stuff about an instance.
Nice one. Though one could perhaps recover most of the Nanoha storyline by giving Precia Capgras delusion, unless by “transhumanism” you include the assumption that organic disorders would be trivially fixed (albeit I don’t think Precia had anyone around to diagnose her?)
I’m not sure if that would make it more or less tragic.
Right, that’s my standard head-canon on the subject.
Precia was very badly hurt by the accident, and had to leave society because—for some reason—resurrecting Alicia the way she did was severely illegal. As a result, there was no-one around to double-check her conclusions, or spot the brain damage.
My personal head-canon says that Precia, who ought to know better, was afflicted with a particular type of brain damage that prevented her from recognizing her own daughter. She was, effectively, insane.
Given that the cause of both Alicia's first death and Precia's insanity was an inadvisable engineering experiment that she is explicitly stated to have been against, this makes Precia a tragic figure in her own right.
Does worrying about that sort of thing suggest that Edward actually has a soul?
BUFFY THE VAMPIRE SLAYER SPOILERS (up to season 4)
Rira gubhtu Fcvxr jnf fbhyyrff ng gur gvzr ur sryy va ybir jvgu Ohssl, V qba’g guvax ur jbhyq jnag gb erzbir ure fbhy, fvapr gung jbhyq shaqnzragnyyl punatr ure.
Bs pbhefr Va gur Ohssl-irefr, univat be abg univat n fbhy unf dhvgr pyrne rssrpgf (ynpxvat n fbhy zrnag lbh prnfr gb unir nal zbenyf, gubhtu lbh pna fgvyy srry ybir gbjneqf crbcyr lbh xabj), naq jr frr n pyrne qvssrerapr orgjrra crefba-jvgu-fbhy if gur fnzr crefba-jvgubhg-fbhy. V qbhog gung’f gur pnfr va gur Gjvyvtug-irefr...
...as the only upvoter, I suspect nobody else got that.
After a bit of googling, I don’t think it’s a quote by Maimonides.
The closest I could find is this passage of the Babylonian Talmud:
That’s because it’s usually attributed to Abe Lincoln, with an exception.
That’s kind of amusing, considering that Lincoln is also famous for destroying his enemies the other way.
He tried the nice way first...
This would seem to further weaken the quote, inasmuch as it is evidence that the tactic doesn't work.
Just because your enemies will not always be your friends does not mean it is useless to TRY to convert them into friends. It is, as most things are, a bet. One must know, beforehand, if it is WORTH it to try.
I would say it’s a useful quote because it provides an alternative to the usual “smash them as soon as they oppose you” deal going on.
Nevertheless, the statement to which I replied remains evidence against rather than evidence for. You are of course welcome to support the sentiment despite the anecdote in question—such things aren’t typically considered to be strong evidence either way.
It may also be better than the even more common “deal with them as you can, but don’t expect they’ll ever be on your side”.
I don’t know in what context Lincoln said this (if he really said it), but the tactic worked very well for him at the convention in the summer of 1860. (In those days, the conventions would start without people knowing who would be nominated. But often you had an idea, and Lincoln was a long shot.) All of the other candidates then joined Lincoln’s cabinet (his ‘Team of Rivals’).
Did not work in one notable case, to which the quote may or may not have originally been applied.
Of course it doesn’t apply all the time.
Found on the Forbes site a week or so ago. Then I googled it further and found some more occurrences. Interestingly, the quote is usually attributed to Abraham Lincoln. But he was certainly not the first with this nifty idea.
Does anyone know the original source in Maimonides' writings?
I'm not sure where this is from, and the idea is good, but it doesn't sound like Maimonides. He was extremely willing to declare that those who disagreed with him were drunks, whoremongers and idolators. The Rambam would rarely have talked about his own personal goals anyway; it really isn't his style. I'm skeptical that this is a genuine quote from him.
--Joshua Lang, New York Times, June 12, 2013, What Happens to Women Who Are Denied Abortions?
I was also under the impression that the process of giving birth to a child triggers hormonal changes of some kind (involving oxytocin?) in the mother that help induce maternal bonding.
“Reality provides us with facts so romantic that imagination itself could add nothing to them.” —Jules Verne.
The fellow had a brilliant grasp of how to make scientific discovery interesting, and I think people could learn a thing or two from reading his stuff, still.
-Stabby the Raccoon
--W. Timothy Gallwey, Inner Tennis: Playing the Game
Tuco, The Good, The Bad, and the Ugly
A great line, but it’s a dupe.
Ah! Humblest apologies, retracted.
I just watched Oz the Great and Powerful, the big-budget fanfic prequel film to The Wizard of Oz. Hardly a rationalist movie, but there was some nice boosting of science and technology where I didn’t expect it. So here’s the quotation:
(There’s more, but this is all that I could get out of the Internet and my memory.)
I haven’t seen the movie, but that sounds awfully familiar. It doesn’t sound consistent with the Oz books or any of the big-name fanfic out there (Wicked, etc.), but I wonder if it might have shown up in some similar context.
-- Doug McDuff, M.D., and John Little, Body by Science, pp. x-xi
— Mark Salter & Trevor H. Turner, Community Mental Health Care: A practical guide to outdoor psychiatry
Though you can still find subjects who don't know the outcome, ask them for their predictions, and compare those predictions with those of subjects who are told the outcome, to measure the size of the hindsight bias.
--Thomas M Georges, Digital Soul, 2004, p. 14
I don’t have a pithy parallel quote from Korzybski to put alongside this (pithiness was not his style), but the ideas here are exactly in accordance with Korzybski on “elementalism” (treating as separate and distinct entities things that are not, including body vs. mind), over/under defined terms (verbal definitions lacking extensionality), reification of categories, and the rejection of the is of identity.
I don’t know that I’d recommend thinking of body and mind as identical (as in identity theory in phil mind).
The proper relation is probably better thought of as instantiation of a mind by a brain, in a similar way to how transistors instantiate addition and subtraction.
It matters because if you think mind=brain then you may come to some silly philosophical conclusions, like that a mind that does exactly what yours does (in terms of inputs and outputs to the rest of the body) but, say, runs on silicon, is “not the same mind” or “not a real mind.”
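A toy illustration of the instantiation relation (my analogy, not the commenter's): the two functions below are physically different procedures, yet they instantiate the same addition, just as a brain and a silicon system might instantiate the same mind.

```python
def add_native(a: int, b: int) -> int:
    return a + b  # addition as the language's built-in operator

def add_gates(a: int, b: int) -> int:
    # The same addition, "instantiated" as XOR/AND gate logic with a
    # ripple carry, roughly the way transistors do it (non-negative ints).
    while b:
        carry = a & b
        a = a ^ b
        b = carry << 1
    return a

# Different substrates, identical input-output behavior.
assert all(add_native(x, y) == add_gates(x, y)
           for x in range(40) for y in range(40))
```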
Nick Bostrom
If you haven’t seen it I can recommend Stuart Armstrong’s talk at Oxford on the Fermi paradox and Von Neumann probes. Before I saw this I was thinking in a fuzzy way about “colonization waves” of probes going from star to star …
Jim Holt
Would the static look any different if it was 0% though?
Yes. The CMB is a 2.7 K blackbody whose intensity keeps rising with frequency across the whole TV band (it peaks far above it, near 160 GHz). Since television only goes up to about 1 GHz, this means more noise at higher channels, after accounting for other sources.
There would be less?
Can you actually do this experiment on a modern TV? I know how to change the channels on mine, but I have no idea how you would “tune” it.
Selecting a channel is tuning; each channel has a specific frequency and the TV knows what frequencies the channel numbers stand for. But what you can’t do is tune to a frequency that isn’t assigned to any channel, so you would have to select a channel on which no station in your area is broadcasting.
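For concreteness, here is a sketch of the channel-to-frequency mapping the parent comment describes (assuming the US NTSC low-VHF channel plan; the band edges are the standard ones, the helper name is mine):

```python
# Lower band edges (MHz) of the US NTSC low-VHF channels 2-6.
NTSC_LOWER_EDGE_MHZ = {2: 54, 3: 60, 4: 66, 5: 76, 6: 82}

def video_carrier_mhz(channel: int) -> float:
    # The analog video carrier sat 1.25 MHz above the channel's lower edge.
    return NTSC_LOWER_EDGE_MHZ[channel] + 1.25

print(video_carrier_mhz(2))  # 55.25 -- "channel 2" is a name for this frequency
```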
You would have to be using an analog TV tuner (which is now obsolete, if you’re in the US); digital TV has a much less direct relationship between received radio photons and displayed light photons. On the upside, it’s really easy to find a channel where no station is broadcasting, now :) (though actually, I don’t know what the new allocation of the former analog TV bands is and whether there would be anything broadcasting on them).
(I’ve recently gotten an interest in radio technology; feel free to ask more questions even if you’re just curious.)
This grater.
?
S/he is making a pun of the typo: “what grater proof...” instead of “what greater proof...”. (I don’t find it a very funny pun myself.)
Elvis Presley
One needs the right balance between conversation and action; overall, there's probably too much of the latter and too little of the former in this world.
Or more precisely:
...Most actors don't think enough, and most thinkers don't act enough. cf. Dunning-Kruger effect.
Extroverts and introverts typically line up with those two categories quite neatly, and in my observation tend to associate mainly with people of similar temperament (allowing them to avoid much of the pressure to be more balanced that they'd find in a less homogeneous social circle). I believe that this lack of balanced interaction is the real source of the problem. We need balanced pressure to both act and think competently, but the inherent discomfort makes most people unwilling to voluntarily seek it out (if they even become aware that doing so is beneficial).
I’m not sure I agree in the general case, and I think that among LessWrongers things are certainly unbalanced in the other direction.
-- Albert Einstein
— Amartya Sen, On Economic Inequality, p. vii
From David Shields’ Reality Hunger:
--Oscar Wilde on signalling.
(Architect Melandri to Noemi, the girl he is in love with, who thinks the flood of 1966 was sent as an answer to her prayers)
All my Friend, Act II [roughly translated by me]
This is yet another reason why a God that answers prayers is far, far crueler than an indifferent Azathoth. Imagine the weight of guilt that must settle on a person if they prayed for the wrong thing and God answered!
On another note, that girl must not be very picky, if God has to destroy a whole city to keep her a virgin...(please don’t blast me for this!)
Jon Elster
Even if altruism turns out to be a really subtle form of self-interest, what does it matter? An unwoven rainbow still has all its colors.
Rational distress-minimizers would behave differently from rational altruists. (Real people are somewhere in the middle, and seem to tend toward greater altruism and less distress-minimization when taught 'rationality' by altruists.)
That could be because rationality decreases the effectiveness of distress minimisation techniques other than altruism.
...because it makes you try to see reality as it is?
In me, it’s also had the effect of reducing empathy. (Helps me not go crazy.)
Well, for me, believing myself to be a type of person I don’t like causes me great cognitive dissonance. The more I know about how I might be fooling myself, the more I have to actually adjust to achieve that belief.
For instance, it used to be enough for me that I treat my in-group well. But once I understood that that was what I was doing, I wasn’t satisfied with it. I now follow a utilitarian ethics that’s much more materially expensive.
Are they being taught ‘rationality’ by altruists or ‘altruism’ by rationalists? Or ‘rational altruism’ by rational altruists?
Shouldn’t the methods of rationality be orthogonal to the goal you are trying to achieve?
Perhaps this training simply focuses attention on the distress to be alleviated by altruism. Learning that your efforts at altruism aren’t very effective might be pretty distressing.
That seems to verge on the trivializing gambit, though.
I guess I don’t see the problem with the trivializing gambit. If it explains altruism without needing to invent a new kind of motivation why not use it?
Why would actual altruism be a “new kind” of motivation? What makes it a “newer kind” than self interest?
I meant that everyone I’ve discussed the subject with believes that self-interest exists as a motivating force, so maybe “additional” would have been a better descriptor than “new.”
Hrm… But "self-interest" is itself a fairly broad category, including many subcategories like emotional state, survival, fulfillment of curiosity, self-determination, etc. Given the evolutionary pressures there have been toward cooperation and such, it seems like it wouldn't be that hard a step for altruism to be implemented via actually caring about the other person's well-being, instead of its secretly being just a concern for your own. It'd perhaps be simpler to implement that way. It might be partly implemented by the same emotional reinforcement system, but that's not the same thing as saying that the only thing you care about is your own reinforcement system.
Well, the trivializing gambit here would be to just say that “caring about another person” just means that your empathy circuitry causes you to feel pain when you observe someone in an unfortunate situation and so your desire to help is triggered ultimately by the desire to remove this source of distress.
I'm not sure how concern for another's well-being would actually be implemented in a system that only has a mechanism for caring about its own well-being (i.e. how the mechanism would evolve). The push for cooperation probably came about more because we developed the ability to model the internal states of critters like ourselves, so that we could mount a better offense or defense. The simplest mechanism would be to use a facial expression or posture to make us feel a toned-down version of what we would normally feel when we had the same expression or posture (you're looking for information, not to literally feel the same thing at the same intensity—when the biggest member of your pack is aggressing at you, you probably want the desire to run away or submit to override the empathetic aggression).
It’s worth noting (for me) that this doesn’t diminish the importance of empathy and it doesn’t mean that I don’t really care about others. I think that caring for others is ultimately rooted in self-centeredness but like depth perception is probably a pre-installed circuit in our brains (a type I system) that we can’t really remove totally without radically modifying the hardware. Caring about another person is as much a part of me as being able to recognize their face. The specific mechanism is only important when you’re trying to do something specific with your caring circuits (or trying to figure out how to emulate them).
It may not matter pragmatically but it still matters scientifically. Just as you want to have a correct explanation of rainbows, regardless of whether this explanation has any effects on our aesthetic appreciation of them, so too you want to have a factually accurate account of apparently altruistic behavior, independently of whether this matters from a moral perspective.
Science is about predicting things, not about explaining them. If a theory has no additional predictive value, then it's not scientifically valuable.
In this case I don't see the added predictive value.
There’s the alternative “gambit” of describing it in terms of signaling. There’s the alternative “gambit” of describing it in terms of wanting to live in the best possible universe. There’s the alternative “gambit” of ascribing altruism to the emotional response it invokes in the altruistic individual.
I find the quote false on its face, in addition to being an appeal to distaste.
Careful, there are some tricky conceptual waters here. Strictly, anything I want to do can be ascribed to my emotional response to it, because that’s how nature made us pursue goals. “They did it because of the emotional response it invoked” is roughly analogous to “They did it because their brain made them do it.”
The cynical claim would be that if people could get the emotional high without the altruistic act (say, by taking a pill that made them think they did it), they’d just do that. I don’t think most altruists would, though. There are cynical explanations for that fact, too (“signalling to yourself leads to better signalling to others”) but they begin to lose their air of streetwise wisdom and sound like epicycles.
Are you suggesting emotions are necessary to goal-oriented behavior?
There should be some evidence for that claim; we have people with diminished emotional capacity in wide range of forms. Do individuals with alexithymia demonstrate impaired goal-oriented behaviors?
I think there’s more to emotion as a motive system than the brain as a motive force. People can certainly choose to stop taking certain drugs which induce emotional highs. 10% of people who start taking heroin are able to keep their consumption levels “moderate” or lower, as compared to 90% for something like tobacco, according to one random and hardly authoritative internet site—the precise numbers aren’t terribly important. Perhaps such altruists, like most people, deliberately avoid drugs like heroin for this reason?
-- John Galsworthy
“Why do people worry about mad scientists? It’s the mad engineers you have to watch out for.”—Lochmon
Considering the “mad scientists” keep building stuff, perhaps the question is “Why do people keep calling mad engineers mad scientists?”
Comic
I want to use one of those phrases in conversation. Either grfgvat n znq ulcbgurfvf be znxvat znq bofreingvbaf (rot13'd for spoilers)
Also, I found the creator's page for the comic: http://cowbirdsinlove.com/46
— Miles Vorkosigan, Komarr by Lois McMaster Bujold
— Herbert Simon
Calling different but somewhat related things the same when they are not does not warrant “rationality quote” status.
I acknowledge & respect this criticism, but for two reasons I maintain Simon had a worthwhile insight(!) here that bears on rationality:
Insight, intuition & recognition aren’t quite the same, but they overlap greatly and are closely related.
Simon’s comment, although not literally true, is a fertile hypothesis that not only opens eyeholes into the black boxes of “insight” & “intuition”, but produces useful predictions about how minds solve problems.
I should justify those. Chapter 4 of Simon’s The Sciences of the Artificial, “Remembering and Learning: Memory as Environment for Thought”, is relevant here. It uses chess as a test case:
The seemingly mysterious insights & intuitions of the chessmaster derive from being able to recognize many memorized patterns. This conclusion applies to more than chess; Simon’s footnote points to a champion backgammon-playing program based on pattern recognition, and a couple of pages before that he refers to doctors’ reliance on recognizing many features of diseases to make rapid medical diagnoses.
From what I’ve seen this even holds true in maths & science, where people are raised to the level of geniuses for their insights & intuitions. Here’s cousin_it noticing that Terry Tao’s insights constitute series of incremental, well-understood steps, consistent with Tao generating insights by recognizing familiar features of problems that allow him to exploit memorized logical steps. My conversations with higher ability mathematicians & physicists confirm this; when they talk through a problem, it’s clear that they do better than me by being better at recognizing particular features (such as symmetries, or similarities to problems with a known solution) and applying stock tricks they’ve already memorized to exploit those features. Stepping out of cognitive psychology and into the sociology & history of science, the near ubiquity of multiple discovery in science is more evidence that insight is the result of external cues prompting receptive minds to recognize the applicability of an idea or heuristic to a particular problem.
The reduction of insight & intuition to recognition isn’t wholly watertight, as you note, but the gains from demystifying them by doing the reduction more than outweigh (IMO) the losses incurred by this oversimplification. There are also further gains because the insight-is-intuition-is-recognition hypothesis results in further predictions & explanations:
Prediction: long-term practice is necessary for mastery of a sufficiently complicated domain, because the powerful intuition indicative of mastery requires memorization of many patterns so that one can recognize those patterns.
Prediction: consistently learning new domain-specific patterns (so that one can recognize them later) should, with a very high probability, engender mastery of that domain. (Putting it another way: long-term practice, done correctly, is sufficient for mastery.)
Explanation of why “[i]n a couple of domains [chess and classical music composition] where the matter has been studied, we do know that even the most talented people require approximately a decade to reach top professional proficiency” (TSotA, p. 91).
Prediction: “When a domain reaches a point where the knowledge for skillful professional practice cannot be acquired in a decade, more or less, then several adaptive developments are likely to occur. Specialization will usually increase (as it has, for example, in medicine), and practitioners will make increasing use of books and other external reference aids in their work” (TSotA, p. 92).
Prediction: “It is probably safe to say that the chemist must know as much as a diligent person can learn in about a decade of study” (TSotA, p. 93).
Explanation of Eliezer’s experience with being deep: the people EY spoke to perceived him as deep (i.e. insightful) but EY knew his remarks came from a pre-existing system of intuitions (transhumanism and knowledge of cognitive biases) which allowed him to immediately respond to (or “complete”) patterns as he recognized them.
Explanation of how intensive childhood training produced some famous geniuses and domain experts (the Polgár sisters, William James Sidis, John Stuart Mill, Norbert Wiener).
Prediction: “This accumulation of experience may allow people to behave in ways that are very nearly optimal in situations to which their experience is pertinent, but will be of little help when genuinely novel situations are presented” (“On How to Decide What to Do”, p. 503).
Prediction: one can write a computer program that plays a game or solves a problem by mechanically recognizing relevant features of the input and making cached feature-specific responses.
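To make that last prediction concrete, here is a minimal sketch (my illustration, not Simon's) of a program in exactly that mold: it plays tic-tac-toe by recognizing a few position features and emitting a cached response for each.

```python
# The eight winning lines of a 3x3 board, as index triples.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def completing_move(board, player):
    """Recognize the feature 'a line with two of player's marks and one gap'."""
    for line in LINES:
        cells = [board[i] for i in line]
        if cells.count(player) == 2 and cells.count(' ') == 1:
            return line[cells.index(' ')]
    return None

def choose_move(board, me='X', opp='O'):
    # Cached responses, tried in order of the feature recognized:
    move = completing_move(board, me)       # feature: I can complete a line
    if move is None:
        move = completing_move(board, opp)  # feature: opponent threatens a line
    if move is None:                        # feature: a valuable empty square
        move = next(i for i in (4, 0, 2, 6, 8, 1, 3, 5, 7) if board[i] == ' ')
    return move  # assumes at least one empty square

print(choose_move(list('XX OO    ')))  # recognizes the winning line -> 2
```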
I know I’ve gone on at length here, but your criticism deserved a comprehensive reply, and I wanted to show I wasn’t just being flippant when I quoted Simon. I agree he was hyperbolic, but I reckon his hyperbole was sufficiently minor & insightful as to be RQ-worthy.
Independent of whether the particular quote is labelled a rationality quote, Simon had an undeniable insight in the linked article, and your explanation thereof is superb; so much so that this level of research, organisation and explanation seems almost wasted on a comment. I'll look forward to reading your future contributions (be they comments or, if you have a topic worth explaining, posts).
The interview that’s linked with the name is excellent, though. In an AI context (“as far as I [the AI guy] am concerned”), the quote makes more sense.
I’d upvote a link to the article if it were posted in an open thread. I downvote it (and all equally irrational ‘rationalist quotes’) when they are presented as such here.
Yeah, I sometimes struggle with that: taken at face value, the quote is of course trivially wrong. However, it can be steelmanned in a few interesting ways. Then again, so can a great many random quotes. If, say, EY posted that quote, people might upvote after thinking of a steelmanned version. Whereas with someone else, fewer readers will bother, and will downvote, since at a first approximation the statement is wrong. What do, I wonder?
(Example: “If you meet the Buddha on the road, kill him!”—Well downvoted, because killing is wrong! Or upvoted, because e.g. even “you may hold no sacred beliefs” isn’t sacred? Let’s find out.)
“Never let your sense of morals get in the way of doing what’s right.” —Isaac Asimov
All too often, an intuition creates mistakes which rationality must remedy, when one is presented with a complex problem in life. No fault of the intuition, of course—it is merely the product of nature.
Sometimes, rationality creates mistakes which intuition must modify. Rationality, too, is merely a product of nature.
I don’t know the context of the Asimov quote, but it is not clear that the two things he is contrasting match up, in either order, to rationality and intuition.
The problem with Asimov's advice is that without context it seems to be telling people to ignore ethical injunctions, which is actually horrendous advice.
A better piece of advice would be: "If you find your morals get in the way of doing what's right, consider that evidence that you're probably mistaken about the rightness of the action in question."
Ethical injunctions and morals are similar but not the same thing. Also note that “sense of morals” seems to be referring to intuitions-without-consideration which is different again.
LW jargon. Neither Asimov nor the intended audience would necessarily make that distinction.
Not really once you consider where said intuitions come from.
The jargon introduction was yours, not Asimov's or mine, and your interpretation of his advice as telling people to ignore ethical injunctions is uncharitable as a reading of his intent and mistaken as a claim about how the LW concept applies.
Yes, really. I don’t know what you are basing this ‘consideration’ on.
An example of following Asimov's advice would be someone with a strong moral sense that homosexuality is wrong but a strong egalitarian philosophy, who chooses to overcome that moral sense and refuses to stone the homosexual to death despite the instinctive and socially reinforced moral revulsion.
Yes, and I was using it to be technical; you seem to be trying to argue that Asimov couldn't have meant "ethical injunction" since he wrote "morality".
I didn’t say anything about his intent, I’m talking about how someone told to not let one’s “sense of morals get in the way of doing what’s right” is likely to behave when attempting to act on the advice. As for intent I’m guessing Asimov’s (and apparently yours judging by your example) is to interpret “sense of morals” as [a moral intuition Asimov (or wedrifid) disagrees with] and “doing what’s right” as [a moral intuition Asimov (or wedrifid) agrees with].
I think you are the one mistaken. Remember the point at which an ethical injunction is most important is the point when disobeying it feels like the right thing to do.
Not everyone who speaks about morality automatically sinks down into nonsense and intuition, into the depths of accusations and praise for particular persons, however strange the language they use. Sometimes, speaking about morality means speaking about rationality, surviving and thriving, etc. It may be a mistake to think that Asimov was entirely ignorant of the philosophies this website promotes, given his work in science and the quotes one finds from his interviews, letters, and stories.
I never said anything otherwise. My point was that Asimov was trying to make a distinction between “morality” and “doing what’s right”. The implication being that thinking in terms of the latter will produce better behavior than thinking in terms of the former. My point is that this is not at all the case.
Using a technical term incorrectly then retorting with “LW jargon.” when corrected is either disingenuous or conceivably severely muddled thinking.
I’m saying that he in fact didn’t mean “ethical injunction” in that context and also that his intended audience would not have believed that he was referring to that.
No, not remotely correct. You may note that the example explicitly mentions two different values held by the actor and describes a particular way of resolving the conflict.
Scott Aaronson in The Ghost in the Quantum Turing Machine.
I would be extremely surprised to learn that there were any unanswerable riddles of any kind.
I cashed out “unanswerable” to “should be dissolved.”
That's a good thought. I take 'should be dissolved' to mean that the appropriate attitude towards an apparent question is not to try to answer it on its own terms, but to provide some account that undermines the question. I suppose Aaronson means that, given a body of interrelated concepts and questions, philosophical progress amounts to isolating those that can and should be answered on their own terms from those that can't. On this reading, there are no 'unanswerable' questions, only ill-formed ones.
That makes sense to me.
He talks specifically about the concept of free will (emphasis below is mine):
So “unanswerable” does not necessarily mean “should be dissolved”, but rather that it’s not clear what answering such a question “would even mean”. The “breaking-off” process creates questions which can have meaningful answers. The original question may remain “undissolved”, but some relevant interesting questions become answerable.
Hmm, but why should Aaronson restrict himself to understanding the skeptic’s objection in terms of observable concepts (I assume he means something like ‘empirical concepts’)? I mean, we have good reason to operate within empiricism where we can, but it seems to me you’re not allowed to let your methodology screen off a question entirely. That’s bad philosophical practice.
Because that is what “answerable” means to a scientist?
I guess I could just rephrase the question this way: why should Aaronson get to assume he should be able to understand the skeptic’s objection in terms of, say, physics or biology? We have very good reasons to think we should answer things with physics or biology where we can, but we can’t let methodology screen off a question entirely.
Sorry, I don’t understand your rephrasing. Must be the inference gap between a philosopher and a scientist.
I don’t think so, I think I was just unclear. It’s perfectly fine of course for Aaronson to say ‘if I can’t understand part of the problem of free will within a scientific methodology, I’m going to set it aside.’ But it’s not okay for him to say ‘if I can’t understand part of the problem of free will within a scientific methodology, we should all just set it aside as unanswerable’ unless he has some argument to that effect. Hardcore naturalism is awesome, but we don’t get it by assumption.
Hmm, I don’t believe that he is saying anything like that.
True, I agree that philosophers are uniquely equipped to see an "unanswerable" riddle as a whole, having learned the multitude of attempts to attack such a riddle from various directions throughout history. However, one of the more useful tasks a philosopher can do with her unique perspective is what Scott Aaronson suggests: "break off an answerable question", figure out which branch of the natural sciences is best equipped to tackle it, and pass it along to the area experts. Pass it along, and not pretend to solve it, because most philosophers (with rare exceptions) are not area experts and so are not qualified to truly solve the "answerable questions". The research area can be math, physics, chemistry, linguistics, neuroscience, psychology, etc.
Absolutely, we agree on that, though I think the philosophical work doesn’t end there, since area experts are generally ill equipped to evaluate their answer in terms of the original question.
No disagreement there, either. As long as after this evaluation the philosopher in question does not pretend that she helped the scientists to do their job better. If she simply applies the answer received to the original question and carves out another solvable piece of the puzzle to be farmed out to an expert, I have no problem with that.
-- HN’s Vivtek in discussion about nationalism.
The author may “have a point” as they say, but it doesn’t qualify as a rationality quote by my lights; more of a rhetoric quote. One red flag is
Who denies their existence?
I’m pretty sure that what was meant is “innocent victims”. While still a stretch, it would then shift to discussing the meaning of “innocent” vs insinuating that the US military is so inept, it cannot shoot straight and makes up stuff to cover it.
This seems accurate. The quote is a bunch of applause lights and appeals to identity strung together to support a political agenda. Sure, I entirely support the particular political agenda in question, but just because it is 'my team' being supported doesn't make the process of shouting slogans noteworthy rationality material.
If the religion based shaming line and the “in many cases outright denying the existence of victims of drone strikes” hyperbole were removed or replaced then the quote could have potential.
Principle of charity: "Denial of existence" is to be taken as meaning "Don't think about, don't care about, don't act based on, don't know how many there were", and not "When explicitly asked if drone strikes have victims, say 'no'."
There is already a word for “don’t think about, don’t care about, don’t act based on, don’t know how many there were”. That word is “disregarding”, which is used in the original quote. It then adds, a fortiori, “and in many cases outright denying the existence of victims of drone strikes”. In that context, it cannot mean anything but “explicitly say that there are no victims”, and in addition, that this has actually happened in “many” cases.
Hm, good point. I still suspect it’s metaphorical. Then again, in a world where Fox News is currently saying how Edward Snowden may be a Chinese double agent, it may also be literal and truthful.
By “in many cases outright denying the existence of victims of drone strikes”, I think that the author meant “in many cases (i.e., many strikes), outright denying that some of the victims are in fact victims.”
The author is probably referring to the reported policy of considering all military-age males in a strike-zone to be militants (and hence not innocent victims). I take the author to be claiming that (1) non-militant military-age male victims of drone strikes exist in many cases, and (2) the reported policy amounts to “outright denying the existence” of those victims.
That’s how I read it. The claim isn’t that no one was killed by drone strikes, it’s that no one innocent was killed, so there are no victims.
Yes. Furthermore, the “many cases” doesn’t refer to many people who think that there has never been an innocent victim of a drone strike. Rather, the “many cases” refers to the (allegedly) many innocent victims killed whose existence (as innocents) was denied by reclassifying them as militants.
And the reason this hypothesis is so unlikely as to be not worth considering is:
During the Cold War, the US and British governments were shot through with hundreds of double agents for the Soviets, to an almost ludicrous extent (eg. Kim Philby apparently almost became head of MI6 before being unmasked); and of course, due to the end of the Cold War & access to Russian archives, we now have a much better idea of everything that was going on and can claim a reasonable degree of certainty as to who was a double agent and what their activities were.
With those observations in mind: can you name a single one of those double-agents who went public as a leaker as Snowden has done?
If you can name only one or two such people, and if there were, say, hundreds of regular whistleblowers over the Cold War (which seems like a reasonable figure given all the crap like MKULTRA), then the extreme unlikelihood of the Fox hypothesis seems clear...
If America needs a double agent from a hostile foreign power to merely point out to the media that their government may be doing something that some might find questionable, then America’s got far bigger problems than a few spies.
And if a hostile government cares more about the democratic civil liberties of Americans than Americans do, then there is an even bigger problem. (The actual benefit to China of the particular activity chosen for the 'double agent' is negligible.)
Giving charity is fine. However the principle of charity does not extend to obliging that we applaud, share and propose as inspirational guidelines those things that require such charity in order to not be nonsense.
Doesn’t that depend on the amount of work the reader needs to find a charitable reading? And whether the author would completely endorse the charitable reading?
One could probably charitably read a rationalist-friendly message into the public speeches of Napoleon on the nobility of dying in battle, but it likely would require a lot of intellectual contortions, and Napoleon almost certainly would not endorse the result. So we shouldn’t applaud that charitable reading.
But I think the charitable reading of the quote from the OP is straightforward enough that the need to apply the principle of charity is not an independent reason to reject the quote. Simplicio’s rejection could be complete and coherent even if he had applied the principle of charity—essentially, drawing a distinction between “rhetoric” and “rationality principle.”
It might be that the distinction often makes reference to usage of the principle of charity, but that is different from refusing to apply the principle to a rationality quote.
Yes.
A little bit.
It is the case that when I see a quote that is being defended by appeal to the principle of charity I will be more inclined to downvote said quote than if I had not seen such a justification. As best as I can tell this is in accord with the evidence that such statements provide and my preferences about what kind of quotes I encounter in the ‘rationalist quotes’ thread. This is not the same thing as ‘refusing to apply the principle of charity’.
Fair enough. I think the nub of our disagreement is whether the author must endorse the interpretation for it to be considered a “charitable” reading. I think the answer is yes.
If the interpretation is an improvement but the author wouldn’t endorse it, I think it is analytically clearer to avoid calling that a “charitable” reading, and instead directly call it steelmanning. There’s no reason to upvote a rationality quote that requires steelmanning (and many, many reasons not to). But if we all know what the author “really meant,” it seems reasonable to upvote based on that meaning.
That said, I recognize that it is very easy to mistakenly identify a charitable reading as the consensus reading (i.e. to steelman when you meant to read charitably).
I agree.
That’s a good distinction and a sufficient (albeit not necessary) cause to call it steelmanning.
In this case, “in many cases outright denying the existence of victims of drone strikes” is rather strong and unambiguous language. It is clear that the author is just exaggerating his claims to heighten the emotional emphasis, but we also have to acknowledge that he went out of his way to say ‘outright denying’ when he could have said something true instead. He ‘really meant’ to speak a falsehood for persuasive effect.
What Eliezer did was translate from the language of political rhetoric into what someone might say if they were making a rationalist quote instead. That’s an excellent thing to do to such rhetoric, but if that is required then the quote shouldn’t be in this thread in the first place. Maybe we can have a separate thread for “rationalist translations of inspirational or impressive quotes”. (Given the standard of what people tend to post as rationalist quotes, we possibly need one.)
After considering RichardKennaway’s point, I’m coming to realize that Eliezer’s interpretation is not “charitable” because it isn’t clear that the original speaker would endorse Eliezer’s reading.
Since this is what Rationality Quotes has apparently turned into, I’m not sure that the thread type is worth trying to save.
I get the impression from the analysis we have done that I am likely to essentially agree with most of your judgements regarding charitability.
-Dr. Raymond Peat
Subnormality #194
Also from Subnormality: the perils of AI foom.
This seems like a silly identity to have. When does someone who just wants the truth ever act, other than for the purpose of acquiring truth?
Obligatory note so that people don’t get undesired value drift from a particular usage of English.
“Scavenger” is a slippery term. A hyena is a scavenger; that does not mean that a rabbit ought to walk into easy reach of its jaws.
I agree with you; the context from earlier in the strip was about reading a study with evidence pointing to T-rexes being timid scavengers, and then getting transported back in time and seeing a T-rex acting timid.
Imani Coppola
A nice ideal. It’d be better world than this one if it were true.
Sometimes if it feels like everyone’s being a dick, it is actually because you are not being enough of a dick to everyone (at times when you ought to). Ever been to high school? Or, you know, interacted significantly with humans? Or even studied rudimentary game theory with which to establish priors for the likely behaviour of other agents conditional on your own?
The world is not fair. Reject the Just World fallacy.
Sometimes things don’t work because you chose bad things (or people) to work with. If something isn’t working, either do it differently or do something else entirely that is better.
Personal responsibility is great, and rejecting ‘victim’ thinking is beneficial. But self-delusion is not required and is not (always) beneficial.
Since, as lukeprog writes, one of the methods for becoming happier is to “develop the habit of gratitude,” here is a quote of stuff to be thankful for:
“The taxes I pay because it means that I am employed
The clothes that fit a little too snug because it means I have enough to eat
My shadow who watches me work because it means I am out in the sunshine
A lawn that has to be mowed, windows that have to be washed, and gutters that need fixing because it means I have a home
The spot I find at the far end of the parking lot because it means I am capable of walking
All the complaining I hear about our government because it means we have the freedom of speech
The lady behind me in church who sings off key because it means that I can hear
The huge pile of laundry and ironing because it means my loved ones are nearby
The alarm that goes off in the early morning because it means that I’m alive”
I would still have enough to eat if my clothes fit, I would still have a home if my lawn were self-mowing, I would still be able to hear if she sang more tunefully, I would still be alive if I didn’t set my alarm, etc. Taking advantage of these sorts of moments as opportunities to practice gratitude is a fine practice, but it’s far better to practice gratitude for the thing I actually want (enough to eat, a home, hearing, life, etc.) than for the indicators of it I’d prefer to be rid of.
The goal is to turn something that would otherwise cause you distress into a tool of your own happiness. When something bad happens to you, seek a legitimate reason why it’s a sign of something positive in your life.
The idea that we try to optimize happiness in the sense you imply is a simplification. Blissful ignorance provides happiness, but most people don’t consider it a worthy goal. Yet this suggestion is basically “try to achieve blissful ignorance, rather than not liking bad things”. It does not follow that because X is not possible without Y, and Y is good, therefore X is good. Trying to believe that X is good on these grounds is some variation of willful blindness and blissful ignorance.
Happiness is a state of mind, not a condition of the territory.
True by tautology.
I completely agree. But the following is correct:
X is not possible without Y, and Y makes me happy. Therefore, when I encounter X, I, as a rational person who seeks useful emotions and wishes to raise my level of happiness, would benefit from being able to use the relationship between X and Y to raise my happiness, even if my brain would lower my happiness when it encountered X without considering the relationship between X and Y.
No rational person (at least no rational person without extremely atypical priorities) “wishes to raise his level of happiness”. Few people think that an ideal state for them to be in would be to be drugged into perfect happiness. This suggestion is basically drugging yourself into happiness without the drugs, but keeping the salient aspect of drugs—namely, that the happiness has no connection with there being a desirable situation in the outside world.
You may be thinking your priorities are more typical than they are. A straightforward utilitarian might think it’s a reasonable view/goal. There are lots of people out there.
As a more general point, rationality doesn’t speak to end goals; it speaks to achieving those goals. See the orthogonality thesis.
People who are depressed can quite reasonably want to raise their level of happiness—their baseline is below what makes sense for their situation.
There’s a difference between wanting to raise one’s level of happiness and wanting to raise it as high as possible.
I didn’t mean to imply that a rational person should be willing to pay any possible price to raise his happiness.
Drugs reduce the amount of concern you have for the real world. Taking greater notice of necessary relationships between observations increases the amount of concern you have for the real world.
I’m fairly certain that’s not how you’re supposed to develop a habit of gratitude. It’s not about doublethinking yourself into believing you like things that you dislike; it’s to help you notice more things you like.
I’ve been doing a gratitude journal. I write three short notes from the last day where I was thankful for something a person did (e.g., saving me a brownie or something). Then I take the one that makes me happiest and write a one-paragraph description of what occurred and how I felt, such that writing the paragraph makes me relive the moment. Then I write out a note (that is usually later transcribed) to that person in my gratitude journal.
When I think of that person or think back to that day, I’m immediately able to recall any nice things they did that I wrote down. Also, as I go through my life, I’m constantly looking for things to be thankful for, and notice and remember them more easily.
If you do something like in the quote, it seems more likely that you’ll remember negative things (that you pretend are positive). That goes against the point of the exercise.
Here’s another way to do gratitude wrong: thinking about the good things turns into “this is what can be lost”.
That just doesn’t sound appropriate. It’s as if you’re saying, the alarm means I have to live through another day which I’ll hate, but it’s still better than not living at all, and that’s the best thing I can find to be happy about every morning!
You might as well say: I’m glad I’m sick, because that means I’m not dead yet.
If you hate every day, then you need to make some changes to your life. Finding a job that you enjoy might be a good first step.
The point of the entire post was to be thankful for things that you normally think of as annoying.
Here is another quote by Borges of stuff to be thankful for (English translation).
Pain is good, it tells you you’re still alive.
All in all though, I’d rather have the alive w/out the pain. At least as far as I know.
That depends on precisely what is meant by living without pain.
Head is an achin’ and knees are abraded
Plates in my neck and stitches updated
Toes are a-cracking and tendons inflamed
These are a few of my favorite pains
But yes, the author of those books is mostly correct: there are some kinds of pain that serve a useful warning function. Those are good, and we should be grateful for them.
Others are artifacts of historical stupidity. I’ve learned those lessons, and reminding me of them is useless.
Then why do you keep ignoring them?
It also means that you are in church.
A lawn is not required to have a home, and mowing one certainly isn’t. Windows don’t need constant cleaning.
It’s possible to wake up without an alarm.
The Art of Thinking Clearly by Rolf Dobelli, p. 33.
I prefer to quantify my lack of information and call it a prior. Then it’s even better than wrong information!
The numerical value of the prior itself doesn’t tell how much information—or lack thereof—is incorporated into the prior.
What’s a simple way to state how certain you are about a prior, i.e. how stable it is against large updates based on new information? Error bars or something related don’t necessarily do the job: you might be very sure that the true Pr (EDIT: that was poorly phrased; probability is in the mind etc., what was meant is the eventual Pr you end up with once you’ve hypothetically parsed all possible information, the limit) is between 0.3 and 0.5, i.e. new information will rarely result in a posterior outside that range, even if the size of the range (wrongly) suggests that the prior is based on little information. Is there something more intuitive than Pr(0.3 < Pr(A) < 0.5) = high?
Part 1:
The idea of having a “true probability” can be extremely misleading. If I flip a coin but don’t look at it, I may call it a 50% probability of tails, but reality is sitting right there in my hand with probability 100%. The probability is not in the external world—the coin is already heads or tails. The probability is just 50% because I haven’t looked at the coin yet.
What sometimes confuses people is that there can be things in the world that we often think of as probabilities, and those can have a true value. For example, if I have an urn with 30 black balls and 70 white balls, and I pull a ball from the urn, I’ll get a black ball about 30 times out of 100. This isn’t “because the true probability is 30%”—that’s an explanation that just points to a new fundamental property to explain. It’s because the urn is 30% black balls, and I hadn’t looked at where all the balls were yet.
Using probabilities is an admission of ignorance, of incomplete information. You don’t assign the coin a probability because it’s magically probabilistic, you use probabilities because you haven’t looked at the coin yet. There’s no “true probability” sitting out there in the world waiting for you to discover it, there’s only a coin that’s either heads or tails. And sometimes there are urns with different mixtures of balls, though of course if you can look inside the urn it’s easy to pick the ball you want.
Part 2:
Okay, so there’s no “externally objective, realio trulio probability” to compare our priors to, so how about asking how much our probability will move after we get the next bit of information?
Let’s use some examples. Say I’m taking a poll, and I want to know what the probability is that people will vote for the Purple Party. So I ask 10 people. Now, 10 is a pretty small sample size, but say 3 out of 10 will vote for the Purple Party. So I estimate that the probability is a little more than 3/10. Now, the next additional person I ask will cause me to change my probability by about 10% of its current value. But after I poll 1000 people, asking the next person barely changes my probability estimate. Stability!
This actually works pretty well.
If you wanted to split up your hypothesis space about the poll results into mutually exclusive and exhaustive pieces (which is generally a good idea), you would have a million different hypotheses, because there are a million (well, 1,000,001) different possible numbers of Purple Party supporters. So for example there would be separate hypotheses for 300,000 Purple Party supporters vs. 300,001. Giving each of these hypotheses their own probability is sufficient to talk about the kind of stability you want. If the probabilities are concentrated on a few possible numbers, then your poll is really stable.
And a good thing that it works out, because the probabilities of those million hypotheses are all of the information you have about this poll!
Note that this happens without any mention of “true probability.” We chose those million hypotheses because there are realio trulio a million different possible answers. A narrow distribution over these hypotheses represents certainty not about some true probability, but about the number of actual people out in the actual world, wearing actual purple.
So thank goodness a probability distribution over the external possibilities is all ya’ need, because it’s all ya’ got in this case.
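For anyone who wants to poke at the poll example, here’s a minimal sketch in Python. It assumes a uniform Beta(1, 1) prior over the support fraction, so the posterior mean is Laplace’s rule of succession; the specific numbers are mine, chosen to match the example above.

```python
# A minimal sketch of the poll example, assuming a uniform Beta(1, 1) prior
# over the fraction of Purple Party supporters. Under that prior, the
# posterior mean is Laplace's rule of succession: (k + 1) / (n + 2).

def posterior_mean(supporters, polled):
    """Posterior mean of the support fraction after polling `polled` people."""
    return (supporters + 1) / (polled + 2)

# After 10 respondents, 3 of whom support the Purple Party:
p10 = posterior_mean(3, 10)             # ~0.333, "a little more than 3/10"
shift10 = posterior_mean(4, 11) - p10   # if respondent 11 says yes: ~+0.05

# After 1000 respondents, 300 of whom support the Purple Party:
p1000 = posterior_mean(300, 1000)              # ~0.300
shift1000 = posterior_mean(301, 1001) - p1000  # respondent 1001: ~+0.0007

print(p10, shift10)      # the estimate is still jumpy
print(p1000, shift1000)  # stability!
```

The next answer moves the 10-person estimate by about five percentage points, and the 1000-person estimate by less than a tenth of one, which is exactly the stability being described.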
Thanks, the “true probability” phrasing was misleading; I should’ve reread my comment before submitting. Probability is in the mind, etc.; what I referred to was “the probability you’d eventually end up with, having incorporated all relevant information, the limit”, which is still in your mind, but as close to “true” as you’ll get.
So you can of course say Pr(Box is empty | I saw it’s empty) = x and Pr(Box is empty | I saw it’s empty and I got to examine its inner surfaces with my hand) = y, then list all similar hypotheses about the box being empty conditioned on various experiments, and compare x, y, etc. to get a notion of the stability of your prior.
However, such a listing is quite tedious, and countably infinite as well, even if it’s the only full representation of your box-is-empty belief conditioned on all possible information.
The point was that “my prior about the box being empty is low / high / whatever” doesn’t give any information about whether you’ve just guesstimated it, or whether you’re very sure about your value and will likely discount (for the most part) any new information showing the contrary as a fluke or a trick. A magician seemingly countering gravity with a levitation trick only marginally lowers your prior about how gravity works.
Now when you actually talk to someone, you’ll often convey priors about many things, but less often how stable you deem those priors to be. “This die is probably loaded”: the ‘probably’ refers to your prior, but it does not refer to how fast that prior could change. Maybe it’s a die presented to you by a friend who collects loaded dice, so if you check it you’ll quickly be convinced if it’s not loaded. Maybe it’s your trusted loaded die from childhood, which you’ve used thousands of times, and if it doesn’t appear to be loaded on the next few throws, you’ll still consider it loaded.
Yet in both cases you’d say “the die is probably loaded”. How do you usefully convey the extra information about the stability of your prior? “The die is probably loaded, and my belief in that isn’t likely to change,” so to speak? Not a theoretical definition of stability (only listing all your beliefs can represent that), but, as in the grandparent, a simple and intuitive way of conveying that important extra information about stability, and a plea to start conveying that information.
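To make the die example concrete, here is a minimal sketch, assuming “loaded” means the die shows six more often than the fair 1/6 and representing each person’s belief about the six-rate as a Beta distribution. All the pseudo-counts are invented for illustration.

```python
# A minimal sketch: two people hold the same prior ("probably loaded", i.e.
# an expected six-rate of 1/3, twice the fair 1/6), but with very different
# weights of evidence behind it. The pseudo-counts are made up.

def mean_six_rate(sixes, others):
    """Posterior mean six-rate for a Beta(sixes, others) belief."""
    return sixes / (sixes + others)

guess   = (2, 4)      # a hunch, worth about 6 pseudo-observations
veteran = (200, 400)  # a lifetime of throws with the trusted childhood die

print(mean_six_rate(*guess), mean_six_rate(*veteran))  # 0.333 and 0.333

# Both now watch 60 fair-looking throws: 10 sixes, 50 non-sixes.
sixes, others = 10, 50
print(mean_six_rate(guess[0] + sixes, guess[1] + others))      # ~0.18: nearly fair
print(mean_six_rate(veteran[0] + sixes, veteran[1] + others))  # ~0.32: barely moved
```

Both people would say “the die is probably loaded” beforehand, yet the same evidence moves one belief most of the way to fair and barely dents the other; the stability lives in the pseudo-counts, which the word “probably” doesn’t convey.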
Relevant resource: Probability is subjectively objective.
I believe this is a model-space problem. We’re looking at a toy Bayesian reasoner that can be easily modeled in a human mind, predicting how it will update its hypotheses about dice in response to evidence like the same number coming up too often. Our toy Bayesian, of course, assigns probability 0 to encountering evidence like “my trusted expert friend says it’s loaded,” so that wouldn’t change its probabilities at all. But that’s not a flaw in Bayesian reasoning; it’s a flaw in the kind of Bayesian reasoner that can be easily modeled in a human mind.
This doesn’t demonstrate that human reasoning that works doesn’t have a Bayesian core. E.g., I don’t know how I would update my probabilities about a die being loaded if, say, my left arm turned into a purple tentacle and started singing “La Bamba.” But it does show that even an ideal reasoner can’t always out-predict a computationally limited one, if the computationally limited one has access to a much better prior and/or a whole lot more evidence.
Error bars usually indicate a Gaussian distribution, not a flat one. If you said P = 0.4 ± 0.03, that indicates that your probability of the final probability estimate ending up outside the 0.3-0.5 range is less than a percent. This seems to meet your requirements.
If that doesn’t suffice, it seems that you need a full probability distribution, specifying the probability of every P-value.
Describing probabilities in terms of a mean and an approximate standard deviation, perhaps? Low standard deviation would translate to high certainty.
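A minimal sketch of that suggestion, again using a Beta distribution as the belief over the probability; the numbers are illustrative only.

```python
# Summarize a Beta(a, b) belief about a probability by its mean and standard
# deviation: same mean, but the sd shrinks as the weight of evidence grows.
import math

def summarize(a, b):
    m = a / (a + b)
    sd = math.sqrt(m * (1 - m) / (a + b + 1))
    return m, sd

print(summarize(4, 6))      # (0.4, ~0.15): a rough guess
print(summarize(400, 600))  # (0.4, ~0.015): the same "40%", held far more firmly
```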
--Scott Adams, I Want My Cheese
Reading through some AI literature, I stumbled upon a nicely concise statement of the core of decision theory, from Lindley (1985):
Of course, maximizing expected utility has its own absurd consequences (e.g. Pascal’s Mugging), so decision theory is not yet “finished.”
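For concreteness, here is a minimal sketch of the decision rule being referred to: choose the act with the highest expected utility. The acts, states, probabilities, and utilities are all invented for illustration.

```python
# A minimal sketch of expected-utility maximization over a toy decision
# problem. All numbers are made up for illustration.

def expected_utility(act, p_state, utility):
    """Average the utility of `act` over states, weighted by their probability."""
    return sum(p * utility[act][state] for state, p in p_state.items())

p_state = {"rain": 0.3, "dry": 0.7}
utility = {
    "umbrella":    {"rain": 5,   "dry": 3},   # EU = 0.3*5   + 0.7*3 = 3.6
    "no_umbrella": {"rain": -10, "dry": 5},   # EU = 0.3*-10 + 0.7*5 = 0.5
}

best = max(utility, key=lambda act: expected_utility(act, p_state, utility))
print(best)  # "umbrella"
```

Pascal’s Mugging arises precisely because nothing in this rule caps how much a tiny probability of an enormous utility can dominate the sum.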
Timothy Burke
Orson Scott Card, Ender’s Shadow
Against a Dark Background by Iain M. Banks. The context differs, but it reminded me of the folks working to eliminate death.
I should perhaps explain that perceived connection. I see it in two pieces.
One is a counterpart to Joy in the Merely Real. Just because something is commonplace does not mean it is not wonderful. Just because something is commonplace does not mean it is not horrible. The end of each conscious life is a distinct tragedy, even if it happens 100 times per minute. Every one counts.
The other is a case against rationalization. Looking for a greater meaning or epic poetry in death ignores the basic problem that it is bad. A million deaths is a million tragedies, not a statistic. Shut up and multiply. We all come from cultures that spent millennia developing rationalizations for the inevitability of death. If a solution is possible, and possible within our lifetimes, the proper response is to find it rather than growing effusive about “a great and tragic beauty.”
(And, of course, how do you avoid it?)
-- Patton Oswalt
Geulincx, from his own annotations to his Ethics (1665):
Seth Roberts, ‘Something is better than nothing’, Nutrition, vol. 23, no. 11 (November, 2007), p. 912
---fuzzyfuzzyfungus
Collectability might only be self-refuting for mass-produced items.
A more detailed history of Beanie Babies
Cynicism about price guides
It also has an aspect of self-fulfilling prophecy.
Which one applies depends on how easy it is to make new instances of the collectable in question.
Stephen Baxter, Evolution
Application to rationality? An exceptionally poetic reminder that tragedies endlessly repeated are still tragedies, each time.
David Wallace
(cont. below)
Downvoted, unread. This is the place for quotes, not essays. (And if you object that there’s no rule about the size of the quotes, I’ll downvote you again)
This pre-emptive chastisement seems unnecessary. My egalitarian instinct objects to the social move it represents.
I’m intrigued by this comment. Can you say more about what leads you to make it, given your usual expressed attitude towards appeals to egalitarian instincts?
I made it because my instincts warned me that the more forthright declaration “That was unnecessarily dickish, silence fool!” would not be well received. The reason I had the desire to express the sentiment at all was the use of an unnecessary threat.
By way of illustration, consider if I had said to you (publicly and in an aggressive tone) “If I ever catch you beating your husband I’m going to report you to the police!”. That would be a rather odd thing for me to say because I haven’t seen evidence of you beating your husband. My making the threat insinuates that you are likely to beat your husband and also places me in a position of dominance over you, such that I can determine your well-being conditional on you complying with my desires. If you in fact were to engage in domestic violence then it would be appropriate for me to use social force against you, but since you haven’t (p > 0.95) and aren’t likely to, it would be bizarre if I started throwing such threats around.
I’m not sure what you mean. Explain and/or give an example of such an expression? My model of me rather strongly feels the egalitarian instinct and is a vocal albeit conditional supporter of it. Perhaps the instances of appeals to egalitarian instinct that you have in mind are those that I consider to be misleading or disingenuous appeals to the egalitarian instinct to achieve personal social goals? I can imagine myself opposing such instances particularly vehemently.
Yeah, that seems plausible.
True.
My intuitions respond to “And if you object that there’s no rule about the size of the quotes, I’ll downvote you again” and “If I ever catch you beating your husband I’m going to report you to the police!” in radically different ways. Not entirely sure why.
For my part, my social intuitions respond differently to those cases for two major reasons, as far as I can tell. First, I seem to have a higher expectation that someone will respond to an explanation of a downvote with a legalistic objection than that they will beat their spouse, so responding to the possibility of the former seems more justified. In fact, the former seems entirely plausible, while the second seems unlikely. Second, being accused of the latter seems much more serious than being accused of the former, and requires correspondingly stronger justification.
All of which seems reasonable to me on reflection.
All of that said, my personal preference is typically for fewer explicit surface signals of challenge in discussion. For example, if I were concerned that someone might respond to the above that actually, averaged over all of humanity, spouse-beating is far more common than legalistic objections, I’d be far more likely to say something like “(This is of course a function of the community I’m in; in different communities my expectations would differ.)” than something like “And if you reply that spouse-beating is actually more common than legalistic objections, I will downvote you.” Similarly, if I anticipated a challenge that nobody is accusing anyone of anything, I might add a qualifier like “(pre-emptively hypothetically)” to “accused,” rather than explicitly suggest the challenge and then counter it.
But I acknowledge that this is a personal preference, not a community norm, and I acknowledge that sometimes making potential challenges explicit and responding to them has better long-term consequences than covert subversion of those challenges.
I think that what’s happened is that my brain took “(And if you object that there’s no rule about the size of the quotes, I’ll downvote you again)” to be equivalent to “(Yes, I know that there’s no rule about the size of the quotes, but still)” with the mock threat added for stylistic/dry humour effect (possibly as a result of me having seen stuff like this in the past).
Also, someone considering the possibility that an objection will be made to their own comment is self-deprecating in a way that someone considering the possibility that a random person will abuse their spouse isn’t.
Downvoted for being unnecessarily violent and confrontational to someone who wasn’t doing anything worthy of such a response.
Upvoted because you ACTUALLY GAVE A REASON why you downvoted, providing the OP with useful feedback.
You may have missed the idea of a quote here.
--Selections from John Stuart Mill’s The Subjection of Women (1869). It’s probably best read in its entirety; an amazing work ahead of its time, but written in a wall-of-text style that’s difficult to abridge for quotes.
(Optional exercise: apply Mill’s points on the sociopolitical situation of women in the 19th century to the situation of children today.)
[1] And by “some” Mill likely means Carlyle.
Are you trying to provide a reductio ad absurdum of Mill’s argument, or do you honestly favor treating 5-year-olds as legal adults?
Ehh... Today’s children are often subject to much more limited familial authority than 19th-century women were. It is, for example, illegal to use physical force on them in a great many places.
How come six people downvoted this? While I can think of a few relevant differences between women then and children today, it’s not so obvious to me that they’d be so obvious to everybody as to justify unexplained downvotes.
My guess is extended pattern-matching on Eugine Nier’s typical post content, along with a huge helping of annoyance with the excluded middle.
“This is a reductio ad absurdum of Mill’s argument”
“you honestly favor treating 5-year olds as legal adults”.
Are these honestly the only two possible readings of the original post? If not, is it more likely—based on past history of all parties—to assume that Eugine Nier honestly could not conceive of a third option, or merely that a rhetorical tactic was being employed to make their opponent look bad?
Based on what is most likely occurring (evaluated, of course, differently by each person reading), is this post a flower or a weed?
Then you tend the garden.
Well, based on Multiheaded’s previous posts I wouldn’t be too surprised if he favored treating 5-year-olds as legal adults. It’s possible he wants us to notice the difference between women and children and see why Mill’s argument doesn’t apply in the latter case, but I find this unlikely given Multiheaded’s commenting history. In any case, this is the logical conclusion of his argument as stated; I was merely pointing this out. Of course, if you regard pointing out the implications of someone else’s argument as a dishonest rhetorical tactic, I see why you’d object.
Oh, hello! I wondered why my karma was starting to go down again. Welcome back!
I am downvoting this and all future complaining about Eugine that is not provoked by immediate context. Too many (i.e. about a third) of your comments (and even posts) are attempts to shame people who chose to downvote you. In addition, instances like this one that are snide and sarcastic are particularly distasteful.
I incidentally suggest that giving Eugine just cause (as well as additional emotional incentive) to downvote you is unlikely to be the optimal strategy for reducing the number of downvotes you receive.
On a more empathetic note I know that the task of maintaining the moral high ground and mobilizing the tribe to take action to change the behaviour of a rival is a delicate and difficult one and often a cause for frustration and even disillusionment. A possibility you may consider is that you could accept the minor status hit for excessive complaining but take great care to make sure that each individual complaint is as graceful and inoffensive to third parties (such as myself) as possible. If you resist the urge to insert those oh so tempting additional barbs then you will likely find that you have far more leeway in terms of how much complaining people will accept from you and are more likely to receive the support of said third parties’ egalitarian instincts.
Note: The preceding paragraph is purely instrumental advice and should not be interpreted as normative endorsement (or dis-endorsement) of that particular “Grey-Arts” strategy. (But I would at least give unqualified normative endorsement of replacing “complaining + bitchiness” with “complaining + tact” in most cases.)
*nod* Unfortunately, I am terrible at these sorts of plays. Thank you for your criticism, and I’ll attempt to behave more gracefully in the future.
EDIT: I’m going to go ahead and trigger your downvotes, now, because reviewing the situation, I feel like I need to speak in my own defense.
I consistently lose forty to fifty karma over the course of a few minutes, once every few days. Posts which have no possible reason why someone would downvote them get downvoted. And I do not, as you put it, “shame people who chose to downvote me”. I mostly ask for an explanation of why I got downvoted, so that I can improve. The ONLY time I have explicitly tried to shame someone who downvoted me was Eugine, and only after spending a very long time examining the situation and coming to the conclusion (p > 0.95) that Eugine was downvoting EVERYTHING I say, just because.
If you feel that that deserves further retributive downvoting, you are free to perform it to your heart’s content; I am powerless to stop you.
That sounds overconfident.
— Robert A. Heinlein, Time Enough for Love
Not explicitly, precisely because it is the norm. But it records a great many times when minorities have been wrong.
Yes.
Yup.
The colour of the sky? What direction a rock goes if you drop it from near the ground?
Lin Chi
I’m sure I could interpret a rationalist message from that quote, in the same way that I could derive a reasonable moral system based solely on the Book of Revelation. But that doesn’t imply that my reading is intended by the author, or a plausible reading of the text.
In this case it does seem plausible that a rationalist message was intended.
Maybe the real issue is that it takes background knowledge to know what the quote means within Buddhism? Without that background knowledge, the sentence doesn’t convey much meaning.
The Buddha
God
:)
I’ve heard this quoted a lot, but I can’t find the original source.
I’m surprised to find anything on the source of a joke, but this thread suggests it originated sometime in the 1960s.
My impression was that Desrtopa was making an atheist jest.
(In the Recent Comments sidebar, this looked like:
which is rather different!)
I saw that, and it ruins the joke a bit. Sigh.
FWIW, I really like Nietzsche.
Prepend ‘God’ with hyphens. “Nietzsche is dead. --God” works with or without the line break.
That wouldn’t be the kind of thing Buddha used to say.
If you find the truth, continue the search for it regardless.
Forget about arriving at the truth; rather, practice the methods that bring you closer to truths.
The intended meaning has something to do with the Buddhist concept that the practice of Buddhism (basically meditation) is the realization of Buddhahood, and instead of accepting any Buddha you meet, you must simply continue your practice.
By the way, Sam Harris wrote an essay starting with this quote, called ‘Killing the Buddha’.
http://www.samharris.org/site/full_text/killing-the-buddha/
Didn’t we do this last month?
Apparently the Buddha has reincarnated, so we need to kill him again. It’s like playing World of Warcraft.
Alternatively:
— Kahlil Gibran
This seems empirically false.
It almost certainly is, but does that matter? It is a slogan for any time when the powers that be are diminished by the truth.
Today we kneel only to hypocrisy.
Yes, it matters if we are deluding ourselves into thinking ourselves better than we are. False self-gratification prevents us from actually improving.