Yea, I was quite surprised to find that Quirrell believes continuity of consciousness to be a fundamental problem, since it really is just an illusion to begin with (though you could argue the illusion itself is worthwhile). Surely you could just kill yourself the moment your horcrux does its job if you’re worried about your other self living on? But maybe he doesn’t know that scientifically there’s no such thing as identity. Or maybe he’s lying. Personally, I would be MUCH more concerned about the fact that the horcrux implants memories but does not replace personality. But for some reason Quirrell does not mention that as the obvious drawback.
(I was also surprised that Eliezer seems to buy into the obviously false notion that “the opposite of love is indifference”)
the obviously false notion that “the opposite of love is indifference”
Perhaps the word “opposite” is not the best one, but I think it’s about this: in some metric, loving people and hating people is closer to each other than either of them is to the paperclip maximizer’s attitude towards humans. In HPMOR universe, a magical paperclip maximizer could shoot AK like a machine gun. Instead of replacing one emotion with another emotion, it’s replacing one emotion with an absence of an emotion.
Instinctively, people sometimes prefer being hated to being ignored. For example, children try to draw attention to themselves by behaving badly. There is some “recognition” in hate that indifference lacks.
But maybe he doesn’t know that scientifically there’s no such thing as identity.
What do you mean by the term “scientifically” in that sentence? If I put “identity” into Google Scholar I’m fairly sure I will find a bunch of papers in respectable scientific journals that use the term.
(I was also surprised that Eliezer seems to buy into the obviously false notion that “the opposite of love is indifference”)
“Obviously” is a fairly strong word. It makes some sense to label the negation of any emotion an emotionless state.
An unfriendly AI doesn’t hate humans; it is indifferent to them.
What do you mean by the term “scientifically” in that sentence? If I put “identity” into Google Scholar I’m fairly sure I will find a bunch of papers in respectable scientific journals that use the term.
I mean that if you have two carbon atoms floating around in the universe, and the next instant you swap their locations but keep everything else the same, there is no scientific way in which you could say that anything has changed.
Combine this with humans being just collections of atoms, and you have no meaningful way to say that an identical copy of you is “not really you”. Also, ‘continuity of consciousness’ is just a specific sensation that this specific clump of atoms has at each point in time, except for all the times when it does not exist because the clump is ‘sleeping’. So Quirrell’s objection seems to have no merit (though I could be missing something).
“Obviously” is a fairly strong word. It makes some sense to label the negation of any emotion an emotionless state. An unfriendly AI doesn’t hate humans; it is indifferent to them.
Yes, there is an insight to be had there, I will acknowledge that much.
However, to say that the opposite of a friendly AI is a paper clip maximiser is stupid. The opposite of an AI which wants to help you is very obviously an AI which wants to hurt you. Which is why the whole “AK version 2 riddle” just doesn’t work. The Patronus goes from “not thinking about death” (version 1) to “valuing life over death” (version 2). The killing curse goes from “valuing death over life” (version 1) to “not caring about life” (version 2). You can visualise the whole thing as a line measuring a single quantity, namely “life-death preference”:
Value death over life (-1) ---- don’t think about it either way (0) ----- Value life over death (+1)
The patronus gets a boost by moving from 0 to +1. The killing curse gets a boost by moving from −1 to 0. That makes no sense. Why would the killing curse, which is powered by the exact opposite of the patronus, receive a boost in power by moving in the same direction as the Patronus, which values life over death?
Only fake wisdom can get ridiculous results like this.
The patronus gets a boost by moving from 0 to +1. The killing curse gets a boost by moving from −1 to 0. That makes no sense. Why would the killing curse, which is powered by the exact opposite of the patronus, receive a boost in power by moving in the same direction as the Patronus, which values life over death?
I parsed it as follows: the Killing Curse isn’t powered by death in the same way that the Patronus draws power from life, but it does require the caster not to value the life of an opponent. Hatred enables this, but it’s limited: it has to be intense, sustained hatred, and probably only hatred of a certain kind, since it takes some doing for neurologically typical humans to hate someone enough to literally want them dead. Indifference to life works just as well and lacks the limitations, but that’s probably an option generally available only to, shall we say, a certain unusual personality type.
Ideology might interact with this in interesting ways, though. I don’t know whether Death Eaters would count as being motivated by hate or indifference by the standards of the spell; my model of J.K. Rowling says “hate”, while my model of Eliezer says “indifference”.
Yes, that ideology is precisely what bothers me. Eliezer has a bone to pick with death so he declares death to be the ultimate enemy. Dementors now represent death instead of depression, patronus now uses life magic, and a spell that is based on hate is now based on emptiness. It’s all twisted to make it fit the theme, and it feels forced. Especially when there’s a riddle and the answer is ‘Eliezer’s password’.
I don’t know if MoR influenced the movies, but Deathly Hallows 1 or 2 showed an image of Death looking like the movie’s image of Dementors. It seems to me like a natural inference.
Isn’t that because the only static element of a dementor’s appearance is its black, concealing cloak, and that overlaps neatly with the Grim Reaper portrayal of death?
You say that like Rowling had no choice but to use this well-known image for Dementors. Also, they’re supposed to look somewhat like corpses underneath.
What are you trying to argue in the great-grandparent? What am I supposed to take from the black cloaks, aside from the fact that it makes Dementors look like Death? I can imagine that perhaps Rowling chose this appearance because it allowed a frightening reveal later on. But that reveal uses the words “rotting”, “death” and “deathly”. On our first sight of a Dementor she also compares it to something “dead” and “decayed”. She did this because fear of death seems near as universal as you can get. Dementors’ most feared ability, destruction of the soul, has the same explanation.
The parallels that MoR!Harry sees are real, and they exist because death is (widely held to be) bad.
“don’t think about it either way” does not necessarily mean indifference, it means reverting to default behaviour.
Humans are (mostly) pro-social animals with empathy and would not crush another human who just happens to be in their way—in that they differ from a falling rock. In fact, that’s the point of hate: it overrides the built-in safeguards to allow for harmful action. According to this view, to genuinely not give a damn about someone’s life is a step further. Obviously.
The thing about built-in default behaviour given by evolution is that it will not trigger in some cases.
“Unreliable elements were subjected to an alternative justice process”—subjected by who? What does an “alternative justice process” do? With enough static noun phrases, you can keep anything unpleasant from actually happening.
or HPMoR Ch.48
Your brain imagines a single bird struggling in an oil pond, and that image creates some amount of emotion that determines your willingness to pay. But no one can visualize even two thousand of anything, so the quantity just gets thrown straight out the window.
or HPMoR Ch.87
Because the way people are built, Hermione, the way people are built to feel inside [...] is that they hurt when they see their friends hurting. Someone inside their circle of concern, a member of their own tribe. That feeling has an off-switch, an off-switch labeled ‘enemy’ or ‘foreigner’ or sometimes just ‘stranger’. That’s how people are, if they don’t learn otherwise.
My point with that is, it’s completely in line with what Eliezer usually talks about, so you know it’s a perspective he holds, not just rationalization.
For completeness’ sake,
Not like certain people living in certain countries, who were, it was said, as human as anyone else; who were said to be sapient beings, worth more than any mere unicorn. But who nonetheless wouldn’t be allowed to live in Muggle Britain. On that score, at least, no Muggle had the right to look a wizard in the eye. Magical Britain might discriminate against Muggleborns, but at least it allowed them inside so they could be spat upon in person.
still feels off. Oh, wait, I know! Maybe Harry is being Stupid here. Or Eliezer is being a Bad Writer. Again.
1) Saying you can’t tell after the fact whether something occurred is not the same as saying it never occurred. The fact that we can’t experimentally determine if two carbon atoms have distinct identity is not, repeat not, the same as saying that they don’t have separate identity. Maybe they do. You just can’t tell.
2) That has nothing to do with continuity of consciousness. Assume the existence of a perfect matter replicator. What do you expect to happen when you make a copy of yourself? Do you expect to suddenly find yourself inside the copy? Let’s say that regardless of what you expect, at that point you end up in the same body as before, the old one not the new one. What do you expect to experience then, if you killed yourself? This has nothing, nothing to do with statements about quantum identity and equivalence of configuration spaces. It is about separating the concept of a representation of me from an instance of that representation which is me. I expect to experience only what the instance of the representation which is currently typing these words will experience as it evolves into the future. If an exact copy of me was made at any time, that’d be pretty awesome. It’d be like having a truly identical twin. But it wouldn’t be me, and if this instance died, I wouldn’t expect to live on experiencing what the copy of me experiences.
3) Sleeping is a total non sequitur. Do you expect that your brain is 100% shut off and disarticulated into individual neurons when you are in a sleeping state? No? That’s right—just because you don’t have memories doesn’t mean you didn’t exist while asleep. You just didn’t form memories at the time.
1) As far as I understand it, atoms don’t have a specific ‘location’; there are only probabilities for where an atom might be at any given time. Given that, it is silly to speak of individual atoms. Even if I misunderstood that part, it is still the case that two entities which have no discernible difference in principle are the same, as a matter of simple logic.
2) Asking “which body do you wake up in” is a wrong question. It is meaningless because there is no testable difference depending on your answer; it is not falsifiable even in principle. The simple fact is that if you copy Sophronius, you then have two Sophroniuses waking up later, each experiencing the sensation of being the original. Asking whose sensation is “real” is meaningless.
3) It is not a non-sequitur. Sleep interrupts your continuity of self. Therefore, if your existence depends on uninterrupted continuity of self, sleep would mean you die every night.
I notice that you keep using concepts like “you”, “I” and “self” in your defence of a unique identity. I suggest you try removing those concepts or any other that presupposes unique identity. If you cannot do that then you are simply begging the question.
1) Saying you can’t tell after the fact whether something occurred is not the same as saying it never occurred. The fact that we can’t experimentally determine if two carbon atoms have distinct identity is not, repeat not, the same as saying that they don’t have separate identity. Maybe they do. You just can’t tell.
The linked article by Eliezer Yudkowsky is straight-up wrong, for the following reasons:
(1) Eliezer’s understanding of the physics here is bunk. I’m actually a trained physicist. He is not. But bonus points to you if you reject this argument, because you shouldn’t accept my authority any more than you should accept his. I assume you read Griffiths’ Quantum Mechanics or a similar introductory book and came to your own conclusions?
(2) Specifically, the experimental result Eliezer quotes has to do with how we calculate probabilities for quantum mechanical events. There are infinitely many ways one could calculate probabilities—math describes the universe, it doesn’t constrain it. But if you do so naively, you end up with one answer if you treat “P1 at L1, P2 at L2” as a different state than “P1 at L2, P2 at L1” than if you treat them as the same state. Experimental results show that the latter probabilities are correct. One interpretation is that P1 and P2 are the same particle, so the state is “P at L1, P at L2”. That’s one interpretation. Another perfectly valid interpretation is that “particle of type X at L1, particle of type X at L2” is the actual state—that is to say that the particles keep their identity but identity doesn’t factor into the probabilistic calculus. That’s why the term used by physicists is distinguishable rather than identity. These particles are indistinguishable, but that does not mean they are identical. That would be an unwarranted inference.
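The counting difference described here can be made concrete with a toy enumeration. This is only an illustrative sketch: the two-particle, two-location setup and the uniform weighting of merged states are my own simplifying assumptions, not anything from the thread or from a real QM calculation.

```python
from itertools import product
from fractions import Fraction

# Two particles, two locations (L1, L2).
# Distinguishable counting: each labelled assignment is its own microstate.
labelled = list(product(["L1", "L2"], repeat=2))  # (location of P1, location of P2)
p_both_L1_dist = Fraction(
    sum(1 for s in labelled if s == ("L1", "L1")), len(labelled)
)

# Indistinguishable (Bose-style) counting: merge assignments that differ only
# by swapping the particle labels, then weight the merged states equally.
unlabelled = {tuple(sorted(s)) for s in labelled}
p_both_L1_indist = Fraction(1, len(unlabelled))

print(p_both_L1_dist)    # 1/4
print(p_both_L1_indist)  # 1/3
```

The point of the sketch is just that the two bookkeeping conventions assign different probabilities to the same observable outcome, which is why the question is experimentally decidable at all.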
(3) All of that is a moot point, because it doesn’t match up at all with what we are talking about: the continuity of self as it relates to human minds. Calculating probabilities about particles in boxes tells us nothing about whether I would expect to wake up in a computer after a destructive upload, or how that relates to a personal desire to cheat death. I don’t care about the particles making up my mind: I care about sustaining the never-stopping information processing system which gives rise to my subjective experience. It does not obviously follow that if my mind state were perfectly saved before I was shot in the head, and then at some distant point in the future a brain configured exactly like mine was created, that I would subjectively experience living on in the future—any more than it makes sense to say that my recently deceased aunt lives on in my mother, her identical twin.
I assume you read Griffiths’ Quantum Mechanics or a similar introductory book and came to your own conclusions?
FWIW, I have a master’s degree in physics and I’m working to get a PhD (though in a subfield not closely related to the basics of QM; I’d trust say Scott Aaronson over myself even though he’s not a physicist).
Another perfectly valid interpretation is that “particle of type X at L1, particle of type X at L2” is the actual state—that is to say that the particles keep their identity but identity doesn’t factor into the probabilistic calculus.
FWIW, I have a master’s degree in physics and I’m working to get a PhD.
Awesome. Please forgive my undeserved snark.
What do you mean by identity?
Honestly, I’m not sure. I only invoke the concept of identity in response to nonsense arguments appearing on LessWrong. Normally when I say ‘identity’ I mean the concept of ‘self’, which is the whatever-it-is that experiences my perceptions, thoughts, inner monologues, etc., or whatever it is that gives rise to the experience of me. How this relates to distinguishability of particles in quantum mechanics, I don’t know… which is kinda the point. When calculating probabilities, you treat two states as the same if they are indistinguishable… how this gets warped into explaining what I’d expect to experience while undergoing a destructive upload is beyond me.
Also, ‘continuity of consciousness’ is just a specific sensation that this specific clump of atoms has at each point in time
Or not. Memories are genuinely lost, if someone makes a Horcrux and then dies some years later. Moreover, according to the Defense Professor in snake form, the maker’s personality could also change due to influence from the (two) victim(s). The result need not act like the maker at time of casting would act if placed in a new environment.
Surely you could just kill yourself the moment your horcrux does its job if you’re worried about your other self living on?
What would be the point? The goal of the horcrux isn’t to transfer into another body you like better than your current one, it’s to be a backup against accidentally dying.
Did you not read that section at all? If you lose all knowledge of powerful spellcasting, a) you lose your ability to continue to be immortal after this iteration, b) you lose your ability to defend yourself against enemies who haven’t lost their ability to cast interdicted spells. The second one is really important when the process for immortality is one that inherently makes a lot of enemies! He specifically mentioned that dark wizards who tried to use that technique to come back were easily defeated afterward.
If you’re on your deathbed, sure. But Horcruxing is not costless. If you have a significant projected lifespan left, and you want ACTUAL immortality, your odds are probably better NOT doing a risky dark ritual that also encourages people to come and kill you.
Could that explain why Hat&Cloak seems to be a clever manipulator who works in utmost secrecy? (They really are weak, and survive only by hiding in the shadows.) We never see them indicated as using anything more complex than an Obliviate or disguise spell, AFAIK, which any reasonably competent adult wizard would be able to pull off.
This seems a big part of why I don’t think Baba Yaga is still alive. The best in-story reason I can think of to consider the theory at all lies in the idea that (if Horcruxes are easier to make than I thought) some Dark figure of legend should still be alive. This argument seems weak if the spell doesn’t give you much advantage. Also, Quirrell’s claim here fits what we know about the Interdict. (I guess the question is whether the Horcrux spell falls under the Interdict!)
Yes, but Dumbledore probably can’t create a Horcrux. The Defense Professor claims the known description is wrong, which could make the theft a piece of misdirection. This is another possible way around the Interdict; publish a fake version of the spell which hints at the truth.
It is a wrong question, because reality is never that simple and clear cut and no rationalist should expect it to be. And as with all wrong questions, the thing you should do to resolve the confusion is to take a step back and ask yourself what is actually happening in factual terms:
A more accurate way to describe emotion, much like personality, is in terms of multiple dimensions. One dimension is intensity of emotion. Another dimension is the type of experience it offers. Love and hate both have strong intensity and in that sense they are similar, but they are totally opposite in the way they make you feel. They are also totally opposite in terms of the effect they have on your preferences: thinking well vs. thinking poorly of someone (ignoring the fact that there are multiple types of hate and love, and the 9999 other added complexities).
Ordinary people notice that hate and love are totally the opposite in several meaningful ways, and say as much. Then along comes a contrarian who wants to show how clever he is, and he picks up on the one way that love and hate are similar and which can make them go well together: the intensity of emotion towards someone or something. And so the contrarian states that really love and hate are the same and indifference is the opposite of both (somehow), which can cause people who aren’t any good at mapping complex subjects along multiple axes in their head to throw out their useful heuristic and award status to the contrarian for his fake wisdom.
I’m a bit disappointed that Eliezer fell for the number one danger of rationalists everywhere: Too much eagerness to throw out common sense in favour of cleverness.
(Eliezer if you are reading this: You are awesome and HPMOR is awesome. Please keep writing it and don’t get discouraged by this criticism)
I’m surprised how strongly you’re reacting to this, given that you seem to be aware that the whole “emotions having opposites” system is really just a word game anyway.
Why is it important that you prioritise the “effect on preferences” axis and Eliezer prioritises the “intensity” axis, except insofar as it is a bit embarrassing to see an intelligent person presenting one of these as wisdom? Perhaps Eliezer simply considers apathy to be a more dangerous affliction than hatred, and is thus trying to shift his readers’ priorities accordingly. Insofar as there are far more people in the world moved to inaction through apathy than there are people moved to wrong action through hatred, perhaps there’s something to that.
Hm, I didn’t think I was reacting that strongly… If I was, it’s probably because I am frustrated in general by people’s inability to just take a step back and look at an issue for what it actually is, instead of superimposing their own favourite views on top of reality. I remember I recently got frustrated by some of the most rational people I know claiming that sunburn was caused by literal heat from the sun instead of by UV light. Once they formed the hypothesis, they could only look at the issue through the ‘eyes’ of that view. And I see the same mistake made on Less Wrong all the time. I guess it’s just frustrating to see EY do the same thing. I don’t get why everyone, even practising rationalists, finds this most elementary skill so hard to master.
It’s the most basic rationalist skill there is, in my opinion, but for some reason it’s not much talked about here. I call it “thinking like the universe” as opposed to “thinking like a human”. It means you remove yourself from the picture, you forget all about your favourite views and you stop caring about the implications of your answer since those should not impact the truth of the matter, and describe the situation in purely factual terms. You don’t follow any specific chain of logic towards finding an answer: You instead allow the answer to naturally flow from the facts.
It means you don’t ask “which facts argue in favour of my view and which against?”, but “what are the facts?” It means you don’t ask “What is my hypothesis?”, you ask “which hypotheses flow naturally from the facts?” It means you don’t ask “What do I believe?” but “what would an intelligent person believe given these facts?” It means you don’t ask “which hypothesis do I believe is true?”, but “how does the probability mass naturally divide itself over competing hypotheses based on the evidence?” It means you don’t ask “How can I test this hypothesis?” but “Which test would maximally distinguish between competing hypotheses?” It means you never ever ask who has the “burden of proof”.
And so on and so forth. I see it as the most fundamental skill because it allows you to ask the right questions, and if you start with the wrong question it really doesn’t matter what you do with it afterwards.
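The “probability mass dividing itself over competing hypotheses” framing is, as far as I can tell, just Bayes’ rule applied to every hypothesis simultaneously. A minimal sketch, where the hypotheses and the prior/likelihood numbers are invented purely for illustration:

```python
from fractions import Fraction

# Hypothetical example: three competing hypotheses with equal priors,
# each assigning a different likelihood to the observed evidence.
priors = {"H1": Fraction(1, 3), "H2": Fraction(1, 3), "H3": Fraction(1, 3)}
likelihoods = {"H1": Fraction(8, 10), "H2": Fraction(2, 10), "H3": Fraction(5, 10)}

# Bayes' rule applied to all hypotheses at once: the probability mass
# redistributes in proportion to prior * likelihood, then is renormalised.
unnorm = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnorm.values())
posteriors = {h: unnorm[h] / total for h in unnorm}

for h, p in posteriors.items():
    print(h, p)  # H1 8/15, H2 2/15, H3 1/3
```

Note that no hypothesis is singled out for “defence”: every candidate is updated by the same rule, which is the formal version of asking how the probability mass divides itself rather than asking who has the burden of proof.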
The primary thing I seem to do is to remind myself to care about the right things. I am irrelevant. My emotions are irrelevant. Truth is not influenced by what I want to be true. I am frequently amazed by the degree to which my emotions are influenced by subconscious beliefs. For example, I notice that the people who make me most angry when they’re irrational are the ones I respect the most. People who get offended usually believe at some level that they are entitled to being offended. People who are bad at getting to the truth of a matter usually care more about how they feel than about what is actually true. (This is related to the fundamental optimization problem: The truth will always sound less truthful than the most truthful sounding falsehood.) Noticing that kind of thing is often more effective than trying to control emotions the hard way.
Secondly, you want to pay attention to your thoughts as much as possible. This is just meditation, really. If you become conscious of your thoughts, you gain a degree of control over them. Notice what you think, when you think it, and why. If a question makes you angry, don’t just suppress the anger, ask yourself why.
For the rest it’s just about cultivating a habit of asking the right questions. Never ask yourself what you think, since the universe doesn’t care what you think. Instead say “Velorien believes X: How much does this increase the probability of X?”.
The truth will always sound less truthful than the most truthful sounding falsehood.
This needs to be on posters and T-shirts if it isn’t already. Is it a well-known principle?
Thank you for the explanation. This overall idea (of the relationship between belief and reality, and the fact that it only goes one way) is in itself not new to me, but your perspective on it is, and I hope it will help me develop my ability to think objectively.
Also thanks for the music video. Shame I can’t upvote you multiple times.
This needs to be on posters and T-shirts if it isn’t already. Is it a well-known principle?
Sadly not. I keep meaning to post an article about this, but it’s really hard to write an article about a complex subject in such a way that people really get it (especially if the reader has little patience/charity), so I keep putting it off until I have the time to make it perfect. I have some time this weekend though, so maybe...
I think the Fundamental Optimization Problem is the biggest problem humanity has right now and it explains everything that’s wrong with society: It represents the fact that doing what’s good will always feel less good than doing what feels good, people who optimize for altruism will always be seen as more selfish than people who optimize for being seen as altruistic, the people who get in power will always be the ones whose skills are optimized for getting in power and not for knowing what to do once they get there, and people who yell about truth the most are the biggest liars. It’s also why “no good deed goes unpunished”. Despite what Yoda claims, the dark side really is stronger.
Unfortunately there’s no good post about this on LW AFAIK, but Yvain’s post about Moloch is related and is really good (and really long).
people’s inability to just take a step back and look at an issue for what it actually is, instead of superimposing their own favourite views on top of reality.
I think that people who fully possess such a skill are usually described as “have achieved enlightenment” and, um, are rare :-) The skill doesn’t look “elementary” to me.
Heheh, fair point. I guess a better way of putting it is that people fail to even bother to try this in the first place, or heck even acknowledge that this is important to begin with.
I cannot count the number of times I see someone try to answer a question by coming up with an explanation and then defending it, and utterly failing to grasp that that’s not how you answer a question. (In fact, I may be misremembering but I think you do this a lot, Lumifer.)
I see someone try to answer a question by coming up with an explanation and then defending it
The appropriateness of that probably depends on what kind of question it is...
I think my hackles got raised by the claim that your perception is “what it actually is”—and that’s a remarkably strong claim. It probably works better phrased like something along the lines of “trying to take your ego and preconceived notions out of the picture”.
The appropriateness of that probably depends on what kind of question it is...
I guess it is slightly more acceptable if it’s a binary question. But even so it’s terrible epistemology, since you are giving undue attention to a hypothesis just because it’s the first one you came up with.
An equally awful method of doing things: Reading through someone’s post and trying to find anything wrong with it. If you find anything --> post criticism, if you don’t find anything --> accept conclusion. It’s SOP even on Less Wrong, and it’s not totally stupid but it’s really not what rationalists are supposed to do.
I think my hackles got raised by the claim that your perception is “what it actually is”—and that’s a remarkably strong claim. It probably works better phrased like something along the lines of “trying to take your ego and preconceived notions out of the picture”.
Yes, that is a big part of it, but it’s more than that. It means you stop seeing things from one specific point of view. Think of how confused people get about issues like free will. Only once you stop thinking about the issue from the perspective of an agent and ask what is actually happening from the perspective of the universe can you resolve the confusion.
Or, if you want to see some great examples of people who get this wrong all the time, go to the James Randi forums. There’s a whole host of people there who will say things during discussions like “Well it’s your claim so you have the burden of proof. I am perfectly happy to change my mind if you show me proof that I’m wrong.” and who think that this makes them rationalists. Good grief.
Any links to egregious examples? :-)
I have spent some time going through your posts but I couldn’t really find any egregious examples. Maybe I got you confused with someone else. I did notice that where politics were involved you’re overly prone to talking about “the left” even though the universe does not think in terms of “left” or “right”. But of course that’s not exactly unique to you.
One other instance I found:
Otherwise, I still think you’re confused between the model class and the model complexity (= degrees of freedom), but we’ve set out our positions and it’s fine that we continue to disagree.
It’s not a huge deal but I personally would not classify ideas as belonging to people, for the reasons described earlier.
In practice I think “X has the burden of proof” generally means something similar to “The position X is advancing has a rather low prior probability, so substantial evidence would be needed to make it credible, and in particular if X wants us to believe it then s/he would be well advised to offer substantial evidence.” Which, yes, involves confusion between an idea and the people who hold it, and might encourage an argument-as-conflict view of things that can work out really badly—but it’s still a convenient short phrase, reasonably well understood by many people, that (fuzzily) denotes something it’s often useful to say.
So, yeah, issuing such challenges in such terms is a sign of imperfect enlightenment and certainly doesn’t make the one who does it a rationalist in any useful sense. But I don’t see it as such a bad sign as I think you do.
Yea, the concept of burden of proof can be a useful social convention, but that’s all it is. The thing is that taking a sceptical position and waiting for someone to prove you wrong is the opposite of what a sceptic should do. If you ever see two ‘sceptics’ taking turns posting ‘you have the burden of proof’, ‘no, you have the burden of proof!’… you’ll see what I mean. Actual rationality isn’t supposed to be easy.
I guess it is slightly more acceptable if it’s a binary question.
No, that’s not what I had in mind. For example, there are questions which explicitly ask for an explanation and answering them with an explanation is fine. Or, say, there are questions which are wrong (as a question) so you answer them with an explanation of why they don’t make sense.
It means you stop seeing things from one specific point of view.
I don’t think you can. Or, rather, I think you can see things from multiple specific point of views, but you cannot see them without any point of view. Yes, I understand you talk about looking at things “from the perspective of the universe” but this expression is meaningless to me.
“I am perfectly happy to change my mind if you show me proof that I’m wrong.”
That may or may not be a reasonable position to take. Let me illustrate how it can be reasonable: people often talk in shortcuts. The sentence quoted could be a shortcut expression for “I have evaluated the evidence for and against X and have come to the conclusion Y. You are claiming that Y is wrong, but your claim by itself is not evidence. Please provide me with actual evidence and then I will update my beliefs”.
even though the universe does not think in terms of “left” or “right”
But humans do and I’m talking to humans, not to the universe.
A more general point—you said in another post
I am irrelevant. My emotions are irrelevant. Truth is not influenced by what I want to be true.
This is true when you are evaluating the physical reality. But it is NOT true when you are evaluating the social reality—it IS influenced by emotions and what people want to be true.
but I personally would not classify ideas as belonging to people
I suppose “elementary” in the sense of “fundamental” or “simple” or “not relying on other skills before you can learn it”, rather than in the sense of “easy” or “widespread”.
Contrast literacy. Being to read and write one’s own language is elementary. It can be grasped by a small child, and has no prerequisites other than vision, reasonable motor control and not having certain specific brain dysfunctions. Yet one does not have to cast one’s mind that far back through history to reach the days in which this skill was reserved for an educated minority, and most people managed to live their whole lives without picking it up.
Yea, I was quite surprised to find that Quirrell treats continuity of consciousness as a fundamental problem, since it really is just an illusion to begin with (though you could argue the illusion itself is worthwhile). Surely you could just kill yourself the moment your horcrux does its job if you’re worried about your other self living on? But maybe he doesn’t know that scientifically there’s no such thing as identity. Or maybe he’s lying. Personally, I would be MUCH more concerned about the fact that the horcrux implants memories but does not replace personality. But for some reason Quirrell does not mention that as the obvious drawback.
(I was also surprised that Eliezer seems to buy into the obviously false notion that “the opposite of love is indifference”)
Perhaps the word “opposite” is not the best one, but I think it’s about this: by some metric, loving people and hating people are closer to each other than either is to the paperclip maximizer’s attitude towards humans. In the HPMOR universe, a magical paperclip maximizer could shoot AK like a machine gun. Instead of replacing one emotion with another emotion, it’s replacing one emotion with the absence of an emotion.
Instinctively, people sometimes prefer to be hated rather than ignored. For example, children trying to draw attention to themselves by behaving badly. There is some “recognition” in hate that indifference lacks.
Relevant.
Please warn when you are linking to a post with an unmarked major spoiler for another novel (or two).
What do you mean by the term “scientifically” in that sentence? If I put identity into Google Scholar I’m fairly sure I will find a bunch of papers in respectable scientific journals that use the term.
“Obviously” is a fairly strong word. It makes some sense to label the negation of any emotion an emotionless state. An unfriendly AI doesn’t hate humans; it is indifferent to them.
I mean that if you have two carbon atoms floating around in the universe, and the next instant you swap their locations but keep everything else the same, there is no scientific way in which you could say that anything has changed.
Combine this with humans being just a collection of atoms, and you have no meaningful way to say that an identical copy of you is “not really you”. Also, ‘continuity of consciousness’ is just a specific sensation that this specific clump of atoms has at each point in time, except for all the times when it does not exist because the clump is ‘sleeping’. So Quirrell’s objection seems to have no merit (could be I’m missing something though).
Yes, there is an insight to be had there, I will acknowledge that much.
However, to say that the opposite of a friendly AI is a paper clip maximiser is stupid. The opposite of an AI which wants to help you is very obviously an AI which wants to hurt you. Which is why the whole “AK version 2 riddle” just doesn’t work. The Patronus goes from “not thinking about death” (version 1) to “Valuing life over death” (version 2). The killing curse goes from “valuing death over life” (version 1) to “not caring about life” (version 2). You can visualise the whole thing as a line measuring a single axis, namely “life-death preference”:
Value death over life (-1) ---- don’t think about it either way (0) ----- Value life over death (+1)
The Patronus gets a boost by moving from 0 to +1. The killing curse gets a boost by moving from −1 to 0. That makes no sense. Why would the killing curse, which is powered by the exact opposite of the Patronus, receive a boost in power by moving in the same direction as the Patronus, which values life over death?
Only fake wisdom can get ridiculous results like this.
I parsed it as follows: the Killing Curse isn’t powered by death in the same way that the Patronus draws power from life, but it does require the caster not to value the life of an opponent. Hatred enables this, but it’s limited: it has to be intense, sustained hatred, and probably only hatred of a certain kind, since it takes some doing for neurologically typical humans to hate someone enough to literally want them dead. Indifference to life works just as well and lacks the limitations, but that’s probably an option generally available only to, shall we say, a certain unusual personality type.
Ideology might interact with this in interesting ways, though. I don’t know whether Death Eaters would count as being motivated by hate or indifference by the standards of the spell; my model of J.K. Rowling says “hate”, while my model of Eliezer says “indifference”.
Yes, that ideology is precisely what bothers me. Eliezer has a bone to pick with death so he declares death to be the ultimate enemy. Dementors now represent death instead of depression, patronus now uses life magic, and a spell that is based on hate is now based on emptiness. It’s all twisted to make it fit the theme, and it feels forced. Especially when there’s a riddle and the answer is ‘Eliezer’s password’.
I don’t know if MoR influenced the movies, but Deathly Hallows 1 or 2 showed an image of Death looking like the movie’s image of Dementors. It seems to me like a natural inference.
Isn’t that because the only static element of a dementor’s appearance is its black, concealing cloak, and that overlaps neatly with the Grim Reaper portrayal of death?
You say that like Rowling had no choice but to use this well-known image for Dementors. Also, they’re supposed to look somewhat like corpses underneath.
I increasingly feel like I’ve lost track of what you’re trying to argue here. Would you mind recapitulating it for me?
What are you trying to argue in the great-grandparent? What am I supposed to take from the black cloaks, aside from the fact that it makes Dementors look like Death? I can imagine that perhaps Rowling chose this appearance because it allowed a frightening reveal later on. But that reveal uses the words “rotting”, “death” and “deathly”. On our first sight of a Dementor she also compares it to something “dead” and “decayed”. She did this because fear of death seems near as universal as you can get. Dementors’ most feared ability, destruction of the soul, has the same explanation.
The parallels that MoR!Harry sees are real, and they exist because death is (widely held to be) bad.
“don’t think about it either way” does not necessarily mean indifference, it means reverting to default behaviour.
Humans are (mostly) pro-social animals with empathy and would not crush another human who just happens to be in their way—in that they differ from a falling rock. In fact, that’s the point of hate: it overrides the built-in safeguards to allow for harmful action. According to this view, to genuinely not give a damn about someone’s life is a step further. Obviously.
The thing about built-in default behaviour given by evolution is that it will not trigger in some cases.
Rationality and the English Language
or HPMoR Ch.48
or HPMoR Ch.87
My point with that is, it’s completely in line with what Eliezer usually talks about, so you know it’s a perspective he holds, not just rationalization.
For completeness’ sake,
still feels off. Oh, wait, I know! Maybe Harry is being Stupid here. Or Eliezer is being a Bad Writer. Again.
Yes you are missing a few things.
1) Saying you can’t tell after the fact whether something occurred is not the same as saying it never occurred. The fact that we can’t experimentally determine if two carbon atoms have distinct identity is not, repeat not, the same as saying that they don’t have separate identity. Maybe they do. You just can’t tell.
2) That has nothing to do with continuity of consciousness. Assume the existence of a perfect matter replicator. What do you expect to happen when you make a copy of yourself? Do you expect to suddenly find yourself inside of the copy? Let’s say that regardless of what you expect at that point, you end up in the same body as before, the old one not the new one. What do you expect to experience then, if you killed yourself? This has nothing, nothing to do with statements about quantum identity and equivalence of configuration spaces. It is about separating the concept of a representation of me from an instance of that representation which is me. I expect to experience only what the instance of the representation which is currently typing these words will experience as it evolves into the future. If an exact copy of me was made at any time, that’d be pretty awesome. It’d be like having a truly identical twin. But it wouldn’t be me, and if this instance died, I wouldn’t expect to live on experiencing what the copy of me experiences.
3) Sleeping is a total non sequitur. Do you expect that your brain is 100% shut off and disarticulated into individual neurons when you are in a sleeping state? No? That’s right—just because you don’t have memories doesn’t mean you didn’t exist while asleep. You just didn’t form memories at the time.
1) As far as I understand it, atoms don’t have a specific ‘location’, there are only probabilities for where that atom might be at any given time. Given that, it is silly to speak of individual atoms. Even if I misunderstood that part, it is still the case that two entities which have no discernible difference in principle are the same, as a matter of simple logic.
2) Asking “which body do you wake up in” is a wrong question. It is meaningless because there is no testable difference depending on your answer; it is not falsifiable even in principle. The simple fact is that if you copy Sophronius, you then have two Sophroniuses waking up later, each experiencing the sensation of being the original. Asking whose sensation is “real” is meaningless.
3) It is not a non-sequitur. Sleep interrupts your continuity of self. Therefore, if your existence depends on uninterrupted continuity of self, sleep would mean you die every night.
I notice that you keep using concepts like “you”, “I” and “self” in your defence of a unique identity. I suggest you try removing those concepts or any other that presupposes unique identity. If you cannot do that then you are simply begging the question.
Well...
The linked article by Eliezer Yudkowsky is straight up wrong for the following reasons:
(1) Eliezer’s understanding of the physics here is bunk. I’m actually a trained physicist. He is not. But bonus points to you if you reject this argument, because you shouldn’t accept my authority any more than you should accept his. I assume you read Griffiths’ Quantum Mechanics or a similar introductory book and came to your own conclusions?
(2) Specifically, the experimental result Eliezer quotes has to do with how we calculate probabilities for quantum mechanical events. There are infinitely many ways one could calculate probabilities—math describes the universe, it doesn’t constrain it. But if you do so naively, you end up with one answer if you treat “P1 at L1, P2 at L2” as a different state than “P1 at L2, P2 at L1”, and another if you treat them as the same state. Experimental results show that the latter probabilities are correct. One interpretation is that P1 and P2 are the same particle, so the state is “P at L1, P at L2”. That’s one interpretation. Another perfectly valid interpretation is that “particle of type X at L1, particle of type X at L2” is the actual state—that is to say, the particles keep their identity but identity doesn’t factor into the probabilistic calculus. That’s why the term used by physicists is distinguishable rather than identity. These particles are indistinguishable, but that does not mean they are identical. That would be an unwarranted inference.
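To make the counting difference concrete, here is a toy sketch (my own illustration, not from the comment; the two-particle, two-location setup and its labels are invented as the simplest case) contrasting the two ways of counting states:

```python
from itertools import product

# Two particles that can each sit at one of two locations, L1 or L2.
# Distinguishable counting: "P1 at L1, P2 at L2" and "P1 at L2, P2 at L1"
# are different states, so there are 4 equally weighted states.
labeled = list(product(["L1", "L2"], repeat=2))  # (where P1 is, where P2 is)
p_one_at_each_labeled = sum(
    1 for state in labeled if set(state) == {"L1", "L2"}
) / len(labeled)

# Indistinguishable counting: states that differ only by swapping the
# particle labels are the *same* state, so there are only 3 occupation
# patterns, each weighted equally.
unlabeled = {tuple(sorted(state)) for state in labeled}
p_one_at_each_unlabeled = sum(
    1 for state in unlabeled if set(state) == {"L1", "L2"}
) / len(unlabeled)

print(p_one_at_each_labeled)    # 0.5 (1/2 under distinguishable counting)
print(p_one_at_each_unlabeled)  # 0.3333... (1/3 under indistinguishable counting)
```

For identical quantum particles, experiment agrees with the second kind of counting, which is the result the comment refers to; as the comment argues, that fixes the probability calculus without by itself settling any metaphysical question about “identity”.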
(3) All of that is a moot point, because it doesn’t match up at all with what we are talking about: the continuity of self as it relates to human minds. Calculating probabilities about particles in boxes tells us nothing about whether I would expect to wake up in a computer after a destructive upload, or how that relates to a personal desire to cheat death. I don’t care about the particles making up my mind: I care about sustaining the never-stopping information processing system which gives rise to my subjective experience. It does not obviously follow that if my mind state were perfectly saved before I was shot in the head, and then at some distant point in the future a brain configured exactly like mine was created, I would subjectively experience living on in the future. Any more than it makes sense to say that my recently deceased aunt lives on in my mother, her identical twin.
FWIW, I have a master’s degree in physics and I’m working to get a PhD (though in a subfield not closely related to the basics of QM; I’d trust say Scott Aaronson over myself even though he’s not a physicist).
What do you mean by identity?
Awesome. Please forgive my undeserved snark.
Honestly I’m not sure. I only invoke the concept of identity in response to nonsense arguments appearing on LessWrong. Normally when I say ‘identity’ I mean the concept of ‘self’, which is the whatever-it-is that experiences my perceptions, thoughts, inner monologues, etc., or whatever it is that gives rise to the experience of me. How this relates to distinguishability of particles in quantum mechanics, I don’t know… which is kinda the point. When calculating probabilities, you treat two states as the same if they are indistinguishable… how this gets warped into explaining what I’d expect to experience while undergoing a destructive upload is beyond me.
Or not. Memories are genuinely lost, if someone makes a Horcrux and then dies some years later. Moreover, according to the Defense Professor in snake form, the maker’s personality could also change due to influence from the (two) victim(s). The result need not act like the maker at time of casting would act if placed in a new environment.
See also major’s point.
What would be the point? The goal of the horcrux isn’t to transfer into another body you like better than your current one, it’s to be a backup against accidentally dying.
It’s not at all obvious that continuity of consciousness is an illusion. If you have a real proof of that I’d love to hear it.
Continuity of consciousness is one thing, but the horcrux doesn’t even give continuity of KNOWLEDGE, thanks to Merlin
That’s not an issue when it comes to acquiring immortality though. I mean, if you lost all knowledge of algebra, would you say that means you “died”?
Did you not read that section at all? If you lose all knowledge of powerful spellcasting, a) you lose your ability to continue to be immortal after this iteration, b) you lose your ability to defend yourself against enemies who haven’t lost their ability to cast interdicted spells. The second one is really important when the process for immortality is one that inherently makes a lot of enemies! He specifically mentioned that dark wizards who tried to use that technique to come back were easily defeated afterward.
That’s irrelevant when you’re considering whether or not to use the horcrux at all and the alternative is being dead.
If you’re on your deathbed, sure. But Horcruxing is not costless. If you have a significant projected lifespan left, and you want ACTUAL immortality, your odds are probably better NOT doing a risky dark ritual that also encourages people to come and kill you.
Could that explain why Hat&Cloak seems to be a clever manipulator who works in utmost secrecy? (They really are weak, and survive only by hiding in the shadows.) We never see them indicated as using anything more complex than an Obliviate or disguise spell, AFAIK, which any reasonably competent adult wizard would be able to pull off.
This seems a big part of why I don’t think Baba Yaga is still alive. The best in-story reason I can think of to consider the theory at all lies in the idea that (if Horcruxes are easier to make than I thought) some Dark figure of legend should still be alive. This argument seems weak if the spell doesn’t give you much advantage. Also, Quirrell’s claim here fits what we know about the Interdict. (I guess the question is whether the Horcrux spell falls under the Interdict!)
Chapter 39:
Yes, but Dumbledore probably can’t create a Horcrux. The Defense Professor claims the known description is wrong, which could make the theft a piece of misdirection. This is another possible way around the Interdict; publish a fake version of the spell which hints at the truth.
Insofar as it is at all meaningful to consider feelings to have opposites, what would you present as the correct alternative?
It is a wrong question, because reality is never that simple and clear cut and no rationalist should expect it to be. And as with all wrong questions, the thing you should do to resolve the confusion is to take a step back and ask yourself what is actually happening in factual terms:
A more accurate way to describe emotion, much like personality, is in terms of multiple dimensions. One dimension is intensity of emotion. Another dimension is the type of experience it offers. Love and hate both have strong intensity and in that sense they are similar, but they are totally opposite in the way they make you feel. They are also totally opposite in terms of the effect they have on your preferences: thinking well vs. thinking poorly of someone (ignoring the fact that there are multiple types of hate and love, and the 9999 other added complexities).
Ordinary people notice that hate and love are totally the opposite in several meaningful ways, and say as much. Then along comes a contrarian who wants to show how clever he is, and he picks up on the one way that love and hate are similar and which can make them go well together: the intensity of emotion towards someone or something. And so the contrarian states that really love and hate are the same and indifference is the opposite of both (somehow), which can cause people who aren’t any good at mapping complex subjects along multiple axes in their head to throw out their useful heuristic and award status to the contrarian for his fake wisdom.
I’m a bit disappointed that Eliezer fell for the number one danger of rationalists everywhere: Too much eagerness to throw out common sense in favour of cleverness.
(Eliezer if you are reading this: You are awesome and HPMOR is awesome. Please keep writing it and don’t get discouraged by this criticism)
I’m surprised how strongly you’re reacting to this, given that you seem to be aware that the whole “emotions having opposites” system is really just a word game anyway.
Why is it important that you prioritise the “effect on preferences” axis and Eliezer prioritises the “intensity” axis, except insofar as it is a bit embarrassing to see an intelligent person presenting one of these as wisdom? Perhaps Eliezer simply considers apathy to be a more dangerous affliction than hatred, and is thus trying to shift his readers’ priorities accordingly. Insofar as there are far more people in the world moved to inaction through apathy than there are people moved to wrong action through hatred, perhaps there’s something to that.
Hm, I didn’t think I was reacting that strongly… If I was, it’s probably because I am frustrated in general by people’s inability to just take a step back and look at an issue for what it actually is, instead of superimposing their own favourite views on top of reality. I remember I recently got frustrated by some of the most rational people I know claiming that sunburn was caused by literal heat from the sun instead of UV light. Once they formed the hypothesis, they could only look at the issue through the ‘eyes’ of that view. And I see the same mistake made on Less Wrong all the time. I guess it’s just frustrating to see EY do the same thing. I don’t get why everyone, even practising rationalists, finds this most elementary skill so hard to master.
Could you describe this skill in more detail please? If it is one I do not possess, I would like to learn.
Your attitude makes me happy, thank you. :)
It’s the most basic rationalist skill there is, in my opinion, but for some reason it’s not much talked about here. I call it “thinking like the universe” as opposed to “thinking like a human”. It means you remove yourself from the picture, you forget all about your favourite views and you stop caring about the implications of your answer since those should not impact the truth of the matter, and describe the situation in purely factual terms. You don’t follow any specific chain of logic towards finding an answer: You instead allow the answer to naturally flow from the facts.
It means you don’t ask “which facts argue in favour of my view and which against?”, but “what are the facts?”
It means you don’t ask “What is my hypothesis?”, you ask “which hypotheses flow naturally from the facts?”
It means you don’t ask “What do I believe?” but “what would an intelligent person believe given these facts?”
It means you don’t ask “which hypothesis do I believe is true?”, but “how does the probability mass naturally divide itself over competing hypotheses based on the evidence?”
It means you don’t ask “How can I test this hypothesis?” but “Which test would maximally distinguish between competing hypotheses?”
It means you never ever ask who has the “burden of proof”.
And so on and so forth. I see it as the most fundamental skill because it allows you to ask the right questions, and if you start with the wrong question it really doesn’t matter what you do with it afterwards.
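The “dividing probability mass” framing in particular can be made concrete with a toy Bayesian update. A minimal sketch; the hypotheses and all numbers below are invented purely for illustration:

```python
# Toy Bayesian update: instead of defending one favourite hypothesis,
# let the evidence redistribute probability mass over all of them.
# Priors and likelihoods are made-up illustrative numbers.
priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}
likelihoods = {"H1": 0.1, "H2": 0.4, "H3": 0.4}  # P(evidence | hypothesis)

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: mass / total for h, mass in unnormalized.items()}

# Mass flows away from H1 because the evidence fits it poorly:
print(posteriors)  # roughly {'H1': 0.2, 'H2': 0.48, 'H3': 0.32}
```

The point of the exercise is the framing: no hypothesis is privileged as “mine”; each simply ends up with whatever share of the mass the evidence gives it.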
I think I understand now, thank you.
Do you follow any specific practices in order to internalise this approach, or do you simply endeavour to apply it whenever you remember?
The primary thing I seem to do is to remind myself to care about the right things. I am irrelevant. My emotions are irrelevant. Truth is not influenced by what I want to be true. I am frequently amazed by the degree to which my emotions are influenced by subconscious beliefs. For example I notice that the people who make me most angry when they’re irrational are the ones I respect the most. People who get offended usually believe at some level that they are entitled to being offended. People who are bad at getting to the truth of a matter usually care more about how they feel than about what is actually true. (This is related to the fundamental optimization problem: The truth will always sound less truthful than the most truthful sounding falsehood.) Noticing that kind of thing is often more effective than trying to control emotions the hard way.
Secondly, you want to pay attention to your thoughts as much as possible. This is just meditation, really. If you become conscious of your thoughts, you gain a degree of control over them. Notice what you think, when you think it, and why. If a question makes you angry, don’t just suppress the anger, ask yourself why.
For the rest it’s just about cultivating a habit of asking the right questions. Never ask yourself what you think, since the universe doesn’t care what you think. Instead say “Velorien believes X: How much does this increase the probability of X?”.
Bertrand Russell gets it right, of course
This needs to be on posters and T-shirts if it isn’t already. Is it a well-known principle?
Thank you for the explanation. This overall idea (of the relationship between belief and reality, and the fact that it only goes one way) is in itself not new to me, but your perspective on it is, and I hope it will help me develop my ability to think objectively.
Also thanks for the music video. Shame I can’t upvote you multiple times.
Sadly not. I keep meaning to post an article about this, but it’s really hard to write an article about a complex subject in such a way that people really get it (especially if the reader has little patience/charity), so I keep putting it off until I have the time to make it perfect. I have some time this weekend though, so maybe...
I think the Fundamental Optimization Problem is the biggest problem humanity has right now and it explains everything that’s wrong with society: It represents the fact that doing what’s good will always feel less good than doing what feels good, people who optimize for altruism will always be seen as more selfish than people who optimize for being seen as altruistic, the people who get in power will always be the ones whose skills are optimized for getting in power and not for knowing what to do once they get there, and people who yell about truth the most are the biggest liars. It’s also why “no good deed goes unpunished”. Despite what Yoda claims, the dark side really is stronger.
Unfortunately there’s no good post about this on LW AFAIK, but Yvain’s post about Moloch is related and is really good (and really long).
Aww shucks. ^_^
I think that people who fully possess such a skill are usually described as “have achieved enlightenment” and, um, are rare :-) The skill doesn’t look “elementary” to me.
Heheh, fair point. I guess a better way of putting it is that people fail to even bother to try this in the first place, or heck even acknowledge that this is important to begin with.
I cannot count the number of times I see someone try to answer a question by coming up with an explanation and then defending it, and utterly failing to grasp that that’s not how you answer a question. (In fact, I may be misremembering but I think you do this a lot, Lumifer.)
The appropriateness of that probably depends on what kind of question it is...
I think my hackles got raised by the claim that your perception is “what it actually is”—and that’s a remarkably strong claim. It probably works better phrased like something along the lines of “trying to take your ego and preconceived notions out of the picture”.
Any links to egregious examples? :-)
I guess it is slightly more acceptable if it’s a binary question. But even so it’s terrible epistemology, since you are giving undue attention to a hypothesis just because it’s the first one you came up with.
An equally awful method of doing things: Reading through someone’s post and trying to find anything wrong with it. If you find anything --> post criticism, if you don’t find anything --> accept conclusion. It’s SOP even on Less Wrong, and it’s not totally stupid but it’s really not what rationalists are supposed to do.
Yes, that is a big part of it, but it’s more than that. It means you stop seeing things from one specific point of view. Think of how confused people get about issues like free will. Only once you stop thinking about the issue from the perspective of an agent and ask what is actually happening from the perspective of the universe can you resolve the confusion.
Or, if you want to see some great examples of people who get this wrong all the time, go to the James Randi forums. There’s a whole host of people there who will say things during discussions like “Well it’s your claim so you have the burden of proof. I am perfectly happy to change my mind if you show me proof that I’m wrong.” and who think that this makes them rationalists. Good grief.
I have spent some time going through your posts but I couldn’t really find any egregious examples. Maybe I got you confused with someone else. I did notice that where politics were involved you’re overly prone to talking about “the left” even though the universe does not think in terms of “left” or “right”. But of course that’s not exactly unique to you.
One other instance I found:
Otherwise, I still think you’re confused between the model class and the model complexity (= degrees of freedom), but we’ve set out our positions and it’s fine that we continue to disagree.
It’s not a huge deal but I personally would not classify ideas as belonging to people, for the reasons described earlier.
In principle I agree with you.
In practice I think “X has the burden of proof” generally means something similar to “The position X is advancing has a rather low prior probability, so substantial evidence would be needed to make it credible, and in particular if X wants us to believe it then s/he would be well advised to offer substantial evidence.” Which, yes, involves confusion between an idea and the people who hold it, and might encourage an argument-as-conflict view of things that can work out really badly—but it’s still a convenient short phrase, reasonably well understood by many people, that (fuzzily) denotes something it’s often useful to say.
So, yeah, issuing such challenges in such terms is a sign of imperfect enlightenment and certainly doesn’t make the one who does it a rationalist in any useful sense. But I don’t see it as such a bad sign as I think you do.
Yea, the concept of burden of proof can be a useful social convention, but that’s all it is. The thing is that taking a sceptical position and waiting for someone to prove you wrong is the opposite of what a sceptic should do. If you ever see two ‘sceptics’ taking turns posting ‘you have the burden of proof’, ‘no you have the burden of proof!’… you’ll see what I mean. Actual rationality isn’t supposed to be easy.
I guess it is slightly more acceptable if it’s a binary question.
No, that’s not what I had in mind. For example, there are questions which explicitly ask for an explanation and answering them with an explanation is fine. Or, say, there are questions which are wrong (as a question) so you answer them with an explanation of why they don’t make sense.
It means you stop seeing things from one specific point of view.
I don’t think you can. Or, rather, I think you can see things from multiple specific points of view, but you cannot see them without any point of view. Yes, I understand you talk about looking at things “from the perspective of the universe” but this expression is meaningless to me.
“I am perfectly happy to change my mind if you show me proof that I’m wrong.”
That may or may not be a reasonable position to take. Let me illustrate how it can be reasonable: people often talk in shortcuts. The sentence quoted could be a shortcut expression for “I have evaluated the evidence for and against X and have come to the conclusion Y. You are claiming that Y is wrong, but your claim by itself is not evidence. Please provide me with actual evidence and then I will update my beliefs”.
even though the universe does not think in terms of “left” or “right”
But humans do and I’m talking to humans, not to the universe.
A more general point—you said in another post
I am irrelevant. My emotions are irrelevant. Truth is not influenced by what I want to be true.
This is true when you are evaluating the physical reality. But it is NOT true when you are evaluating the social reality—it IS influenced by emotions and what people want to be true.
I don’t quite understand you here.
I suppose “elementary” in the sense of “fundamental” or “simple” or “not relying on other skills before you can learn it”, rather than in the sense of “easy” or “widespread”.
Contrast literacy. Being able to read and write one’s own language is elementary. It can be grasped by a small child, and has no prerequisites other than vision, reasonable motor control and not having certain specific brain dysfunctions. Yet one does not have to cast one’s mind that far back through history to reach the days in which this skill was reserved for an educated minority, and most people managed to live their whole lives without picking it up.