I think that, in the first few chapters, Harry did not give enough credence to the hypothesis that he was simply insane and hallucinating. I think, given the observations he had at the time (his mom claimed her sister was a witch; he got a letter implying the same; a woman levitated his dad and turned into a cat), he should have at least seriously considered it. Certainly those pieces of information are some evidence for magic, but considering what that hypothesis entails — existing scientific knowledge about physics (even at the level of abstraction that we experience directly) is so completely wrong that it’s actually possible to make the universe understand human words or intentions, or there’s this incredibly advanced technology that looks like it’s violating the laws of physics, and it’s existed for thousands of years and apparently everyone has forgotten how it works — I think an honest rationalist would have to look into the “I’m cuckoo” hypothesis.
I’m not sure what one is supposed to do upon concluding that one is quite that cuckoo. Upon getting that far gone, what can you do? Can you even assume that your actions and words will leave your brain and impact reality in roughly the way you intend? If you are that crazy, and you try to walk across the room, will you get there? Are you in a room? Do you have legs? It might be that being as insane as all that is so game over that, whatever one’s epistemic position is, one has to operate as though the observations were correct.
It would be a good idea to consider the hypothesis that one is crazy in a conventional way, such as schizophrenia. One can try to test that hypothesis. But the “anything goes”-crazy hypothesis isn’t really useful.
Oh, you’re right—and what’s more, it doesn’t take much to make the “anything goes”-crazy hypothesis more ridiculous than magic. We know that human brains have limited processing power and storage capacity, so if you can produce sensations which the brain should be unable to fake, you can reduce the probability mass of the hypothesis significantly.
How can you use your brain to test if a sensation your brain is experiencing cannot be faked by your brain?
How long would it take you to factor the number 495 967 020 337 by hand?
And how long would it take you to multiply two numbers, both less than 1 300 000, together?
Some operations are much easier to verify than to execute.
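That asymmetry can be sketched in a few lines of Python. (The semiprime below is a small illustrative example, not the twelve-digit number above; the primes 1009 and 1013 were chosen just to keep the demo fast.)

```python
def trial_factor(n):
    """Find the smallest factor of n by trial division: slow, ~sqrt(n) steps."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n is prime

def verify(n, p, q):
    """Checking a claimed factorisation is a single multiplication: fast."""
    return p * q == n

# Finding the factors takes on the order of a thousand divisions here...
p, q = trial_factor(1022117)   # 1022117 = 1009 * 1013
# ...but checking someone's claimed answer takes one multiplication.
assert verify(1022117, p, q)
```

The gap only widens with the size of the number: trial division scales with the square root of n, while verification stays a single multiplication, which is why "multiply these two big numbers" is a plausible sanity check where "factor this big number" is not.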
I wrote out a long response involving an analogy to a CPU self-test program, but at the end I realised that I had arrived at the same conclusion you stated. :-) So I’m voting you up and wish to extend you an Internet high-five.
However, on this topic, it seems like there’s no good approach for handling the scenario where your brain messes with your internal tests in such a way as to point them invariably at a false positive, i.e. anosognosia.
I agree that a good self-test of the sort you describe would reduce the probability for most kinds of anything-goes insanity, but what sort of test could be used to check against the not-insignificant subset of insanity that specifically acts against self-tests and forces them to return false positive at the highest level?
It’s always possible to produce insane minds that cannot fix themselves—the interesting question is how big a diff can be bridged at what price. And that’s a bit more difficult to answer.
I wonder, however, whether a sufficiently educated anosognosiac could determine that the sources informing them of their paralysis were more reliable than their firsthand observations. It seems unlikely, of course.
The answer appears to be no. Scientific American Mind ran a few articles about this a while back. Experiments indicate that the flaw behind, say, patients denying that they can't move their arms sits in the reasoning process itself: researchers found a way to reset the patients' thinking for a short time, during which the patients could state clearly that they were paralyzed, and were surprised at their earlier denials.
After a minute or so, the effect wore off and the patients reverted to their earlier state. So the condition appears to short-circuit the decision-making process at a hardware level.
True. Even if upon witnessing such absurdities he had immediately assumed he was seeing things and demanded to be checked into a mental hospital, he couldn’t even be sure that there was really anyone around him to hear, or that he was really saying what he thought he was saying, etc.
But then, if he’s that far removed from reality, whatever he’s really doing must appear crazy enough to draw the attention of those around him. Maybe he’s already in a mental institution… which he imagines to be a school of wizardry! From the inside, he already sort of feels (and acts) as though he’s the only sane person in a madhouse… while in reality, he’s just another patient.
I think David Hume said something more or less like this when discussing the likelihood of miracles; that if you witnessed a miracle, you ought to conclude you were insane.
I am not sure I buy into this. For one thing, I see a problem with falsifiability. If there is nothing that I could see to convince me that magic might work, I am not objecting to the reality of magic on rational grounds, but as a sort of knee-jerk. It’s like the doubleplus loony creationist types who think the devil planted archaeopteryx.
There are reasons I think magic in the Harry Potter sense is not true, reasons that could be argued against (e.g., show me a plausible medium for magic to be carried in). I don’t think it would be very rational to make it sort of… axiomatic that magic is false. That seems to in fact be the attitude Eliezer is criticizing in the character of Harry’s father.
So yeah, some probability mass goes to the “hallucination/insane” hypothesis, but not very much. Most goes to the “I don’t know what’s going on here at all, but she did just apparently turn into a cat” hypothesis.
Miracles are one-time events, whereas magic spells are repeatable (in every fictional universe I’ve seen, anyway).
True; but where does that factor come in? I mean, hallucinations can presumably be repeatable too. “I tested Monday, Tuesday, Wednesday, Thursday and Friday—and I am still Napoleon!”
If he was having completely full-blown auditory, visual, and tactile hallucinations (note that this is fairly unusual; schizophrenia, for example, apparently usually manifests hallucinations in only one modality), what exactly could he do about it, and how could he even test it?
Yes, me[2010-05] did not think of that :) I agree now
Addenda:
From the reader’s perspective, it doesn’t appear that that’s what we are supposed to believe (though I’m still wondering...), so I’m tentatively guessing that the mechanism of magic is some kind of technology, and that the in-story universe has the same laws as this one. It does seem implausible that an ancient civilization could have invented technology advanced enough to be indistinguishable from this kind of magic, but that could be different in an alternate history, and it still seems less implausible than any set of physical laws that would actually make this kind of magic a normal, natural thing that a non-industrial civilization could invent/discover.
We are supposed to be wondering why magic works at all, right? It doesn’t seem like Eliezer to expect us to be satisfied with an Inherently Mysterious phenomenon at the center of the story, even if it’s a story based on someone else’s fictional world that already had that feature… but I don’t know, maybe it’s a demonstration that, no matter how ridiculous the rules are, rationality will still allow you to win.
But I’m still hoping that magic will be explained at some point, and I’m still looking for clues about it.
I think magic will be explained as an addition onto physics: a new “force” is involved, but still behaves in an intelligible way. I can’t imagine how the MoR series would explain the magic exhibited thus far as coming from current physical understanding.
Unless the magicians control quantum wavefunctions directly, or something like that. Or Harry is a brain in a vat.
Or if Harry figures out that he’s in a story.
What kind of evidence would convince you that you were in a story?
If something totally crazy seemed like it was about to happen and the world was at stake, like a technological singularity was about to occur or something, and I was called to work for the team of great minds that were trying their hardest to stop the destruction of the entire universe, dropping out of high school in the process, and meeting a beautiful girl who had been living literally a few houses down from me for the last 4 years without my knowing about it, who just so happened to be doing an essay on transhumanism for her English class and would just love to interview someone who was doing work for the Singularity Institute.
Oh wait...
The events in a story fit into a narrative. If I were in a story, I might be able to make especially accurate predictions by privileging hypotheses that make narrative sense. Dumbledore did this on an intuitive level, and it is the reason for his success.
This is basically an attempt to formalize genre savviness.
And of course, if you really were in a story and tried it, story logic dictates that you would almost certainly end up being wrong genre savvy.
Then again, if your predictions are part of the narrative, the narrative might go on to explicitly falsify your predictions. And if you expect it to falsify your predictions… well, two can play that game.
If I started hearing the narrator’s voice
Talking animals
Beanstalks of unusual size
A pair of boxes, one containing $1000 …
Black comedy
Poetic justice
People living happily ever after.
I’m not sure, considering the number of different kinds of story there are even in our world, and especially considering that entities which could create our world will probably have sorts of fiction we haven’t thought of, and may have sorts of fiction we can’t think of.
However, Eliezer may come up with something which would plausibly convince Harry.
I think something like “brain in a vat” is the best inference from observing magic. [EDITED to add: of course I mean after getting very good evidence against deception, insanity, etc.]
More precisely: if you find evidence that something deeply embedded in the universe is best understood at something like the level of human concepts (it matters what words you say, whether you really hate someone else as you say them, etc.), then you should assign more probability to the hypothesis that the-universe-as-it-now-is was made, or at least heavily influenced, by someone or something with a mind (or minds) somewhat like ours. That could be a god, a graduate student in another universe with a big computer, superintelligent aliens or AIs who’ve messed with the fabric of reality in our part of the world, or any number of other things.
In a manner of speaking this is obviously correct for the Potterverse (either Rowling’s or Yudkowsky’s): in that universe, magic works; and indeed that universe was designed by an intelligent being or beings, namely Rowling or Rowling+Yudkowsky. It probably doesn’t work “internally” for the original Potterverse—I’ve no idea whether Rowling has any particular position on whether within the stories the world should be thought of as created by intelligent beings—but I’m guessing that it does for Eliezer’s.
I’m not convinced that concluding one is in a simulation is really the best bet here. A simulation would have a terrible time specifying these effects. If, for example, I have a simulation of just our local system, how are the people running it going to specify emotional states or the like? The only explanation I can come up with is that the simulation was started with humans having a certain (simulated) brain structure, and that structure is the kind wizards have. Other humans can’t do magic because their structure isn’t of the type the simulation recognizes as a trigger.
I agree. This is why I think the Hogwarts letter is charmed to make itself sound more plausible than it should be (which would be a sensible way to ease the transition for muggleborns). Harry explicitly wonders where his own certainty that magic is real comes from and doesn’t get an answer via introspection. That sounds like the effect of a weak charm to me.
Oh, and you’re forgetting the bit where Mrs. Figg just randomly knows magic exists. That would be pretty jarring.