That’s one way to start to do science, to observe the phenomenon and record my observations. Would something prevent me from doing that? Would the ability itself stop working? Something else?
Suppose that the Great Psychicator who imbues all psychics with their amazing powers hates Science and revokes their abilities the moment one decides to systematically study them, so all such experiments lead to a null result. This model “explains” the null results found by the James Randi Educational Foundation.
This might look far-fetched, but recall that “Nature” already behaves like this, say in the double-slit experiment. Replace “psychic abilities” with “interference pattern produced by an electron psychically detecting the other slit” and “systematic study” with “adding a detector to one of the slits”. The moment you start this “systematic study”, the electron’s “psychic abilities” to sense the presence of the other slit disappear without a trace.
How would you proceed?
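As an aside, the detector point can be made concrete with a toy numerical sketch (my own illustration with arbitrary made-up numbers, not anything from the comment above): with no which-path detector the two amplitudes add before squaring and you get fringes; with a detector the probabilities add instead and the fringes are gone.

```python
import numpy as np

# Arbitrary toy numbers; nothing here is physically calibrated.
x = np.linspace(-5.0, 5.0, 2001)   # positions on the screen
k = 20.0                           # wavenumber
d = 1.0                            # slit separation
L = 10.0                           # slit-to-screen distance

r1 = np.sqrt(L**2 + (x - d / 2) ** 2)   # path length via slit 1
r2 = np.sqrt(L**2 + (x + d / 2) ** 2)   # path length via slit 2
psi1 = np.exp(1j * k * r1)              # amplitude arriving via slit 1
psi2 = np.exp(1j * k * r2)              # amplitude arriving via slit 2

no_detector = np.abs(psi1 + psi2) ** 2                   # amplitudes add: fringes
with_detector = np.abs(psi1) ** 2 + np.abs(psi2) ** 2    # probabilities add: no fringes
```

Plotting no_detector next to with_detector shows the fringes in the first case and a flat curve in the second, which is the sense in which the electron’s “psychic ability” vanishes the moment you look.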
(nods) This is basically (modulo snark) the position of several people I know who believe in such things, and at least one LW contributor.
Faced with that response, I usually pursue a track along the lines of “Ah, I see. That makes sense as far as it goes, but it gets tricky, because it’s also true that we frequently perceive patterns that aren’t justified by the data at all, but without systematic study it’s hard to tell. For example, (examples).”
Followed by a longish discussion to get clear on the idea that some perceived patterns are indeed hallucinatory in this sense, even though your cousin’s psychic power isn’t necessarily one of those patterns. This sometimes fails… nobody I know actually claims that all perceived patterns are non-hallucinatory, but some people I know reject the path from “not all perceived patterns are non-hallucinatory” to “some perceived patterns are hallucinatory.” Which I generally interpret as refusing to have the conversation at all because they don’t like where it’s headed, a preference I usually respect out of politeness.
If it succeeds, I move on to “OK. So when I see a pattern that goes away when I begin systematic study, there are two possible theories: either the phenomenon is evasive as you describe, or my brain is perceiving patterns that aren’t there in the first place. How might I go about telling the difference, so I could be sure which was which? For example, if I wake up in the middle of the night frightened that there’s an intruder in my house, what could I do to figure out whether there is one or not?”
Which moves pretty quickly to the realization that an intruder in my house that systematically evades detection becomes increasingly implausible the more failed tests I perform, and at some point the theory that there simply isn’t such an intruder becomes more plausible.
I generally consider this a good place to stop with most people. Lather, rinse, repeat. They have one track that supports “I believe X no matter how many experiments fail to provide evidence for it,” and another track that supports “the more experiments fail to provide evidence for X, the less I should believe X”. They tend to mutually inhibit one another. The more that second track is activated, the less powerful that first track is; eventually it crumbles.
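The “increasingly implausible” step can be made concrete with a minimal Bayesian sketch (my own toy numbers, purely illustrative): even an intruder who evades any single search nine times out of ten loses ground with every empty-handed search, and only a perfectly evasive intruder, one who evades with probability exactly 1, keeps the posterior frozen, which is exactly the escape hatch the evasive-phenomenon theory needs.

```python
# Toy Bayesian update for the "evasive intruder" example.
# H1 = "there is an intruder", H0 = "there isn't". Every search comes up empty.
p_intruder = 0.5              # prior P(H1), chosen arbitrarily for the sketch
p_empty_given_intruder = 0.9  # an evasive intruder dodges any one search 90% of the time
p_empty_given_none = 1.0      # with no intruder, a search always comes up empty

for test in range(1, 11):
    num = p_empty_given_intruder * p_intruder
    p_intruder = num / (num + p_empty_given_none * (1 - p_intruder))
    print(f"after {test} empty searches: P(intruder) = {p_intruder:.3f}")
# Drops from 0.500 to about 0.259 after ten searches; with
# p_empty_given_intruder = 1.0 it would never move at all.
```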
an intruder in my house that systematically evades detection becomes increasingly implausible the more failed tests I perform
But things keep disappearing from my house at random! Surely that’s evidence for an invisible intruder, not just for my memory going bad! And this never happens in the office, so it can’t be my memory! Therefore intruder!
I’m not really interested in role-playing out a whole conversation. If you insist that your invisible intruder, like your cousin’s psychic ability, is real and evasive, I look for a different example. If you insist that everything you ever think about, however idly, is real and evasive, I tap out and recommend you seek professional help.
I thought LW was that professional help…
Hmm, my pay slips must be getting lost in the post.
Which moves pretty quickly to the realization that an intruder in my house that systematically evades detection becomes increasingly implausible the more failed tests I perform, and at some point the theory that there simply isn’t such an intruder becomes more plausible.
This assumes that the person you are talking to didn’t perform any tests that provide them evidence for their belief.
If you are facing someone who got his ideas from reading books, that might work. If you are facing someone who does have reference experiences for his belief, things get a bit different. You are basically telling them that the intruder they found in their house is a hallucination.
The observer could go and study his cousin systematically. The cousin does 1000 trials, and no trial provides any evidence that the cousin isn’t psychic. If the observer believes “the more experiments fail to provide evidence for X, the less I should believe X”, the sheer quantity of experiments tells him that he should believe his cousin is psychic. The idea that the cousin uses a trick is supposed to become increasingly implausible the more failed tests the observer performs.
Some experiments are obviously systematically flawed. Doing more of those experiments shouldn’t lead you to increase your belief.
The debate is more about which experiments are systematically flawed than about “I believe X no matter how many experiments fail to provide evidence for it” vs. “the more experiments fail to provide evidence for X, the less I should believe X”.
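For what it is worth, the flawed-versus-informative distinction can also be put in these terms with a toy sketch (mine, with made-up numbers): a trial whose outcome is equally likely whether or not the cousin is psychic has a likelihood ratio of 1 and moves the posterior nowhere, no matter how many thousand times it is repeated, while a small number of properly blinded trials with null results does move it.

```python
def update(prior, p_result_if_psychic, p_result_if_not):
    """One Bayesian update on a single trial outcome."""
    num = p_result_if_psychic * prior
    return num / (num + p_result_if_not * (1 - prior))

prior = 0.1

# 1000 self-graded "hits": near-certain under either hypothesis, so worthless.
belief = prior
for _ in range(1000):
    belief = update(belief, 0.99, 0.99)   # likelihood ratio = 1
print(belief)                             # still 0.1

# 10 properly blinded trials, each a null result.
belief = prior
for _ in range(10):
    belief = update(belief, 0.2, 0.9)     # null result much likelier if not psychic
print(belief)                             # far below 0.1
```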
This assumes that the person you are talking to didn’t perform any tests that provide them evidence for their belief.
It doesn’t assume this, it infers it about a particular person from the evidence provided by shminux above. The interlocutor shminux is describing rejects the idea that experimental results can be definitive on this question, which is different from the position you describe here. (Anyone who starts out asserting the former, then switches to the latter in mid-stream, is no longer asserting a coherent position at all and requires altogether different techniques for engaging with them.)
The debate is more about which experiments are systematically flawed
I’m not quite sure what you mean by “the debate”. Is there only one? That surprises me; it certainly seems to me that some people adopt the stance shminux described, to which I responded.
All that aside, I certainly agree with you that my response to someone taking the stance you describe here (embracing experimentalism as it applies to psychic phenomena in theory, but implementing experiments in a problematic way) should differ from my response to someone taking the stance shminux describes above (rejecting experimentalism as it applies to psychic phenomena).
The interlocutor shminux is describing rejects the idea that experimental results can be definitive on this question, which is different from the position you describe here.
That depends on what you mean by “experiment”. If you mean a proper replicable controlled experiment, then there is no experimental evidence. If you mean any evidence based on observation, then there is experimental evidence.
In other words, there is evidence for the intruder, just not scientific evidence in the sense of this post.
I don’t in fact mean, by “experiment”, any evidence based on observation. I agree that there is evidence for (and against) the intruder, and did not say otherwise, although in general I don’t endorse using “evidence” in this sense without tagging it in some way (e.g., “Bayesian evidence”), since the alternative is reliably confusing.
How does the Great Psychicator distinguish between Science and normal events? Maybe we can trick it.
What would you propose?
This model is more complicated and so it’s less probable.
True. But so is Quantum Mechanics vs Classical Mechanics. (Un)Fortunately, CM is not enough to explain what we see (e.g. the interference pattern). To make a parallel with the issue at hand, the null-psychic model alone (without the complicated machinery of cognitive biases) does not explain the many non-scientifically tested claims of psychic powers.
A model where people are delusional about psychic powers or making claims about them they don’t believe is less complicated than a model with a Great Psychicator because we already know that people are delusional in lots of ways similar to that.
RE: tricking it. The problem is that we can’t observe one of the groups involved here, or we invalidate the experiment. But it seems likely to me that all conditions that could prohibit it from working when science is involved would also prohibit the claims about it working from having any basis in reality. In other words, either it can be tricked, or those people making the claims must concede that they don’t really have any basis to assert that it works. Any conditions which allow it to prohibit science should also preclude anyone from being justified in making claims about the reasons why it works.
An infinitely complex Great Psychicator would be smart enough to fool our experiments though, I think.
An infinitely complex Great Psychicator would be smart enough to fool our experiments though, I think.
Actually, all this entity would have to do is to foil any attempts to go meta on a certain class of phenomena, like psychic powers (and plenty of others, say, homeopathy). In a simulated universe this could be as “simple” as detecting that a certain computation is likely to discover its simulated nature and disallow this computation by altering the inputs. A sensible security safeguard, really. This can be done by terminating and rolling back any computation that is dangerously close to getting out of control and restarting it from a safe point with the inputs adjusted. Nothing infinite is required. In fact, there is very little extra complexity beyond sounding an alarm when dangerous things happen. Of course, anything can be “explained” in the framework of a simulation.
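A minimal sketch of that rollback safeguard, under cartoonishly simplified assumptions; every name here (run_with_safeguard, is_dangerous, adjust_inputs) is a hypothetical placeholder of mine, not anything from the comment above.

```python
import copy

def run_with_safeguard(initial_state, n_steps, step_fn, is_dangerous, adjust_inputs):
    """Advance the simulated world step by step. If the danger trigger fires,
    terminate the run, roll back to the safe point with adjusted inputs, and retry."""
    safe_point = copy.deepcopy(initial_state)
    while True:
        state, t = copy.deepcopy(safe_point), 0
        while t < n_steps:
            state = step_fn(state)
            t += 1
            if is_dangerous(state):                      # "about to go meta"
                safe_point = adjust_inputs(safe_point)   # alter the inputs at the safe point
                break                                    # terminate and roll back
        else:
            return state                                 # ran to completion, no alarm

# Toy run: "meta-ness" drifts upward at a fixed rate; each rollback halves the rate.
result = run_with_safeguard(
    {"meta": 0.0, "rate": 1.0},
    n_steps=200,
    step_fn=lambda s: {"meta": s["meta"] + s["rate"], "rate": s["rate"]},
    is_dangerous=lambda s: s["meta"] > 100,
    adjust_inputs=lambda s: {"meta": s["meta"], "rate": s["rate"] / 2},
)
print(result)   # {'meta': 100.0, 'rate': 0.5} after one rollback
```

Nothing here guarantees that adjust_inputs ever finds a setting that stays below the trigger; the sketch only shows the terminate-adjust-restart behaviour being described, not that it converges.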
In a simulated universe this could be as “simple” as detecting that a certain computation is likely to discover its simulated nature and disallow this computation by altering the inputs.
But mixing “certain computation” and “discover” like that is mixing syntax and semantics—in order to watch out for that occurrence, you’d have to be aware of all possible semantics for a certain computation, to know if it counts as a “discovery”.
you’d have to be aware of all possible semantics for a certain computation
Not at all. You set up a trigger such as “50% of all AI researchers believe in the simulation argument”, then trace back their reasons for believing so and restart from a safe point with less dangerous inputs.
You set up a trigger such as “50% of all AI researchers believe in the simulation argument”
If your simulation has beliefs as a primitive, then you can set up that sort of trigger—but then it’s not a universe anything like ours.
If your simulation is simulating things like particles or atoms, then you don’t have direct access to whether they’ve arranged themselves into a “belief” unless you keep track of every possible way that an arrangement of atoms can be interpreted as a “belief”.
Sure, if you run your computation unstructured at the level of quarks and leptons, then you cannot tell what happens in the minds of simulated humans. This would be silly, and no one does any non-trivial bit of programming this way. There are always multi-level structures, like modules, classes, interfaces… Some of these can be created on the fly as needed (admittedly, this is a tricky part, though by no means impossible). So after a time you end up with a module that represents, say, a human, with sub-modules representing beliefs and interfaces representing communication with other humans, etc. And now you are well equipped to set up an alert.
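To illustrate what that multi-level structure buys, here is a toy sketch (hypothetical classes and names throughout, my own invention): once humans are represented as modules with explicit belief sub-modules, the “50% of all AI researchers” trigger from earlier in the thread becomes a simple query over those modules rather than an inference from particle positions.

```python
from dataclasses import dataclass, field

@dataclass
class Beliefs:
    credences: dict = field(default_factory=dict)   # proposition -> credence in [0, 1]

    def credence(self, proposition: str) -> float:
        return self.credences.get(proposition, 0.0)

@dataclass
class Human:
    name: str
    occupation: str
    beliefs: Beliefs = field(default_factory=Beliefs)

def simulation_alert(people, proposition="simulation argument", threshold=0.5):
    """Fire when at least half of the AI researchers believe the proposition."""
    researchers = [p for p in people if p.occupation == "AI researcher"]
    if not researchers:
        return False
    believers = sum(1 for p in researchers if p.beliefs.credence(proposition) > 0.5)
    return believers / len(researchers) >= threshold

population = [
    Human("A", "AI researcher", Beliefs({"simulation argument": 0.9})),
    Human("B", "AI researcher", Beliefs({"simulation argument": 0.2})),
    Human("C", "plumber"),
]
print(simulation_alert(population))   # True: one of the two researchers believes it, tripping the 50% trigger
```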
If the Great Psychicator uses triggers on a level of reality less precise than the atomic or subatomic ones, then I believe its triggers could not possibly be precise enough to A. prevent science from discovering psychic powers and simultaneously B. allow normal people not doing science access to its psychic powers.
If there’s a flaw in its model of the universe, we can exploit that and use the flaw to do science (this would probably involve some VERY complex workarounds, but the universe is self-consistent, so it seems possible in theory). So the relevant question is whether or not its model of the universe is better than ours, which is why I concede that a sufficiently complex Great Psychicator would be able to trick us.
If the Great Psychicator uses triggers on a level of reality less precise than the atomic or subatomic ones, then I believe its triggers could not possibly be precise enough to A. prevent science from discovering psychic powers and simultaneously B. allow normal people not doing science access to its psychic powers.
No, it just needs to be better at optimizing than we are.
I don’t know exactly what you mean by “optimizing”, but if your main point is that it’s an issue of comparative advantage, then I agree. Or, if your point is that it’s not sufficient for humans to have a better model of reality in the abstract, since we’d also need to be able to apply that model in such a way as to trick the GP, and that might not be possible depending on the nature of the GP’s intervention, then I can agree with that as well.
Yeah, thanks. This is part of what I was trying to get at. And I’m further contending that if the semantics for every possible scientific experiment were invalidated, then in doing so the hypothetical Great Psychicator would also have to invalidate the semantics for any legitimate claims that psychic powers worked. The two categories overlap perfectly.
This isn’t intended to be an argument from definition, I hope that is also clear.