How does the Great Psychicator distinguish between Science and normal events? Maybe we can trick it.
What would you propose?
This model is more complicated and so it’s less probable.
True. But so is Quantum Mechanics vs Classical Mechanics. (Un)Fortunately, CM is not enough to explain what we see (e.g. the interference pattern). To make a parallel with the issue at hand, the null-psychic model alone (without the complicated machinery of cognitive biases) does not explain the many non-scientifically tested claims of psychic powers.
A model where people are delusional about psychic powers, or are making claims about them that they don’t believe, is less complicated than a model with a Great Psychicator, because we already know that people are delusional in lots of similar ways.
RE: tricking it. The problem is that we can’t observe one of the groups involved here, or we invalidate the experiment. But it seems likely to me that all conditions that could prohibit it from working when science is involved would also prohibit the claims about it working from having any basis in reality. In other words, either it can be tricked, or those people making the claims must concede that they don’t really have any basis to assert that it works. Any conditions which allow it to prohibit science should also preclude anyone from being justified in making claims about the reasons why it works.
An infinitely complex Great Psychicator would be smart enough to fool our experiments though, I think.
Actually, all this entity would have to do is to foil any attempts to go meta on a certain class of phenomena, like psychic powers (and plenty of others, say, homeopathy). In a simulated universe this could be as “simple” as detecting that a certain computation is likely to discover its simulated nature and disallow this computation by altering the inputs. A sensible security safeguard, really. This can be done by terminating and rolling back any computation that is dangerously close to getting out of control and restarting it from a safe point with the inputs adjusted. Nothing infinite is required. In fact, there is very little extra complexity beyond sounding an alarm when dangerous things happen. Of course, anything can be “explained” in the framework of a simulation.
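Roughly, the safeguard could be sketched like this (a minimal sketch only; the names and the assumption that the world state is some checkpointable object are mine, not anything established in this thread):

```python
# Minimal sketch of a rollback safeguard: checkpoint the world, and whenever a
# monitored computation looks like it is about to "go meta", terminate that
# branch, restore the last safe checkpoint, and restart with adjusted inputs.
# `step`, `is_dangerous`, and `perturb` are hypothetical callables.
import copy


def run_with_rollback(initial_state, step, is_dangerous, perturb, max_steps=1000):
    """Advance the simulation, rolling back whenever the alarm fires.

    step(state) advances the world one tick, is_dangerous(state) is the alarm
    that detects a computation getting dangerously close to discovering the
    simulation, and perturb(state) adjusts the inputs so the same path is not
    simply retraced.
    """
    safe_checkpoint = copy.deepcopy(initial_state)
    state = initial_state
    for _ in range(max_steps):
        state = step(state)
        if is_dangerous(state):
            # Terminate the risky branch and restart from the last safe point
            # with slightly different inputs.
            state = perturb(copy.deepcopy(safe_checkpoint))
        else:
            safe_checkpoint = copy.deepcopy(state)
    return state
```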
In a simulated universe this could be as “simple” as detecting that a certain computation is likely to discover its simulated nature and disallow this computation by altering the inputs.
But mixing “certain computation” and “discover” like that is mixing syntax and semantics—in order to watch out for that occurrence, you’d have to be aware of all possible semantics for a certain computation, to know if it counts as a “discovery”.
you’d have to be aware of all possible semantics for a certain computation
Not at all. You set up a trigger such as “50% of all AI researchers believe in the simulation argument”, then trace back their reasons for believing so and restart from a safe point with less dangerous inputs.
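A toy version of such a trigger, just to make the idea concrete (the Researcher record and the 0.5 threshold are assumptions for the sketch, apart from the quoted 50% figure):

```python
# Hypothetical trigger: fire once at least half of the tracked researchers
# hold the belief being monitored.
from dataclasses import dataclass


@dataclass
class Researcher:
    believes_simulation_argument: bool


def trigger_fired(researchers, threshold=0.5):
    """Return True once the believing fraction reaches the threshold."""
    if not researchers:
        return False
    believers = sum(r.believes_simulation_argument for r in researchers)
    return believers / len(researchers) >= threshold


# Example: two believers out of three trips the alarm.
population = [Researcher(True), Researcher(True), Researcher(False)]
assert trigger_fired(population)
```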
You set up a trigger such as “50% of all AI researchers believe in the simulation argument”
If your simulation has beliefs as a primitive, then you can set up that sort of trigger—but then it’s not a universe anything like ours.
If your simulation is simulating things like particles or atoms, then you don’t have direct access to whether they’ve arranged themselves into a “belief” unless you keep track of every possible way that an arrangement of atoms can be interpreted as a “belief”.
Sure, if you run your computation unstructured at the level of quarks and leptons, then you cannot tell what happens in the minds of simulated humans. This would be silly, and no one does any non-trivial bit of programming this way. There are always multi-level structures, like modules, classes, interfaces… Some of these can be created on the fly as needed (admittedly, this is a tricky part, though by no means impossible). So after a time you end up with a module that represents, say, a human, with sub-modules representing beliefs and interfaces representing communication with other humans, etc. And now you are well equipped to set up an alert.
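A sketch of what those multi-level structures might look like (class names and the communication interface are hypothetical, chosen only to illustrate the point):

```python
# Instead of watching raw particles, the simulator keeps higher-level objects,
# e.g. a Human with a Beliefs sub-module, and hangs the alert off that
# interface rather than off arrangements of atoms.
class Beliefs:
    def __init__(self):
        self._propositions = set()

    def adopt(self, proposition):
        self._propositions.add(proposition)

    def holds(self, proposition):
        return proposition in self._propositions


class Human:
    def __init__(self, name):
        self.name = name
        self.beliefs = Beliefs()

    def tell(self, other, proposition):
        # Communication interface between humans: sharing spreads a belief.
        other.beliefs.adopt(proposition)


def alert(humans, proposition="simulation argument"):
    """The alert is now easy to state over the belief interface."""
    return any(h.beliefs.holds(proposition) for h in humans)
```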
If the Great Psychicator uses triggers on a level of reality less precise than the atomic or subatomic one, then I believe its triggers could not possibly be precise enough to (A) prevent science from discovering psychic powers and simultaneously (B) allow normal people who are not doing science access to those psychic powers.
If there’s a flaw in its model of the universe, we can exploit that flaw and use it to do science (this would probably involve some VERY complex workarounds, but the universe is self-consistent, so it seems possible in theory). So the relevant question is whether or not its model of the universe is better than ours, which is why I concede that a sufficiently complex Great Psychicator would be able to trick us.
No, it just needs to be better at optimizing than we are.
I don’t know exactly what you mean by “optimizing”, but if your main point is that it’s an issue of comparative advantage, then I agree. Or, if your point is that it’s not sufficient for humans to have a better model of reality in the abstract, and that we’d also need to be able to apply that model in such a way as to trick the GP (which might not be possible, depending on the nature of the GP’s intervention), then I can agree with that as well.
Yeah, thanks. This is part of what I was trying to get at. And I’m further contending that if the semantics for every possible scientific experiment were invalidated, then in doing so the hypothetical Great Psychicator would also have to invalidate the semantics for any legitimate claims that psychic powers worked. The two categories overlap perfectly.
This isn’t intended to be an argument from definition, I hope that is also clear.