For the particular scenario described in Pascal's mugging, I provided a reasonable way to test it. If the mugger wants to dicker over the method of testing, I might decide to listen. It is up to the mugger to provide a satisfactory test; hand-waving and threats are not tests. You are saying that there are models where testing is unfeasible or too dangerous to try. Name one.
That such models exist is trivial: take model A, add a single difference B where exercising the difference is bad. For instance:
Model A: the universe is a simulation.
Model B: the universe is a simulation with a bug that will crash the system, destroying the universe, if X; otherwise identical to model A.
Models that deserve to be raised to the level of our attention in the first place, however, will take more thought.
By all means, apply more thought. Until then, I’m happy to stick by my testability assertion.
A simple example might be if more of the worries around the LHC were a little better founded.
Ah yes, that is a good one. Suppose a mad scientist threatens to create an Earth-swallowing black hole in the LHC unless his demands for world domination are met. What would be a prudent course of action? Calculate the utility of complying vs. not complying and go with the higher-utility choice? Or do something else? (I have a solution or two, but will hold off suggesting any for now.)
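To make the "calculate the utility" option concrete, here is a toy sketch of that expected-utility comparison. Every number in it (the probability that the threat is real, the utility figures) is a made-up assumption for illustration, not a claim about actual risks; the point is how completely the answer hinges on those assumed numbers.

```python
# Toy expected-utility comparison for the mad-scientist threat.
# All parameters below are illustrative assumptions, not real estimates.

def expected_utility(outcomes):
    """Sum of probability-weighted utilities over (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Hypothetical parameters:
p_threat_real = 1e-9          # assumed probability the scientist can actually do it
u_earth_destroyed = -1e15     # assumed utility of losing Earth
u_world_domination = -1e9     # assumed utility of conceding world domination
u_status_quo = 0.0            # utility of nothing happening

comply = expected_utility([(1.0, u_world_domination)])
refuse = expected_utility([
    (p_threat_real, u_earth_destroyed),
    (1 - p_threat_real, u_status_quo),
])

# With these numbers, refusing has the higher expected utility.
print("comply:", comply)
print("refuse:", refuse)
```

Note that nudging `p_threat_real` up to 1e-5 flips the verdict to complying, which is exactly the lever a mugger pulls: inflate the claimed stakes until any nonzero credence dominates the calculation.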