That you can demand testing in many real-world scenarios is a heuristic, not one that is always usable.
Or do you have a principled decision theory in mind, one where testing is a key modification to the equations of expected value and which defuses the mugging?
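(For concreteness, the naive expected-value rule the mugging exploits can be sketched as: comply whenever $p \cdot U_{\text{threat}} > c_{\text{comply}}$, where $p$ is the probability assigned to the mugger's claim, $U_{\text{threat}}$ the threatened disutility, and $c_{\text{comply}}$ the cost of paying up; these symbols are illustrative, not part of any endorsed decision theory. The mugging works because the mugger can always name stakes large enough that the product exceeds the cost for any nonzero $p$, which is why the question above asks whether a testability requirement enters those equations in a principled way or only as a heuristic.)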
As a natural scientist, I would refuse to accept untestable models. Feel free to point out where this fails in any scenario that matters.
How do you determine if the model is testable? What if there is in principle a test, but it has unacceptable consequences in at least one reasonably probable model?
For the particular scenario described in Pascal's mugging, I provided a reasonable way to test it. If the mugger wants to dicker about the ways of testing it, I might decide to listen. It is up to the mugger to provide a satisfactory test. Hand-waving and threats are not tests. You are saying that there are models where testing is infeasible or too dangerous to try. Name one.
That such models exist is trivial—take model A, add a single difference B, where exercising the difference is bad. For instance,
Model A: universe is a simulation
Model B: universe is a simulation with a bug that will crash the system, destroying the universe, if X, but is otherwise identical to model A.
Models that would deserve to be raised to the level of our attention in the first place, however, will take more thought.
By all means, apply more thought. Until then, I’m happy to stick by my testability assertion.
A simple example might be if more of the worries around the LHC were a little better founded.
Ah yes, that is a good one. Suppose a mad scientist threatens to make an earth-swallowing black hole in the LHC unless his demands for world domination are met. What would be a prudent course of action? Calculate the utility of complying vs. not complying and go with the higher-utility choice? Or do something else? (I have a solution or two, but will hold off suggesting any for now.)
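(A toy version of that calculation, with every number purely hypothetical: assign even a $10^{-6}$ probability to the threat being real and value the destruction of Earth at $-10^{20}$ utility units, and the naive expected value of refusing is $10^{-6} \times (-10^{20}) = -10^{14}$, which dwarfs any plausible cost of complying, say $-10^{12}$. By that arithmetic one complies with essentially any sufficiently grandiose threat, which is the structure the thread is probing for an alternative to.)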