Some partial examples I have so far:
Phenomenon: For virtually any goal specification, if you pursue it sufficiently hard, you are guaranteed to get human extinction.[1] (A toy sketch of this dynamic follows the examples below.)
Situation where it seems false and unfalsifiable: The present world.
Problems with the example: (i) We don’t know whether it is true. (ii) It is not obvious enough that it is unfalsifiable.
Phenomenon: Physics and chemistry can give rise to complex life.
Situation where it seems false and unfalsifiable: If Earth didn’t exist.
Problems with the example: (i) If Earth didn’t exist, there wouldn’t be anybody to ask the question, so the scenario is a bit too weird. (ii) The example would be much better if it were the case that, given enough time, any planet will produce life.
Phenomenon: Gravity—all things with mass attract each other. (As opposed to “things just fall in this one particular direction”.)
Situation where it seems false and unfalsifiable: If you lived in a bunker your whole life, with no knowledge of the outside world.[2]
Problems with the example: The example would be even better if we somehow had a formal model that (a) describes how physics works, (b) we are confident is correct, (c) would, if analysed, let us determine whether the theory is true or false, (d) but is too complex to actually analyse. (Similar to how chemistry-level simulations are too complex for studying evolution.)
Phenomenon: Eating too much sweet stuff is unhealthy.
Situation where it seems false and unfalsifiable: If you can’t get large amounts of sugar yet, and rely only on fruit etc.
Problems with the example: The scenario is a bit too artificial. You would have to pretend that you can’t just go and harvest sugar from sugar cane and have somebody eat lots of it.
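To make the first example's "pursue it sufficiently hard" phrase more concrete, here is a toy Goodhart-style sketch. It is only my own illustration, not the post's argument: true_goal, proxy, and the distribution of candidate "actions" are arbitrary stand-ins. The point is just that mild optimisation of a misspecified proxy can still do fine on the real goal, while pursuing the same proxy much harder drives the real goal off a cliff.

```python
import random

random.seed(0)

def true_goal(x: float) -> float:
    # What we actually care about: good at moderate x, ruined at extremes.
    return x - x ** 2 / 20

def proxy(x: float) -> float:
    # The written-down goal specification: "more x is always better".
    return x

for search_budget in (10, 100, 10_000, 1_000_000):
    # A larger search budget = pursuing the proxy "harder".
    candidates = [random.expovariate(1 / 5) for _ in range(search_budget)]
    best = max(candidates, key=proxy)
    print(f"budget={search_budget:>9}  proxy={proxy(best):6.1f}  "
          f"true goal={true_goal(best):8.1f}")
```

This of course says nothing about whether the real-world claim is true; it only illustrates what "sufficiently hard" is meant to gesture at.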
[1] See here for comments on this. Note that this doesn’t imply AI X-risk, since “sufficiently hard” might be unrealistic, and also we might choose not to use agentic AI, etc.
[2] And if you didn’t have any special equipment, etc.