First of all, this has nothing to do with the Everett interpretation
If you toss a coin a million times while thoroughly investigating and preventing any cause of significant bias, and it always comes up “heads”, this starts looking like a compelling argument to stop tossing the coin; maybe “tails” triggers a gun.
First, how do you reconcile your second statement with the first one? I must be missing something. Second, if anthropics saves us from ourselves via quantum immortality, that’s a good reason to be less careful, not to stop tossing the coin.
Suppose people were trying to make the LHC and similar machines work for 1000 years and never succeeded
I’m sure there are plenty of examples of problems that turned out to be much harder than they appeared but were eventually solved (Fermat’s last theorem? Human flight?) or will eventually be solved (fusion energy? Machine vision? Name your own). Each of them looks like an argument for the anthropic principle… until it no longer does. They all have specific reasons for failure, unrelated to anthropics. I would keep looking for those reasons and ignore anthropics altogether as an unproductive hypothesis.
how do you reconcile your second statement with the first one?
What the Everett interpretation gives you is some sense of “actuality” of hypotheticals, but when thinking about possible futures you don’t need all (or any!) of the hypothetical possibilities to be “actual”. Not knowing which possibility obtains leads to the same line of argument as assuming that they all obtain.
Assuming you are killed whenever a coin comes up “tails”, only the hypotheticals in which every coin comes up “heads” contain you observing the tosses (so you reason before starting the experiment); so if that is what you end up observing, the hypothesis that “tails” is lethal still stands. If, on the other hand, tails is not lethal, then you can observe the coins under other outcomes as well, and observing anything other than all heads falsifies the lethal-tails hypothesis.
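A minimal sketch of that before-the-experiment reasoning (Python; the toss and world counts are made up for illustration): under a lethal-tails hypothesis the only observers left to report anything are the all-heads ones, while under a harmless-tails hypothesis survivors can report anything, so a single observed “tails” falsifies lethality.

```python
import random

def surviving_reports(lethal_tails: bool, n_tosses: int = 10, n_worlds: int = 100_000) -> set[str]:
    """Simulate many hypothetical worlds; return the toss sequences that
    a still-living observer could report after the experiment."""
    reports = set()
    for _ in range(n_worlds):
        tosses = "".join(random.choice("HT") for _ in range(n_tosses))
        if lethal_tails and "T" in tosses:
            continue  # no surviving observer in this world
        reports.add(tosses)
    return reports

# Lethal tails: every surviving observer sees all heads, so observing all heads
# leaves the lethal-tails hypothesis standing.
print(surviving_reports(lethal_tails=True))        # e.g. {'HHHHHHHHHH'}

# Harmless tails: survivors report all sorts of sequences, and actually observing
# a "T" falsifies the lethal-tails hypothesis outright.
print(len(surviving_reports(lethal_tails=False)))  # many distinct sequences
```

Note that nothing in this sketch requires the other branches to be “actual”; it only conditions on which hypothetical observers get to make a report at all.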
if anthropics saves us from ourselves via quantum immortality, that’s a good reason to be less careful, not to stop tossing the coin.
There is no saving; the probability of survival keeps shrinking. Surviving a million coin tosses would be extremely unlikely if “tails” is lethal (so you reason before tossing the first coin), and even if you do happen to survive that long, each subsequent toss still risks killing you, so you shouldn’t keep tossing if a million “heads” turns out to be your observation (you decide this in advance).
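To put rough numbers on this (a quick sketch, assuming an ordinary fair coin): the prior chance of surviving n lethal tosses is (1/2)^n, yet conditional on having survived however many tosses already, the next toss is still a fresh 50/50 risk.

```python
from fractions import Fraction

p_heads = Fraction(1, 2)   # assumed fair coin; "tails" is lethal

# Prior probability of surviving n tosses is (1/2)**n: already about one in a
# million after 20 tosses, astronomically small for a million tosses.
print(float(p_heads ** 20))      # ~9.5e-07

# Survival so far buys no safety: conditional on having survived n tosses, the
# chance of surviving the next one is P(n+1 survivals) / P(n survivals) = 1/2.
n = 1_000_000
p_next = (p_heads ** (n + 1)) / (p_heads ** n)
print(p_next)                    # 1/2, independent of n
```

The conditional calculation is the point of deciding in advance: whatever run of “heads” you happen to have seen, the next toss is exactly as dangerous as the first one.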