My objection is that we only have around 10^21 or so of observed improbability of intelligent civilizations arising per planet to burn off due to the Fermi paradox, while a strong anthropic shadow implies the odds against us reaching this position are vastly worse than that. If you think that abiogenesis is incredibly unlikely, that reduces the pressure to think that there were lots of potential catastrophes that could have wiped out life on Earth.
To get into the situation of “anthropic fragility” we don’t need a very strong anthropic shadow, just a few orders of magnitude. Such an anthropic shadow would give us a life expectancy of hundreds of millions of years instead of billions, so at first glance the situation does not seem very bad.
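As a back-of-the-envelope reading of this claim (using the same logarithmic model spelled out in the car example below, taking roughly 4 billion years as the age of life on Earth; the 10^-3 shadow strength is an illustrative assumption, not a figure from the text):

(4 × 10^9 years) / log2(10^3) ≈ (4 × 10^9 years) / 10 ≈ 4 × 10^8 years of expected future life, i.e. hundreds of millions of years rather than billions.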
However, such a world will be an order of magnitude more fragile to new, unique events which never happened in its past. For example, it could mean that the tipping point for a climate transition is not 40 C above the mean but 4 C, combined with a rise in temperature quick enough to start feedbacks which have never operated in the past.
Imagine a plane from the survivorship-bias picture which has many holes and is approaching its base. The holes don’t destroy its ability to fly. The question is: what are the chances that it will perform a successful landing? Landing is a unique event which has not yet happened to the holed plane. From history (video) we can see that many damaged planes made it safely back to their bases but crashed during landing.
Or you can buy a used car which has run 300K miles. It is a unique survivor for its age. Even the strongest anthropic shadow gives it only around 30K more miles, because future life expectancy is connected to the strength of the anthropic shadow by a logarithmic law, which grows very slowly. But this car is much more fragile than a new car: if you try to perform a stunt with it, it will break into parts. (If the car had a 1 in 1000 chance of surviving to its age, then its “doubling period of death” is 30K miles, since its 300K miles cover 10 halvings; if it had a 1 in 1,000,000 chance, that is 20 halvings, or a 15K-mile period. A 1000-fold increase in the anthropic shadow lowers its life expectancy only 2 times.)
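A minimal sketch of that parenthetical arithmetic, under my reading of the model: surviving to age T against odds p = 2^-n means the hazard halved the survivor population n times, so one halving period is T/n, and roughly one more such period is the expected remaining lifetime. (The function name and numbers are illustrative.)

```python
import math

def expected_remaining(age, survival_prob):
    """Expected remaining lifetime under the 'doubling period of death'
    model: surviving to `age` with probability p = 2^-n means the hazard
    halved the survivor population n times, so one halving period is
    age/n, and roughly one more such period is expected."""
    n = -math.log2(survival_prob)  # number of halvings survived so far
    return age / n

age_miles = 300_000
for p in (1e-3, 1e-6):
    print(f"p = {p:g}: about {expected_remaining(age_miles, p):,.0f} miles left")
# p = 0.001: about 30,103 miles left (10 halvings, ~30K-mile period)
# p = 1e-06: about 15,051 miles left (20 halvings, ~15K-mile period)
```

A thousand-fold drop in the survival odds only halves the expected remaining mileage, which is the slow logarithmic growth the comment points to.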
Perhaps I’m misunderstanding the notion of the anthropic shadow, but it seems like whether it implies anthropic fragility depends strongly on the gears-level explanation of what is causing the anthropic shadow.
For example, a tank might have survived a dozen battles where statistically it ought to have been hit and destroyed many times over, but where it got lucky and was missed each time. In this case the selection effect does not make us think that the tank will perform any differently from another tank in the next battle.
So the question is whether we have a plane with holes that can fly but can’t land, or a tank that got really lucky and is currently fine.
Having said that, it still seems plausible to me that I should view the eight-degree temperature rise in the fossil record as less reassuring than I generally have, due to this sort of argument.
Note: I am aware that this might already be addressed by your text, and I would see that if I closely reread it.
Yes, you are right: fragility depends on the gears level. There is a table in the middle of the text which discusses fragility for eleven different x-risks. For example, there is no fragility for gamma-ray bursts. But there is for false vacuum decay, supervolcanoes, and climate, as we can affect them.