Can you explain how so? This does not seem obvious to me. It seems broadly true, but not broadly useful. (And I’m not really sure what you mean by useful anyway.)
My model of Eliezer says: “You can launch AGI, but only once.”
I think I get it. If you have a big weapon of doom that will ruin everything, it’s not useless; you can use it when you’re absolutely desperate. So options that sound completely stupid are worth looking at when you need a last resort.
Having a scary desperate option, along with clear, publicly-known criteria which will trigger it, can prevent things from deteriorating to the point where you’ll be tempted to use that desperate option. A honeybee will die if it stings you, but it will sting you if it feels too threatened, so people try to avoid antagonizing honeybees, and the bees don’t end up dead because people didn’t antagonize them.
Related: Thomas Schelling’s “The Strategy of Conflict”.
Just because you can do something doesn’t mean the price for doing it is acceptable.
Just because the price for doing something is your own death (or consignment to non-volatile ROM) doesn’t mean the price is unacceptable.