> In general I would expect that raising the cost of an action reduces the likelihood that an (at least partially rational) agent will choose that action.
I think I was explicit about how this works: if someone credibly threatens you by saying “unless you stop using LessWrong forever I will beat you up”, you’re unlikely to cave to this demand even though it raises the cost of using LW for you. Even if you don’t have the power to resist right now, such demands breed resentment and animosity, and if there’s enough accumulation of those you may decide you’d rather take a chance and fight than live according to the bully’s demands.
When both sides are playing your game, both sides try to find ways of imposing ever-increasing costs on each other and making those vary with states of the world in the correct way to incentivize the behavior they want. This is essentially a recipe for disaster in the real world: it’s unlikely that two people, or two countries, interacting in such a way can remain at peace for long.
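A toy model can illustrate the claim (this is my own illustration, not anything from the original discussion, and the parameter values are arbitrary assumptions): suppose each side's resentment accumulates with the costs the other imposes, and conflict breaks out once either side's resentment crosses a breaking point. If both sides keep ratcheting up their punishments, the breaking point arrives far sooner than it does under static costs.

```python
def rounds_until_conflict(escalation, threshold=100.0,
                          initial_cost=1.0, max_rounds=1000):
    """Toy escalation model: two agents impose costs on each other.

    Each round, resentment grows with the cost the other side imposes;
    if `escalation` > 1, both sides raise their punishments every round.
    Returns the round at which either side's resentment exceeds the
    threshold (i.e. it decides to fight), or None if peace holds.
    All numbers here are arbitrary illustrative assumptions.
    """
    cost_a = cost_b = initial_cost
    resent_a = resent_b = 0.0
    for t in range(1, max_rounds + 1):
        resent_a += cost_b   # A resents the cost B imposes on it
        resent_b += cost_a   # and vice versa
        if resent_a > threshold or resent_b > threshold:
            return t         # someone takes a chance and fights
        cost_a *= escalation # both sides raise the stakes
        cost_b *= escalation
    return None              # peace held for the whole horizon

print(rounds_until_conflict(1.5))  # escalating punishments -> 10
print(rounds_until_conflict(1.0))  # static costs -> 101
```

With 50% escalation per round the breakdown comes roughly ten times faster than with constant costs, which is the qualitative point: tying ever-larger punishments to each other's behavior shortens the fuse rather than stabilizing the relationship.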
> Separate point: I am not quite sure where you’re coming from with the comment about how the US could threaten to nuke Moscow, and I think you may have misunderstood my argument. I’m certainly not proposing anything so dangerous. We should be aiming to increase the costs of launching aggressive wars in order to prevent future wars and especially to prevent future nuclear wars. We definitely shouldn’t escalate the current situation in a way that increases the likelihood of nuclear war!
I know you’re not proposing it. The example was meant to illustrate a flaw in your logic: if disincentivizing bad behavior and incentivizing good behavior with punishments and rewards were always a good idea, then there would be nothing wrong with threatening to nuke Moscow in order to remove Putin from power. There is in fact something wrong with it, because you know that this threat can very easily backfire.
What you’re proposing is not qualitatively different from this; it differs only in the extent of the punishment that would be imposed. I think you should be cautious about it for essentially the same reasons.