Let’s give that account a shot.
This seems to me like it’s “action to improve accuracy of target’s map” vs. “action on both map and territory” with the strange case being “action to decrease the accuracy of the target’s map”.
An agent/person is considering whether to take some action. That action will have consequences, and there will also be justifications other than those consequences for taking or not taking it. The agent/person also has a map, which includes both what they believe those consequences would be, and what other reasons exist for taking or not taking that action, and how much weight each of these things should carry.
Suppose we wish to prevent this action from taking place. We have two basic approaches here:
We can act upon their map alone, or we can act upon the territory and use this to change their map.
When we engage in a philosophical or other argument with them, we’re not trying to change what effect the action will have, or change any other considerations. Instead, we are trying to update their map such that they no longer consider the action worthwhile. Perhaps we can get them to adopt new philosophical principles that have this effect. Perhaps we are doing something less abstract, and convincing them that their map of the situation, and of which actions will cause which consequences, is flawed.
When we instead try to incentivize them, we do this by changing the consequences of the action, or altering other considerations (e.g. if someone cares about doing that which is endorsed by some authority, moral or otherwise, without regard to their knowledge or future actions, we could still use that as an incentive). We act upon the territory. We change circumstances such that taking action becomes more expensive, or has undesired consequences. This can include the consequence that we will take actions in response. Alternatively, we can improve the results of not taking action (e.g. bribe them, or promise something, or prevent the bad consequences of inaction by solving the underlying problem, etc etc).
This seems mostly like a clear distinction to me, except there’s a tricky situation where you’re acting upon their map in a way that makes it less rather than more accurate. For example, you might threaten something, but not intend to carry out the threat. Or you might lie about the situation’s details. Did you act to persuade them that the action is not desirable, or did you change their incentives?
One possibility that I’m drawn to is to say there are three things here—persuasion, deception and incentivization. And of course many attempts combine two or three of these.
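To make the three-way split concrete, here is a minimal toy sketch of my own (nothing here comes from the petition discussion itself; the names Agent, World, persuade, incentivize and deceive are all made up for illustration). The agent acts whenever its map says acting beats refraining; persuasion edits the map toward the territory, incentivization edits the territory and then lets the map follow, and deception edits the map away from the territory.

```python
from dataclasses import dataclass


@dataclass
class World:
    """The territory: the actual net value of acting vs. refraining."""
    value_of_acting: float
    value_of_refraining: float


@dataclass
class Agent:
    """The map: what the agent believes those values are."""
    believed_value_of_acting: float
    believed_value_of_refraining: float

    def will_act(self) -> bool:
        # The agent decides using its map, not the territory.
        return self.believed_value_of_acting > self.believed_value_of_refraining


def persuade(agent: Agent, world: World) -> None:
    # Act on the map alone, making it more accurate: show the agent
    # consequences that already exist in the territory.
    agent.believed_value_of_acting = world.value_of_acting
    agent.believed_value_of_refraining = world.value_of_refraining


def incentivize(agent: Agent, world: World, penalty: float) -> None:
    # Act on the territory (attach a real cost to acting), then let the
    # map catch up to the new territory.
    world.value_of_acting -= penalty
    persuade(agent, world)


def deceive(agent: Agent, claimed_penalty: float) -> None:
    # Act on the map alone, making it less accurate: a threat you do not
    # intend to carry out, or a lie about the situation's details.
    agent.believed_value_of_acting -= claimed_penalty


world = World(value_of_acting=1.0, value_of_refraining=0.0)
agent = Agent(believed_value_of_acting=2.0, believed_value_of_refraining=0.0)

print(agent.will_act())   # True: their map says acting is worth it
persuade(agent, world)
print(agent.will_act())   # Still True: with an accurate map, acting still wins
incentivize(agent, world, penalty=3.0)
print(agent.will_act())   # False: the territory changed, and the map followed
```

The only thing separating incentivize from deceive in this toy is whether the world actually changes before the map does, which is the sense in which the insincere threat is the strange middle case.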
On the question of what the right action is:
Is the/a petition persuasion, deception or incentivization? In this case, it seems clearly to be persuasion. The mission is not to generate bad consequences and then inform the target of this threat. There are already bad consequences, both to the target and in general, that we are making more visible to the target. Whether or not we should also be doing incentivization in other ways seems like a distinct question. I’ve chosen yes to at least some extent.
Sometimes someone will want to do or not do something, and we’ll think they’re making a mistake from their perspective, and persuasion should be possible. Other times, we don’t think they’re making a mistake from their perspective, but still prefer that they change their behavior. We all incentivize people from time to time. It seems quite perverse to think we shouldn’t do this in cases where the person is making a mistake! It seems even crazier to not do this when we see someone doing what we believe is the wrong thing because they see incentives (e.g. money or the ability to get clicks) that are compromising what would otherwise be good ethics.
I would strongly agree with your assertion that philosophical questions—and indeed many other questions—should not be settled via majority vote. But does that mean that when the vote is held one must abstain in protest? Do you also believe philosophers should not vote in elections? It’s not like one can issue disclaimers there.
I also am confused by the assertion that the petition would benefit in some way from an “if after consideration you decide we’re wrong, we’ll support you” clause. It does not seem necessary or wise, before one attempts to persuade another that they are wrong, to agree to support their conclusion if they engage in careful consideration. Even when incentives are aligned.
A quick comment on just this part:
I think what this does is separate out the “I think you should update your map” message from the “I am shifting your incentives” message. If I think that someone would prefer the strawberry ice cream to the vanilla ice cream, I can simply offer that information to them, or I can advise them to get the strawberry, and make it a test of our friendship whether they follow my advice, rewarding them if they try it and like it, and punishing them in all other cases.
In cases where you simply want to offer information, and not put any ‘undue’ pressure on their decision-making, it seems good to be able to flag that. The broader question is something like “what pressures are undue, and why?”; you could imagine that there are cases where I want to shift the incentives of others, like telling a would-be bicycle thief that I think they would prefer not stealing the bike to stealing it, and part of that is because I would take actions against them if they did.