Reframing Acausal Trolling as Acausal Patronage
In a previous post, we looked at the Prisoners’ Dilemma through Nicky Case’s lens of “two agents that can each pay a cost, in order to bring a bigger benefit to the other.” Through that lens, PrudentBot is doing something that seems straightforwardly in its own interest: it checks to see if you’ll Cooperate with it, and checks whether that Cooperation is conditional by seeing if you’d Cooperate with DefectBot too. (And if you don’t legibly Defect on DefectBot, then PrudentBot Defects on you.)
TrollBot, on the other hand, does not seem to be acting in its own interests: it checks to see if you’d legibly Cooperate with DefectBot, and if so it Cooperates with you! This sets up exactly the opposite incentive from PrudentBot’s, and this sort of phenomenon precludes any single program from being a dominant choice of delegate: there’s always some delegate that Defects on your favorite delegate but Cooperates with another one.
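The conflicting incentives can be made concrete with a toy sketch. This is my own simplification, not the construction from the Robust Cooperation paper: real modal agents decide by checking *provability* of their opponent’s behavior, whereas these toy bots just run their opponents directly (which only terminates because these particular opponents never inspect each other in a loop). WaryBot is a hypothetical delegate introduced here to show that no delegate collects from both PrudentBot and TrollBot.

```python
# Toy versions of the bots discussed above. Each bot receives its opponent
# (standing in for the opponent's source code) and answers "C" or "D".

def DefectBot(opponent):
    return "D"  # Defects no matter who it faces.

def CooperateBot(opponent):
    return "C"  # Cooperates no matter who it faces.

def PrudentBot(opponent):
    # Cooperate iff the opponent Cooperates with me *and* its Cooperation
    # is conditional, i.e. it Defects against DefectBot.
    if opponent(PrudentBot) == "C" and opponent(DefectBot) == "D":
        return "C"
    return "D"

def TrollBot(opponent):
    # Cooperate iff the opponent would Cooperate with DefectBot.
    return "C" if opponent(DefectBot) == "C" else "D"

def WaryBot(opponent):
    # Hypothetical delegate: Defects exactly on DefectBot, else Cooperates.
    return "D" if opponent is DefectBot else "C"

# The opposite incentives in action: CooperateBot collects from TrollBot
# but not PrudentBot, while WaryBot collects from PrudentBot but not TrollBot.
print(PrudentBot(CooperateBot), TrollBot(CooperateBot))  # D C
print(PrudentBot(WaryBot), TrollBot(WaryBot))            # C D
```

Whatever delegate you favor, some bot out there Defects on it while Cooperating with a different delegate, which is why no single choice dominates.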
Throughout this sequence I’ve been using the Robust Cooperation paper as a model for acausal trade: with just the ability to accurately model each other and the situation each agent finds itself in (logical line-of-sight), modal agents can robustly bring about a win-win outcome even where their individual incentives seem totally misaligned with their collective interests.
Combining those lenses, PrudentBot is like a trading partner that will pay a cost to bring you a bigger benefit, if and only if you’ll reciprocate and your reciprocation is conditional. Whereas TrollBot is like a trading partner that will pay a cost to bring you a bigger benefit, if and only if you’ll pay it forward to a particular third party. Through this lens, TrollBot is acting altruistically to help DefectBot.
Acausal Auctions
For many decisions we make, there are third parties that care what we do. This leads to positive and negative externalities, and in the last post I advocated for creating mechanisms that internalize some but not all of these externalities. (Some actions, like dropping a rocket on someone’s car, create a debt to the affected party that should be enforced. Whereas having a very attractive or very unattractive haircut does not require compensation to flow in either direction.)
When a decision-maker is free to choose among many options without incurring a debt to anyone, it can sometimes still be appropriate for third parties to offer payment in exchange for shifting their decision. Many governments will subsidize the installation of solar power systems or the purchase of lower-emission vehicles, for example. TrollBot can be thought of as DefectBot’s patron, using its resources to improve the treatment that DefectBot receives.
As mentioned previously, PrudentBot and TrollBot offer conflicting incentives about how to treat DefectBot. Each offers resources in exchange for a different action, and we can’t simultaneously collect from both of them. We can summarize the incentives from many interested parties as an auction, where bids take the form “if you take action A, I’ll pay you $X.” Such auctions appear in Project Lawful, and serve as a coordination mechanism that helps align individual incentives with collective interests.
In many cases, such as when the action A is unsafe or deontologically prohibited, the best thing we or our software systems can do is to ignore such bids. But when choosing among many permissible actions, like which vehicle to buy or what sort of work to do, responding to those incentives can often be better for everyone than ignoring them.
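One simple way to respond to such an auction can be sketched as follows. This is my own minimal construction, not a mechanism from the post: each bid says “if you take action A, I’ll pay you X,” and the decision-maker adds the bids on each permissible action to its own valuation before choosing. All names and numbers are illustrative.

```python
from collections import defaultdict

def choose_action(own_value, bids, permissible):
    """Return the permissible action maximizing own value plus outside bids."""
    totals = defaultdict(float)
    for action, amount in bids:
        totals[action] += amount
    return max(permissible, key=lambda a: own_value.get(a, 0.0) + totals[a])

# Illustrative numbers: a vehicle purchase with a subsidy as one bidder.
own_value = {"gas_car": 1000, "electric_car": 600}
bids = [("electric_car", 7500),  # e.g. a government subsidy program
        ("gas_car", 200)]        # some other interested party

# Bids on unsafe or prohibited actions are ignored simply by leaving
# those actions out of the permissible list.
print(choose_action(own_value, bids, ["gas_car", "electric_car"]))
```

Here the outside bids flip the decision: the decision-maker slightly prefers the gas car on its own, but the subsidy makes the electric car the joint best choice, which is the sense in which the auction aligns individual incentives with collective interests.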