Seems like for the Coasean bargain to work, you have to assign liability to someone along the chain of trades leading to the harm. This complicates 5-7 and 10-12.
Though one could say that the gun trade was only one of the chains, and society-parents-school is another chain, for the placement of the shooter rather than the gun. But it seems like they already receive a good deal of punishment, so it’s unclear how meaningfully it can be changed.
Honestly, I expected to get downvoted for my somewhat tongue-in-cheek response, though it does work as a reductio argument.
I think my main point would be that Coase’s theorem is great for profitable actions with externalities, but doesn’t really work for the punishment/elimination of actions that aren’t monetarily incentivized and whose cost is very hard to calculate. The question of blame/responsibility after the fact is almost unrelated to the tax/fee/control decision made before the action is taken.
There’s no bargain involved in a shooting—there’s no profit that can be shared with those hurt by it.
I think my main point would be that Coase’s theorem is great for profitable actions with externalities, but doesn’t really work for the punishment/elimination of actions that aren’t monetarily incentivized and whose cost is very hard to calculate.
This brings up another important point which is that a lot of externalities are impossible to calculate, and therefore such approaches end up fixating on the part that seems calculable without even accounting for (or even noticing) the incalculable part. If the calculable externalities happen to be opposed to larger incalculable externalities, then you can end up worse off than if you had never tried.
As applied to the gun externality question, you could theoretically offer a huge payday to the gun shop that sold the firearm used to stop a spree shooting in progress, but you still need a body to count before paying out. It’s really hard to measure the number of murders which didn’t happen because the guns you sold deterred the attacks. And if we accept the pro 2A arguments that the real advantage of an armed populace is that it prevents tyranny, that’s even harder to put a real number on.
I think this applies well to AI, because absent a scenario where gray goo rearranges everyone into paperclips (in which case everyone pays with their life anyway), a lot of the benefits and harms are likely to be illegible. If AI chatbots end up swaying the next election, what is the dollar value we need to stick on someone? How do we know if it’s even positive or negative, or if it even happened? If we latch onto the one measurable thing, that might not help.
This brings up another important point which is that a lot of externalities are impossible to calculate, and therefore such approaches end up fixating on the part that seems calculable without even accounting for (or even noticing) the incalculable part. If the calculable externalities happen to be opposed to larger incalculable externalities, then you can end up worse off than if you had never tried.
I think this is correct as a conditional statement, but I don’t think one can deduce the unconditional implication that attempting to price some externalities in domains where many externalities are difficult to price is generally bad.
As applied to the gun externality question, you could theoretically offer a huge payday to the gun shop that sold the firearm used to stop a spree shooting in progress, but you still need a body to count before paying out.
The nice feature of positive payments by the government (instead of fines, i.e. negative payments by the government) is that the judgment-proof defendant problem goes away, so there’s no reason to actually make these payments to the gun shop at all: you can just directly pay the person who stops the shooting, which probably provides much better incentives to be a Good Samaritan without the shop trying to pass along this incentive to gun buyers.
I think this applies well to AI, because absent a scenario where gray goo rearranges everyone into paperclips (in which case everyone pays with their life anyway), a lot of the benefits and harms are likely to be illegible. If AI chatbots end up swaying the next election, what is the dollar value we need to stick on someone? How do we know if it’s even positive or negative, or if it even happened? If we latch onto the one measurable thing, that might not help.
I don’t agree that most of the benefits of AI are likely to be illegible. I expect plenty of them to take the form of new consumer products that were not available before, for example. “A lot of the benefits” is a weaker phrasing and I don’t quite know how to interpret it, but I thought it’s worth flagging my disagreement with the adjacent phrasing I used.
I think this is correct as a conditional statement, but I don’t think one can deduce the unconditional implication that attempting to price some externalities in domains where many externalities are difficult to price is generally bad.
It’s not “attempting to price some externalities where many are difficult to price is generally bad”; it’s “attempting to price some externalities where the difficult-to-price externalities lie on the other side is bad”. Sometimes the difficulty of pricing them means it’s hard to know which side they primarily lie on, but not necessarily.
The direction of legible/illegible externalities might be uncorrelated on average, but that doesn’t mean that ignoring the bigger piece of the pie isn’t costly. If I offer “I’ll pay you twenty dollars, and then make up some rumors about you which may or may not be true and may greatly help or greatly harm your social standing”, you don’t think “Well, the difficult part to price is a wash, but twenty dollars is twenty dollars.”
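To make the intuition concrete: even if the illegible part is a zero-mean coin flip, a risk-averse (concave) utility function says the small legible payment doesn’t compensate. This is only a toy sketch; every number below is a made-up assumption, not something from the discussion.

```python
import math

# Toy model: a zero-mean gamble on an illegible good can still be costly
# under a concave (risk-averse) utility function. All numbers hypothetical.
wealth = 1000   # baseline "social standing" in arbitrary units
swing = 900     # rumors add or subtract this much, 50/50
payment = 20    # the legible twenty dollars

def utility(x):
    return math.log(x)  # concave: losses hurt more than equal gains help

u_decline = utility(wealth)
u_accept = 0.5 * utility(wealth + payment + swing) \
         + 0.5 * utility(wealth + payment - swing)

print(u_accept < u_decline)  # → True: the "wash" plus $20 is still a bad deal
```

The point is that the expected dollar value of the offer is positive, yet the expected utility is lower, so treating the unpriceable component as a wash already smuggles in a risk-neutrality assumption.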
you can just directly pay the person who stops the shooting,
You still need a body.
Sure, you can give people like Elisjsha Dicken a bunch of money, but that’s because he actually blasted someone. If we want to pay him $1M per life he saved though, how much do we pay him? We can’t simply go to the morgue and count how many people aren’t there. We have to start making assumptions, modeling the system, and paying out based on our best guesses of what might have happened in what we think to be the relevant hypothetical. Which could totally work here, to be clear, but it’s still a potentially imperfect attempt to price the illegible, and it’s not a coincidence that this was left out of the initial analysis I’m responding to.
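To illustrate how model-dependent such a payout is, here’s a minimal sketch. Every input below (the bounty, the probability the spree continues, the expected further deaths) is a hypothetical assumption; change any of them and the “correct” payment changes with it.

```python
# Hypothetical sketch: a per-life bounty requires a counterfactual model,
# not a body count. All of these numbers are invented assumptions.
bounty_per_life = 1_000_000
p_attack_continues = 0.9       # modeled chance the spree would have gone on
expected_further_deaths = 3.5  # modeled deaths in that counterfactual

lives_saved_estimate = p_attack_continues * expected_further_deaths
payout = bounty_per_life * lives_saved_estimate
print(round(payout))  # 3150000 — entirely an artifact of the model's guesses
```

Nothing in the payout is observed; both factors are guesses about a world that didn’t happen, which is exactly the pricing-the-illegible problem.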
But what about the guy who stopped a shooting before it began, simply by walking around looking like the kind of guy that would stop a spree killer before he accomplished much? What about the good role models in the potential shooter’s life who led him onto the right track and stopped a shooting before it was ever planned? This could be ten times as important and you wouldn’t even know without a lot of very careful analysis. And even then you could be mistaken, and good luck creating enough of a consensus on your program to pay out what you believe to be the appropriate amount to the right people who have no concrete evidence to stand on. It’s just not gonna work.
I don’t agree that most of the benefits of AI are likely to be illegible. I expect plenty of them to take the form of new consumer products that were not available before, for example.
Sure, there’ll be a lot of new consumer products and other legible stuff, but how are you estimating the amount of illegible stuff and determining it to be smaller? That’s the stuff that by definition is going to be harder to recognize, so you can’t just say “all of the stuff I recognize is legible, therefore legible>>illegible”.
For example, what’s the probability that AI changes the outcome of future elections and political trajectory, is it a good or bad change, and what is the dollar value of that compared to the dollar value of ChatGPT?
I do agree with your point but think you are creating a bit of a strawman here. I think the OP’s goal was to present situations in which we need to consider AI liability, and two of those situations would be where Coasean bargaining is possible and where it fails due to the (relatively) judgment-proof actor. I’d also note that the legal trend has been to look for the entity with the deepest pockets that you have some chance of blaming.
So while the gun example is a really poor case to apply Coase to, I’m not sure that really detracts from the underlying point/use of Coasean bargaining with respect to approaches to AI liability or understanding how to look at various cases. I don’t think the claim is that AI liability will be all one type or the other. But I think the ramification here is that trying to define a good, robust AI liability structure is going to be complex and difficult. Perhaps to the point that we shouldn’t really attempt to do so in a legislative setting, but instead through a combination of market risk management (insurance) and courts via tort complaints.
But that also seems to be an approach that will result in a lot of actual harms done as we all figure out where the good equilibrium might be (assuming it even exists).
I think there’s a lot more options than that!
4. the individual clerk who physically handed the weapon to the shooter.
5. the shooter’s biological father, who failed to role-model a non-shooting lifestyle.
6. the school district or administrator who allowed the unstable student to continue attending, and who failed to stop the actual act.
7. (obPython) Society is to blame. Right! We’ll arrest them instead.
8. The manufacturer or seller of the ammunition.
9. the miner of the lead and copper for the bullet (the actual harmful object).
10. The victims, for failing to protect themselves.
11. Media and consumers, for covering and promoting the shooting.
12. The families of the victims, for sending them to that school.
13. The estate of John Moses Browning, for making modern firearms so effective.
Really, there’s enough liability to go around—almost ANY change in the causal chain of the world COULD have prevented that specific tragedy.
14. The gun itself.
I thought that’s where you were going with this!
Relevant smbc.