I think the train of thought here is mostly that people here implicitly have 2 as their main threat model for how things will actually go wrong in practice, but they want legal precedents to be in place before any actual incidents happen, and so they are hoping that increasing the legal risk for companies doing things like 1 and 3 will serve that purpose.
And I think that, while legal liability for cases like 1 is probably good in egregious cases, extending that to cases where there is no intent to harm and no reasonable expectation of harm (like 3) is a terrible idea, and separately, that pushing for 1 won’t significantly help with 2.
That’s also part of a broader pattern of “let’s figure out what outcomes we want from a policy, then say that we should advocate for policies that cause those outcomes, and then either leave the choice of specific policy as an exercise for the reader (basically fine) or suggest a policy that will not accomplish those goals and will also predictably cause a bunch of terrible outcomes (not so fine)”. But I think the idea that the important part is to come up with the intended outcomes of your policy, and that the rest is just unimportant implementation details, is bad, and maybe impactfully so if people are trying to take the Churchill “never let a good crisis go to waste” approach to getting their political agenda implemented (i.e. prepare policy suggestions in advance, then push them really hard once a crisis occurs that plausibly could have been mitigated by your favored policy).
Yeah, after writing that out I really think I need to write a full post here.
“But I think the idea that the important part is to come up with the intended outcomes of your policy, and that the rest is just unimportant implementation details, is bad”
This is the story of a lot of failed policies, especially policies that Goodharted on their goals, and I’m extremely scared of people taking this approach without actually understanding that failure mode.
This is a big flaw of a lot of radical groups, and I see it as a warning sign that your policy proposals aren’t net-positive.
How do you distinguish your Case 1 from ‘impose vast liability on Adobe for making Photoshop’?
The short answer is “foreseeability of harm coming from the tool being used as intended”. Law is not computer code, so, for example, intent and reasonableness matter here.
The long answer should probably be a full post.