[ note: I am not a libertarian, and haven’t been for many years. But I am sympathetic. ]
Like many libertarian ideas, this mixes “ought” and “can” in ways that are a bit hard to follow. It’s pretty well understood that all rights, including the right to redress of harm, are ultimately enforced by violence. In smaller groups, it’s usually social violence and shared beliefs about status. In larger groups, it’s a mix of that and multi-layered resolution procedures, with physical violence only when things go very wrong.
When you say you’d “prefer a world cognizant enough of the risk to be telling AI companies that...”, I’m not sure what that means in practice—the world isn’t cognizant and can’t tell anyone anything. Are you saying you wish these ideas were popular enough that citizens forced governments to do something? Or that you wish AI companies would voluntarily do this without being told? Or something else?