I’d state this in a different way that captures most of the problem: any libertarian proposal for addressing AI x-risk can only deal with outcomes that everyone opposes, not outcomes that merely nearly everyone opposes. Consider, for example, a scenario where 4 AI companies use AI to take over everything; the AI is aligned to them specifically, but does horrible things to the rest of the population.
In essence, it can only deal with technical alignment concerns, not any other risk from AIs.
I have my theories for why technical alignment gets so much focus compared to other risks, but that’s a story for another day.