I agree with most of this, but as a hardline libertarian take on AI risk it is incomplete since it addresses only how to slow down AI capabilities. Another thing you may want a government to do is speed up alignment, for example through government funding of R&D for hopefully safer whole brain emulation. Having arbitration firms, private security companies, and so on enforce proof of insurance (with prediction markets and whichever other economic tools seem appropriate to determine how to set that up) answers how to slow down AI capabilities but doesn’t answer how to fund alignment.
One libertarian take on how to speed up alignment is that:
(1) speeding up alignment / WBE is a regular public-good / positive-externality problem. (I don’t personally see how you do value learning in a non-brute-force way without doing much of the work required for WBE anyway, so I’ll just assume that “funding alignment” means “funding WBE”. This is a problem that can be solved with enough funding; if you don’t think alignment can be solved by raising enough money, no matter how much money and what the money can be spent on, then the rest of this isn’t applicable.)
(2) there are a bunch of ways in which markets fund public goods (for example, many information goods are funded by bundling ads with them) and solve coordination problems involving positive or negative externalities or other market failures. (Any market failure that could in principle be solved by a government passing some piece of legislation can be seen as, or converted into, a public goods problem: if nothing else, the public goods problem of funding the operations of a firm that enforces exactly what that legislation would say. So public goods problems are the only kind of market failure that truly needs to be addressed.)
(3) ultimately, if none of the ways in which markets fund public goods works, it should always still be possible to fall back on Coasean bargaining or some variant of dominant assurance contracts, provided transaction costs can be made low enough (see the sketch after this list)
(4) transaction costs in free markets will be lower, among other reasons because market participants won’t be saddled with horridly inefficient state-run financial and court systems
(5) in free markets, prediction markets, dominant assurance contracts, and other fun economic technologies don’t carry the vaguely-shady, perhaps-illegal status they have in societies with states
(6) if transaction costs cannot be made low enough for the problem to be solved using free markets, it will not be solved using free markets
(7) in that case, it won’t be solved by a government that makes decisions, directly or indirectly, through some kind of voting system either. Getting voters to vote for good governments that do good things (like funding WBE R&D) instead of bad things (like funding wars) is itself an underfunded public good with positive externalities, and the coordination problem voters face involves transaction costs just as great as those faced by potential contributors to a dominant assurance contract (or to a bundle of dominant assurance contracts): the number of parties, the amount of research and communication needed, and so on are just as great and usually greater. This remains true no matter the kind of voting system used, whether futarchy or range voting or quadratic voting or any other attempt at solving relatively minor problems with voting. So using a democratic government to solve a public goods or externality problem effectively just replaces one public goods or externality problem with another that is at least as hard to solve.
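Since dominant assurance contracts do a lot of work in (3) above, here is a minimal sketch of their payoff logic, in the spirit of Tabarrok’s original proposal. Everything in it (the `settle` function, the names, the amounts) is made up for illustration, not a real protocol: if total pledges meet the funding threshold, the good gets funded; if not, everyone is refunded plus a bonus paid by the entrepreneur, which is what turns pledging into a dominant strategy rather than a gamble on what everyone else does.

```python
# Minimal sketch of a dominant assurance contract's settlement logic.
# All names and numbers are hypothetical; this illustrates the mechanism,
# not an implementation of any real funding platform.

def settle(pledges: dict[str, float], threshold: float, failure_bonus: float) -> dict:
    """Settle the contract at its deadline.

    Success: pledges are collected and the public good is funded.
    Failure: every pledger is refunded in full *plus* a failure bonus
    paid out of the entrepreneur's pocket. The bonus is what makes
    pledging (weakly) dominant: you either get the good or get paid.
    """
    total = sum(pledges.values())
    if total >= threshold:
        # Pledges are kept and spent on the good; nothing is returned.
        return {"funded": True, "refunds": {}}
    # Threshold missed: full refund plus bonus for each pledger.
    return {
        "funded": False,
        "refunds": {name: amount + failure_bonus for name, amount in pledges.items()},
    }

# Example: a hypothetical WBE research milestone needing 100 units.
pledges = {"alice": 40.0, "bob": 35.0, "carol": 20.0}
print(settle(pledges, threshold=100.0, failure_bonus=5.0))
# Total pledged is 95 < 100, so the contract fails and each pledger
# gets their pledge back plus the 5-unit bonus.
```

The entrepreneur only profits if the contract succeeds (say, by keeping a fee out of the raised funds), so they are incentivized to offer contracts likely to fund, while contributors are incentivized to pledge regardless of what they expect others to do; that combination is what is supposed to drive coordination costs down relative to ordinary assurance contracts.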
In other words: from a libertarian perspective, it makes really quite a lot of sense (without compromising your libertarian ideals even one iota) to look at the AI developers and say “fucking stop (you are taking far too much risk with everyone else’s lives; this is a form of theft until and unless you can pay all the people whose lives you’re risking, enough to offset the risk)”.
Yes, it makes a lot of sense to say that, but not a lot of sense for a democratic government to be making that assessment and enforcing it (not that democratic governments that currently exist have any interest in doing that). Which I think is why you see some libertarians criticize calls for government-enforced AI slowdowns.