This argument applies to AI safety issues up to the level where a small number of people die. It doesn’t apply to x-risks like a sharp left turn or an escaped self-propagating AI takeover, since if we’re all dead or enslaved, there is nobody left to sue. It doesn’t even apply to catastrophe-level damages like 10 million people dead, where the proportionate damages are much more than the value of the manufacturing company.
I am primarily worried about x-risks. I don’t see that as a problem that has any viable libertarian solutions. So I want there to be AI regulation and a competent regulator, worldwide, stat — backed up if needed by the threat of the US military. I regard putting such a regulatory system in place as vital for the survival of the human race. If the quickest and easiest way to get that done is to start with a regulator intended to deal with penny-ante harms like those discussed above, and then rapidly ramp up its capabilities as the potential harms ramp up, then even if doing that is not the most efficient economic model for AI safety, I’m still very happy to do something economically inefficient and market-distorting as part of a shortcut to trying to avoid the extinction of the human race.
I appreciate your willingness to state your view clearly and directly.
That said, I don’t think that “implement a policy that doesn’t work and has massive downsides on a small scale, with no expectation of working better on a large scale, and then scale it up as fast as you can” is likely to help. In fact, I think it’s likely to make things worse on the x-risk front as well as mundanely, because the x-risk-focused policy people become “those people who proposed that disastrous policy last time.”
Yup, that’s basically right. In the previous post, I mentioned that the goal of this sort of thing for x-risk purposes would be to incentivize AI companies to proactively look for safety problems ahead of time, and to actually hold back deployment and/or fix the problems in a generalizable way when they do come up. They wouldn’t be incentivized anywhere near the efficient level to avoid x-risk, but they’d at least be incentivized to put safety processes with actual teeth in place.