I have kind of a strong opinion in favor of policy intervention because I don’t think it’s optional. I think it’s necessary. My main argument is as follows:
I think we have two options to reduce AI extinction risk:
1) Fix it technically and ethically, without delay (I’ll call the combination of both the ‘tech fix’).
2) Delay AGI until we can work out 1. After the delay, AGI development may or may not resume, depending mainly on the outcome of 1.
If option 1 does not work, and there is a reasonable chance it won’t (it hasn’t worked so far, and we’re not necessarily close to a safe solution), I think option 2 is our only chance to reduce AI X-risk to acceptable levels. However, AI academics and AI corporations are both strongly opposed to option 2, so pursuing it would take a force at least as powerful as those two groups combined. The only such force I can think of is a popular movement. Lobbying and think tanks may help, but corporations will be better funded, so the public interest is not likely to prevail that way. Wonkery could be promising as well. I’m happy to be convinced of alternative options.
If the tech fix works, I’m all for it. But currently, I think the risks are far too big, and the fix may not work at all. Therefore it makes sense to apply the precautionary principle here and start with policy interventions, until it can be demonstrated that the X-risk from AGI has fallen to an acceptable level. As a nice side effect, this should dramatically increase AI Safety funding, since corporate incentives would suddenly favor funding safety first in order to get AGI development permitted again.
I’m aware that this is a strong minority opinion on LW, since:
1) Many people here have an affinity with futurism and would welcome an AGI revolution
2) Many people have backgrounds in AI academia and/or AI corporations, both of which have incentives to continue working on AGI
3) It could be wrong, of course. :) I’m open to arguments that would change the above line of thinking.
So I’m not expecting a host of upvotes, but as rationalists, I’m sure you appreciate the value of dissent as a way to move towards a careful and balanced opinion. I do at least. :)
Want to have a video chat about this? I’d love to. :)
Well sure, why not. I’ll send you a PM.