Perhaps American politics is indeed less rational than European politics, I wouldn’t know. But American politics is more important for influencing AI since the big AI companies are American.
Besides, if you want to get governments involved, raising public awareness is only one way to do that, and not the best way IMO. I think it’s much more effective to do wonkery / think tankery / lobbying / etc. Public movements are only necessary when you have massive organized opposition that needs to be overcome by sheer weight of public opinion. When you don’t have massive organized opposition, and heads are still cool, and there’s still a chance of just straightforwardly convincing people of the merits of your case… best not to risk ruining that lucky situation!
I have kind of a strong opinion in favor of policy intervention because I don’t think it’s optional. I think it’s necessary. My main argument is as follows:
I think we have two options to reduce AI extinction risk:
1) Fix it technically and ethically (I’ll call working out both together the ‘tech fix’), without delaying AGI development.
2) Delay AGI until we can work out option 1. After the delay, AGI development may or may not still be pursued, depending mainly on the outcome of option 1.
If option 1 does not work out, which is a reasonable possibility (it hasn’t worked so far, and we’re not necessarily close to a safe solution), I think option 2 is our only chance to reduce AI X-risk to acceptable levels. However, AI academics and corporations are both strongly opposed to option 2. It would therefore take a force at least as powerful as those two groups combined to pursue this option anyway. The only such force I can think of is a popular movement. Lobbying and think tanking may help, but corporations will be better funded, so the public interest is not likely to prevail. Wonkery could be promising as well. I’m happy to be convinced of other alternatives.
If the tech fix works, I’m all for it. But currently, I think the risks are far too big, and the fix may not work at all. Therefore I think it makes sense to apply the precautionary principle here and start with policy interventions, until it can be demonstrated that AGI X-risk has fallen to an acceptable level. As a nice side effect, this should dramatically increase AI Safety funding, since corporate incentives would suddenly favor funding safety first in order to be allowed to build AGI.
I’m aware that this is a strong minority opinion on LW, since:
1) Many people here have an affinity with futurism and would love an AGI revolution
2) Many people have backgrounds in AI academia and/or AI corporations, both of which have incentives to continue working on AGI
3) It could be wrong of course. :) I’m open to arguments that would change the above line of thinking.
So I’m not expecting a host of upvotes, but as rationalists, I’m sure you appreciate the value of dissent as a way to move towards a careful and balanced opinion. I do at least. :)
I wouldn’t say less rational, but more partisan, yes. But you’re right, I guess, that European politics is less important in this case. Also don’t forget Chinese politics, which has entirely different dynamics of course.
I think you have a good point as well that wonkery, think tankery, and lobbying are also promising options. I think they, and starting a movement, should be on a little list of policy intervention options. I think each will have its own merits and issues. But still, we should have a group of people actually starting to work on this, whatever the optimal path turns out to be.
Want to have a video chat about this? I’d love to. :)
Well sure, why not. I’ll send you a PM.