I disagree. I would be surprised if they haven’t brainstormed such a list at least once. And just because you don’t see them doing any concrete action doesn’t mean they aren’t—they just might not be doing anything super public yet.
Don’t get me wrong, I think institutes like FHI are doing very useful research. I think there should be a lot more of them, at many different universities. I just think what’s missing in the whole X-risk scene is a way to take things out of this still fairly marginal scene and into the mainstream. As long as the mainstream is not convinced that this is an actual problem, safety efforts will always lag enormously behind mainstream AI efforts, with predictable results.
Maybe. But I actually currently think that the longer these issues stay out of the mainstream, the better. Mainstream political discourse is so corrupted; once something becomes politicized, it gets harder for anything to be done about it and a LOT harder for the truth to win out. You don’t see nuanced, balancing-risks-and-benefits solutions come out of politicized debates. Instead you see two one-sided, extreme agendas bashing on each other, and then occasionally one of them wins.
(That said, now that I put it that way, maybe that’s what we want for AI risk—but only if we get to dictate the content of one of the extreme agendas and only if we are likely to win. Those are two very big ifs.)
It’s funny, I’ve heard that opinion a number of times before, mostly from Americans. Maybe it has to do with your two-party flavor of democracy. I think Americans are also much more skeptical of the state in general: you tend to look to companies to solve problems, while Europeans tend to look to states (generalizing, of course). In the Netherlands we have a host of parties, and although there are still a lot of pointless debates, I wouldn’t say it’s nearly as bad as what you describe. I can’t imagine e.g. climate change being solved without state intervention (the situation here now is that the left is calling for renewables and the right for nuclear, which is not so bad).
For AI Safety, even in a two-party system, the situation now is that both parties implicitly treat AI Safety as a non-issue (probably because they have never heard of it, or at least have never given it serious thought). After politicization, in the worst case, at least one of the parties will think it’s a serious issue. That would mean that roughly 50% of the time, when party #1 wins, we get a fair chance of meaningful intervention: appropriate funding, hopefully helpful regulation efforts (that’s our responsibility too; we can put good regulation proposals out there), and even cooperation with other countries. If party #2 wins, there will perhaps be zero effort, or some withdrawal. I would say this 50% solution easily beats the 0% solution we have now. In a multi-party system such as ours, the outcome could be even better.
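The expected-value reasoning above can be sketched numerically. Note that the probabilities here are illustrative assumptions for the sake of the argument, not estimates from any source, and the sketch deliberately mirrors the comment’s own simplification that the losing-party case costs nothing:

```python
# Toy expected-value comparison for the politicization argument.
# All numbers are illustrative assumptions, not real estimates.

p_party1_wins = 0.5           # two-party system: each side wins roughly half the time
p_acts_if_wins = 1.0          # assume the convinced party actually intervenes in power
downside_if_party2_wins = 0.0 # the argument assumes zero effort, not negative effort

# Status quo: neither party treats AI Safety as an issue, so no intervention.
ev_status_quo = 0.0

# After politicization (worst case): exactly one party takes the issue seriously.
ev_politicized = (p_party1_wins * p_acts_if_wins
                  + (1 - p_party1_wins) * downside_if_party2_wins)

print(ev_politicized)  # 0.5: a coin-flip chance of intervention beats none at all
```

The comparison only comes out this cleanly because the downside term is assumed to be zero; if politicization could provoke actively harmful backlash, that term would need to be negative and the conclusion would be less obvious.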
I think we should prioritize getting the issue out there. The way I see it, it’s the only hope for state intervention, which is badly needed.
Perhaps American politics is indeed less rational than European politics; I wouldn’t know. But American politics is more important for influencing AI, since the big AI companies are American.
Besides, if you want to get governments involved, raising public awareness is only one way to do that, and not the best way IMO. I think it’s much more effective to do wonkery / think tankery / lobbying / etc. Public movements are only necessary when you have massive organized opposition that needs to be overcome by sheer weight of public opinion. When you don’t have massive organized opposition, and heads are still cool, and there’s still a chance of just straightforwardly convincing people of the merits of your case… best not to risk ruining that lucky situation!
I have kind of a strong opinion in favor of policy intervention because I don’t think it’s optional. I think it’s necessary. My main argument is as follows:
I think we have two options to reduce AI extinction risk:
1) Fix it technically and ethically (I’ll call the combination of both the ‘tech fix’), without delaying AGI development.
2) Delay AGI development until we can work out option 1. After the delay, AGI may or may not still be pursued, depending mainly on the outcome of option 1.
If option 1 does not work, and there is a reasonable chance it won’t (it hasn’t worked so far, and we’re not necessarily close to a safe solution), I think option 2 is our only chance to reduce AI X-risk to acceptable levels. However, AI academics and corporations are both strongly opposed to option 2. It would therefore take a force at least as powerful as those two groups combined to pursue this option anyway. The only such force I can think of is a popular movement. Lobbying and think tanking may help, but corporations will be better funded, so the public interest is not likely to prevail. Wonkery could be promising as well. I’m happy to be convinced of other options.
If the tech fix works, I’m all for it. But currently, I think the risks are way too big and it may not work at all. Therefore I think it makes sense to apply the precautionary principle here and start with policy interventions, until it can be demonstrated that the X-risk from AGI has fallen to an acceptable level. As a nice side effect, this should dramatically increase AI Safety funding, since corporate incentives would suddenly be to fund safety first in order to get AGI permitted.
I’m aware that this is a strong minority opinion on LW, since:
1) Many people here have an affinity for futurism and would love an AGI revolution.
2) Many people have backgrounds in AI academia and/or AI corporations, both of which have incentives to continue working on AGI.
3) It could be wrong, of course. :) I’m open to arguments that would change the above line of thinking.
So I’m not expecting a host of upvotes, but as rationalists, I’m sure you appreciate the value of dissent as a way to move toward a careful and balanced opinion. I do, at least. :)
I wouldn’t say less rational, but more two-party, yes. But you’re right, I guess, that European politics is less important in this case. Also, don’t forget Chinese politics, which has entirely different dynamics, of course.
I think you make a good point that wonkery, think tankery, and lobbying are also promising options. They, along with starting a movement, should be on a shortlist of policy intervention options. Each will have its own merits and issues. But still, we should have a group of people actually starting to work on this, whatever the optimal path turns out to be.
Want to have a video chat about this? I’d love to. :)
Well sure, why not. I’ll send you a PM.