So imagine my surprise when I informally learn that this sort of thinking is taboo.
It’s not taboo. I’ve been discussing whether we should do this with various people off and on for the past five years. People take these ideas seriously. Just because people don’t agree, or don’t take it seriously enough, doesn’t mean it’s taboo!
FWIW I think it’s a good idea too (even though for years I argued against it!). I think it should be done by a well-coordinated group of people who did lots of thinking and planning beforehand (and probably coordinating with the broader community), rather than by lone wolves (for unilateralist’s curse reasons).
It seems “taboo” to me. Like, when I go to think about this, I feel … inhibited in some not-very-verbal, not-very-explicit way. Kinda like how I feel if I imagine asking an inane question of a stranger without a socially sensible excuse, or how I felt when a clerk asked me why I was buying so many canned goods very early in Covid.
I think we are partly seeing the echoes of a social flinch here, somehow. It bears examining!
Open tolerance of the people involved with the status quo, and fear of alienating / making enemies of powerful groups, is a core part of current EA culture! Steve’s top comment on this post is an example of enforcing/reiterating this norm.
It’s an unwritten rule that seems very strongly enforced yet never really explicitly acknowledged, much less discussed. People were shadow blacklisted by CEA from the Covid documentary they funded for being too disrespectful in their speech re: how governments have handled covid. That fits what I’d consider a taboo, something any socially savvy person would pick up on and internalize if they were around it.
Maybe this norm of open tolerance is downstream of the implications of truly considering some people to be your adversaries (which you might do if you thought delaying AI development by even an hour was a considerable moral victory, as the OP seems to). Doing so does expose you to danger. I would point out that lc’s post analogizes our relationship with AI researchers to Israel’s relationship with Iran, and when I think of Israel’s resistance to Iran, nonviolence is not the first thing that comes to mind.
“People were shadow blacklisted by CEA from the Covid documentary they funded for being too disrespectful in their speech re: how governments have handled covid.”
???
I agree. I also think this is a topic that needs to be seriously considered and discussed, because not doing so may leave behind a hidden hindrance to accurate collective assessment and planning for AI risks. Contrary to our conceits and aspirations, our judgements aren’t at all immune to the sway of biases, flawed assumptions, and human emotions. I’m not sure how to put this, but people on this forum don’t come off as very worldly, if that makes sense. A lot of people are in technical professions where understanding of political realities seems to be lacking. The US and China stand to be the two major drivers of AI development in the coming decades. Increasingly they don’t see eye to eye, and an arms-race dynamic might develop. So I feel there’s been a lot of focus on the technical/theoretical side of things, but not enough concern over the practical side of development, the geopolitical implications, and all that might entail.
FYI, I thought this sort of idea was an obvious one, and I’ve been continually surprised that it hasn’t had more discussion. I don’t feel inhibited, and am sort of surprised you are.
(I do think there are a lot of ways to do this badly, with costs to the overall coordination-commons, so maybe I feel somewhat inhibited from actually going off to do the thing. But I don’t feel inhibited from brainstorming potential ways to address the costs and thinking about how to do it.)
(kinda intrigued by the notion of there being dark-matter taboos)
I feel similarly.
OK! Well, I can’t speak for everyone’s experiences, only my own. I don’t think this subject should be taboo and I’m glad people are talking more about it now.
I also find it somewhat taboo but not so much that I haven’t wondered about it.
You’re right that it’s not actually taboo. I shouldn’t have used that word. It’s just that so far I’ve gotten a lot of weird resistance when I’ve brought it up.
My experience agrees with yours. I thought “taboo” was a bit strong, but I immediately got what you meant and nodded along.
Daniel, why did you argue against it and what changed your mind?
The main argument I made was: It’s already very hard to get governments and industry to meaningfully limit fossil fuel emissions, even though the relevant scientists (climatologists etc.) are near-unanimous about the long-term negative consequences. Imagine how much harder it would be if there wasn’t a separate field of climatology, and instead the only acknowledged experts on the long-term effects of fossil fuels were… petroleum industry engineers and executives! That’s the situation with AGI risk. We could create a separate field of AI risk studies, but even better would be to convince the people in the industry to take the risk seriously. Then our position would be *better* than the situation with climate change, not worse. How do we do this? Well, we do this by *not antagonizing the industry*. So don’t call for bans, at least not yet.
The two arguments that changed my mind:
(a) We are running out of time. My timelines dropped from median 2040ish to median 2030ish.
(b) I had assumed that banning or slowing down AGI research would be easier the closer we get to AGI, because people would “wake up” to the danger after seeing compelling demonstrations and warning shots etc. However I am now unsure; there’s plausibly a “whirlpool effect” where the closer you get to AGI the more everyone starts racing towards it and the harder it is to stop. Maybe the easiest time to ban it or slow it down is 10 years, even 20 years, before takeoff. (Compare to genetically engineered superbabies. Research into making them was restricted and slowed down decades before it became possible to do it, as far as I can tell.)
Ah, OK, that all sounds pretty sensible. I think ‘this sort of thinking’ is doing a lot of work here. I agree that sponsoring mobs to picket DeepMind HQ is just silly and will probably make things harder. I think buttonholing DeepMind people and government people and trying to convince them of the dangers of what they’re doing is something we should have been doing all along.
I got the impression that the late Dominic Cummings was on our side just organically, without anyone needing to persuade him; in fact I seem to remember Boris Johnson saying something silly about terminators just before the covid kerfuffle broke out. It may not be too hard to convince people.
If at the very least we can get people to only do this sort of thing in secret government labs, run by people who know perfectly well that they’re messing with world-ending weapons in defiance of international treaties, that’s a start. Not that that will save us, but it might be slower than the current headlong rush to doom. If things go really well, we might hang on long enough to experience grey goo or a deliberately engineered pandemic!
Half the problem is that the people who actually do AI research seem divided as to whether there’s any danger. If we can’t convince our own kind, convincing politicians and the like is going to be hard indeed.
All I’ve got is ill-formed intuitions. I read one science-fiction story about postage stamps ten years ago and became a doomer on the spot. I think maths and computer people are unusually easy to convince with clear, true arguments, and if we come up with some, we might find getting most of the industry on side easier than everyone seems to think.
Have we actually tried to express our arguments in some convincing way? I’m thinking it’s not actually a very complicated argument, and most of the objections people come up with on the spot are easy enough to counter convincingly. Some sort of one-page main argument, with an FAQ for frequently thought-of objections, might win most of the battle. I don’t suppose you happen to know of one already constructed, do you?
As you point out, and I agree, if we don’t win this battle then the world just suddenly ends in about ten years’ time, so we should probably have a pop at easy routes to victory if they’re available and don’t have any obvious downsides.
Did he die? If so, it’s not in the news. (I mean, I did a quick search and didn’t find it.)
Oh god, sorry, I just can’t stop myself. I mean, his political reputation is shredded beyond hope of repair: loathed by the people of the UK in the same way that Tony Blair is, and seen as brilliant but disloyal in the same way that the guy in Mad Men is after he turns on the tobacco people.
We may be touching on the mind-killer here. Let us speak of such things no further.
Dominic Cummings lives, a prosperous gentleman.
Dominic Cummings is not dead, and I should remember that my ironic flourishes are likely to be taken literally because other people on the internet don’t have the shared context that I would have if I was sounding off in the pub.
Thanks for the clarification!
No, I think John is saying he died politically; that is, he no longer holds power. This is definitely overstated (he might get power in the future) and confusing.