I think it’s misleading to use terms like “us”. AGI will harm some humans, almost no matter what. AGI MAY harm all humans, current and future. It may also vastly increase flourishing of humans (and other mind types, including AGI itself). This is true of most significant technologies, but even more so with AGI, which is likely to have much broader impact than most things.
I think that, exactly because of that breadth of impact, it’s going to be pursued by many people and organizations with different goals, over whom you or I might exert different kinds and degrees of influence. That diversity makes the pursuit of AGI very resilient, and unlikely to be significantly slowed by our actions.
That goes under Daniel’s point 2 I guess?
Not to speak for Dagon, but I think point 2 as you write it is way, way too narrow and optimistic. Saying “it would be rather difficult to get useful regulation” is sort of like saying “it would be rather difficult to invent time travel”.
I mean, yes, it would be incredibly hard, way beyond “rather difficult”, and maybe into “flat-out impossible”, to get any given government to put useful regulations in place… assuming anybody could present a workable approach to begin with.
It’s not a matter of going to a government and making an argument. For one thing, a government isn’t really a unitary thing. You go to some *part* of a government, fight to even get your issue noticed. Then you compete with all the other people who have opinions. Some of them will directly oppose your objectives. Others will suggest different approaches, leading to delays in hashing out those differences, and possibly to compromises that are far less effective than any of the sides’ “pure” proposals.
Then you get to take whatever you hashed out with the part of the government you’ve started dealing with, and sell it in all of the other parts of that government and the people who’ve been lobbying them. In the process, you find out about a lot of oxen you propose to gore that you didn’t even imagine existed.
In big countries, people often spend whole careers in politics, arguing, fighting, building relationships, doing deals… to get even compromised, watered-down versions of the policies they came in looking for.
But that’s just the start. You have to get many governments, possibly almost all governments, to put in similar or at least compatible regulations… bearing in mind that they don’t trust each other, and are often trying either to compete with each other, or to advantage their citizens in competing with each other. Even that formulation is badly oversimplified, because governments aren’t the only players.
You also have to get them to apply those regulations to themselves, which is hard because they will basically all believe that the other governments are cheating, and probably that the private sector is also cheating… and they will probably be right about that. And of course it’s very easy for any kind of leader to kid themselves that their experts are too smart to blow it, whereas the other guys will probably destroy the world if they get there first.
Which brings you to compliance, whether voluntary or coerced, inside and outside of governments. People break laws and regulations all the time. It’s relatively easy to enforce compliance if what you’re trying to stamp out is necessarily large-scale and conspicuous… but not all dangerous AI activity necessarily has to be that way. And nowadays you can coordinate a pretty large project in a way that’s awfully hard to shut down.
Then there’s the blowback. There’s a risk of provoking arms races. If there are restrictions, players have incentives to move faster if they think the other players are cheating and getting ahead… but they also have incentives to move if they think the other players are not cheating, and can therefore be attacked and dominated. If a lot of the work is driven into secrecy, or even if people just think there might be secret work, then there are lots of chances for people to think both of those things… with uncertainty to make them nervous.
… and, by the way, by creating secrecy, you’ve reduced the chance of somebody saying “Ahem, old chaps, have you happened to notice that this seemingly innocuous part of your plan will destroy the world”? Of course, the more risk-averse players may think of things like that themselves, but that just means that the least risk-averse players become more likely first movers. Probably not what you wanted.
Meanwhile, resources you could be using to win hearts and minds, or to come up with technical approaches, end up tied up arguing for regulation, enforcing regulation, and complying with regulation.
… and the substance of the rules isn’t easy, either. Even getting a rough, vague consensus on what’s “safe enough” would be really hard, especially if the consensus had to be close enough to “right” to actually be useful. And you might not be able to make much progress on safety without simultaneously getting closer to AGI. For that matter, you may not be able to define “AGI” as well as you might like… nor know when you’re about to create it by accident, perhaps as a side effect of your safety research. So it’s not as simple as “We won’t do this until we know how to do it safely”. How can you formulate rules to deal with that?
I don’t mean to say that laws or regulations have no place, and still less do I mean to say that not-doing-bloody-stupid-things has no place. They do have a place.
But it’s very easy, and very seductive, to oversimplify the problem, and think of regulation as a magic wand. It’s nice to dream that you can just pass a law, and this or that will go away, but you don’t often get that lucky.
“Relinquish this until it’s safe” is a nice slogan, but hard to actually pin down into a real, implementable set of rules. Still more seductive, and probably more dangerous, is the idea that, once you do come up with some optimal set of rules, there’s actually some “we” out there that can easily adopt them, or effectively enforce them. You can do that with some rules in some circumstances, but you can’t do it with just any rules under just any circumstances. And complete relinquishment is probably not one you can do.
In fact, I’ve been in or near this particular debate since the 1990s, and I have found that the question “Should we do X” is a pretty reliable danger flag. Phrasing things that way invites the mind to think of the whole world, or at least some mythical set of “good guys”, as some kind of unit with a single will, and that’s just not how people work. There is no “we” or “us”, so it’s dangerous to think about “us” doing anything. It can be dangerous to talk about any large entity, even a government or corporation, as though it had a coordinated will… and still more so for an undefined “we”.
The word “safe” is also a scary word.
This is a much more thorough and eloquent statement echoing what I was articulating in my comment above. I fully endorse it.
I appreciate the effort you took in writing a detailed response. There’s one thing you say in which I’m particularly interested, for personal reasons. You say ‘I’ve been in or near this debate since the 1990s’. That suggests there are many people with my opinion. Who? I would honestly love to know, because frankly it feels lonely. All the people I’ve met, so far without a single exception, are either not afraid of AI existential risk at all, or believe in a tech fix and are against regulation. I don’t believe in the tech fix, because as an engineer, I’ve seen how much of engineering is trial and error (and science even more so). People have ideas, try them, it goes boom, and then they try something else. Until they get there. If we do that with AGI, I think it’s sure to go wrong. That’s why I think at least some kind of policy intervention is mandatory, not optional. And yes, it will be hard. But no argument I’ve heard so far has convinced me that it’s impossible. Or that it’s counterproductive.
I think we should first answer the question: would postponement until safety be a good idea if it were implementable? What’s your opinion on that one?
Also, I’m serious: who else is on my side of this debate? You would really help me personally to let me talk to them, if they exist.
Nick Bostrom comes to mind as at least having a similar approach. And it’s not like he’s without allies, even in places like Less Wrong.
… and, Jeez, back when I was paying more attention, it seemed like some kind of regulation, or at least some kind of organized restriction, was the first thing a lot of people would suggest when they learned about the risks. Especially people who weren’t “into” the technology itself.
I was hanging around the Foresight Institute. People in that orbit were split about 50-50 between worrying most about AI and worrying most about nanotech… but the two issues weren’t all that different when it came to broad precautionary strategies. The prevailing theory was roughly that the two came as a package anyway; if you got hardcore AI, it would invent nanotech, and if you got nanotech, it would give you enough computing power to brute-force AI. Sometimes “nanotech” was even taken as shorthand for “AI, nanotech, and anything else that could get really hairy”… vaguely what people would now follow Bostrom and call “X-risk”. So you might find some kindred spirits by looking in old “nanotech” discussions.
There always seemed to be plenty of people who’d take various regulate-and-delay positions in bull sessions like this one, both online and offline, with differing degrees of consistency or commitment. I can’t remember names; it’s been ages.
The whole “outside” world also seemed very pro-regulation. It felt like about every 15 minutes, you’d see an op-ed in the “outside world” press, or even a book, advocating for a “simple precautionary approach”, where “we” would hold off either as you propose, until some safety criteria were met, or even permanently. There were, and I think still are, people who think you can just permanently outlaw something like AGI, and that will somehow actually make it never happen. This really scared me.
I think the word “relinquishment” came from Bill McKibben, who, as I recall, was, and for all I know may still be, a permanent relinquishist, at least for nanotech. Somebody else had a small organization and phrased things in terms of the “precautionary principle”. I don’t remember who that was. I do remember that their particular formulation of the precautionary principle was really sloppy, and would have amounted to nobody ever being allowed to do anything at all under any circumstances.
There were, of course, plenty of serious risk-ignorers and risk-glosser-overs in that Foresight community. They probably dominated in many ways, even though Foresight itself definitely had a major anti-risk mission component. For example, an early, less debugged version of Eliezer Yudkowsky was around. I think, at least when I first showed up, he still held just-blast-ahead opinions that he has, shall we say, repudiated rather strongly nowadays. Even then, though, he was cautious and level-headed compared to a lot of the people you’d run into. I don’t want to make it sound like everybody was trying to stomp on the brakes or even touch the brakes.
The most precautionary types in that community probably felt pretty beleaguered, and the most “regulatory” types even more so. But you could definitely still find regulation proponents, even among the formal leadership.
However, it still seems to me that ideas vaguely like yours, while not uncommon, were often “loyal opposition”, or brought in by newcomers… or they were things you might hear from the “barbarians at the gates”. A lot of them seemed to come from environmentalist discourse. On the bull-session level, I remember spending days arguing about it on some Greenpeace forum.
So maybe your problem is that your personal “bubble” is more anti-regulation than you are? I mean, you’re hanging out on Less Wrong, and people on Less Wrong, like the people around Foresight, definitely tend to have certain viewpoints… including a general pro-technology bias, an urge to shake up the world, and often extremely, even dogmatically anti-regulation political views. If you looked outside, you might find more people who think the way you do. You could look at environmentalism generally, or even at “mainstream” politics.
Thanks for that comment! I didn’t know of Bill McKibben, but I read up on his 2019 book ‘Falter: Has the Human Game Begun to Play Itself Out?’ I’ll post a review as a post later. I appreciate your description of what the scene was like back in the 90s or so; that’s really insightful. Also interesting to read about nanotech; I never knew these concerns were historically so coupled.
But having read McKibben’s book, I still can’t find others on my side of the debate. McKibben is indeed the first one I know of who both recognizes AGI danger and does not believe in a tech fix, or at least does not consider this a good outcome. However, I would expect him to cite others on his side of the debate. Instead, in the sections on AGI, he cites people like Bostrom and Omohundro, who are not postponists in any way. Therefore I’m still guessing at this moment that a ‘postponement side’ of this debate is now absent, and it’s just that McKibben happened to know Kurzweil, who got him personally concerned about AGI risk. If that’s not true and there are more voices out there exploring AGI postponement options, I’d still be happy to hear about it. Also, if you could find links to old discussions, I’m interested!
The key point that I think you’re missing here is that evaluating whether such a policy “should” be implemented necessarily depends on how it would be implemented.
We could in theory try to kill all AI researchers (or just go whole hog and try to kill all software engineers, better safe than sorry /s). But then of course we need to think about the side effects of such a program, ya know, like them running and hiding in other countries and dedicating their lives to fighting back against the countries that are hunting them. Or whatever.
That’s just one example, and I use it because it might be the only tractable way to stop this form of tech progress: literally wiping out the knowledge base.
I do not endorse this idea, by the way.
I’m just trying to show that your reaction to “should we” depends hugely on “how.”
We could in theory try to kill all AI researchers (or just go whole hog and try to kill all software engineers, better safe than sorry /s).
I think this is a good way of putting it. Many people in the debate refer to “regulation”. But in practice, regulation is not very effective for weaponry. If you look at how the international community handles dangerous weapons like nuclear weapons, there are many cases of assassinations, bombings, and war in order to prevent the spread of nuclear weapons. This is what it would look like if the world were convinced that AI research was an existential threat—a world where work on AI happens in secret, in private military programs, with governments making the decisions, and participants risking their lives. Probably the US and China would race to be the first to achieve AGI dominance, gambling that they would be able to control the software they produced.