You say ‘I’ve been in or near this debate since the 1990s’. That suggests there are many people with my opinion. Who?
Nick Bostrom comes to mind as at least having a similar approach. And it’s not like he’s without allies, even in places like Less Wrong.
… and, Jeez, back when I was paying more attention, it seemed like some kind of regulation, or at least some kind of organized restriction, was the first thing a lot of people would suggest when they learned about the risks. Especially people who weren’t “into” the technology itself.
I was hanging around the Foresight Institute. People in that orbit were split about 50-50 between worrying most about AI and worrying most about nanotech… but the two issues weren’t all that different when it came to broad precautionary strategies. The prevailing theory was roughly that the two came as a package anyway; if you got hardcore AI, it would invent nanotech, and if you got nanotech, it would give you enough computing power to brute-force AI. Sometimes “nanotech” was even taken as shorthand for “AI, nanotech, and anything else that could get really hairy”… vaguely what people would now follow Bostrom and call “X-risk”. So you might find some kindred spirits by looking in old “nanotech” discussions.
There always seemed to be plenty of people who’d take various regulate-and-delay positions in bull sessions like this one, both online and offline, with differing degrees of consistency or commitment. I can’t remember names; it’s been ages.
The whole “outside” world also seemed very pro-regulation. It felt like about every 15 minutes, you’d see an op-ed in the “outside world” press, or even a book, advocating for a “simple precautionary approach”, where “we” would hold off either as you propose, until some safety criteria were met, or even permanently. There were, and I think still are, people who think you can just permanently outlaw something like AGI, and that will somehow actually make it never happen. This really scared me.
I think the word “relinquishment” came from Bill McKibben, who, as I recall, was, and for all I know may still be, a permanent relinquishist, at least for nanotech. Somebody else had a small organization and phrased things in terms of the “precautionary principle”. I don’t remember who that was. I do remember that their particular formulation of the precautionary principle was really sloppy and would have amounted to nobody ever being allowed to do anything at all under any circumstances.
There were, of course, plenty of serious risk-ignorers and risk-glosser-overs in that Foresight community. They probably dominated in many ways, even though Foresight itself definitely had a major anti-risk mission component. For example, an early, less debugged version of Eliezer Yudkowsky was around. I think, at least when I first showed up, he still held just-blast-ahead opinions that he has, shall we say, repudiated rather strongly nowadays. Even then, though, he was cautious and level-headed compared to a lot of the people you’d run into. I don’t want to make it sound like everybody was trying to stomp on the brakes or even touch the brakes.
The most precautionary types in that community probably felt pretty beleaguered, and the most “regulatory” types even more so. But you could definitely still find regulation proponents, even among the formal leadership.
However, it still seems to me that ideas vaguely like yours, while not uncommon, were often “loyal opposition”, or brought in by newcomers… or they were things you might hear from the “barbarians at the gates”. A lot of them seemed to come from environmentalist discourse. On the bull-session level, I remember spending days arguing about it on some Greenpeace forum.
So maybe your problem is that your personal “bubble” is more anti-regulation than you are? I mean, you’re hanging out on Less Wrong, and people on Less Wrong, like the people around Foresight, definitely tend to have certain viewpoints… including a general pro-technology bias, an urge to shake up the world, and often extremely, even dogmatically anti-regulation political views. If you looked outside, you might find more people who think the way you do. You could look at environmentalism generally, or even at “mainstream” politics.
Thanks for that comment! I didn’t know of Bill McKibben, but I read up on his 2019 book ‘Falter: Has the Human Game Begun to Play Itself Out?’ I’ll post a review later. I appreciate your description of what the scene was like back in the ’90s or so; that’s really insightful. It’s also interesting to read about nanotech — I never knew these concerns were historically so coupled.
But having read McKibben’s book, I still can’t find others on my side of the debate. McKibben is indeed the first one I know of who both recognizes AGI danger and does not believe in a tech fix, or at least does not consider that a good outcome. However, I would expect him to cite others on his side of the debate. Instead, in the sections on AGI, he cites people like Bostrom and Omohundro, who are not postponists in any way. Therefore I’m still guessing at this moment that a ‘postponement side’ of this debate is now absent, and that McKibben just happened to know Kurzweil, who got him personally concerned about AGI risk. If that’s not true and there are more voices out there exploring AGI postponement options, I’d still be happy to hear about it. Also, if you could find links to old discussions, I’m interested!