The quote from Schmidhuber literally says nothing about human extinction being good.
I’m disappointed that Critch glosses it that way, because in the past he has been more level-headed than many, but he’s wrong.
The quote is:
“Don’t think of us versus them: us, the humans, v these future super robots. Think of yourself, and humanity in general, as a small stepping stone, not the last one, on the path of the universe towards more and more unfathomable complexity. Be content with that little role in the grand scheme of things.” As for the near future, our old motto still applies: “Our AI is making human lives longer & healthier & easier.”
Humans not being the “last stepping stone” towards greater complexity does not imply that we’ll go extinct. I’d be happy to live in a world where there are things more complex than humans. Like it’s not a weird interpretation at all—“AI will be more complex than humans” or “Humans are not the final form of complexity in the universe” simply says nothing at all about “humans will go extinct.”
You could spin it into that meaning if you tried really hard. But, for instance, the statement could also be about how AI will do science better than humans in the future, which was (astonishingly) the substance of the talk in which the statement was made, and also what Schmidhuber has been on about for years, so that is probably what he’s actually talking about.
I note that you say, in your section on tribalism:
Accelerationists mostly got busy equating anyone who thinks smarter than human AIs might pose a danger to terrorists and cultists and crazies. The worst forms of ad hominem and gaslighting via power were on display.
It would be great if people were a tad more hesitant to accuse others of wanting omnicide.
Well, I do agree that two steps are needed to get from the quote to the claim that it supports omnicide.
Step 1. You have to also think that things smarter (better at science) and more complex than humans will become more powerful than humans, and somehow end up in control of the destiny of the universe.
Step 2. You have to think that humans losing control in this way will be effectively fatal to them, one way or another, not long after it happens.
So yeah, Schmidhuber might think that one or both of these steps are invalid. I believe they are probably valid, and thus that Schmidhuber’s position points pretty strongly at human extinction, and that if we want to avoid human extinction, we need to avoid going in the direction of AI being more complex than humans.
My personal take is that we should keep AI as limited and simple as possible, for as long as possible. We should aim for increasing human complexity and ability. We should not merge with AI; we should simply use AI as a tool to expand humanity’s abilities. Create digital humans. Then figure out how to let those digital humans grow and improve beyond the limits of biology while still maintaining their core humanity.
We should not merge with AI [...] Create digital humans.
I have been confused for a while about:
the boundary between humans merging with AI and digital humans (can these approaches be reliably differentiated from each other, or is there a large overlap?)
why digital humans would be a safer alternative than the merge
So this seems like it might be a good occasion to ask you to elaborate on this...
I think Schmidhuber does in fact think that humans will go extinct as a result of developing ASI: https://www.lesswrong.com/posts/BEtQALqgXmL9d9SfE/q-and-a-with-juergen-schmidhuber-on-risks-from-ai
Note that that’s from 2011 -- it says things which (I agree) could be taken to imply that humans will go extinct, but doesn’t directly state it.
On the other hand, here’s one from 6 months ago:
Jones: The existential threat that’s implied is the extent to which humans have control over this technology. We see some early cases of opportunism which, as you say, tends to get more media attention than positive breakthroughs. But you’re implying that this will all balance out?
Schmidhuber: Historically, we have a long tradition of technological breakthroughs that led to advancements in weapons for the purpose of defense but also for protection. From sticks, to rocks, to axes to gunpowder to cannons to rockets… and now to drones… this has had a drastic influence on human history but what has been consistent throughout history is that those who are using technology to achieve their own ends are themselves, facing the same technology because the opposing side is learning to use it against them. And that’s what has been repeated in thousands of years of human history and it will continue. I don’t see the new AI arms race as something that is remotely as existential a threat as the good old nuclear warheads.
Ehhh, I get the impression that Schmidhuber doesn’t think of human extinction as specifically “part of the plan”, but he also doesn’t appear to consider human survival to be something particularly important relative to his priority of creating ASI. He wants “to build something smarter than myself, which will build something even smarter, et cetera, et cetera, and eventually colonize and transform the universe”, and thinks that “Generally speaking, our best protection will be their lack of interest in us, because most species’ biggest enemy is their own kind. They will pay about as much attention to us as we do to ants.”
I agree that he’s not overtly “pro-extinction” in the way Rich Sutton is, but he does seem fairly dismissive of humanity’s long-term future in general, while also pushing for the creation of an uncaring non-human thing to take over the universe, so...