One million years is ten thousand generations of humans as we know them. If AI progress were impossible under the heel of a world-state, we could still increase intelligence by a few IQ points each generation. This already happens naturally, and it would hardly be difficult to compound the Flynn effect.
I think the “unboosted humans” hypothetical is meant to include mind-uploading (which makes the “ten thousand generations” figure an underestimate), but we’re assuming that the simulation overlords stop us from drastically improving the quality of our individual reasoning.
Nate assigns “base humans, left alone” an ~82% chance of producing an outcome at least as good as “tiling our universe-shard with computronium that we use to run glorious merely-human civilizations”, which seems unlikely to me if we can’t upload humans at all. (But maybe I’m misunderstanding something about his view.)
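(As a rough sanity check on the generation arithmetic above, here is a back-of-the-envelope sketch. The generation lengths and the flat 3-points-per-generation gain are my own illustrative assumptions, not figures from the exchange.)

```python
# Back-of-the-envelope check on "one million years is ten thousand generations"
# and on compounding a Flynn-effect-sized gain. The generation lengths and the
# 3-points-per-generation figure are illustrative assumptions only.

YEARS = 1_000_000
POINTS_PER_GENERATION = 3  # "a few points each generation"

for generation_length in (25, 30, 100):  # years per biological generation
    generations = YEARS // generation_length
    total_gain = generations * POINTS_PER_GENERATION
    print(f"{generation_length}-year generations: {generations:,} generations, "
          f"~{total_gain:,} points if the gains simply added up")
```

(IQ is a normed scale, so the cumulative totals aren’t literally meaningful; the point is just how much headroom ten-thousand-plus generations leave for even very slow improvement.)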
Surely we could hit endgame technology (technology at the limits of physical possibility, or deep into diminishing returns) in one million years, let alone five hundred of those spans.
I think we hit the limits of the technology we can think about, understand, manipulate, and build vastly earlier than that (especially if we have fast-running human uploads). But I think this limit falls well short of the technologies you could invent if your brain were as large as the planet Jupiter, if you had native brain hardware for doing different forms of advanced math in your head, if you could visualize the connections among millions of different complex machines in your working memory and simulate millions of possible ways of connecting those machines inside your own head, etc.
Even when it comes to just winning a space battle with a fixed pool of fighters, I expect humans manually piloting our drones (or piloting them with crappy AI) to get crushed by a superintelligence that can individually think about and maneuver effectively arbitrary numbers of nanobots in real time.
In comparative terms, a five-hundred-year sabbatical from AI would reduce the share of resources we could reach by only an epsilon, and if the AI safety premises are sound, it would greatly increase EV.
Oh, agreed. But we’re discussing a scenario where we never build ASI, not one where we delay 500 years.
This point is likely moot, of course. I understand that we do not live in a totalitarian world-state, and that your intent is just to assure people that AI safety people are not neo-Luddites.
Yep! And more generally, to share enough background model (that doesn’t normally come up in inside-baseball AI discussions) to help people identify cruxes of disagreement.
I suppose one could attempt to help a state establish global dominance.
Seems super unrealistic to me, and probably bad if you could achieve it.
A different scenario that makes a lot more sense, IMO, is an AGI project pairing with some number of states during or after an AGI-enabled pivotal act. But that assumes you’ve already solved enough of the alignment problem to do at least one (possibly state-assisted) pivotal act.
I think there’s kind of a lot of room between 95% of potential value being lost and 5%!!
My intuition is that capturing even 1% of the future’s total value is an astoundingly conjunctive feat—a narrow enough target that it’s surprising if we can hit that target and yet not hit 10%, or 99%. Think less “capture at least 1% of the negentropy in our future light cone and use it for something”, more “solve the first 999 digits of a 1000-digit decimal combination lock specifying an extremely complicated function of human brain-states that somehow encodes all the properties of Maximum Extremely-Weird-Posthuman Utility”.
(This is based on the idea that even if the alignment problem is solved, such that we know how to rigorously specify a goal to an AI, it doesn’t follow that the people who happen to be programming the goal will be selfless. You work in AI, so presumably you have practiced rebuttals to this idea; I don’t, so I’ll state my thought while being clear that I expect this is well-worn territory to which you have a solid answer.)
Why do they need to be selfless? What are the selfish benefits of making the future less Fun for innumerable posthumans you’ll never meet or hear anything about?
(The future light cone is big, and no one human can interact with very much of it. You swamp the selfish desires of every currently-living human before you’ve even used up the negentropy in one hundredth of a single galaxy. And then what do you do with the rest of the universe? We aren’t guaranteed to use the rest of the universe well, but if we use it poorly the explanation probably can’t be “selfishness”.)
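(To put rough numbers on “one hundredth of a single galaxy”, here is a sketch using my own ballpark inputs: roughly 2×10^11 stars in a Milky-Way-like galaxy, about 8 billion people, solar-scale output per star, and about 2×10^13 W of current worldwide energy use. None of these figures come from the discussion above.)

```python
# Rough scale check: what does 1% of one galaxy offer per currently-living human,
# compared to today's energy use? All inputs are ballpark assumptions.

stars_in_galaxy = 2e11       # order-of-magnitude figure for a Milky-Way-like galaxy
people = 8e9                 # currently-living humans
star_output_w = 3.8e26       # watts, roughly the Sun's luminosity
world_energy_use_w = 2e13    # rough current worldwide primary energy use, in watts

stars_per_person = 0.01 * stars_in_galaxy / people
power_per_person = stars_per_person * star_output_w
current_share = world_energy_use_w / people

print(f"stars per person from 1% of one galaxy: {stars_per_person:.2f}")
print(f"power per person at stake: {power_per_person:.1e} W")
print(f"current energy use per person: {current_share:.1e} W")
print(f"ratio: roughly {power_per_person / current_share:.0e}x")
```

(On these assumptions, each currently-living person’s share of a hundredth of one galaxy exceeds their present energy use by more than twenty orders of magnitude, which is the sense in which selfish desires get swamped.)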
It seems to assume that things like hive-mind species are possible or common, which I don’t have information about but maybe you do.
I dunno Nate’s reasoning, but AFAIK the hive-mind thing may just be an example, rather than being central to his reasoning on this point.