One of the categories is “They Will Need Us”—claims that AI poses no big risk, because AI will always need something that humans have, and will therefore preserve us.
I claim something like this. Specifically, I claim that a broad range of superintelligences will preserve their history and run historical simulations to help them understand the world. Many possible superintelligences will study their own origins intensely—in order to understand the possible forms of aliens they might encounter in the future. So, humans are likely to be preserved because superintelligences need us instrumentally—as objects of study.
This applies to (e.g.) gold atom maximisers, with no shred of human values. I don’t claim it for all superintelligences, though—or even 99% of those likely to be built.
I agree with this, but the instrumental scientific motivation to predict hostile aliens that might be encountered in space:
1) doesn’t protect quality-of-life or lifespan for the simulations, brains-in-vats, and Truman Show inhabitants; indeed, it suggests poor historical QOL levels and short lifespans;
2) seems likely to consume only a tiny portion of all resources available to an interstellar civilization, in light of diminishing returns.
2) seems likely to consume only a tiny portion of all resources available to an interstellar civilization, in light of diminishing returns.
Would that be due to the proportions between the surface and volume of a sphere, or just the general observation that the more you investigate an area without finding anything the less likely anything exists?
The latter: as you put ever more ridiculous amounts of resources into modeling aliens you’ll find fewer insights per resource unit, especially actionable insights.
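One toy way to see the point (the logarithmic form here is purely an illustrative assumption, not the commenter’s claim): if cumulative insight grows like the log of resources invested, marginal insight per extra unit collapses quickly, so alien-modeling would rationally claim only a small share of an interstellar budget.

```python
import math

# Toy model (illustrative assumption): cumulative insight grows
# roughly logarithmically with the resources invested in a question.
def insight(resources):
    return math.log1p(resources)

# Marginal insight per extra resource unit then falls off as ~1/resources.
for r in [1, 10, 100, 1_000, 10_000]:
    marginal = insight(r + 1) - insight(r)
    print(f"resources={r:>6}: marginal insight per unit ~ {marginal:.6f}")
```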
Thanks, this is useful. You wouldn’t have a separate write-up of it somewhere? (We can cite a blog comment, but it’s probably more respectable to at least cite something that’s on its own webpage.)

Sorry, no proper write-up.
Yes. I’m surprised this isn’t brought up more. AIXI formalizes the idea that intelligence involves predicting the future through deep simulation, and human brains use something like a Monte Carlo simulation approach as well.
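For reference, here is the standard AIXI action-selection rule from Hutter’s work, which makes the “prediction through deep simulation” idea precise: actions are chosen by expectimax over every program $q$ (run on a universal Turing machine $U$) that reproduces the interaction history, weighted by simplicity, with $m$ the planning horizon and $\ell(q)$ the program length:

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left(r_k + \cdots + r_m\right) \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

The inner sum over programs is the “deep simulation”: each candidate environment $q$ is, in effect, run forward to see which observations and rewards it would generate.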
I don’t understand why you think “preserve history, run historical simulations, and study AI’s origins” implies that the AI will preserve actual living humans for any significant amount of time. One generation (practically the blink of an eye compared to plausible AI lifetimes) seems like it would produce more than enough data.
Given enough computation, the best way to generate accurate generative probabilistic models is to run lots of detailed Monte Carlo simulations. AIXI-like models do this; human brains do it to a limited extent.
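A minimal sketch of that idea in Python (the toy model and every name in it are illustrative assumptions, nothing from the thread): approximate a predictive distribution by running many sampled rollouts of a generative model and counting outcomes.

```python
import random

# Toy generative model (illustrative assumption): a biased random walk
# whose bias is itself uncertain, so each rollout first samples a bias.
def sample_rollout(bias_mean, bias_sd, horizon=10):
    bias = random.gauss(bias_mean, bias_sd)
    state = 0.0
    for _ in range(horizon):
        state += bias + random.gauss(0.0, 1.0)
    return state > 0.0  # toy outcome: did the walk end positive?

# Monte Carlo estimate of P(outcome): relative frequency over many rollouts.
def monte_carlo_predict(bias_mean, bias_sd, n_sims=100_000):
    hits = sum(sample_rollout(bias_mean, bias_sd) for _ in range(n_sims))
    return hits / n_sims

print(f"P(end positive) ~ {monte_carlo_predict(0.1, 0.5):.3f}")
```

The estimate sharpens as n_sims grows, which is the sense in which more computation buys a more accurate generative model.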
What does that have to do with whether an AI will need living human beings? It seems like there is an unstated premise that living humans are equivalent to simulated humans. That’s a defensible position, but implicitly asserting the position is not equivalent to defending it.
What does that have to do with whether an AI will need living human beings?
The AI will need to simulate its history as a natural, necessary component of its ‘thinking’. For a powerful enough AI, this will entail simulation down to the level of, say, the Matrix, where individual computers and human minds are simulated at their natural computational scale.
It seems like there is an unstated premise that living humans are equivalent to simulated humans. That’s a defensible position, but implicitly asserting the position is not equivalent to defending it.
Yes. I’m assuming most people here are sufficiently familiar with this position that it doesn’t require my defense in a comment like this.
My estimate is more on the “billions of years” timescale. What aliens one might meet is important, potentially life-or-death information, and humans are a big, important, and deep clue about the topic, one that would be difficult to exhaust.
Being inexhaustible, even if true, is not enough. Keeping humans around (or simulated) would have to be a better use of resources (marginally) than anything else the AGI could think of. That’s a strong claim; why do you think that?
Also, if AIs replace humans in the course of history, then arguably studying other AIs would be an even bigger clue to possible aliens. And AIs can be much more diverse than humans, so there would be more to study.
Being inexhaustible, even if true, is not enough. Keeping humans around (or simulated) would have to be a better use of resources (marginally) than anything else the AGI could think of. That’s a strong claim; why do you think that?
History is valuable, and irreplaceable if lost. Possibly a long sequence of wars early on might destroy it before it could be properly backed up—but the chances of such a loss seem low. Human history seems particularly significant when considering the forms of possible aliens. But I could be wrong about some of this. I’m not overwhelmingly confident of this line of reasoning—though I am pretty sure that many others are neglecting it without having good reasons for doing so.
Why is human history so important, or useful, in predicting aliens? Why would it be better than:
1) Analyzing the AIs and their history (cheaper, since they exist anyway);
2) Creating and analyzing other tailored life forms (allows testing hypotheses rather than analyzing human history passively);
3) Analyzing existing non-human life (could give data about biological evolution as well as humans could; experiments on the evolution of intelligence might be more useful than experiments on the behavior of already-evolved intelligence);
4) Simulating, or actually raising, some humans and analyzing them (may be simpler or cheaper than recreating or recording human history due to size, and allows for interactive experiments and many scenarios, unlike the single scenario of human history).
Human history’s importance gets diluted once advanced aliens are encountered—though, for various reasons, the chances of any such encounter happening soon seem slender. Primitive aliens would still be very interesting.
Experiments that create living humans are mostly “fine by me”.
They’ll (probably) preserve a whole chunk of our ecosystem—for the reasons you mention—though analysing only non-human life (or post-human life) skips over some of the most interesting bits of their own origin story, which they (like us) are likely to be particularly interested in.
After a while, aliens are likely to be our descendants’ biggest threat. They probably won’t throw away vital clues relating to the issue casually.