I also find the atoms argument very uncompelling. There is so much space and solar energy in the asteroid belt, I’m sure there is a good chance that the ASI will be chill.
However, I think Yudkowsky is shouting so loudly because even if the chance of ASI apocalypse is only 5%, that is 5% multiplied by all possible human goodness, which is a big deal to our species in expectation.
Personally I think the totality of the biological ecosystems on earth (including humans) will still be interesting to an ASI, so I’d hope they’d let it tick on as a museum piece.
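The expectation argument above is easy to make concrete with a toy calculation. Every number below is an illustrative assumption, not an estimate:

```python
# Toy expected-value calculation behind the "5% multiplied by all possible
# human goodness" point. Every number here is an illustrative assumption.

p_apocalypse = 0.05      # assumed chance of an ASI catastrophe
value_of_future = 1e15   # stand-in units for "all possible human goodness"

# A small probability times an astronomical stake is still astronomical
# in expectation, which is why even modest p(doom) estimates get shouted about.
expected_loss = p_apocalypse * value_of_future
assert expected_loss == 5e13
```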
The question isn’t whether it would be easier for superintelligent AI to go to space than it would be for humans. Of course it would be! Everything will be easier for a superintelligent AI.
The question is whether a superintelligent AI would prioritize going to space immediately, leaving Earth as an “untouched wilderness” where humans are free to thrive, or whether it will work on fully exploiting the resources it has at hand, here on Earth, before choosing to go to space. I think the latter is far more likely. Superintelligence can’t beat physics. No matter what, it will always be easier to harvest closer resources than resources that are farther away. The closest resources are on Earth. So why should the superintelligent AI go to space when, at least in the immediate term, it has everything it needs to grow right here?
whether a superintelligent AI would prioritize going to space immediately
Priorities need a resource that gets allocated to one thing and not another. But going to space doesn’t imply leaving Earth alone, and doing both diminishes neither.
My argument is that, like humanity, a superintelligent AI will initially find it easier to extract resources from Earth than from space-based sources. By the time Earth’s resources are sufficiently depleted that this is no longer the case, there will be far too little remaining for humanity to survive on.
That’s obviously false from any vaguely rigorous take.
What is obviously true: the ASI could take 99% of the Earth’s raw materials, and 100% of the rest of the solar system, and leave plenty for the current human population to survive, assuming MNT (molecular nanotechnology).
If an AI is capable of taking 99% of the resources that humans rely on to live, it’s capable of taking 100%.
Tell me why the AI should stop at 99% (or 85%, or 70%, or whatever threshold you wish to draw) without having that threshold encoded as one of its goals.
Do you pick up every penny that you pass in the street?
The amount of energy and resources on Earth would be a rounding error in an ASI’s calculations. And it would be a rounding error that happens to be incredibly complex and possibly unique!
Maybe a more appropriate question is, do you pick every flower that you pass in the park? What if it was the only one?
The amount of energy and resources on Earth would be a rounding error in an ASI’s calculations.
Once again: this argument applies to humanity too. Everyone acknowledges that the asteroid belt holds far more resources than Earth. But here we are, building strip mines in Australia rather than hauling asteroids in from the belt.
Your counterargument is that the AI will find it much easier to go to space, not being constrained by human biology. Fine. But won’t the AI also find it much easier to build strip mines? Or harvest resources from the oceans? Or pave over vast tracts of land for use as solar farms? You haven’t answered why going to space will be cheaper for the AI than staying on earth. All you’ve proven is that going to space will be cheaper for the AI than it will be for humans, which is a claim that I’m not contesting.
I just find the idea that the ASI will want my atoms for something trivial, when there are so many other atoms in the universe that are not part of a grand exploration of the extremes of thermodynamics, unconvincing.
The problem isn’t that the AI will want the atoms that comprise your body, specifically. That’s trivially false. It makes as much sense as the scene in The Matrix where Morpheus explained to Neo that the Matrix was using humans as living energy sources.
What is less trivially false is that the AI will alter the biosphere in ways that make it impossible (or merely very difficult) for humans to live, just as humans have altered the biosphere in ways that have made it impossible (or merely very difficult) for many other species to live. The AI will not intend to alter the biosphere. The biosphere alteration will be a side-effect of whatever the AI’s goals are. But the alteration will take place, regardless.
Put more pithily: tell me why I should expect a superintelligent AI to be an environmentalist.
Just to preserve information. It’s not every day that you come across a thermodynamic system that has been evolving so far from equilibrium for so long. There is information here.
In general, I feel like a lot of people in discussions about ASI seem to enjoy fantasizing about science-fiction apocalypses of various kinds. Personally I’m not so interested in exercises in fancy; I’d rather look at ways physical laws might imply that ‘strong orthogonality’ is unlikely to obtain in reality.
The information could be instrumentally useful for any of the following Basic AI Drives:
Efficiency: making use of the already-performed thermodynamic ‘calculation’ of evolution (and storage of that calculation—the biosphere conveniently preserves this information for free)
Acquisition: ‘information’ will doubtless be one of the things an AI wants to acquire
Creativity: the biosphere has lots of ways of doing things
Cognitive enhancement: understanding thermodynamics on an intimate level will help any kind of self-enhancement
Technological perfection: same story. You want to understand thermodynamics.
At every time step, the AI will be trading off these drives against the value of producing more or doing more of whatever it was programmed to do. What happens when the AI decides that it’s learned enough from the biosphere, and that the potential benefit it earns from learning about biology, evolution and thermodynamics no longer outweighs the cost of preserving a biosphere for humans?
We humans make these trade-offs all the time, often unconsciously, as we weigh whether to bulldoze a forest, or build a dam, or dig a mine. A superintelligent AI will perhaps be more intentional in its calculations, but that’s still no guarantee that the result of the calculation will swing in humanity’s favor. We could, in theory, program the AI to preserve earth as a sanctuary. But, in my view, that’s functionally equivalent to solving alignment.
Your argument appears to be that an unaligned AI will, spontaneously, choose to, at the very least, preserve Earth as a sanctuary for humans into perpetuity. I still don’t see why it should do that.
That isn’t my argument, my argument is just that the general tone seems too defeatist.
The question asker was under the impression that the probabilities were 99.X% against anything okay. My only argument was that this is wrong, and that there are good reasons it is wrong.
Where p(doom) lies between 1 and 99 percent is left as an exercise for posterity. I’m not totally unhinged in my optimism; I just think the tone of certain doom is poorly founded and there are good reasons to have some measure of hope.
Not just ‘I dunno, maybe it will be fine’, but real reasons why it could conceivably be fine. Again, the probabilities are up for debate; I only wanted to present some concrete reasons.
A related factor is curiosity. As I understand it, reinforcement learning agents perform much better if gifted with curiosity (or if they develop it themselves). Seeking novel information is extremely helpful for most goals (though it can lead to “TV addiction”).
I find it plausible that ASI will be curious, and that both humanity and the biosphere, which are the results of billions of years of an enormous computation, will stimulate ASI’s curiosity.
But its curiosity may not last for centuries, or even years. Additionally, the curiosity may involve some dissection of living humans, or worse.
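For what it’s worth, curiosity in RL is usually implemented as an intrinsic reward for prediction error: states the agent models badly pay a bonus, and the bonus decays as the model improves. A minimal sketch, where the table-based “world model” and all values are invented for illustration:

```python
# Minimal prediction-error curiosity bonus (an illustrative sketch, not any
# particular published algorithm). The agent's "world model" is a table of
# running-average next-state predictions; surprise = squared prediction error.

predictions = {}  # state -> predicted next-state value (running average)

def curiosity_bonus(state, next_value, lr=0.5):
    """Return intrinsic reward: how badly we predicted next_value from state."""
    guess = predictions.get(state, 0.0)
    surprise = (next_value - guess) ** 2
    # update the model toward what actually happened
    predictions[state] = guess + lr * (next_value - guess)
    return surprise

# Revisiting the same transition becomes boring (the bonus shrinks)...
bonuses = [curiosity_bonus("meadow", 1.0) for _ in range(5)]
assert bonuses[0] > bonuses[-1]
# ...while a genuinely novel observation is rewarding again.
assert curiosity_bonus("biosphere", 7.0) > bonuses[-1]
```

Note that the bonus for the familiar transition decays toward zero, which is exactly the “curiosity may not last” worry: once the biosphere is fully predicted, it stops paying.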
Note that an AI, or a civilization of many ASIs, could harvest the overwhelming majority of all accessible and suitable material on the planet and yet keep all humans alive, if it chose to. It’s not an expensive thing to do: humans are really cheap and live by skimming off the very surface of the Earth. Most of our raw-material shortages are self-inflicted; we don’t recycle CO2 back to hydrocarbons, and we don’t recycle our trash at an elemental level.
The reason they might kill all humans would be either a Moloch scenario, or one where it was efficient to remove humans as an obstacle.
even if that chance of asi apocalypse is only 5%, that is 5% multiplied by all possible human goodness, which is a big deal to our species in expectation.
The problem is that if you really believe (because EY and others are shouting it from the rooftops) that there is a ~100% chance we’re all gonna die shortly, you are not going to be motivated to plan for the 50/50 or 10/90 scenarios. Once you acknowledge that you can’t really make a confident prediction on this matter, it is illogical to plan only for the minimal and maximal cases (we all die / everything is great). Those outcomes need no planning, so spending energy focusing on them is not optimal.
Sans hard data, as a Bayesian, shouldn’t you start with a balanced set of priors over all the possible outcomes, then focus on the ones you may be able to influence?
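That prior-spreading intuition can be sketched numerically. In the toy numbers below (all invented for illustration), the entire expected gain from preparation comes from the middle scenario, since the extreme outcomes are insensitive to anything we do:

```python
# Toy decision analysis: spread a prior over outcomes and ask where planning
# pays off. All probabilities and payoffs are invented for illustration.

scenarios = {
    # name: (prior, value if unprepared, value if prepared)
    "certain doom":     (0.25, 0.0, 0.0),  # planning changes nothing
    "mixed outcome":    (0.50, 0.3, 0.8),  # planning actually matters
    "everything great": (0.25, 1.0, 1.0),  # planning changes nothing
}

def expected_value(prepared):
    return sum(p * (v_prep if prepared else v_unprep)
               for p, v_unprep, v_prep in scenarios.values())

gain = expected_value(True) - expected_value(False)
# All of the gain (0.50 * (0.8 - 0.3) = 0.25) comes from the middle scenario.
assert abs(gain - 0.25) < 1e-9
```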
I’m not sure what you think I believe, but yeah I think we should be looking at scenarios in between the extremes.
I was giving reasons why I maintain some optimism, and maintaining optimism while reading Yudkowsky leaves me in the middle, where actions can be taken.
I also find the atoms argument very uncompelling. There is so much space and solar energy in the asteroid belt, I’m sure there is a good chance that the ASI will be chill.
You could say the same thing about humanity. But here we are, maximizing our usage of Earth’s resources before we move out into the solar system.
But it’s hard for us. It would be very easy for an ASI. Even with no advancement in tech, the ASI could ride along on Starlink launches into space.
We are stuck here amongst the biology for very obvious reasons.
Tell me why the AI should stop at 99% (or 85%, or 70%, or whatever threshold you wish to draw) without having that threshold encoded as one of its goals.
Because, to take 99% in the first place, it has to have extremely advanced cognition, or we would have won in our conflicts with it. A mind that capable may see some value in not murdering its creators.
See my reply above for why the ASI might choose to move on before strip-mining the planet.
As for preserving information: why should the AI prioritize that over whatever other goal it’s been programmed to accomplish?
I’m not sure what you think I believe, but yeah I think we should be looking at scenarios in between the extremes.
I was giving reasons why I maintain some optimism, and maintaining optimism while reading Yudkowsky leaves me in the middle, where actions can be taken.
Violent agreement! I was using the pronoun ‘you’ rhetorically.