Do you pick up every penny that you pass in the street?
The amount of energy and resources on Earth would be a rounding error in an ASI’s calculations. And it would be a rounding error that happens to be incredibly complex and possibly unique!
Maybe a more appropriate question is, do you pick every flower that you pass in the park? What if it was the only one?
The amount of energy and resources on Earth would be a rounding error in an ASI’s calculations.
Once again: this argument applies to humanity too. Everyone acknowledges that the asteroid belt holds far more resources than Earth. But here we are, building strip mines in Australia rather than hauling asteroids in from the belt.
Your counterargument is that the AI will find it much easier to go to space, not being constrained by human biology. Fine. But won’t the AI also find it much easier to build strip mines? Or harvest resources from the oceans? Or pave over vast tracts of land for use as solar farms? You haven’t answered why going to space will be cheaper for the AI than staying on Earth. All you’ve proven is that going to space will be cheaper for the AI than it will be for humans, which is a claim that I’m not contesting.
I just find it unconvincing that the ASI will want my atoms for something trivial, when there are so many other atoms in the universe that are not part of a grand exploration of the extremes of thermodynamics.
The problem isn’t that the AI will want the atoms that comprise your body, specifically. That’s trivially false. It makes as much sense as the scene in The Matrix where Morpheus explained to Neo that the Matrix was using humans as living energy sources.
What is less trivially false is that the AI will alter the biosphere in ways that make it impossible (or merely very difficult) for humans to live, just as humans have altered the biosphere in ways that have made it impossible (or merely very difficult) for many other species to live. The AI will not intend to alter the biosphere. The biosphere alteration will be a side-effect of whatever the AI’s goals are. But the alteration will take place, regardless.
Put more pithily: tell me why I should expect a superintelligent AI to be an environmentalist.
Just to preserve information. It’s not every day that you come across a thermodynamic system that has been evolving so far from equilibrium for so long. There is information here.
In general, I feel like a lot of people in discussions about ASI seem to enjoy fantasizing about science-fiction apocalypses of various kinds. Personally I’m not so interested in exercises in fancy; I’m more interested in looking at ways physical laws might imply that ‘strong orthogonality’ is unlikely to obtain in reality.
Why should the AI prioritize preserving information over whatever other goal it’s been programmed to accomplish?
The information could be instrumentally useful for any of the following Basic AI Drives:
Efficiency: making use of the already-performed thermodynamic ‘calculation’ of evolution (and storage of that calculation—the biosphere conveniently preserves this information for free)
Acquisition: ‘information’ will doubtless be one of the things an AI wants to acquire
Creativity: the biosphere embodies an enormous variety of ways of doing things
Cognitive enhancement: understanding thermodynamics on an intimate level will help any kind of self-enhancement
Technological perfection: same story. You want to understand thermodynamics.
At every time step, the AI will be trading off these drives against the value of producing more or doing more of whatever it was programmed to do. What happens when the AI decides that it has learned enough from the biosphere, and that the potential benefit of continuing to learn about biology, evolution, and thermodynamics no longer outweighs the cost of preserving a biosphere for humans?
We humans make these trade-offs all the time, often unconsciously, as we weigh whether to bulldoze a forest, or build a dam, or dig a mine. A superintelligent AI will perhaps be more intentional in its calculations, but that’s still no guarantee that the result of the calculation will swing in humanity’s favor. We could, in theory, program the AI to preserve Earth as a sanctuary. But, in my view, that’s functionally equivalent to solving alignment.
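To make that trade-off concrete, here is a deliberately toy sketch in Python. Everything in it is invented for illustration: the numbers, the exponential decay of ‘what’s left to learn from the biosphere’, and the flat per-step opportunity cost of leaving it unconverted. The only point is that a decaying payoff weighed against a constant cost eventually flips, and nothing in the setup guarantees it flips in the biosphere’s favor.

```python
# Toy illustration only: the numbers and functional forms are made up.
# It shows how a fixed preservation cost can eventually outweigh a
# decaying information payoff.

import math

def information_value(t, v0=100.0, decay=0.3):
    """Marginal value of studying the biosphere at step t (decays as the AI learns; made up)."""
    return v0 * math.exp(-decay * t)

def preservation_cost(t, c=5.0):
    """Per-step opportunity cost of leaving the biosphere unconverted (made up, roughly constant)."""
    return c

for t in range(50):
    gain = information_value(t)
    cost = preservation_cost(t)
    if gain < cost:
        print(f"Step {t}: studying is worth {gain:.2f} but preservation costs {cost:.2f} per step.")
        print("The trade flips, and nothing in the setup made it flip in the biosphere's favor.")
        break
    print(f"Step {t}: keep the biosphere (value {gain:.2f} > cost {cost:.2f})")
```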
Your argument appears to be that an unaligned AI will, spontaneously, choose to, at the very least, preserve Earth as a sanctuary for humans in perpetuity. I still don’t see why it should do that.
That isn’t my argument; my argument is just that the general tone seems too defeatist.
The question asker was under the impression that the probabilities were 99.x percent against any okay outcome. My only argument was that this is wrong, and that there are good reasons it is wrong.
Where p(doom) lies between 1 and 99 percent is left as an exercise for posterity. I’m not totally unhinged in my optimism; I just think the tone of certain doom is poorly founded, and that there are good reasons to have some measure of hope.
Not just ‘I dunno, maybe it will be fine’, but real reasons why it could conceivably be fine. Again, the probabilities are up for debate; I only wanted to present some concrete reasons.
A related factor is curiosity. As I understand it, reinforcement learning agents perform much better if gifted with curiosity (or if they develop it themselves). Seeking novel information is extremely helpful for most goals (but it can also lead to “TV addiction”).
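For what ‘gifted with curiosity’ usually means in practice: a common construction in the RL literature adds an intrinsic reward proportional to how badly the agent’s learned forward model predicted the next state, so novel transitions pay well and familiar ones stop paying. The sketch below is a minimal, generic version of that idea, not any particular published implementation; the linear model, coefficients, and fake dynamics are all placeholders.

```python
# Minimal, generic sketch of a prediction-error curiosity bonus (illustrative only).

import numpy as np

rng = np.random.default_rng(0)

# A tiny linear "forward model": predicts the next 4-dim state from the
# concatenated (state, action) vector. Stand-in for a neural network.
W = rng.normal(scale=0.1, size=(6, 4))
lr = 0.01    # learning rate for the forward model
beta = 0.5   # weight of the intrinsic (curiosity) reward relative to the extrinsic one

def forward_model(state, action):
    return np.concatenate([state, action]) @ W

def curiosity_bonus(state, action, next_state):
    """Intrinsic reward = squared prediction error on the observed transition."""
    pred = forward_model(state, action)
    return float(np.sum((pred - next_state) ** 2))

def update_forward_model(state, action, next_state):
    """One gradient step, so the bonus shrinks for transitions seen often ('boredom')."""
    global W
    x = np.concatenate([state, action])
    err = forward_model(state, action) - next_state
    W -= lr * np.outer(x, err)

# Fake interaction loop with made-up dynamics and a sparse (here: zero) task reward.
state = rng.normal(size=4)
for step in range(5):
    action = rng.normal(size=2)
    next_state = 0.9 * state + 0.1 * action.sum()
    total_reward = 0.0 + beta * curiosity_bonus(state, action, next_state)
    update_forward_model(state, action, next_state)
    print(f"step {step}: total reward {total_reward:.3f}")
    state = next_state
```

The “TV addiction” caveat is visible in this construction: a transition the forward model can never learn (pure noise) keeps paying the bonus forever, so the agent can get stuck seeking it.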
I find it plausible that ASI will be curious, and that both humanity and the biosphere, which are the results of billions of years of an enormous computation, will stimulate ASI’s curiosity.
But its curiosity may not last for centuries, or even years. Additionally, the curiosity may involve some dissection of living humans, or worse.
Note that an AI, or a civilization of many ASIs, could harvest the overwhelming majority of all accessible and suitable material on the planet and yet keep all humans alive, if it chose to. It’s not an expensive thing to do: humans are really cheap, and we live by skimming off the very surface of the Earth. Most of our raw-material shortages are self-inflicted; we don’t recycle CO2 back into hydrocarbons, and we don’t recycle our trash at an elemental level.
The reason they might kill all humans would be either a Moloch scenario or one where it was efficient to remove humans as an obstacle.