The information could be instrumentally useful for any of the following Basic AI Drives:
Efficiency: making use of the already-performed thermodynamic ‘calculation’ of evolution (and storage of that calculation—the biosphere conveniently preserves this information for free)
Acquisition: ‘information’ will doubtlessly be one of the things an AI wants to acquire
Creativity: the biosphere has lots of ways of doing things
Cognitive enhancement: understanding thermodynamics on an intimate level will help any kind of self-enhancement
Technological perfection: same story. You want to understand thermodynamics.
At every time step, the AI will be trading off these drives against the value of producing more or doing more of whatever it was programmed to do. What happens when the AI decides that it’s learned enough from the biosphere and that the costs of preserving a biosphere for humans no longer outweigh the potential benefit that it earns from learning about biology, evolution and thermodynamics?
We humans make these trade-offs all the time, often unconsciously, as we weigh whether to bulldoze a forest, or build a dam, or dig a mine. A superintelligent AI will perhaps be more intentional in its calculations, but that’s still no guarantee that the result of the calculation will swing in humanity’s favor. We could, in theory, program the AI to preserve earth as a sanctuary. But, in my view, that’s functionally equivalent to solving alignment.
Your argument appears to be that an unaligned AI will, spontaneously, choose to, at the very least, preserve Earth as a sanctuary for humans into perpetuity. I still don’t see why it should do that.
That isn’t my argument, my argument is just that the general tone seems too defeatist.
The question asker was under the impression that the probabilities were %99.X percent against anything okay. My only argument was that this is wrong, and there are good reasons that this is wrong.
Where the p(doom) lies between 99 and 1 percent is left as an exercise for posterity. I’m not totally unhinged in my optimism, I just think the tone of certain doom is poorly founded and there are good reasons to have some measure of hope.
Not just ‘i dunno, maybe it will be fine’ but real reasons why it could conceivably be fine. Again, the probabilities are up for debate, I only wanted to present some concrete reasons.
Why should the AI prioritize preserving information over whatever other goal that it’s been programmed to accomplish?
The information could be instrumentally useful for any of the following Basic AI Drives:
Efficiency: making use of the already-performed thermodynamic ‘calculation’ of evolution (and storage of that calculation—the biosphere conveniently preserves this information for free)
Acquisition: ‘information’ will doubtlessly be one of the things an AI wants to acquire
Creativity: the biosphere has lots of ways of doing things
Cognitive enhancement: understanding thermodynamics on an intimate level will help any kind of self-enhancement
Technological perfection: same story. You want to understand thermodynamics.
At every time step, the AI will be trading off these drives against the value of producing more or doing more of whatever it was programmed to do. What happens when the AI decides that it’s learned enough from the biosphere and that the costs of preserving a biosphere for humans no longer outweigh the potential benefit that it earns from learning about biology, evolution and thermodynamics?
We humans make these trade-offs all the time, often unconsciously, as we weigh whether to bulldoze a forest, or build a dam, or dig a mine. A superintelligent AI will perhaps be more intentional in its calculations, but that’s still no guarantee that the result of the calculation will swing in humanity’s favor. We could, in theory, program the AI to preserve earth as a sanctuary. But, in my view, that’s functionally equivalent to solving alignment.
Your argument appears to be that an unaligned AI will, spontaneously, choose to, at the very least, preserve Earth as a sanctuary for humans into perpetuity. I still don’t see why it should do that.
That isn’t my argument, my argument is just that the general tone seems too defeatist.
The question asker was under the impression that the probabilities were %99.X percent against anything okay. My only argument was that this is wrong, and there are good reasons that this is wrong.
Where the p(doom) lies between 99 and 1 percent is left as an exercise for posterity. I’m not totally unhinged in my optimism, I just think the tone of certain doom is poorly founded and there are good reasons to have some measure of hope.
Not just ‘i dunno, maybe it will be fine’ but real reasons why it could conceivably be fine. Again, the probabilities are up for debate, I only wanted to present some concrete reasons.