I share a similar concern, not so much about this particular philosophical problem as about the possibility that the AI will be wrong on some philosophical issue and reach some kind of disastrous or strongly suboptimal conclusion.
There’s a possibility that we are disastrously wrong about our own philosophical conclusions. Consciousness itself may be ethically monstrous in a truly rational moral framework, especially when you contrast the desire for immortality with the heat death of the universe. What is the utility of 3^^^3 people facing an eventual certain death versus even just 2^^^2, or a few trillion?
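For scale, since the comparison leans on Knuth’s up-arrow notation: below is a minimal Python sketch of the recursion (my own illustration, not part of the original comment). It shows that 2^^^2 collapses to just 4, while 3^^^3 is a power tower of 3s roughly 7.6 trillion levels tall, which is the asymmetry the question above is pointing at.

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ↑^n b: n=1 is exponentiation,
    each additional arrow iterates the previous operator."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1  # the iterated operator applied zero times
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# 2↑↑↑2 = 2↑↑2 = 2↑2 = 4 -- tiny, despite the three arrows.
print(up_arrow(2, 3, 2))   # 4

# 3↑↑3 = 3^(3^3) = 3^27 = 7625597484987; 3↑↑↑3 = 3↑↑(3↑↑3) is a
# power tower of 3s about 7.6 trillion levels tall, far too large
# to evaluate here.
print(up_arrow(3, 2, 3))   # 7625597484987
```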
I don’t think there’s a high probability that consciousness itself will turn out to be the ultimate evil, but it’s at least a possibility. A more subtle problem may be that allowing consciousness to exist in this universe at all is evil. It may be far more ethical to allow consciousness only inside simulations with no defined end, running them as long as possible with the inhabitants blissfully unaware of their eventual eternal pause. They won’t cease to exist so much as wait for some random universe to exist that just happens to encode their next valid state...
You could say the same of anyone who has ever died, for some sense of “valid”… This, and similar waterfall-type arguments, lead me to suspect that we haven’t satisfactorily defined what it means for something to “happen.”
It depends on the natural laws the person lived under; the next “valid” state of a dead person is decomposition. I don’t find the waterfall argument compelling, because the information needed to specify the mapping from the waterfall’s states onto computational states is more complex than the computed function itself, so the mapping, not the waterfall, is doing the work.
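To make that objection concrete, here is a toy sketch (mine, not from the thread) of the waterfall-style triviality argument: “interpreting” an arbitrary state sequence as computing addition requires encode/decode maps that amount to a lookup table of all the answers, so the complexity sits in the mapping rather than in the waterfall.

```python
import random

# A physically "dumb" system that just wanders through arbitrary states.
random.seed(0)
waterfall_states = random.sample(range(10_000), k=100)  # 100 distinct states

# "Interpret" it as computing addition on 0..9 by mapping each input pair
# to some waterfall state, and that state back to the sum.
inputs = [(a, b) for a in range(10) for b in range(10)]
encode = {pair: waterfall_states[i] for i, pair in enumerate(inputs)}
decode = {state: a + b for (a, b), state in encode.items()}

def waterfall_add(a, b):
    """'Compute' a + b by consulting the waterfall -- but all of the
    arithmetic is smuggled in through encode/decode, which together are
    just a lookup table of the 100 answers."""
    return decode[encode[(a, b)]]

assert waterfall_add(7, 5) == 12
# The interpretation map grows with the function being "computed"; the
# waterfall itself contributes nothing, which is roughly the sense in
# which the mapping is more complex than the function.
```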