I see; that kind of makes sense. I still don't like it, though, if that is the only process to optimize.
For me, in your fictional world, humans are to AI what pets are to humans in our world. I understand that it could come about, but I would not call it a "Utopia".
> this was assuming a point in the future when we don't have to worry about existential risk
This is kind of what I meant before. Of course you can assume that, but it is such a powerful assumption that you can use it to derive nearly anything at all. (Just like dividing by zero in math.) Of course optimizing for survival is not important if you cannot die by definition.
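To spell out the divide-by-zero analogy, here is the classic fake proof that 1 = 2 (a and b are just placeholder numbers here, nothing from our discussion):

    a = b                       (start by assuming a equals b, both nonzero)
    a^2 = a*b                   (multiply both sides by a)
    a^2 - b^2 = a*b - b^2       (subtract b^2 from both sides)
    (a + b)(a - b) = b(a - b)   (factor both sides)
    a + b = b                   (cancel a - b from both sides)
    2b = b                      (substitute a = b)
    2 = 1                       (divide both sides by b)

The only illegal step is cancelling a - b, which is zero by the starting assumption, and that single step is enough to reach any conclusion you like. An assumption that removes death from the picture by definition plays the same role in an argument.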
> For me, in your fictional world, humans are to AI what pets are to humans in our world.
If I understand your meaning here correctly, I think you're anthropomorphizing AI too much. In the scenario where AI is well aligned with our values (other scenarios probably not having much of a future to speak of), its role might be something without a good parallel in society today; maybe an active, benevolent deity without innate desires.
> Of course you can assume that, but it is such a powerful assumption that you can use it to derive nearly anything at all. (Just like dividing by zero in math.) Of course optimizing for survival is not important if you cannot die by definition.
I think it's possible we would still die at the end of the universe. But even without AI, I think there would be a future point where we can be confident enough in our control over our environment that, barring probably unprovable problems like simulation theory, we can rest easy until then.