The way to recognize a local extremum is precisely to walk far enough away from it. If you know of another way, please elaborate, because I'd very much like to sell it myself, if you don't mind.
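That "walk away far enough" test is essentially random-restart hill climbing. A minimal Python sketch (the toy landscape, step sizes, and restart count are my own illustrative choices, not anything from this thread): climb to a local peak, then restart from distant random points to check whether higher ground exists.

```python
import math
import random

def f(x):
    # Toy 1-D landscape: many local maxima under a decaying envelope,
    # with the global maximum near x = 0.3.
    return math.sin(5 * x) - 0.1 * x * x

def hill_climb(x, step=0.01, iters=10_000):
    """Greedy ascent: move toward a higher neighbor until stuck."""
    for _ in range(iters):
        if f(x + step) > f(x):
            x += step
        elif f(x - step) > f(x):
            x -= step
        else:
            break  # neither neighbor is higher: a local maximum
    return x

random.seed(0)
x_local = hill_climb(random.uniform(-10, 10))  # likely just a local peak

# "Walk away far enough": restart from distant random points and see
# whether any of them climbs to a higher peak than the one we stand on.
best = x_local
for _ in range(200):
    candidate = hill_climb(random.uniform(-10, 10))
    if f(candidate) > f(best):
        best = candidate

print(round(f(best), 3))  # restarts can only match or improve on f(x_local)
```

There is no cheaper certificate in general: from the local peak alone, every neighbor looks worse, so only sampling far away can reveal that a higher peak exists.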
Thinking back on it after a couple of days, I think my reply about finding maxima was still caught up in indirect measures of achieving hedons. We have complete control over our sensory inputs; we can give ourselves exactly whatever upper bound there is. Less "semi-random walks in n-space to find extrema" and more "redefine the space so that where you're standing goes as high as your program allows".
The only way to keep surviving in a dynamic potential landscape is to keep optimizing, not to pat yourself on the back for a job well done and simply stop.
For what it's worth, that was just to stay within the fictional scenario I was describing. In a more realistic version of that playing out, we would task AGI with the optimizing; we'd just be standing around by comparison anyway.
In that scenario, though: why do we consider growth important? You talked about surviving, but I'm not clear on that; this was assuming a point in the future when we don't have to worry about existential risk (or only the kinds we provably can't avert, like the universe ending) or the death of sentient lives. Yes, growth allows for more sophisticated methods of value attainment, but I also said it's plausible that we reach so high that we start getting diminishing returns. Then, is that future potential worth not reaping the present benefits to their maximum over a longer stretch of time?
I see; that kind of makes sense. I still don't like it, though, if that is the only process to optimize for.
For me, in your fictional world, humans are to AI what pets are to humans in our world. I understand that it could come about, but I would not call it a "Utopia".
this was assuming a point in the future when we don’t have to worry about existential risk
This is kind of what I meant before. Of course you can assume that, but it is such a powerful assumption that you can use it to derive nearly anything at all (just as dividing by zero in math lets you derive anything). Of course optimizing for survival is not important if you cannot die by definition.
For me, in your fictional world, humans are to AI what pets are to humans in our world.
If I understand your meaning correctly, I think you're anthropomorphizing AI too much. In the scenario where AI is well aligned with our values (other scenarios probably not having much of a future to speak of), its role might be something without a good parallel in society today; maybe an active, benevolent deity without innate desires.
Of course you can assume that, but it is such a powerful assumption that you can use it to derive nearly anything at all (just as dividing by zero in math lets you derive anything). Of course optimizing for survival is not important if you cannot die by definition.
I think it's possible we would still die at the end of the universe. But even without AI, I think there would be a future point where we can be reasonably certain of our control over our environment, to the extent that, barring probably unprovable problems like the simulation hypothesis, we can rest easy until then.