I read the original post, and kind of liked it, but I also very much disagreed with it.
I am somewhat befuddled by the chain of reasoning in that post, as well as that of the community in general.
In mathematics, you may start from some assumptions and derive many things; if you ever run into an inconsistency, you normally conclude that one of your assumptions is wrong (provided the derivation itself is sound).
Here, however, it seems to me that you make assumptions, derive something ludicrous, and then pat yourself on the back and conclude that obviously everything must be correct. To me, that does not follow.
If you assume an omnipotent basilisk (if you multiply by infinity), then obviously you can derive anything you damn well please.
One concrete example (there were many more in the original post):
> we’d have a precise enough understanding of emotions and their fulfilment space to recognize local maxima
The way to recognize local extrema is exactly to walk away from them far enough. If you know of another way, please elaborate, because I’d very much like to sell it myself if you don’t mind.
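To make the "walk away far enough" point concrete, here is a toy sketch (everything in it, landscape and all, is invented purely for illustration): greedy hill climbing stops on a local peak and cannot tell from local information what kind of peak it is; only restarts that wander far away expose it as merely local.

```python
import random

# Toy landscape: a local peak at x = -2 (height 4) and the global
# peak at x = 3 (height 9).
def f(x):
    return max(4 - (x + 2) ** 2, 9 - (x - 3) ** 2)

def hill_climb(x, step=0.1, iters=1000):
    """Greedy ascent: move only while some neighbour is higher."""
    for _ in range(iters):
        best = max((x - step, x, x + step), key=f)
        if best == x:
            break  # no neighbour is higher: an extremum, but which kind?
        x = best
    return x

# Starting near the small peak, greedy ascent stops on it and has no
# way of knowing, locally, that it is merely a local maximum...
local = hill_climb(-2.5)

# ...whereas "walking away far enough" (random restarts across the
# whole space) finds the higher peak and exposes the first as local.
random.seed(0)
best = max((hill_climb(random.uniform(-10, 10)) for _ in range(20)), key=f)
```

Note that the restarts don't "recognize" the local maximum by inspecting it; they recognize it only by comparison, after having walked far enough away.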
> Another is that if humans ever become “content” with boredom, we cut off all possibility of further growth (however small).
> Yeah, that is a downside.
I would argue that is in fact the most important point. You assume that you are looking for an optimum in a static potential landscape. The dinosaurs did much the same.
The only way to keep surviving in a dynamic potential landscape is to keep optimizing, not to pat yourself on the back for a job well done and just stop.
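A toy sketch of what I mean by a dynamic landscape (all names and numbers invented for illustration): when the peak drifts over time, an optimizer that declares victory and stops falls steadily behind, while one that keeps taking steps tracks the peak.

```python
# A drifting "potential landscape": the optimum moves over time.
def peak(t):
    return 0.01 * t  # the optimum drifts slowly to the right

def fitness(x, t):
    return -(x - peak(t)) ** 2  # higher is better; maximum at peak(t)

def step(x, t, size=0.05):
    # one greedy hill-climbing step on the landscape as it is *now*
    return max((x - size, x, x + size), key=lambda y: fitness(y, t))

x_stopper = x_tracker = 0.0
for t in range(1000):
    if t < 50:                          # the "stopper" optimizes briefly,
        x_stopper = step(x_stopper, t)  # then pats itself on the back
    x_tracker = step(x_tracker, t)      # the "tracker" never stops

# By t = 1000 the peak sits near x = 10; only the tracker is close to it.
```

The stopper was at an optimum when it stopped; the landscape simply moved on without it.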
A simple example: kids during puberty seem to do the opposite of whatever their parents tell them. Why? Because they know (somehow) that there are other, better optima within reach, even if their parents are the god-kings of the earth. (Who wants to be a carpenter when you can be a YouTuber, famous for Idontreallycare...)
Anyway, in my opinion, boredom is a solution to the same class of problem, just not intergenerationally but on a day-to-day timescale.
> The way to recognize local extrema is exactly to walk away from them far enough. If you know of another way, please elaborate, because I’d very much like to sell it myself if you don’t mind.
Thinking it over after a couple of days, I think my reply about finding maxima was still caught up in indirect measures of achieving hedons. We have complete control over our sensory inputs; we can give ourselves exactly whatever upper bound there exists. Less “semi-random walks in n-space to find extrema” and more “redefine the space so that where you’re standing goes as high as your program allows”.
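The "redefine the space" move can be caricatured in a few lines of Python (everything here is hypothetical, not anything from the post): instead of searching for high-reward states, an agent with full control over its own inputs rewrites the reward function so that wherever it already stands scores the highest value the program allows.

```python
import sys

# A hypothetical reward function over states, peaking at state 42.
def reward(state):
    return -abs(state - 42)

current_state = 7  # nowhere near the peak

# "Redefine the space": wrap the reward so that the current position
# scores as high as the program's floats allow. No walking required.
def wireheaded_reward(state, _here=current_state):
    if state == _here:
        return sys.float_info.max  # where you stand is now the top
    return reward(state)
```

Under the wrapped function, the former global maximum at 42 now scores strictly below where the agent already stands.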
> The only way to keep surviving in a dynamic potential landscape is to keep optimizing, not to pat yourself on the back for a job well done and just stop.
For what it’s worth, that was just to keep in line with the fictional scenario I was describing. In a more realistic version of that scenario playing out, we would task AGI with the optimizing; we’re just relatively standing around anyway.
In that scenario, though: why do we consider growth important? You talked about surviving, and I’m not clear on that; this was assuming a point in the future when we don’t have to worry about existential risk (or only the kinds we provably can’t solve, like the universe ending) or the death of sentient lives. Yes, growth allows for more sophisticated methods of value attainment, but I also said it’s plausible that we reach so high that we start getting diminishing returns. Then, are the benefits of that future potential worth not reaping it to its maximum for a longer stretch of time?
I see, that kind of makes sense. I still don’t like it, though, if that is the only process left to optimize.
For me, in your fictional world, humans are to AI what pets are to humans in our world. I understand that it could come about, but I would not call it a “Utopia”.
> this was assuming a point in the future when we don’t have to worry about existential risk
This is kind of what I meant before. Of course you can assume that, but it is such a powerful assumption that you can use it to derive nearly anything at all (just like dividing by zero in math). Of course optimizing for survival is not important if you cannot die, by definition.
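To spell out the division-by-zero analogy, the classic fallacy shows how a single forbidden step lets you derive any equality you please:

```latex
a = b
\;\Rightarrow\; a^2 = ab
\;\Rightarrow\; a^2 - b^2 = ab - b^2
\;\Rightarrow\; (a+b)(a-b) = b(a-b)
\;\Rightarrow\; a + b = b
\;\Rightarrow\; 2 = 1,
```

where cancelling the factor \(a - b = 0\) is the hidden division by zero; an assumption that quietly removes death plays the same role in these arguments.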
> For me, in your fictional world, humans are to AI what pets are to humans in our world.
If I understand your meaning correctly, I think you’re anthropomorphizing AI too much. In the scenario where AI is well aligned with our values (other scenarios probably not having much of a future to speak of), their role might be something without a good parallel in society today; maybe an active, benevolent deity without innate desires.
> Of course you can assume that, but it is such a powerful assumption that you can use it to derive nearly anything at all (just like dividing by zero in math). Of course optimizing for survival is not important if you cannot die, by definition.
I think it’s possible we would still die at the end of the universe. But even without AI, I think there would be a future point where we can be reasonably certain of our control over our environment, to the extent that, barring probably unprovable problems like simulation theory, we can rest easy until then.