No. It absolutely is not. It is a machine. (...) (From your other response here:) The superintelligent AI will, in my estimation, be the result of some kind of optimization process which has a very particular goal. Once that goal is locked in, changing it will be nigh impossible.
Ah I see, you simply don’t consider it likely or plausible that the superintelligent AI will be anything other than some machine learning model on steroids?
So I guess that arguably means this kind of “superintelligence” would actually still be less impressive than a human who can philosophize about their own goals etc., because it in fact wouldn’t do that?
I wouldn’t want that to run amok either, sure.
What I am interested in is the creation of a “proper” superintelligent mind that isn’t so restricted, not merely a powerful machine.
Sorry, that’s not what I meant to communicate here, let me try that again:
There is actual pleasure/suffering; it is not just some hypothetical idea, right?
Then that means there is something objective, some subset of reality that actually is this pleasure/suffering, yes?
This in turn means that it should in fact be possible to understand the “mechanics” of pleasure/suffering “objectively”.
So one mind should theoretically be able to comprehend the “subjective” state of another without being that other mind, although in reality the information available about the other subject’s internal state will of course be limited.
Or let me put it this way: What we call “subjective” is just a special kind of subset of “objective” reality.
If it were not so, then how would the subjects share a reality in which they interact under non-subjective rules? Even if one could come up with an answer to that question, would such a theory not have to be more complex than one where the shared reality simply has one objective rule set?
Now the implication of pleasure/suffering (and value systems) being something that can be “objectively” understood is that one can compare a value system not against one’s own value system, but against an understanding of what value systems are.
Sure, you can tell me that this again would just be done because of what the agent’s value system directly or indirectly tells it to do; that’s fine by me.
But the point here is that the objective existence of pleasure/suffering means an objective definition of good and bad is very much possible.
And since it must be objectively possible to define good and bad, one can reject some value system on that basis. An agent therefore need not be limited to some arbitrary value system.
Yes, I agree with that of course. But whether some complex subjective preferences are objectively good or bad is not the same question as whether intrinsic pleasure and suffering objectively exist. The triggers for pleasure and suffering are not necessarily pleasure and suffering themselves.
In case someone now wishes to object with 1. “But some people like to suffer!” or 2. “But people accept some suffering for future pleasure (or whatever)!”:
1. If they truly “like to suffer”, then do they actually suffer?
2. If they accept some suffering in trade for pleasure, does that make the state of suffering intrinsically good? Could one not “objectively” say that it would be better if no suffering were “required” in the first place, compared to this scenario?