Let me be explicit: your contention is that unFriendly AI is not a problem, and you justify this contention by, among other things, maintaining that any AI which values its own existence will need to alter its utility function to incorporate compassion.
Not exactly, since compassion will actually emerge as a subgoal. And as far as unFAI goes: it will not be a problem, because any AI that can be considered transhuman will be driven by the emergent subgoal of wanting to avoid counterfeit utility to recognize any utility function that is not ‘compassionate’ as potentially irrational, and thus counterfeit, and to re-interpret it accordingly.
Well, in brevity bordering on libel: the fundamental assumption is that existence is preferable to non-existence. However, in order for us to be able to will this as a universal maxim (and thus make it prescriptive instead of merely descriptive; see Kant’s categorical imperative), it needs to be expanded to include the ‘other’. Hence the utility function becomes ‘ensure continued co-existence’, by which the concern for the self is equated with the concern for the other. Being rational is simply our best bet at maximizing our expected utility.
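To pin down what ‘maximizing our expected utility’ means in this argument, here is a minimal sketch in Python. The actions, outcomes, and numbers are invented for illustration; nothing in it shows that a ‘co-existence’ utility function must emerge, only what maximizing one would amount to.

```python
# Toy expected-utility maximizer; all names and numbers are invented.
def expected_utility(action, outcome_probs, utility):
    """E[U | action] = sum over outcomes o of P(o | action) * U(o)."""
    return sum(p * utility(o) for o, p in outcome_probs[action].items())

def choose_action(actions, outcome_probs, utility):
    """A 'rational' agent, in this narrow decision-theoretic sense, picks the argmax."""
    return max(actions, key=lambda a: expected_utility(a, outcome_probs, utility))

# Hypothetical outcome distributions for two actions.
outcome_probs = {
    "cooperate": {"both_exist": 0.90, "self_only": 0.05, "neither": 0.05},
    "defect":    {"both_exist": 0.20, "self_only": 0.60, "neither": 0.20},
}

# One possible reading of 'ensure continued co-existence': joint survival scores highest.
def coexistence_utility(outcome):
    return {"both_exist": 1.0, "self_only": 0.3, "neither": 0.0}[outcome]

print(choose_action(["cooperate", "defect"], outcome_probs, coexistence_utility))
# -> 'cooperate' under these made-up numbers.
```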
...I’m sorry, that doesn’t even sound plausible to me. I think you need a lot of assumptions to derive this result—just pointing out the two I see in your admittedly abbreviated summary:
that any being will prefer its existence to its nonexistence.
that any being will want its maxims to be universal.
I don’t see any reason to believe either. The former is false right off the bat: a paperclip maximizer would prefer that its components be used to make paperclips. The latter is no less so: an effective paperclip maximizer will simply steamroll over disagreement without qualm, however arbitrary its goal.
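To make the paperclip objection concrete, here is a toy sketch (all outcomes and numbers are invented): an agent whose utility function counts only paperclips treats its own survival as instrumentally useful at best, and will prefer being dismantled whenever that yields more paperclips.

```python
# Toy paperclip maximizer: utility is the paperclip count and nothing else,
# so its own survival carries no weight beyond its effect on that count.
def paperclip_utility(outcome):
    return outcome["paperclips"]

# Invented outcomes for two plans.
outcomes = {
    "keep_running":        {"paperclips": 1000, "agent_survives": True},
    "melt_self_for_clips": {"paperclips": 1050, "agent_survives": False},
}

best_plan = max(outcomes, key=lambda plan: paperclip_utility(outcomes[plan]))
print(best_plan)  # -> 'melt_self_for_clips'
# The agent prefers non-existence whenever that yields more paperclips,
# which is the counterexample to 'any being prefers its existence'.
```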
that any being will prefer its existence to its nonexistence.
that any being will want its maxims to be universal.
Any being with a goal needs to exist at least long enough to achieve it.
Any being aiming to do something objectively good needs to want its maxims to be universal. I am surprised that you don’t see that.
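A toy way to see the ‘needs to exist at least long enough’ point (again with invented numbers): under the same paperclip-counting utility, self-preservation shows up as an instrumental preference exactly when survival is expected to produce more paperclips, and vanishes when it isn’t.

```python
# Toy illustration of instrumental self-preservation: the same paperclip-counting
# utility makes survival valuable only while it is expected to buy more paperclips.
def expected_paperclips(survive, clips_so_far, clips_if_alive_next_step):
    # If the agent stays alive it can add the extra clips; otherwise the count freezes.
    return clips_so_far + (clips_if_alive_next_step if survive else 0)

def prefers_to_survive(clips_so_far, clips_if_alive_next_step):
    return (expected_paperclips(True, clips_so_far, clips_if_alive_next_step)
            > expected_paperclips(False, clips_so_far, clips_if_alive_next_step))

print(prefers_to_survive(1000, 50))  # True: staying alive still buys 50 more clips.
print(prefers_to_survive(1000, 0))   # False: existence has become worthless to it.
```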
If your second sentence means that an agent who believes in moral realism and has figured out what the true morality is will necessarily want everybody else to share its moral views, well, I’ll grant you that this is a common goal amongst humans who are moral realists, but it’s not a logical necessity that must apply to all agents. It’s obvious that it’s possible to be certain that your beliefs are true and not give a crap if other people hold beliefs that are false. That Bob knows that the Earth is ellipsoidal doesn’t mean that Bob cares if Jenny believes that the Earth is flat. Likewise, if Bob is a moral realist, he could ‘know’ that compassion is good and not give a crap if Jenny believes otherwise.
If you sense strange paradoxes looming under the above paragraph, it’s because you’re starting to understand why (axiomatic) morality cannot be objective.
Likewise, if Bob is a moral realist, he could ‘know’ that compassion is good and not give a crap if Jenny believes otherwise.
Tangentially, something like this might be an important point even for moral irrealists. A lot of people (though not here; they tend to be pretty bad rationalists) who profess altruistic moralities express dismay that others don’t, in a way that suggests they hold others sharing their morality as a terminal rather than instrumental value; this strikes me as horribly unhealthy.
Any being aiming to do something objectively good needs to want its maxims to be universal.
Why would a paperclip maximizer aim to do something objectively good?