As a toy model, let’s say the friendly utility function U has a hundred valuable components (friendship, love, autonomy, and so on), each assumed to have a positive numeric value. Then, to ensure that we don’t lose any of these, U is defined as the minimum of all hundred components.
Now define V as U, except we forgot the autonomy term. This will result in a terrible world, without autonomy or independence, and there will be wailing and gnashing of teeth (or there would, except the AI won’t let us do that).
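To make the toy model concrete, here is a minimal sketch. The three component names, the fixed resource budget, and the brute-force optimiser are all illustrative assumptions standing in for the hundred components and a real optimisation process; nothing here is a real system. The point is simply that an optimiser maximising V has no reason to spend anything on autonomy:

```python
from itertools import product

# Stand-ins for the hundred valuable components; values are "resources spent".
COMPONENTS = ["friendship", "love", "autonomy"]

def U(world):
    """The friendly utility: the minimum over ALL valued components."""
    return min(world[c] for c in COMPONENTS)

def V(world):
    """The same function, except we forgot the autonomy term."""
    return min(world[c] for c in COMPONENTS if c != "autonomy")

def optimise(utility, budget=12, step=3):
    """Brute-force the best allocation of a fixed budget (a toy optimiser)."""
    best_world, best_score = None, float("-inf")
    for alloc in product(range(0, budget + 1, step), repeat=len(COMPONENTS)):
        if sum(alloc) != budget:
            continue
        world = dict(zip(COMPONENTS, alloc))
        if utility(world) > best_score:
            best_world, best_score = world, utility(world)
    return best_world

print(optimise(U))  # e.g. {'friendship': 3, 'love': 3, 'autonomy': 6}: autonomy is funded
print(optimise(V))  # e.g. {'friendship': 6, 'love': 6, 'autonomy': 0}: autonomy is ignored
```

Nothing actively forces autonomy to zero; the optimiser just spends the whole budget where V can see it, which is exactly the failure the toy model is meant to exhibit.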
That is far from a logical conclusion. Just because something isn’t explicitly being maximised doesn’t mean it is not being produced in large quantities.
For example, the modern world is not actively maximising CO2 production, but it nonetheless makes lots of CO2.
We have hundreds of instrumental values, and if one of them is not encoded as an ultimate preference it will make no difference at all, since it was never an ultimate preference in the first place. Autonomy is likely to be one of those. Humans don’t value autonomy for its own sake; rather, it is valued instrumentally, since it is one of the many things that lets humans achieve their actual goals.
The problem arises when people try to wire in instrumental values. The point of instrumental values is that they can change depending on circumstances, unless they are foolishly wired in as ultimate values, in which case you get an inflexible system that cannot adapt as competently to environmental changes.
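A sketch of that inflexibility point, under invented toy dynamics (the payoff function, the budget, and both agent designs below are assumptions for illustration only): an agent that values autonomy instrumentally reallocates its resources when autonomy stops contributing to its terminal goal, while an agent with autonomy wired in as a terminal value keeps paying for it regardless:

```python
def wellbeing(autonomy_spend, other_spend, autonomy_helps):
    # Toy payoff: autonomy contributes to wellbeing only in some environments.
    return other_spend + (2 * autonomy_spend if autonomy_helps else 0)

def flexible_agent(budget, autonomy_helps):
    """Terminal goal is wellbeing; autonomy is valued only instrumentally."""
    splits = [(a, budget - a) for a in range(budget + 1)]
    return max(splits, key=lambda s: wellbeing(s[0], s[1], autonomy_helps))

def wired_in_agent(budget, autonomy_helps):
    """Autonomy mistakenly wired in as a terminal value: always fund it."""
    return (budget // 2, budget - budget // 2)

for helps in (True, False):
    print("autonomy helps:", helps,
          "flexible:", flexible_agent(10, helps),
          "wired-in:", wired_in_agent(10, helps))
# The flexible agent moves its whole budget when the environment changes;
# the wired-in agent keeps spending half on autonomy even when it buys nothing.
```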
I know plenty of people who hold autonomy as an inherent value. Many libertarians even seem to consider it more important than, e.g., happiness. (“This government regulation might save lives and make people happier, true, but it is nevertheless morally wrong for government to regulate lives in such a manner.”)
That raises two issues.

One is how you know what their inherent values are.
The other is that many humans are pretty reprogrammable, and can be made to strongly value a lot of different things if exposed to the appropriate memes. The Catholic values the Holy Trinity, and so on. Meme-derived values can often be shown not to be particularly “intrinsic”, though, since they can frequently be “washed away” by exposure to more memes, in phenomena such as religious conversions.
A Catholic no more has an “intrinsic” preference for the Holy Trinity than a sick man has an “intrinsic” preference for sneezing. In both cases they are being manipulated by small mobile self-reproducing agents with which they have a symbiosis. Maybe if the symbiosis is permanent the preferences could usefully be regarded as intrinsic; but similarly, a permanent cough doesn’t necessarily indicate a preference for coughing.
One is how you know what their inherent values are.
We state them. (“We” in the sense that I am a libertarian who views personal autonomy as an inherent good and a goal to be optimized for.)
The other is that many humans are pretty reprogrammable
Neuroplasticity is extensive. But that raises a rather difficult semantic question: are you asking what values “humans” have, as an abstracted entity devoid of physical existence, or what values actual human beings have?
If you mean the former, well… it’s a given that such an entity has no values of any kind, since an abstracted entity lacks the capacity to be exposed to the things that permit values to exist. If you mean the latter, well, I am a physically instantiated counterexample to that claim: as a libertarian, I inherently (as in, definitionally) value liberty.
One is how you know what their inherent values are.
We state them. (“We” in the sense that I am a libertarian who views personal autonomy as an inherent good and a goal to be optimized for.)
Yes, but goals are used extensively for signalling purposes. Declared goals should not normally be taken to be actual goals, but more as what the brain’s P.R. department would have others believe your goals to be.
Looking at conversion stories, libertarianism looks as though it can be acquired—and lost—rather like most other political and religious beliefs. As such, it doesn’t look terribly “intrinsic”.
Looking at conversion stories, libertarianism looks as though it can be acquired—and lost—rather like most other political and religious beliefs. As such, it doesn’t look terribly “intrinsic”.
… is there any particular reason that you are choosing to ignore entire paragraphs (multiple) of mine that address what it is you’re actually trying to say with statements like this, while also demonstrating that, under at least half of the available valid definitions of the terms you are using, your stated conclusion is demonstrably false?
Yes, but goals are used extensively for signalling purposes. Declared goals should not normally be taken to be actual goals, but more as what the brain’s P.R. department would have others believe your goals to be.
Just two days ago I offered to help a man commit suicide, if that was what he wanted. As a complication of poorly managed diabetes and other symptoms, he is now infirm to the point of requiring assistance to move about his own home. I made that offer because I value personal autonomy and the requisite self-determination it implies. In other words, while some individuals might only be ‘mouthing the words’ of personal autonomy as an inherent good, I’m walking the walk over here. And I know for a fact that I am not the only person who does so.
So again: how does your epistemology (which looks, quite frankly, rather biased to me) account for the existence of individuals such as myself, who view personal liberty and autonomy as an inherent good and act upon that principle in our daily lives, even to the extent of risking personal harm (such as, in my case, the felony conspiracy-to-commit or whatever such charges I exposed myself to with that commitment)?
… is there any particular reason that you are choosing to ignore entire paragraphs (multiple) of mine that address what it is you’re actually trying to say with statements like this, while also demonstrating that, under at least half of the available valid definitions of the terms you are using, your stated conclusion is demonstrably false?
That’s what lawyers call a “leading question”.
I do not accept your characterisation of the situation. FWIW, I ignore most of what I encounter on the internet—so don’t take it too personally.
Yes, but goals are used extensively for signalling purposes. Declared goals should not normally be taken to be actual goals, but more as what the brain’s P.R. department would have others believe your goals to be.
Just two days ago I offered to help a man commit suicide, if that was what he wanted. As a complication of poorly managed diabetes and other symptoms, he is now infirm to the point of requiring assistance to move about his own home. I made that offer because I value personal autonomy and the requisite self-determination it implies. In other words, while some individuals might only be ‘mouthing the words’ of personal autonomy as an inherent good, I’m walking the walk over here. And I know for a fact that I am not the only person who does so.
So: I was not suggesting that people do not do good deeds. Indeed: good deeds make for good P.R.
So again: how does your epistemology (which looks, quite frankly, rather biased to me) account for the existence of individuals such as myself, who view personal liberty and autonomy as an inherent good and act upon that principle in our daily lives
So: people believe deeply in all kinds of religious and political doctrines and values. That doesn’t mean that these are best modelled as being intrinsic values. When people change their religions and political systems, it is evidence against the associated values being intrinsic.
Valuing something instrumentally is not intended as some kind of insult. I value art and music instrumentally. It doesn’t bother me that these are not intrinsic values.
This would be valid only if I were not relating an exactly accurate depiction of what was occurring. If it is leading you to a specific response, it is a response that is in accordance with what’s really happening. This makes it no more “leading” than “would you care to tell the jury why you would be on this particular piece of film, stabbing the victim twenty times with a knife, Mr. Defendant?”
I cannot help it that you dislike the necessary conclusions of the current reality; that’s a problem for you to handle.
I do not accept your characterisation of the situation. FWIW, I ignore most of what I encounter on the internet—so don’t take it too personally.
Then we’re done here. You’re rejecting reality, and I have no interest in carrying on dialogues with people who refuse to engage in honest dialogue.
Just because something isn’t explicitly being maximised doesn’t mean it is not being produced in large quantities.
But most things that aren’t being maximised won’t be produced as by-products of other stuff. Of all the molecules possible in nature, only a few are being mass-produced by the modern world.
I used the example of autonomy for highly relevant philosophical reasons; i.e., because it would allow me to get in the line about wailing and the AI forbidding it :-)
We don’t observe “most things” in the first place. We see a minuscule subset of all things: a subset which is either the target or the result of a maximisation process.
IMO, most things that look as though they are being maximised are, in fact, the products of instrumental maximisation, and so are not something that we need to include in the preferences of a machine intelligence, because we don’t really care about them in the first place. The products of instrumental maximisation often don’t look much like “by-products”. They often look more as though they are intrinsic preferences. However, they are not.
Humans don’t value autonomy for its own sake; rather, it is valued instrumentally[.]
I am not sure whether that is the case. Certainly, it has some instrumental value, but I am not at all confident that it has no (or negligible) intrinsic value.
Right; so I said that autonomy is likely to be an instrumental value. I didn’t particularly intend to convey an absolutist position.
Ah, missed that. Might be worthwhile to make the language a little clearer, but eh.