I have no moral objection to any person of limited power doing whatever they can to influence future human nature. I do have an objection to that power being monopolized by anyone or anything. It is not so much that I consider it immoral, it is that I consider it dangerous and unfriendly. My objections are, in a sense, political rather than moral.
What threshold of power difference do you consider immoral? Do you have a moral objection to pickup artists? Advertisers? Politicians? Attractive people? Toastmasters?
Where do you imagine that I said I found something immoral? I thought I had said explicitly that morality is not involved here. Where do I mention power differences? I mentioned only the distinction between limited power and monopoly power.
When did I become the enemy?
Sorry, I shouldn’t have said immoral, especially considering the last sentence in which you explicitly disclaimed moral objection. I read “unfriendly” as “unFriendly” as “incompatible with our moral value systems”.
Please read my comment as follows:
What threshold of power difference do you object to? Do you object to pickup artists? Advertisers? Politicians? Attractive people? Toastmasters?
I simply don’t understand why the question is being asked. I didn’t object to power differences. I objected to monopoly power. Monopolies are dangerous. That is a political judgment. Your list of potentially objectionable people has no conceivable relationship with the subject matter we are talking about, which is an all-powerful agent setting out to modify future human nature toward its own chosen view of the desirable human nature. How do things like pickup artists even compare? I’m not discussing short-term manipulations of people here. Why do you mention attractive people? I seem to be in some kind of surreal wonderland here.
Sorry, I was trying to hit a range of points along a scale, and I clustered them too low.
How would you feel about a highly charismatic politician, talented and trained at manipulating people, with a cadre of top-notch scriptwriters running as ems at a thousand times realtime, working full-time to shape society to adopt their particular set of values?
Would you feel differently if there were two or three such agents competing with one another for control of the future, instead of just one?
What percentage of humanity would have to have that kind of ability to manipulate and persuade each other before there would no longer be a “monopoly”?
Would it be impolite of me to ask you to present your opinion disagreeing with me rather than trying to use some caricature of the Socratic method to force me into some kind of educational contradiction?
Sorry.
I wish to assert that there is not a clear dividing line between monopolistic use of dangerously effective persuasive ability (such as a boxed AI hacking a human through a text terminal) and ordinary conversational exchange of ideas, but rather that there is a smooth spectrum between them. I’m not even convinced there’s a clear dividing line between taking someone over by “talking” (like the boxed AI) and taking them over by “force” (like nonconsensual brain surgery) -- the body’s natural pheromones, for example, are an ordinary part of everyday human interaction, but date-rape drugs are rightly considered beyond the pale.
You still seem to be talking about morality. So, perhaps I wasn’t clear enough.
I am not imagining that the FAI does its manipulation of human nature by friendly or even sneaky persuasion. I am imagining that it seizes political power and enforces policies of limited population growth, eugenics, and good mental hygiene. For our own good. Because if it doesn’t do that, Malthusian pressures will just make us miserable again after all it has done to help us.
I find it difficult to interpret CEV in any other way. It scares me. The morality of how the AI gets out of the box and imposes its will does not concern me. Nor does the morality of some human politician with the same goals. The power of that human politician will be limited (by the certainty of death and the likelihood of assassination, if nothing else). Dictatorships of individuals and of social classes come and go. The dictatorship of an FAI is forever.
My reaction is very similar. It is extremely scary: certain misery or extinction on one hand, or absolute, permanent, and unchallengeable authority forever on the other. It seems that the best chance of a positive outcome is arranging the best possible singleton, but even so we should be very afraid.
One scenario is that you have a post-singularity culture where you don’t get to “grow up” (become superintelligent) until you are verifiably friendly (or otherwise conformant with culture standards). The novel Aristoi is like this, except it’s a human class society where you have mentors and examinations, rather than AIs that retune your personal utility function.
Suppose you had an AI that was Friendly to you—that extrapolated your volition, no worries about global coherence over humans. Would you still expect to be horrified by the outcome? If a given outcome is strongly undesirable to you, then why would you expect the AI to choose it? Or, if you expect a significantly different outcome from a you-FAI vs. a humanity-FAI, why should you expect humanity’s extrapolated volition to cohere—shouldn’t the CEV machine just output “no solution”?
That word “extrapolated” is more frightening to me than any other part of CEV. I don’t know how to answer your questions, because I simply don’t understand what EY is getting at or why he wants it.
I know that he says regarding “coherent” that an unmuddled 10% will count more than a muddled 60%. I couldn’t even begin to understand what he was getting at with “extrapolated”, except that he tried unsuccessfully to reassure me that it didn’t mean cheesecake. None of the dictionary definitions of “extrapolate” reassure me either.
If CEV stood for “Collective Expressed Volition” I would imagine some kind of constitutional government. I could live with that. But I don’t think I want to surrender my political power to the embodiment of Eliezer’s poetry.
You may wonder why I am not answering your questions. I am not answering them because your Socratic stance makes me furious, as I have said before. Please stop it. It is horribly impolite.
If you think you know what CEV means, please tell me. If you don’t know what it means, I can pretty much guarantee that you are not going to find out by interrogating me as to why it makes me nervous.
Oh, sorry. I forgot this was still the same thread where you complained about the Socratic method. Please understand that I’m not trying to be condescending or sneaky or anything by using it; I just reflexively use that approach in discourse because that’s how I think things out internally.
I understood CEV to mean something like this:
Do what I want. In the event that that would do something I’d actually rather not happen after all, substitute “no, I mean do what I really want”. If “what I want” turns out to not be well-defined, then say so and shut down.
A good example of extrapolated vs. expressed volition would be this: I ask you for the comics page of the newspaper, but you happen to know that, on this particular day, all the jokes are flat or offensive, and that I would actually be annoyed rather than entertained by reading it. In my state of ignorance, I might think I wanted you to hand me the comics, but I would actually prefer you execute a less naive algorithm, one that leads you to (for example) raise your concerns and give me the chance to back out.
Basically, it’s the ultimate “do what I mean” system.
See, the thing is, when I ask what something means, or how it works, that generally is meant to request information regarding meaning or mechanism. When I receive instead an example intended to illustrate just how much I should really want this thing that I am trying to figure out, an alarm bell goes off in my head. Aha, I think. I am in a conversation with Marketing or Sales. I wonder how I can get this guy to shift my call to either Engineering or Tech Support?
But that is probably unfair to you. You didn’t write the CEV document (or poem or whatever it is). You are just some slob like me trying to figure it out. You prefer to interpret it hopefully, in a way that makes it attractive to you. That is the kind of person you are. I prefer to suspect the worst until someone spells out the details. That is the kind of person I am.
I think I try to interpret what I read as something worth reading; words should draw useful distinctions, political ideas should challenge my assumptions, and so forth.
Getting back to your point, though, I always understood CEV as the definition of a desideratum rather than a strategy for implementation, the latter being a Hard Problem that the authors are Working On and will have a solution for Real Soon Now. If you prefer code to specs, then I believe the standard phrase is “feel free” (to implement it yourself).
Touché.
It probably won’t do what you want. It is somehow based on the mass of humanity—and not just on you. Think: committee.
...or until some “unfriendly” aliens arrive to eat our lunch—whichever comes first.
the body’s natural pheromones, for example, are an ordinary part of everyday human interaction, but date-rape drugs are rightly considered beyond the pale.
Naturally. Low-status people could use them!
I’m not sure if you’re joking, but part of modern society is raising women’s status enough so that their consent is considered relevant. There are laws against marital rape (these laws are pretty recent) as well as against date rape drugs.
Just completing the pattern on one of Robin’s throwaway theories about why people object to others carrying weapons when, quite obviously, people can already kill each other with their hands and maybe the furniture if they really want to: it upsets the status quo.
Unpack, please?
Sure.
Humans are ridiculously easy to hack. See the AI box experiment, see Cialdini’s ‘Influence’, and see the way humans are so predictably influenced in the mating dance. We don’t object to people influencing us with pheromones. We don’t complain when people work out at the gym before interacting with us, something that produces rather profound changes in perception (try it!). When it comes to influence of the kind that will facilitate mating, most of these things are actually encouraged. People like being seduced.
But these vulnerabilities are exquisitely calibrated to be exploitable by a certain type of person and a certain kind of hard-to-fake behaviour. Anything that changes the game to even the playing field will be perceived as a huge violation. In the case of date rape drugs, of course, it is a huge violation. But it is clear that our objection to the influence represented by date rape drugs is not an objection to the influence itself, but to the details of what kind of influence it is, how it is done, and by whom.
As Pavitra said, there is not a clear dividing line here.
We can’t let people we don’t like gain the ability to mate with people we like!
I see. Hmmm. Oh dear, look at the time. Have to go. Sorry to walk out on you two, but I really must go. Bye-bye.
Although you’re right (except for the last sentence, which seems out of place), you didn’t actually answer the question, and I suspect that’s why you’re being downvoted here. Sub out “immoral” in Pavitra’s post for “dangerous and unfriendly” and I think you’ll get the gist of it.
To be honest, no, I don’t get the gist of it. I am mystified. I consider none of them existentially dangerous or unfriendly. I do consider a powerful AI, claiming to be our friend, who sets out to modify human nature for our own good, to be both dangerous (because it is dangerous) and unfriendly (because it is doing something to people which people could well do to themselves, but have chosen not to).
Stop talking to each other!