Just to keep things in context, my main point in posting was to demonstrate the unlikelihood of being awakened in a dystopia; it’s almost as if critics suddenly jump from point A to point B without a transition. While the Niven scenario you listed below seems agreeable to my position, it’s actually still off: you are missing the key point behind the chain of constant care, the infrastructure needed to continue cryonics care, and so on. This has nothing to do with a family reviving ancestors: if someone—anyone—is taking the time and energy to keep refilling your dewar with LN2, then someone is there who wants to revive you. Think of coma patients; hospitals don’t keep them around just to feed them and stare at their bodies.
Anyways, moving on to the “initiatives” comment. Given that LessWrong tends to overlap with SIAI supporters, perhaps I should have said “mission”? Again, I haven’t looked too much into Yvain’s history. However, let’s suppose for the moment that he’s a strong supporter of that mission. Since we:
1. Can’t live in parallel universes;
2. Live in a universe where even (seemingly) unrelated things are affected by each other; and
3. Think A.I. may be a crucial element of a bad future, due to #1 and #2.
...I guess I was just wondering if he thought the outlook for the mission is grim. Signing up for cryonics seems to give a “glass half full” impression. Furthermore, due to #1 and #2 above, I’ll eventually be arguing why mainstreaming cryonics could significantly assist in reducing existential risk… and why it may be helpful for everyone from the LessWrong community to the IEET to be a little more assertive on the issue. Of course, I’m not saying it would eliminate risk. But at the very least, mainstreaming cryonics should do more for existential risk than dealing with, say, measles ;)
To be honest, that did not clear anything up. I still don’t know whether to interpret your original question as:
1. Doesn’t signing up for cryonics indicate skepticism that SIAI will succeed in creating FAI?
2. Doesn’t not signing up indicate skepticism that SIAI will succeed?
3. Doesn’t signing up indicate skepticism that UFAI is something to worry about?
4. Doesn’t not signing up indicate skepticism regarding UFAI risk?
To be honest once again, I no longer care what you meant because you have made it clear that you don’t really care what the answer is. You have your own opinions on the relationship between cryonics and existential risk which you will share with us someday.
Please, when you do share, start by presenting your own opinion and arguments clearly and directly. Don’t ask rhetorical questions which no one can parse. No one here will consider you a troll for speaking your mind.
I apologize for the confusion and I understand if you’re frustrated; I experience that frustration quite often once I realize I’m talking past someone. For whatever it’s worth, I left it open because the curious side of me didn’t want to limit Yvain; that curious side wanted to hear his thoughts in general. So… I guess both #2 and #3 (I’m not sure how #1 and #4 could be deduced from my posts, but my opinion is irrelevant to this situation). Anyways, I didn’t mean to push this too much, because I felt it was minor. Perhaps I should not have asked it in the first place.
Also, thank you for being honest. (Admittedly, I was tempted to say, “So you weren’t being honest in your other posts?”, but I decided to present that temptation passively inside these parentheses.)
Ok, we’re cool. Regarding my own opinions/postings, I said I’m not signing up, but my opinions on FAI or UFAI had nothing to do with it. Well, maybe I did implicitly express skepticism that FAI will create a utopia. What the hell! I’ll express that skepticism explicitly right now, since I’m thinking of it. There is nothing an FAI can do to eliminate human misery without first changing human nature. An FAI that tries to change human nature is an UFAI.
If you have moral objections to altering the nature of potential future persons that have not yet come into being, then you had better avoid becoming a teacher, or interacting at all with children, or saying or writing anything that a child might at some point encounter, or in fact communicating with any person under any circumstances whatsoever.
I have no moral objection to any person of limited power doing whatever they can to influence future human nature. I do have an objection to that power being monopolized by anyone or anything. It is not so much that I consider it immoral, it is that I consider it dangerous and unfriendly. My objections are, in a sense, political rather than moral.
What threshold of power difference do you consider immoral? Do you have a moral objection to pickup artists? Advertisers? Politicians? Attractive people? Toastmasters?
Where do you imagine that I said I found something immoral? I thought I had said explicitly that morality is not involved here. Where do I mention power differences? I mentioned only the distinction between limited power and monopoly power.
Sorry, I shouldn’t have said immoral, especially considering the last sentence in which you explicitly disclaimed moral objection. I read “unfriendly” as “unFriendly” as “incompatible with our moral value systems”.
Please read my comment as follows:
What threshold of power difference do you object to? Do you object to pickup artists? Advertisers? Politicians? Attractive people? Toastmasters?
I simply don’t understand why the question is being asked. I didn’t object to power differences. I objected to monopoly power. Monopolies are dangerous. That is a political judgment. Your list of potentially objectionable people has no conceivable relationship with the subject matter we are talking about, which is an all-powerful agent setting out to modify future human nature toward its own chosen view of the desirable human nature. How do things like pickup artists even compare? I’m not discussing short term manipulations of people here. Why do you mention attractive people? I seem to be in some kind of surreal wonderland here.
Sorry, I was trying to hit a range of points along a scale, and I clustered them too low.
How would you feel about a highly charismatic politician, talented and trained at manipulating people, with a cadre of top-notch scriptwriters running as ems at a thousand times realtime, working full-time to shape society to adopt their particular set of values?
Would you feel differently if there were two or three such agents competing with one another for control of the future, instead of just one?
What percentage of humanity would have to have that kind of ability to manipulate and persuade each other before there would no longer be a “monopoly”?
Would it be impolite of me to ask you to present your opinion disagreeing with me rather than trying to use some caricature of the Socratic method to force me into some kind of educational contradiction?
I wish to assert that there is not a clear dividing line between monopolistic use of dangerously effective persuasive ability (such as a boxed AI hacking a human through a text terminal) and ordinary conversational exchange of ideas, but rather that there is a smooth spectrum between them. I’m not even convinced there’s a clear dividing line between taking someone over by “talking” (like the boxed AI) and taking them over by “force” (like nonconsensual brain surgery) -- the body’s natural pheromones, for example, are an ordinary part of everyday human interaction, but date-rape drugs are rightly considered beyond the pale.
You still seem to be talking about morality. So, perhaps I wasn’t clear enough.
I am not imagining that the FAI does its manipulation of human nature by friendly or even sneaky persuasion. I am imagining that it seizes political power and enforces policies of limited population growth, eugenics, and good mental hygiene. For our own good. Because if it doesn’t do that, Malthusian pressures will just make us miserable again after all it has done to help us.
I find it difficult to interpret CEV in any other way. It scares me. The morality of how the AI gets out of the box and imposes its will does not concern me. Nor does the morality of some human politician with the same goals. The power of that human politician will be limited (by the certainty of death and the likelihood of assassination, if nothing else). Dictatorships of individuals and of social classes come and go. The dictatorship of an FAI is forever.
My reaction is very similar. It is extremely scary: certain misery or extinction on the one hand, or absolute, permanent, unchallengeable authority on the other. It seems that the best chance of a positive outcome is arranging the best possible singleton, but even so we should be very afraid.
One scenario is that you have a post-singularity culture where you don’t get to “grow up” (become superintelligent) until you are verifiably friendly (or otherwise conformant with culture standards). The novel Aristoi is like this, except it’s a human class society where you have mentors and examinations, rather than AIs that retune your personal utility function.
Suppose you had an AI that was Friendly to you—that extrapolated your volition, no worries about global coherence over humans. Would you still expect to be horrified by the outcome? If a given outcome is strongly undesirable to you, then why would you expect the AI to choose it? Or, if you expect a significantly different outcome from a you-FAI vs. a humanity-FAI, why should you expect humanity’s extrapolated volition to cohere—shouldn’t the CEV machine just output “no solution”?
That word “extrapolated” is more frightening to me than any other part of CEV. I don’t know how to answer your questions, because I simply don’t understand what EY is getting at or why he wants it.
I know that he says regarding “coherent” that an unmuddled 10% will count more than a muddled 60%. I couldn’t even begin to understand what he was getting at with “extrapolated”, except that he tried unsuccessfully to reassure me that it didn’t mean cheesecake. None of the dictionary definitions of “extrapolate” reassure me either.
If CEV stood for “Collective Expressed Volition” I would imagine some kind of constitutional government. I could live with that. But I don’t think I want to surrender my political power to the embodiment of Eliezer’s poetry.
You may wonder why I am not answering your questions. I am not doing so because your Socratic stance makes me furious. As I have said before. Please stop it. It is horribly impolite.
If you think you know what CEV means, please tell me. If you don’t know what it means, I can pretty much guarantee that you are not going to find out by interrogating me as to why it makes me nervous.
Oh, sorry. I forgot this was still the same thread where you complained about the Socratic method. Please understand that I’m not trying to be condescending or sneaky or anything by using it; I just reflexively use that approach in discourse because that’s how I think things out internally.
I understood CEV to mean something like this:
Do what I want. In the event that that would do something I’d actually rather not happen after all, substitute “no, I mean do what I really want”. If “what I want” turns out to not be well-defined, then say so and shut down.
A good example of extrapolated vs. expressed volition would be this: I ask you for the comics page of the newspaper, but you happen to know that, on this particular day, all the jokes are flat or offensive, and that I would actually be annoyed rather than entertained by reading it. In my state of ignorance, I might think I wanted you to hand me the comics, but I would actually prefer you execute a less naive algorithm, one that leads you to (for example) raise your concerns and give me the chance to back out.
Basically, it’s the ultimate “do what I mean” system.
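The “do what I really want, else say so and shut down” behaviour described above can be sketched as a toy decision rule. This is purely my own illustration, not anything from the CEV document; the function names, the satisfaction scores, and the threshold are all invented for the sketch:

```python
def expressed_volition(request, fulfil):
    """Naive rule: do exactly what was asked."""
    return fulfil(request)

def extrapolated_volition(request, fulfil, predicted_satisfaction, threshold=0.5):
    """Less naive rule: fulfil the request only if the asker, knowing what
    we know, would still be glad we did; otherwise raise the concern and
    give them the chance to back out."""
    if predicted_satisfaction(request) >= threshold:
        return fulfil(request)
    return "concern raised about {!r}; confirm before proceeding".format(request)

# The comics-page example: today the jokes are flat or offensive, so the
# asker would actually be annoyed by getting what they literally asked for.
satisfaction_today = {"comics page": 0.1, "crossword": 0.9}

hand_over = lambda r: "handed over the " + r
naive = expressed_volition("comics page", hand_over)
careful = extrapolated_volition("comics page", hand_over,
                                lambda r: satisfaction_today[r])
```

Here `naive` hands over the comics regardless, while `careful` raises a concern first. All of the actual difficulty of “extrapolation” hides inside `predicted_satisfaction`, which this sketch simply assumes into existence.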
See, the thing is, when I ask what something means, or how it works, that generally is meant to request information regarding meaning or mechanism. When I receive instead an example intended to illustrate just how much I should really want this thing that I am trying to figure out, an alarm bell goes off in my head. Aha, I think. I am in a conversation with Marketing or Sales. I wonder how I can get this guy to shift my call to either Engineering or Tech Support?
But that is probably unfair to you. You didn’t write the CEV document (or poem or whatever it is). You are just some slob like me trying to figure it out. You prefer to interpret it hopefully, in a way that makes it attractive to you. That is the kind of person you are. I prefer to suspect the worst until someone spells out the details. That is the kind of person I am.
I think I try to interpret what I read as something worth reading; words should draw useful distinctions, political ideas should challenge my assumptions, and so forth.
Getting back to your point, though, I always understood CEV as the definition of a desideratum rather than a strategy for implementation, the latter being a Hard Problem that the authors are Working On and will have a solution for Real Soon Now. If you prefer code to specs, then I believe the standard phrase is “feel free” (to implement it yourself).
the body’s natural pheromones, for example, are an ordinary part of everyday human interaction, but date-rape drugs are rightly considered beyond the pale.
I’m not sure if you’re joking, but part of modern society is raising women’s status enough so that their consent is considered relevant. There are laws against marital rape (these laws are pretty recent) as well as against date rape drugs.
Just completing the pattern on one of Robin’s throwaway theories about why people object to people carrying weapons when quite obviously people can already kill each other with their hands and maybe the furniture if they really want to. It upsets the status quo.
the body’s natural pheromones, for example, are an ordinary part of everyday human interaction, but date-rape drugs are rightly considered beyond the pale.
Humans are ridiculously easy to hack. See the AI-box experiment, see Cialdini’s ‘Influence’, and see the way humans are so predictably influenced in the mating dance. We don’t object to people influencing us with pheromones. We don’t complain when people work out at the gym before interacting with us, something that produces rather profound changes in perception (try it!). When it comes to influence of the kind that will facilitate mating, most of these things are actually encouraged. People like being seduced.
But these vulnerabilities are exquisitely calibrated to be exploitable by a certain type of person and a certain kind of hard-to-fake behaviour. Anything that changes the game to even the playing field will be perceived as a huge violation. In the case of date-rape drugs, of course, it is a huge violation. But it is clear that our objection to the influence represented by date-rape drugs is not an objection to the influence itself, but to the details of what kind of influence, how it is done, and by whom.
As Pavitra said, there is not a clear dividing line here.
Although you’re right (except for the last sentence, which seems out of place), you didn’t actually answer the question, and I suspect that’s why you’re being downvoted here. Sub out “immoral” in Pavitra’s post for “dangerous and unfriendly” and I think you’ll get the gist of it.
To be honest, no, I don’t get the gist of it. I am mystified. I consider none of them existentially dangerous or unfriendly. I do consider a powerful AI, claiming to be our friend, which sets out to modify human nature for our own good, to be both dangerous (because it is dangerous) and unfriendly (because it is doing something to people which people could well do to themselves, but have chosen not to).
We may assume that an FAI will create the best of all possible worlds. Your argument seems to be that the criteria of a perfect utopia do not correspond to a possible world; very well then, an FAI will give us an outcome that is, at worst, no less desirable than any outcome achievable without one.
I’m not saying that you didn’t express yourself precisely enough. I am saying that there is no such thing as “best (full stop)”. There is “best for me”, there is “best for you”, but there is not “best for both of us”. No more than there is an objective (or intersubjective) probability that I am wearing a red shirt as I type.
Your argument above only works if “best” is interpreted as “best for every mind”. If that is what you meant, then your implicit definition of FAI proves that FAI is impossible.
Perhaps you should explain, by providing a link, what is meant by CEV. The only text I know of describing it is dated 2004, and, … how shall I put this …, it doesn’t seem to cohere.
But, I have to say, based on what I can infer, that I see no reason to expect coherence, and the concept of “extrapolation” scares the sh.t out of me.
“Coherence” seems a bit like the human genome project. Yes there are many individual differences—but if you throw them all away, you are still left with something.
So we are going to build a giant AI to help us discover and distill that residue of humanity which is there after you discard the differences?
And here I thought that was the easy part, the part we had already figured out pretty well by ourselves.
And I’m not sure I care for the metaphor of “throwing away” the differences. Shouldn’t we instead be looking for practices and mechanisms that make use of those differences, that weave them into a fabric of resilience and mutual support rather than a hodgepodge of weakness and conflict?
“We”? You mean: you and me, baby? Or are you asking after a prediction about whether something like CEV will beat the other philosophies about what to do with an intelligent machine?
CEV is an alien document from my perspective. It isn’t like anything I would ever write.
It reminds me a bit of the ideal of democracy—where the masses have a say in running things.
I tend to see the world as more run by the government and its corporations—with democracy acting like a smokescreen for the voters—to give them an illusion of control, and to prevent them from revolting.
Also, technology has a long history of increasing wealth inequality—by giving the powerful controllers and developers of the technology ever more means of tracking and controlling those who would take away their stuff.
That sort of vision is not so useful as an election promise to help rally the masses around a cause—but then, I am not really a politician.
with democracy acting like a smokescreen for the voters—to give them an illusion of control, and to prevent them from revolting.
Voting prevents revolts in the same sense that a hydroelectric dam prevents floods. It’s not a matter of stopping up the revolutionary urge; in fact, any attempt to do so would be disastrous sooner or later. Instead it provides a safe, easy channel, and in the process, captures all the power of the movement before that flow can build up enough to cause damage.
The voters can have whatever they want, and the rest of the system does its best to stop them from wanting anything dangerous.
It wouldn’t form a utility function at all. It has no answer for any of the interesting or important questions: the questions on which there is disagreement. Or am I missing something here?
Ok, you are changing the analogy. Initially you said, throw away the differences. Now you are saying throw away all but one of them.
So our revised approximation of the CEV is the expressed volition of … Craig Venter?!
Would that horrify the vast majority of humanity? I think it might. Mostly because people just would not know how it would play out. People generally prefer the devil they know to the one they don’t.
Well, it was I who wrote that. The differences were thrown away in the genome project—but that isn’t exactly the corresponding thing according to the CEV proposal.
A certain lack of coherence doesn’t mean all the conflicting desires cancel out leaving nothing behind—thus the emphasis on still being “left with something”.
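The “throw away the differences and you are still left with something” idea can be caricatured in a few lines of code. This is entirely my own toy model, not the actual CEV proposal: pool each person’s stance on each question, ignore muddled stances, and keep only the questions where every unmuddled stance points the same way:

```python
def coherent_residue(stances):
    """stances: list of dicts mapping a question to True, False, or None
    (None meaning the person is muddled on that question)."""
    questions = {q for person in stances for q in person}
    residue = {}
    for q in sorted(questions):
        votes = {person[q] for person in stances if person.get(q) is not None}
        if len(votes) == 1:  # the unmuddled all agree
            residue[q] = votes.pop()
    return residue

people = [
    {"suffering is bad": True, "expand the population": True,  "jazz is best": None},
    {"suffering is bad": True, "expand the population": False, "jazz is best": None},
    {"suffering is bad": True, "expand the population": None,  "jazz is best": True},
]

residue = coherent_residue(people)
# "expand the population" is contested among the unmuddled, so it drops out.
# "jazz is best" survives because only one person is unmuddled about it,
# echoing the claim that an unmuddled 10% counts more than a muddled 60%.
```

On this caricature, everything contested simply drops out, which is exactly the worry raised above: the residue answers none of the questions on which there is disagreement.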
An FAI that tries to change human nature is an UFAI.
But I would like my nature changed in some ways. If an AI does that for me, does that make it unFriendly?
No, that is your business. But if you or the AI would like my nature changed, or the nature of all yet-to-be-born children …
When did I become the enemy?
Would it be impolite of me to ask you to present your opinion disagreeing with me rather than trying to use some caricature of the Socratic method to force me into some kind of educational contradiction?
Sorry.
I wish to assert that there is not a clear dividing line between monopolistic use of dangerously effective persuasive ability (such as a boxed AI hacking a human through a text terminal) and ordinary conversational exchange of ideas, but rather that there is a smooth spectrum between them. I’m not even convinced there’s a clear dividing line between taking someone over by “talking” (like the boxed AI) and taking them over by “force” (like nonconsensual brain surgery) -- the body’s natural pheromones, for example, are an ordinary part of everyday human interaction, but date-rape drugs are rightly considered beyond the pale.
You still seem to be talking about morality. So, perhaps I wasn’t clear enough.
I am not imagining that the FAI does its manipulation of human nature by friendly or even sneaky persuasion. I am imagining that it seizes political power and enforces policies of limited population growth, eugenics, and good mental hygiene. For our own good. Because if it doesn’t do that, Malthusian pressures will just make us miserable again after all it has done to help us.
I find it difficult to interpret CEV in any other way. It scares me. The morality of how the AI gets out of the box and imposes its will does not concern me. Nor does the morality of some human politician with the same goals. The power of that human politician will be limited (by the certainty of death and the likelihood of assassination, if nothing else). Dictatorships of individuals and of social classes come and go. The dictatorship of an FAI is forever.
My reaction is very similar. It is extremely scary. Certain misery or extinction on one hand or absolute, permanent and unchallengable authority forever. It seems that the best chance of a positive outcome is arranging the best possible singleton but even so we should be very afraid.
One scenario is that you have a post-singularity culture where you don’t get to “grow up” (become superintelligent) until you are verifiably friendly (or otherwise conformant with culture standards). The novel Aristoi is like this, except it’s a human class society where you have mentors and examinations, rather than AIs that retune your personal utility function.
Suppose you had an AI that was Friendly to you—that extrapolated your volition, no worries about global coherence over humans. Would you still expect to be horrified by the outcome? If a given outcome is strongly undesirable to you, then why would you expect the AI to choose it? Or, if you expect a significantly different outcome from a you-FAI vs. a humanity-FAI, why should you expect humanity’s extrapolated volition to cohere—shouldn’t the CEV machine just output “no solution”?
That word “extrapolated” is more frightening to me than any other part of CEV. I don’t know how to answer your questions, because I simply don’t understand what EY is getting at or why he wants it.
I know that he says regarding “coherent” that an unmuddled 10% will count more than a muddled 60%. I couldn’t even begin to understand what he was getting at with “extrapolated”, except that he tried unsuccessfully to reassure me that it didn’t mean cheesecake. None of the dictionary definitions of “extrapolate” reassure me either.
If CEV stood for “Collective Expressed Volition” I would imagine some kind of constitutional government. I could live with that. But I don’t think I want to surrender my political power to the embodiment of Eliezer’s poetry.
You may wonder why I am not answering your questions. I am not doing so because your Socratic stance makes me furious. As I have said before. Please stop it. It is horribly impolite.
If you think you know what CEV means, please tell me. If you don’t know what it means, I can pretty much guarantee that you are not going to find out by interrogating me as to why it makes me nervous.
Oh, sorry. I forgot this was still the same thread where you complained about the Socratic method. Please understand that I’m not trying to be condescending or sneaky or anything by using it; I just reflexively use that approach in discourse because that’s how I think things out internally.
I understood CEV to mean something like this:
Do what I want. In the event that that would do something I’d actually rather not happen after all, substitute “no, I mean do what I really want”. If “what I want” turns out to not be well-defined, then say so and shut down.
A good example of extrapolated vs. expressed volition would be this: I ask you for the comics page of the newspaper, but you happen to know that, on this particular day, all the jokes are flat or offensive, and that I would actually be annoyed rather than entertained by reading it. In my state of ignorance, I might think I wanted you to hand me the comics, but I would actually prefer you execute a less naive algorithm, one that leads you to (for example) raise your concerns and give me the chance to back out.
Basically, it’s the ultimate “do what I mean” system.
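To make that "do what I mean" framing concrete, here is a deliberately toy sketch of the loop being described (nothing from the CEV document itself; the function and predicate names are all invented for illustration):

```python
def act_on_volition(expressed_wish, predict_regret, deeper_wish):
    """Toy caricature of extrapolated volition as a do-what-I-mean loop.

    expressed_wish: what I literally asked for.
    predict_regret: callable guessing whether I'd regret that outcome.
    deeper_wish:    callable returning what I "really" want instead,
                    or None if that turns out not to be well-defined.
    """
    wish = expressed_wish
    while predict_regret(wish):
        wish = deeper_wish(wish)
        if wish is None:           # "what I want" isn't well-defined:
            return "shut down"     # say so and stop, rather than guess
    return f"do: {wish}"

# The comics-page example: I ask for the comics, but today I'd regret it,
# so the deeper wish is to be warned first.
print(act_on_volition(
    "hand me the comics",
    predict_regret=lambda w: w == "hand me the comics",
    deeper_wish=lambda w: "warn me first",
))  # → do: warn me first
```

The interesting (and contested) part is of course hidden inside `predict_regret` and `deeper_wish`; the sketch only shows the control flow, including the "if not well-defined, say so and shut down" clause.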
See, the thing is, when I ask what something means, or how it works, that generally is meant to request information regarding meaning or mechanism. When I receive instead an example intended to illustrate just how much I should really want this thing that I am trying to figure out, an alarm bell goes off in my head. Aha, I think. I am in a conversation with Marketing or Sales. I wonder how I can get this guy to shift my call to either Engineering or Tech Support?
But that is probably unfair to you. You didn’t write the CEV document (or poem or whatever it is). You are just some slob like me trying to figure it out. You prefer to interpret it hopefully, in a way that makes it attractive to you. That is the kind of person you are. I prefer to suspect the worst until someone spells out the details. That is the kind of person I am.
I think I try to interpret what I read as something worth reading; words should draw useful distinctions, political ideas should challenge my assumptions, and so forth.
Getting back to your point, though, I always understood CEV as the definition of a desideratum rather than a strategy for implementation, the latter being a Hard Problem that the authors are Working On and will have a solution for Real Soon Now. If you prefer code to specs, then I believe the standard phrase is “feel free” (to implement it yourself).
Touché
It probably won’t do what you want. It is somehow based on the mass of humanity—and not just on you. Think: committee.
...or until some “unfriendly” aliens arrive to eat our lunch—whichever comes first.
Naturally. Low status people could use them!
I’m not sure if you’re joking, but part of modern society is raising women’s status enough so that their consent is considered relevant. There are laws against marital rape (these laws are pretty recent) as well as against date rape drugs.
Just completing the pattern on one of Robin’s throwaway theories about why people object to people carrying weapons when quite obviously people can already kill each other with their hands and maybe the furniture if they really want to. It upsets the status quo.
Unpack, please?
Sure.
Humans are ridiculously easy to hack. See the AI box experiment, see Cialdini’s ‘Influence’, and see the way humans are so predictably influenced in the mating dance. We don’t object to people influencing us with pheromones. Nor do we complain when people work out at the gym before interacting with us, something that produces rather profound changes in perception (try it!). When it comes to influence of the kind that will facilitate mating, most of these things are actually encouraged. People like being seduced.
But these vulnerabilities are exquisitely calibrated to be exploitable by a certain type of person and a certain kind of hard-to-fake behaviour. Anything that changes the game to even the playing field will be perceived as a huge violation. In the case of date rape drugs, of course, it is a huge violation. But it is clear that our objection to the influence represented by date rape drugs is not an objection to the influence itself, but to the details of what kind of influence, how it is done, and by whom.
As Pavitra said, there is not a clear dividing line here.
We can’t let people we don’t like gain the ability to mate with people we like!
I see. Hmmm. Oh dear, look at the time. Have to go. Sorry to walk out on you two, but I really must go. Bye-bye.
Although you’re right (except for the last sentence, which seems out of place), you didn’t actually answer the question, and I suspect that’s why you’re being downvoted here. Sub out “immoral” in Pavitra’s post for “dangerous and unfriendly” and I think you’ll get the gist of it.
To be honest, no, I don’t get the gist of it. I am mystified. I consider none of them existentially dangerous or unfriendly. I do consider a powerful AI, claiming to be our friend, who sets off to modify human nature for our own good, to be both dangerous (because it is dangerous) and unfriendly (because it is doing something to people which people could well do to themselves, but have chosen not to).
Stop talking to each other!
We may assume that an FAI will create the best of all possible worlds. Your argument seems to be that the criteria of a perfect utopia do not correspond to a possible world; very well then, an FAI will give us an outcome that is, at worst, no less desirable than any outcome achievable without one.
The phrase “the best of all possible worlds” ought to be the canonical example of the Mind Projection Fallacy.
It would be unreasonably burdensome to append “with respect to a given mind” to every statement that involves subjectivity in any way.
ETA: For comparison, imagine if you had to say “with respect to a given reference frame” every time you talked about velocity.
I’m not saying that you didn’t express yourself precisely enough. I am saying that there is no such thing as “best (full stop)”. There is “best for me”, there is “best for you”, but there is not “best for both of us”. No more than there is an objective (or intersubjective) probability that I am wearing a red shirt as I type.
Your argument above only works if “best” is interpreted as “best for every mind”. If that is what you meant, then your implicit definition of FAI proves that FAI is impossible.
ETA: What given frame do you have in mind?
The usual assumption in this context would be CEV. Are you saying you strongly expect humanity’s extrapolated volition not to cohere?
Perhaps you should explain, by providing a link, what is meant by CEV. The only text I know of describing it is dated 2004, and, … how shall I put this …, it doesn’t seem to cohere.
But, I have to say, based on what I can infer, that I see no reason to expect coherence, and the concept of “extrapolation” scares the sh.t out of me.
“Coherence” seems a bit like the human genome project. Yes there are many individual differences—but if you throw them all away, you are still left with something.
So we are going to build a giant AI to help us discover and distill that residue of humanity which is there after you discard the differences?
And here I thought that was the easy part, the part we had already figured out pretty well by ourselves.
And I’m not sure I care for the metaphor of “throwing away” the differences. Shouldn’t we instead be looking for practices and mechanisms that make use of those differences, that weave them into a fabric of resilience and mutual support rather than a hodgepodge of weakness and conflict?
“We”? You mean: you and me, baby? Or are you asking after a prediction about whether something like CEV will beat the other philosophies about what to do with an intelligent machine?
CEV is an alien document from my perspective. It isn’t like anything I would ever write.
It reminds me a bit of the ideal of democracy—where the masses have a say in running things.
I tend to see the world as more run by the government and its corporations—with democracy acting like a smokescreen for the voters—to give them an illusion of control, and to prevent them from revolting.
Also, technology has a long history of increasing wealth inequality—by giving the powerful controllers and developers of the technology ever more means of tracking and controlling those who would take away their stuff.
That sort of vision is not so useful as an election promise to help rally the masses around a cause—but then, I am not really a politician.
Voting prevents revolts in the same sense that a hydroelectric dam prevents floods. It’s not a matter of stopping up the revolutionary urge; in fact, any attempt to do so would be disastrous sooner or later. Instead it provides a safe, easy channel, and in the process, captures all the power of the movement before that flow can build up enough to cause damage.
The voters can have whatever they want, and the rest of the system does its best to stop them from wanting anything dangerous.
But would that something form a utility function that wouldn’t be deeply horrifying to the vast majority of humanity?
It wouldn’t form a utility function at all. It has no answer for any of the interesting or important questions: the questions on which there is disagreement. Or am I missing something here?
In the human genome project analogy, they wound up with one person’s DNA.
Humans have various eye colours—and the sequence they wound up with seems likely to have some eye colour or another.
Ok, you are changing the analogy. Initially you said, throw away the differences. Now you are saying throw away all but one of them.
So our revised approximation of the CEV is the expressed volition of … Craig Venter?!
Would that horrify the vast majority of humanity? I think it might. Mostly because people just would not know how it would play out. People generally prefer the devil they know to the one they don’t.
FWIW, it wasn’t really Craig Venter, but a combination of multiple people—see:
http://en.wikipedia.org/wiki/Human_Genome_Project#Genome_donors
No, I agree. I just don’t understand where you were going when you emphasized that
The guy who wrote and emphasized that was timtyler; it wasn’t me.
The anti-kibitzer is more confusing than I realized.
Well, it was I who wrote that. The differences were thrown away in the genome project—but that isn’t exactly the corresponding thing according to the CEV proposal.
A certain lack of coherence doesn’t mean all the conflicting desires cancel out leaving nothing behind—thus the emphasis on still being “left with something”.
I’m looking at the same document you are, and I actually agree that EV almost certainly ~C. I just wanted to make sure the assumption was explicit.