You still seem to be talking about morality. So, perhaps I wasn’t clear enough.
I am not imagining that the FAI does its manipulation of human nature by friendly or even sneaky persuasion. I am imagining that it seizes political power and enforces policies of limited population growth, eugenics, and good mental hygiene. For our own good. Because if it doesn’t do that, Malthusian pressures will just make us miserable again after all it has done to help us.
I find it difficult to interpret CEV in any other way. It scares me. The morality of how the AI gets out of the box and imposes its will does not concern me. Nor does the morality of some human politician with the same goals. The power of that human politician will be limited (by the certainty of death and the likelihood of assassination, if nothing else). Dictatorships of individuals and of social classes come and go. The dictatorship of an FAI is forever.
My reaction is very similar. It is extremely scary: certain misery or extinction on one hand, or absolute, permanent, unchallengeable authority on the other, forever. It seems that the best chance of a positive outcome is arranging the best possible singleton, but even so we should be very afraid.
One scenario is that you have a post-singularity culture where you don’t get to “grow up” (become superintelligent) until you are verifiably friendly (or otherwise conformant with culture standards). The novel Aristoi is like this, except it’s a human class society where you have mentors and examinations, rather than AIs that retune your personal utility function.
Suppose you had an AI that was Friendly to you—that extrapolated your volition, no worries about global coherence over humans. Would you still expect to be horrified by the outcome? If a given outcome is strongly undesirable to you, then why would you expect the AI to choose it? Or, if you expect a significantly different outcome from a you-FAI vs. a humanity-FAI, why should you expect humanity’s extrapolated volition to cohere—shouldn’t the CEV machine just output “no solution”?
That word “extrapolated” is more frightening to me than any other part of CEV. I don’t know how to answer your questions, because I simply don’t understand what EY is getting at or why he wants it.
I know that he says regarding “coherent” that an unmuddled 10% will count more than a muddled 60%. I couldn’t even begin to understand what he was getting at with “extrapolated”, except that he tried unsuccessfully to reassure me that it didn’t mean cheesecake. None of the dictionary definitions of “extrapolate” reassure me either.
If CEV stood for “Collective Expressed Volition” I would imagine some kind of constitutional government. I could live with that. But I don’t think I want to surrender my political power to the embodiment of Eliezer’s poetry.
You may wonder why I am not answering your questions. I am not answering them because your Socratic stance makes me furious, as I have said before. Please stop it. It is horribly impolite.
If you think you know what CEV means, please tell me. If you don’t know what it means, I can pretty much guarantee that you are not going to find out by interrogating me as to why it makes me nervous.
Oh, sorry. I forgot this was still the same thread where you complained about the Socratic method. Please understand that I’m not trying to be condescending or sneaky or anything by using it; I just reflexively use that approach in discourse because that’s how I think things out internally.
I understood CEV to mean something like this:
Do what I want. If that would do something I’d actually rather not happen after all, substitute “no, I mean do what I really want”. If “what I want” turns out not to be well-defined, then say so and shut down.
A good example of extrapolated vs. expressed volition would be this: I ask you for the comics page of the newspaper, but you happen to know that, on this particular day, all the jokes are flat or offensive, and that I would actually be annoyed rather than entertained by reading it. In my state of ignorance, I might think I wanted you to hand me the comics, but I would actually prefer you execute a less naive algorithm, one that leads you to (for example) raise your concerns and give me the chance to back out.
Basically, it’s the ultimate “do what I mean” system.
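To make the comics example concrete, here is a toy sketch of expressed vs. extrapolated volition. Everything here (the function name, the `predicted_enjoyment` score, the threshold) is invented for illustration; the actual CEV proposal specifies no such mechanism.

```python
def fulfill(request, predicted_enjoyment, threshold=0.5):
    """Follow the expressed request only if the requester is predicted
    to actually be glad it was followed; otherwise flag the conflict
    and give them a chance to back out, rather than naively complying."""
    if predicted_enjoyment >= threshold:
        return f"handing over: {request}"
    # Extrapolated volition diverges from the expressed one.
    return f"warning: you may regret '{request}' -- still want it?"

# On an ordinary day, expressed and extrapolated volition agree:
print(fulfill("comics page", predicted_enjoyment=0.9))
# On a day when all the jokes fall flat, the less naive algorithm
# raises its concerns instead of just handing over the comics:
print(fulfill("comics page", predicted_enjoyment=0.1))
```

The point of the sketch is only that “do what I mean” replaces the literal request with a prediction of what the requester would endorse on reflection.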
See, the thing is, when I ask what something means, or how it works, that generally is meant to request information regarding meaning or mechanism. When I receive instead an example intended to illustrate just how much I should really want this thing that I am trying to figure out, an alarm bell goes off in my head. Aha, I think. I am in a conversation with Marketing or Sales. I wonder how I can get this guy to shift my call to either Engineering or Tech Support?
But that is probably unfair to you. You didn’t write the CEV document (or poem or whatever it is). You are just some slob like me trying to figure it out. You prefer to interpret it hopefully, in a way that makes it attractive to you. That is the kind of person you are. I prefer to suspect the worst until someone spells out the details. That is the kind of person I am.
I think I try to interpret what I read as something worth reading; words should draw useful distinctions, political ideas should challenge my assumptions, and so forth.
Getting back to your point, though, I always understood CEV as the definition of a desideratum rather than a strategy for implementation, the latter being a Hard Problem that the authors are Working On and will have a solution for Real Soon Now. If you prefer code to specs, then I believe the standard phrase is “feel free” (to implement it yourself).
Touché.
It probably won’t do what you want. It is somehow based on the mass of humanity—and not just on you. Think: committee.
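As a toy illustration of the “committee” point, and of the earlier remarks that an unmuddled 10% counts more than a muddled 60% and that the machine should output “no solution” when volitions don’t cohere: the confidence weighting and the coherence threshold below are invented for illustration, not anything specified by the CEV document.

```python
from collections import defaultdict

def coherent_volition(votes, threshold=0.6):
    """votes: list of (outcome, confidence in [0, 1]) pairs, one per
    person. Returns the outcome whose confidence-weighted share of the
    total clears the threshold, or None ('no solution') if none does."""
    weights = defaultdict(float)
    for outcome, confidence in votes:
        weights[outcome] += confidence
    total = sum(weights.values())
    best = max(weights, key=weights.get)
    return best if weights[best] / total >= threshold else None

# An unmuddled 10% (confidence 1.0) vs. a muddled 60% (confidence 0.05):
votes = [("A", 1.0)] * 10 + [("B", 0.05)] * 60
print(coherent_volition(votes))   # the confident minority prevails: A

# Two equally confident camps: volitions don't cohere, so no solution.
split = [("A", 1.0)] * 50 + [("B", 1.0)] * 50
print(coherent_volition(split))   # None
```

The sketch only shows why the output is based on the mass of humanity rather than on any one person: your own volition is a single entry in `votes`, and it can be outvoted or leave the whole aggregate incoherent.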
...or until some “unfriendly” aliens arrive to eat our lunch—whichever comes first.