Multi also presents a misleading image of what best represents the values of most people.
You may be right; on the other hand, you may be generalizing from one example. Claims that an author’s view of human values is misleading should be substantiated with evidence.
Nitpick: Individuals don’t have CEV. They have values that can be extrapolated, but the “coherent” part is about large groups; Eliezer was talking about the CEV of all of humanity when he proposed the idea, I believe.
In this instance I would be comfortable using just “EV”. In general, however, I see the whole process of conflict resolution between agents as one that isn’t quite so clearly delineated at the level of the individual.
Eliezer was talking about the CEV of all of humanity when he proposed the idea, I believe.
He was, and that is something that bothers me. The coherent extrapolated volition of all of humanity is quite likely to be highly undesirable. I sincerely hope Eliezer was lying when he said that. If he could right now press a button to execute an FAI&lt;CEV&lt;humanity&gt;&gt;, I would quite possibly do what I could to stop him.
If he could right now press a button to execute an FAI&lt;CEV&lt;humanity&gt;&gt;, I would quite possibly do what I could to stop him.
Since we have no idea what that entails and what formalizations of the idea are possible, we can’t extend moral judgment to that unclear unknown hypothetical.
You are drawing moral judgment about something ill-defined, a sketch that can be made concrete in many different ways. This just isn’t done, it’s like expressing a belief about the color of God’s beard.
I am mentioning a possible response to a possible stimulus. Doubt in the interpretation of the words is part of the problem. If I knew exactly how Eliezer had implemented CEV and what the outcome would be given the makeup of the human population then that would make the decision far simpler. Without such knowledge choosing whether to aid or hinder must be based on the estimated value of the alternatives given the information available.
Also note that the whole “extend moral judgment” concept is yours, I said nothing about moral judgements, only possible decisions. When the very fate of the universe is at stake I can most certainly make decisions based on inferences from whatever information I have available, including the use of the letters C, E and V.
This just isn’t done, it’s like expressing a belief about the color of God’s beard.
Presenting this as an analogy to deciding whether or not to hinder the implementation of an AI based off limited information is absurd to the point of rudeness.
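A minimal sketch of the “estimated value of the alternatives” comparison described above, with entirely invented probabilities and payoffs; nothing here is an estimate of how any actual CEV implementation would turn out.

```python
# Toy decision sketch: every number is an invented placeholder, not a claim
# about how likely a CEV-based FAI is to be desirable.
p_desirable = 0.6              # subjective probability the launched FAI turns out well
v_desirable, v_undesirable = 1.0, -1.0
v_stopped = 0.0                # rough stand-in for the value of stopping the launch

ev_aid = p_desirable * v_desirable + (1 - p_desirable) * v_undesirable
ev_hinder = v_stopped

# The choice tracks the estimates, not certainty about the outcome.
print("aid" if ev_aid > ev_hinder else "hinder")
```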
Also note that the whole “extend moral judgment” concept is yours, I said nothing about moral judgements, only possible decisions.
What I meant is simply that decisions are made based on valuation of their consequences. I consistently use “morality” in this sense.
When the very fate of the universe is at stake I can most certainly make decisions based on inferences from whatever information I have available, including the use of the letters C, E and V.
I agree. What I took issue with in your comment was the perceived certainty of the decision. Under severe uncertainty, your current guess at the correct decision may well be “stop Eliezer”, but I don’t see how, with the present state of knowledge, one can have any certainty in the matter. And you did say that it’s “quite likely” that CEV-derived AGI is undesirable:
The coherent extrapolated volition of all of humanity is quite likely to be highly undesirable. I sincerely hope Eliezer was lying when he said that.
(Why are you angry? Do you need that old murder discussion resolved? Some other reason?)
I note, by the way, that I am not at all suggesting that Eliezer is actually likely to create an AI-based dystopia. The risk of that is low (relative to the risk of alternatives).
I don’t quite see how one is supposed to limit FAI&lt;CEV&lt;humanity&gt;&gt; without the race for AI turning into a war of all against all, for not just power but survival.
If anything I would like to expand the group not just to currently living humans but to all other possible cultures that biologically modern humans did or could have developed.
But again this is purely because I value a diverse future. Part of my paperclip is to make sure other people get a share of the mass of the universe to paperclip.
I don’t quite see how one is supposed to limit FAI&lt;CEV&lt;humanity&gt;&gt; without the race for AI turning into a war of all against all, for not just power but survival.
By winning the war before it starts or solving cooperation problems.
The competition you refer to isn’t prevented by proposing an especially egalitarian CEV. Being included as part of the Coherent Extrapolated Volition equation is not sufficient reason to stand down in a fight over FAI creation.
But again this is purely because I value a diverse future. Part of my paperclip is to make sure other people get a share of the mass of the universe to paperclip.
CEV&lt;you&gt; would give that result. The ‘coherence’ thing isn’t about sharing. CEV&lt;A, B&gt; may well decide to give all the mass of the universe to C purely because they can’t stand each other, while if C was included in the same evaluation, CEV&lt;A, B, C&gt;, they may well decide to do something entirely different. Sure, at least one of those agents is clearly insane, but the point is that being ‘included’ is not intrinsically important.
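To make that concrete, here is a deliberately toy sketch: the utility functions are invented and a naive sum-of-utilities rule stands in for whatever aggregation CEV would actually use. The only point it illustrates is that which agents are aggregated over changes the outcome, and that being in the set is not what determines whether you get a share.

```python
# Toy illustration only: made-up utilities and a naive sum-of-utilities rule,
# standing in for whatever CEV's (unspecified) aggregation would actually be.

# Candidate allocations of the universe's mass, as (share_A, share_B, share_C).
allocations = {
    "all to A":   (1.0, 0.0, 0.0),
    "all to B":   (0.0, 1.0, 0.0),
    "all to C":   (0.0, 0.0, 1.0),
    "even split": (1 / 3, 1 / 3, 1 / 3),
}

# Invented preferences: A and B each want mass but hate the other getting any;
# C mostly wants the mass spread around.
def u_A(a, b, c): return a - 2 * b
def u_B(a, b, c): return b - 2 * a
def u_C(a, b, c): return sum(share > 0 for share in (a, b, c))

def best_allocation(included):
    """Return the allocation that maximizes the summed utility of the included agents."""
    return max(allocations, key=lambda name: sum(u(*allocations[name]) for u in included))

print(best_allocation([u_A, u_B]))       # 'all to C'   -- A and B would rather C have it all
print(best_allocation([u_A, u_B, u_C]))  # 'even split' -- including C changes the answer
```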
You may be right; on the other hand, you may be generalizing from one example. Claims that an author’s view of human values is misleading should be substantiated with evidence.
“The CEV of most individuals is not Martyrdom” is not something that I consider overwhelmingly contentious.
Since we have no idea what that entails and what formalizations of the idea are possible, we can’t extend moral judgment to that unclear unknown hypothetical.
I fundamentally disagree with what you are saying, and object somewhat to how you are saying it.
You are drawing moral judgment about something ill-defined, a sketch that can be made concrete in many different ways. This just isn’t done, it’s like expressing a belief about the color of God’s beard.
You are mistaken. Read again.
Individuals don’t have CEV.
The singleton sets of individuals do...
I don’t think that anything in my post advocates martyrdom. What part of my post appears to you to advocate martyrdom?
To put it in the visceral language favored by cryonics advocates, you’re advocating that people commit suicide for the benefit of others.