You are drawing moral judgment about something ill-defined, a sketch that can be made concrete in many different ways. This just isn’t done; it’s like expressing a belief about the color of God’s beard.
I am mentioning a possible response to a possible stimulus. Doubt in the interpretation of the words is part of the problem. If I knew exactly how Eliezer had implemented CEV, and what the outcome would be given the makeup of the human population, then the decision would be far simpler. Without such knowledge, choosing whether to aid or hinder must be based on the estimated value of the alternatives given the information available.
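As a minimal sketch of what “the estimated value of the alternatives given the information available” could cash out to: the toy Python comparison below scores two hypothetical actions, “aid” and “hinder”, by expected utility over a few stipulated outcomes. Every outcome label, probability, and utility is an illustrative placeholder, not an estimate anyone in this exchange has offered.

```python
# Toy expected-value comparison for the aid-or-hinder decision described above.
# Every outcome, probability, and utility is a made-up placeholder.

# outcome: (P(outcome | aid), P(outcome | hinder), utility of outcome)
OUTCOMES = {
    "cev_goes_well":   (0.60, 0.20,  1.0),
    "cev_goes_badly":  (0.10, 0.05, -1.0),
    "no_agi_or_other": (0.30, 0.75, -0.5),
}

def expected_value(action: str) -> float:
    """Expected utility of 'aid' or 'hinder' under the placeholder numbers."""
    idx = 0 if action == "aid" else 1
    return sum(probs[idx] * utility for *probs, utility in OUTCOMES.values())

for action in ("aid", "hinder"):
    print(f"EV({action}) = {expected_value(action):+.3f}")

print("Best guess under these assumptions:", max(("aid", "hinder"), key=expected_value))
```

Under these particular placeholders the comparison favors “aid”, but the point is only the shape of the calculation, not the numbers.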
Also note that the whole “extend moral judgment” concept is yours; I said nothing about moral judgments, only possible decisions. When the very fate of the universe is at stake, I can most certainly make decisions based on inferences from whatever information I have available, including the use of the letters C, E and V.
This just isn’t done; it’s like expressing a belief about the color of God’s beard.
Presenting this as an analogy to deciding, on limited information, whether or not to hinder the implementation of an AI is absurd to the point of rudeness.
Also note that the whole “extend moral judgment” concept is yours; I said nothing about moral judgments, only possible decisions.
What I meant is simply that decisions are made based on valuation of their consequences. I consistently use “morality” in this sense.
When the very fate of the universe is at stake, I can most certainly make decisions based on inferences from whatever information I have available, including the use of the letters C, E and V.
I agree. What I took issue with in your comment was the perceived certainty of the decision. Under severe uncertainty, your current guess at the correct decision may well be “stop Eliezer”, but I don’t see how, with the present state of knowledge, one can have any certainty in the matter. And you did say that it’s “quite likely” that CEV-derived AGI is undesirable:
The coherent extrapolated volition of all of humanity is quite likely to be highly undesirable. I sincerely hope Eliezer was lying when he said that.
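To put the “no certainty” point in the same toy terms as the earlier sketch: holding the placeholder utilities fixed and varying only the assumed chance that an aided CEV launch goes well, the action favored by expected value flips. Again, every number is a hypothetical stand-in rather than anyone’s actual estimate.

```python
# Illustrative sensitivity check, reusing the placeholder utilities from the
# earlier sketch: the preferred action flips as the assumed chance that an
# aided CEV launch goes well is varied. All numbers are hypothetical.

U_GOOD, U_BAD, U_OTHER = 1.0, -1.0, -0.5    # assumed utilities of the outcomes
P_BAD_IF_AIDED = 0.10                       # assumed chance of a bad outcome if aided
EV_HINDER = 0.20 * U_GOOD + 0.05 * U_BAD + 0.75 * U_OTHER  # held fixed

for p_good in (0.2, 0.4, 0.6, 0.8):
    p_other = 1.0 - p_good - P_BAD_IF_AIDED
    ev_aid = p_good * U_GOOD + P_BAD_IF_AIDED * U_BAD + p_other * U_OTHER
    better = "aid" if ev_aid > EV_HINDER else "hinder"
    print(f"P(good | aid) = {p_good:.1f} -> EV(aid) = {ev_aid:+.2f}, prefer {better}")
```

A best guess can therefore sit on either side of the line depending on assumptions one has little evidence about, which is compatible with guessing “stop Eliezer” while claiming no certainty.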
(Why are you angry? Do you need that old murder discussion resolved? Some other reason?)
I note, by the way, that I am not at all suggesting that Eliezer is actually likely to create an AI-based dystopia. The risk of that is low (relative to the risk of the alternatives).
You are drawing moral judgment about something ill-defined, a sketch that can be made concrete in many different ways. This just isn’t done; it’s like expressing a belief about the color of God’s beard.
You are mistaken. Read again.