That is so. Why unfortunately? Also, why “under the impression”? If you were to tell me some of your terminal values, I’d give you the courtesy of assuming you are telling the truth as you subjectively perceive it (you have privileged access to your values, and at least concerning your conscious values, subjective is objective).
I get it that you hold nothing on Earth more sacred than a hypothetical, sufficiently high number of sparrows; we differ on that. It is not a question of epistemic beliefs about the world state, of creating a better match between map and territory. It is a difference about values. If Omega presented me with that choice as a button to press, I’m very sure what I would do. That’s where it counts.
For consolidation purposes, this is also meant to answer “How sure? Based on what? What would persuade you otherwise?” As sure as I can be; based on “what I value above all else”; and nothing short of a brain reprogram would persuade me otherwise.
Diction impaired by C2H6O.
If your values conflict with those of greater humanity (in aggregate), then you are roughly equivalent to Clippy: not dangerous unless you actually end up being decisive regarding existential risk, but nevertheless only co-operating based on self-interest and bargaining, not because we have a common cause.
Humans are usually operating based on cached thoughts, heuristics which may conflict with their actual terminal values. Picture a Nazi measuring utility in Jews eliminated. He doesn’t actually, terminally value killing people; he was persuaded that Jews are undermining civilization, and his brain cached the thought that Jews = Bad. But he isn’t a Paperclipper: if he reexamines this cached thought in light of the truth that Jews are, generally speaking, neurotypical human beings, then he will stop killing them.
Well, sacred value is a technical term.
If you genuinely attached infinite utility to your family’s lives, then we could remove the finite terms in your utility function without affecting its output. You are not valuing their lives above all else; you are refusing to trade them to gain anything else. There is a difference. Rejecting certain deals because the cost is emotionally charged is suboptimal. Human, but stupid. I (probably) wouldn’t kill to save the sparrows, or for that matter to steal money for children dying in Africa, but that’s not the right choice. That’s just bias/akrasia/the sort of thing this site is supposed to fight. If I could press a button and turn into an FAI, then I would. Without question. The fact that I’m not perfectly Friendly is a bad thing.
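To spell out the arithmetic behind “remove the finite terms” (a sketch of my own, not wording from the thread; it just writes the claimed preferences as an extended-real utility):

$$U(x) = \infty \cdot \mathbf{1}[\text{family alive in } x] + f(x), \qquad f(x) \text{ finite}$$

For any two outcomes $x_1, x_2$ in which the family survives, $U(x_1) = \infty = U(x_2)$ no matter what $f$ says, so the finite terms can never break a tie between them. An agent with this utility function would be indifferent between “family alive and everything else flourishing” and “family alive and everything else in ruins”, which is not how anyone actually chooses.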
Anyway.
Considering you’re not typing from a bunker, and indeed probably drive a car, I’m guessing you’re willing to accept small risks to your family. So my question for you is this: how small?
Incidentally, considering the quote this particular branch of this discussion sprouted from, you do realize that killing your son might be the only way to save the rest of your family? Now, if He were claiming that you terminally value killing your son, that would be another thing …
You do have a point, but there is another explanation to resolve that; see this comment.
We still have a fundamental disagreement on whether rationality is in any way involved when reflecting on your terminal values. I claim that rationality will help the closet murderer who is firm in valuing pain and suffering just as much as it helps the altruist, the paperclipper or the FAI. It helps us in pursuing our goals, not in setting the axioms of our value systems (the terminal values).
There is no aspect of Bayes or any reasoning mechanism that tells you whether to value happy humans or dead humans. Reasoning helps you in better achieving your goals, nefarious or angelic as they may be.
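As a toy sketch of that point (entirely my own illustration; the action names, outcome names and probability table are invented for the example): the same expected-utility machinery, fed the same beliefs, picks opposite actions the moment the utility function is swapped.

```python
def expected_utility(action, outcomes, probability, utility):
    # Standard expected utility: sum over outcomes of P(outcome | action) * U(outcome).
    return sum(probability(o, action) * utility(o) for o in outcomes)

def best_action(actions, outcomes, probability, utility):
    # Pick the action with the highest expected utility.
    return max(actions, key=lambda a: expected_utility(a, outcomes, probability, utility))

# Shared world model: both agents assign identical probabilities to outcomes.
outcomes = ["humans_happy", "humans_dead"]
actions = ["help", "harm"]

def probability(outcome, action):
    table = {
        ("help", "humans_happy"): 0.9, ("help", "humans_dead"): 0.1,
        ("harm", "humans_happy"): 0.1, ("harm", "humans_dead"): 0.9,
    }
    return table[(action, outcome)]

# Only the utility functions differ; the reasoning machinery is identical.
def altruist_utility(outcome):
    return 1.0 if outcome == "humans_happy" else 0.0

def murderer_utility(outcome):
    return 1.0 if outcome == "humans_dead" else 0.0

print(best_action(actions, outcomes, probability, altruist_utility))  # -> help
print(best_action(actions, outcomes, probability, murderer_utility))  # -> harm
```

Nothing in the shared model or the maximization step favours one agent over the other; only the utility function does.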
I see your psychopath and raise you one Nazi.
I’m sorry, does that label impact our debate about whether rationality implies terminal values?
My point is that, while an agent that is not confused about its values will not change them in response to rationality (obviously), one that is confused will. For example, a Nazi realizing Jews are people after all.
Sorry if that wasn’t clear.
Taboo “people”.
‘Share many human characteristics with the Nazi, and in particular suffered in similar ways from the economic conditions that helped produce Nazism.’
“not Evil Mutants”
Hairyfigment’s answer would also work. The point is that they are as worthy of moral consideration as everyone else, and, to a lesser extent, that they aren’t congenitally predisposed to undermine civilization and so on and so forth.