There’s no necessity for any sort of outside force guiding human evolution, or any pre-existing thing it’s trying to mimic, so we shouldn’t presume one.
I didn’t say anything about an outside force guiding us. I am saying that if the structure of reality is such that certain moral values produce evolutionarily successful outcomes, it follows that these moral values correspond to an objective evolutionary reality.
Does this remove meaning from wetness, and from thirst?
You are talking about meanings referring to what something is. But moral values are concerned with how we should act in the world. It is the old “ought from an is” issue. You can always drill in with horrific thought experiments concerning good and evil. For example:
Would it be OK to enslave half of humanity and use them as constantly tortured, self-replicating power supplies for the other half, if we can find a system that would guarantee they can never escape to threaten our own safety? If the system is efficient and you have no concept of good and evil, why do you think that is wrong? Whatever your answer is, try to ask why again until you reach the point where you get an “ought” from an “is” without a value presupposition.
I didn’t say anything about an outside force guiding us. I am saying that if the structure of reality is such that certain moral values produce evolutionarily successful outcomes, it follows that these moral values correspond to an objective evolutionary reality.
I agree that this is a perfectly fine way to think of things. We may not disagree on any factual questions.
Here’s a factual question some people get tripped up by: would any sufficiently intelligent being rediscover and be motivated by a morality that looks a lot like human morality? Like, suppose there was a race of aliens that evolved intelligence without knowing their kin—would we expect them to be motivated by filial love, once we explained it to them and gave them technology to track down their relatives? Would a superintelligent AI never destroy humans, because superintelligence implies an understanding of the value of life?
Would it be OK to enslave half of humanity...
No. Why? Because I would prefer not to. Isn’t that sufficient to motivate my decision? A little glib, I know, but I really don’t see this as a hard question.
When people say “what is right?”, I always think of this as being like “by what standard would we act, if we could choose standards for ourselves?” rather than like “what does the external rightness-object say?”
We can think as if we’re consulting the rightness-object when working cooperatively with other humans—it will make no difference. But when people disagree, the approximation breaks down, and it becomes counter-productive to think you have access to The Truth. When people disagree about the morality of abortion, it’s not that (at least) one of them is factually mistaken about the rightness-object; they are disagreeing about which standard to use for acting.
Here’s a factual question some people get tripped up by: would any sufficiently intelligent being rediscover and be motivated by a morality that looks a lot like human morality?
Though it is tempting, I will resist answering this, as it would only be speculation based on my current (certainly incomplete) understanding of reality. Who knows how many forms of mind exist in the universe?
Would a superintelligent AI never destroy humans, because superintelligence implies an understanding of the value of life?
If by intelligence you mean human-like intelligence, and if the AI is immortal or at least sufficiently long-lived, it should extract the same moral principles (assuming that I am right and they are characteristics of reality). Apart from that, your sentence uses the words ‘understand’ and ‘value’, which are connected to consciousness. Since we do not understand consciousness, and the possibility of constructing it algorithmically is in doubt (to put it lightly), I would say that the AI will do whatever the conscious humans programmed it to do.
No. Why? Because I would prefer not to. Isn’t that sufficient to motivate my decision? A little glib, I know, but I really don’t see this as a hard question.
No, sorry, that is not sufficient. You have a reason, and you need to dig deeper until you find your fundamental presuppositions. If you want to follow my line of thought, that is...