Depends in which sense you mean a moral system to be “absolute”.
I would agree that there is probably an “absolute moral system” that all humans would agree on, even if we may not be able to precisely formulate it right now (or at least, a system that most non-pathological humans could be convinced they agree with).
However, that doesn’t mean that any intelligence (AI or alien) would eventually settle on those morals.
But I believe that there is one single right answer. Otherwise, it becomes quite confusing.
That doesn’t sound like a very good reason to believe something.
(I would agree that there is probably a single right answer for humans)
Well, the absolute moral system I meant does encompass everything, incl. AI and alien intelligence. It is true that different moral problems require different solutions, but that is also true in physics. Objects in a vacuum behave differently than in the atmosphere. Water behaves differently than ice, but they are all governed by the same physics, or so I assume.
A similar problem may have a different solution if the situation is different. An Edo-era samurai and a Wall Street banker may both behave perfectly morally even if they respond differently to the same problem, due to their social environments.
Maybe it is perfectly moral for AIs to annihilate all humans, just as it is perfectly possible that 218 of Russell’s teapots are orbiting Gliese 581 g.
That doesn’t sound like a very good reason to believe something.
Well, I formulated it poorly. I meant that all answers are logically consistent. There might be more than one answer, but they do not contradict each other. So there is only one set of logically consistent answers. Otherwise, it becomes absurd.