I personally don’t know of any evidence in favor of terminal values, so I do agree with you there. Still, it makes a nice thought experiment: could we create an agent possessed of general intelligence and the ability to self-modify, and then hardcode it with terminal values? My answer would be “no”, but I could be wrong.
That said, I don’t believe that there exists any kind of a universally applicable moral system, either.
You can make the evidence compatible with the theory of terminal values, but there is still no support for the theory of terminal values.