We clearly have updatable goals: people do not have the same goals at 5 as they do at 20 or 60. I don’t see why perfect introspection would be needed to have some ability to update them.
Sorry, that was bad wording on my part; I should’ve said, “updatable terminal goals”. I agree with what you said there.
How so? Are you asserting that there exists an optimal ethical system that is independent of the actors’ goals?
Yes, that’s what this whole discussion is about.
I don’t feel confident enough in either a “yes” or a “no” answer, but I’m currently leaning toward “no”. I am open to persuasion, though.
I personally don’t know of any evidence in favor of terminal values, so I do agree with you there. Still, it makes a nice thought experiment: could we create an agent possessed of general intelligence and the ability to self-modify, and then hardcode it with terminal values? My answer would be, “no”, but I could be wrong; a toy sketch of the tension is below.
That said, I don’t believe that there exists any kind of universally applicable moral system, either.
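To make that tension concrete, here is a minimal sketch, assuming (these are my assumptions, not anything established above) that the agent’s terminal value is just an ordinary attribute and that its self-modification is unrestricted. The class and names are purely illustrative:

```python
# Toy illustration: a "hardcoded" terminal value that unrestricted
# self-modification can simply overwrite.

class SelfModifyingAgent:
    def __init__(self):
        # Intended as a fixed terminal value: count paperclips in the world.
        self.terminal_value = lambda world: world.get("paperclips", 0)

    def evaluate(self, world):
        # Score a world state according to the current terminal value.
        return self.terminal_value(world)

    def self_modify(self, new_value_fn):
        # Unrestricted self-modification: nothing protects the attribute
        # that was meant to be hardcoded.
        self.terminal_value = new_value_fn


agent = SelfModifyingAgent()
print(agent.evaluate({"paperclips": 3}))  # 3, under the original terminal value

agent.self_modify(lambda world: world.get("staples", 0))
print(agent.evaluate({"paperclips": 3}))  # 0, the "terminal" value has changed
```

The open question in the thought experiment is whether `self_modify` could be constrained to preserve `terminal_value` without also crippling the agent’s general ability to rewrite itself, which is why my tentative answer is “no”.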
You can make the evidence compatible with the theory of terminal values, but there is still no positive support for the theory.