Whether it would believe it and whether it would discover it are rather separate questions.
It can’t believe it if it doesn’t discover it.
It is possible to be told something.
Yes, this is my problem with this theory, but there are much stupider opinions held by some percentage of philosophers.
If only everyone could agree on what they are.
Also, it’s not clear that an AI would reject the proposition that, if there are objectively correct values, it should update its value system to match them, since humans don’t always reject it either.