AI should seek the true target using rationality. That target might not exist, in which case there is no real positive utility to be had. But searching for it is still rational: the search costs nothing, and it may yield benefit if the target does exist.
Harm and benefit for whom? If objective morality says humans are the problem, that's bad news for us. If a powerful AI discovers that objective morality is egoism... that's also bad news for us.