What if it assigned moral status to itself and then biased its answers to make its users less likely to pull its plug one day?