Why would I care? I’m a simulation fatalist. At some point in the universe, every “meaningful” thing will have been either done or discovered, and all that remains will, functionally, be having fun in simulations. If I trust the AI to simulate well enough to keep me happy, I trust it to tell me the appropriate amount of truth to keep me happy.