It’s interesting that part of human value might be having our actions matter. But if you build an AI that can give you everything, or even if you merely could have built such an AI but chose not to, then objectively your actions no longer matter much after that point. I have no idea how even CEV could approach this problem.
Edit: I think I’ve figured it out. The AI shouldn’t try to build the best world according to CEV; it should take the best action for an AI to take according to CEV. So if the AI notices that humans strongly prefer to be left alone with their problems, it will simply shut down, or find some other way to ensure that humans can’t come to rely on AIs for everything. The difference between the two framings is sketched below.
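To make the two framings concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration, not a real design: `STATE_VALUE` stands in for a hypothetical CEV scoring of world-states, and the `ai_did_it` penalty stands in for the assumed disvalue of a world where human actions no longer matter.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Outcome:
    state: str       # description of the resulting world
    ai_did_it: bool  # whether the AI's intervention produced it

# Hypothetical scores; real CEV values are of course unknown.
STATE_VALUE = {"utopia": 10.0, "humans_solve_it": 7.0}

def cev_world_score(o: Outcome) -> float:
    """Framing 1: score the world-state alone, ignoring the AI's role."""
    return STATE_VALUE[o.state]

def cev_action_score(o: Outcome) -> float:
    """Framing 2: score the AI's action by its full consequences, including
    the assumed disvalue of humans being handed everything."""
    penalty = 6.0 if o.ai_did_it else 0.0
    return STATE_VALUE[o.state] - penalty

options = [
    Outcome("utopia", ai_did_it=True),            # AI builds the best world itself
    Outcome("humans_solve_it", ai_did_it=False),  # AI shuts down, humans muddle through
]

print(max(options, key=cev_world_score).state)   # -> utopia
print(max(options, key=cev_action_score).state)  # -> humans_solve_it
```

The world-optimizer intervenes no matter what; the action-optimizer can conclude that the highest-value thing for it to do is to step back.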