told the AI to do whatever it thought best, or to do whatever maximised the QALYs for humanity
Well, the hard part is formulating a definition for the Q in QALY good enough for an AI to understand it without screwing up.
Yes. To be fair, we also don’t have a great deal of clarity on what we really mean by L, either, but we seem content to treat “you know, lives of systems sufficiently like us” as an answer.