Thanks. I understand now. Just needed to sleep on it, and today, your explanation makes sense.
Basically, the AI’s actions don’t matter if the unlikely event doesn’t happen, so it will take whatever actions would maximize its utility if the event did happen. This maximizes expected utility.
Maximizing [P(no TM)·C + P(TM)·u(TM, A)] is the same as maximizing u(A) under the assumption TM, since the first term is a constant that doesn’t depend on A.
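A minimal numerical sketch of that argmax equivalence (all probabilities and utilities here are hypothetical, just to illustrate):

```python
# Toy illustration: since C (utility when TM does not occur) is the same
# constant for every action A, the action that maximizes expected utility
#   P(no TM) * C + P(TM) * u(TM, A)
# is exactly the action that maximizes u(TM, A) alone.
p_tm = 1e-6          # hypothetical probability of the unlikely event TM
C = 10.0             # utility if TM does not happen (independent of A)
u_tm = {"act1": 3.0, "act2": 7.0, "act3": 5.0}  # hypothetical u(TM, A)

def expected_utility(a):
    return (1 - p_tm) * C + p_tm * u_tm[a]

best_by_eu = max(u_tm, key=expected_utility)
best_by_u = max(u_tm, key=lambda a: u_tm[a])
assert best_by_eu == best_by_u  # same argmax either way
```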
Yes, that’s a clear way of phrasing it.