But for EU maximization you really do care about the raw difference in probabilities!
Hmmm. I’m confused about this. My tentative thoughts are as follows:
--For EUmax we care about the raw difference in probabilities.
--However, I don’t have a good sense of that difference. I don’t know whether the base AI risk is 99% or 1%. All I know (or all I guess, even) is that a hub in Singapore reduces AI risk by 16% in relative terms. So the absolute reduction would be ~16 percentage points in the former case and ~0.16 percentage points in the latter.
--However, this seems actually fine, because we are choosing between options like “Go to Singapore” and “Stay in the Bay,” and the common currency we use to measure both options is how much relative AI risk reduction we can do. Whatever the unknown base risk is, it scales every option’s absolute impact by the same factor, so it drops out of the comparison (see the sketch after this list).
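To make those last two bullets concrete, here is a minimal sketch. It assumes (my reading, not something stated above) that “reduces AI risk by 16%” means a relative reduction, i.e. the option removes the same fraction of whatever the base risk turns out to be; the “Hypothetical other project” and its 5% figure are invented purely so the ranking comparison is non-trivial.

```python
# Rough sketch of the two bullets above. Assumptions (mine, not established in the
# conversation): "reduces AI risk by 16%" is a relative reduction, and each option
# removes a fixed fraction of whatever the base risk is.

def absolute_reduction(base_risk: float, relative_reduction: float) -> float:
    """Percentage-point drop in risk if an option removes a fixed fraction of the base risk."""
    return base_risk * relative_reduction

options = {
    "Go to Singapore": 0.16,             # figure from the conversation
    "Hypothetical other project": 0.05,  # invented for illustration only
    "Stay in the Bay": 0.00,             # treated as the baseline
}

for base_risk in (0.99, 0.01):  # "I don't know whether base AI risk is 99% or 1%"
    ranked = sorted(options, key=lambda name: absolute_reduction(base_risk, options[name]), reverse=True)
    print(f"base risk {base_risk:.0%}:")
    for name in ranked:
        print(f"  {name}: {absolute_reduction(base_risk, options[name]):.2%} absolute reduction")

# The absolute reductions (what EU maximization cares about) differ by ~100x between
# the two base rates, but the ordering of the options is the same, because the unknown
# base risk multiplies every option's impact by the same factor.
```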
Yeah, I had this in mind when I said I’m not sure if I would endorse this if I thought about it more. I am still uncertain.
I guess the bit reduction estimate is probably more portable across different people’s models, which is nice?
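One way to cash out “more portable”: if a “bit of reduction” is read as a one-bit downward shift in the log2-odds of doom (my interpretation; the conversation doesn’t pin this down), then the same bits estimate can be combined with each person’s own base rate to get their own absolute risk change. A rough sketch with a made-up bits figure:

```python
def apply_bits_of_reduction(prior_risk: float, bits: float) -> float:
    """Risk after shifting the log2-odds of doom down by `bits`.
    (Reading 'bits of reduction' as a log-odds shift is an assumption, not the
    conversation's definition.)"""
    prior_odds = prior_risk / (1 - prior_risk)
    posterior_odds = prior_odds / (2 ** bits)
    return posterior_odds / (1 + posterior_odds)

BITS_ESTIMATE = 0.25  # hypothetical size of the hub's effect in bits; not a figure from above

for prior in (0.99, 0.50, 0.01):  # three people with very different models of base AI risk
    post = apply_bits_of_reduction(prior, BITS_ESTIMATE)
    print(f"prior {prior:.0%} -> posterior {post:.2%} (absolute reduction {prior - post:.2%})")

# The single bits number translates into a different absolute reduction for each prior,
# which is the sense in which it stays usable across people with different models.
```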