I’ll make another version of the thought experiment: suppose you can take a genetic upgrade that gives you +1000 utils with 70% probability, or −1000 utils with 30% probability.
Should you take it?
The answer is yes: in expectation, taking it is worth +400 utils.
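Spelling out the arithmetic (this is just the expected value of the numbers above):

$$\mathbb{E}[U] = 0.7 \times 1000 + 0.3 \times (-1000) = 700 - 300 = +400.$$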
This is related to a general principle: as long as the probability of the positive outcome is over 50% and the costs and benefits are symmetric in magnitude, the activity has positive expected value and is worth doing (made precise below).
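To state that precisely, in my own notation: for a symmetric gamble of ±X utils with probability p of the good outcome,

$$\mathbb{E}[U] = pX + (1-p)(-X) = (2p - 1)X > 0 \iff p > \tfrac{1}{2}.$$

The upgrade above is the case X = 1000, p = 0.7, giving (2 × 0.7 − 1) × 1000 = +400.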
And my contention is that AGI/ASI is just a larger version of the thought experiment above: it is a symmetric technology with respect to good and bad outcomes, which is why it’s okay to increase capabilities.