Does an optimal superintelligence regret? It knows it couldn't have made a better choice given its past information about the environment. How is regret useful in that case?
An optimal superintelligence's regret is only some epsilon, so the notion isn't actually useful for it. The regret construction is meant to define loss functions for strictly non-optimal agents.
(idle bemusement)
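A minimal sketch of the idea being gestured at, under standard online-learning assumptions (the environment, agent, and loss values here are hypothetical, not from the exchange): regret is the gap between the loss an agent actually accumulated and the loss of the best fixed action in hindsight, which is why it only carries signal for agents that are not already optimal.

```python
import numpy as np

# Hypothetical illustration: regret as a loss signal for a non-optimal agent.
rng = np.random.default_rng(0)

T = 1000              # number of rounds
n_actions = 3         # available actions each round

# Per-round losses for each action, unknown to the agent in advance.
losses = rng.uniform(0.0, 1.0, size=(T, n_actions))

# A strictly non-optimal agent: picks actions uniformly at random.
agent_actions = rng.integers(0, n_actions, size=T)
agent_loss = losses[np.arange(T), agent_actions].sum()

# Best fixed action in hindsight (what an optimal choice would have achieved).
best_fixed_loss = losses.sum(axis=0).min()

# Regret: how much worse the agent did than the best it could have done.
regret = agent_loss - best_fixed_loss
print(f"cumulative regret over {T} rounds: {regret:.2f}")

# An agent that is already optimal given its information drives this gap
# toward zero (or a vanishing epsilon), so the quantity is only informative
# as a loss signal for agents that are not yet optimal.
```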