A summary that might be informative to other people: Where does the $\omega(t^{2/3})$ requirement on the growth rate of the “rationality parameter” $\beta$ come from?
Well, the expected loss of the agent comes from two sources: making a suboptimal choice on its own, and incurring a loss from consulting a not-fully-rational advisor. The policy of the agent is basically “defer to the advisor when the expected all-time loss of acting (relative to the optimal move by an agent that knew the true environment) is too high”. “Too high”, in this case, cashes out as “higher than $\beta(t)^{-1}t^{-1/x}$”, where $t$ is the time discount parameter and $\beta$ is the level-of-rationality parameter. Note that as the operator gets more rational, this threshold shrinks, so the agent gets less reluctant about deferring. Also note that $t$ runs in the opposite direction from what you might expect: high values of $t$ mean the agent has a very distant planning horizon, low values mean the agent is more present-oriented.
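A schematic way to write that deferral rule in the notation above (the exact formalization in the original result may differ in details):
\[
\text{defer on a given round} \iff \mathbb{E}\big[\text{all-time loss of acting on its own}\big] \;>\; \beta(t)^{-1}\, t^{-1/x}.
\]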
On most rounds, the agent acts on its own, so the expected all-time loss on a single round from taking suboptimal choices is on the order of $\beta(t)^{-1}t^{-1/x}$, and we’re summing over about $t$ rounds (technically it’s an exponential discount, but the two are similar enough). So the loss from acting on its own ends up being about $\beta(t)^{-1}t^{(x-1)/x}$.
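Spelling out that arithmetic (treating the exponential discount as roughly a flat horizon of $t$ rounds):
\[
\underbrace{\beta(t)^{-1} t^{-1/x}}_{\text{loss per acting round}} \;\cdot\; \underbrace{t}_{\text{number of rounds}} \;=\; \beta(t)^{-1} t^{(x-1)/x}.
\]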
On the other hand, delegation will happen on at most ~$t^{2/x}$ rounds, with a loss on the order of $\beta(t)^{-1}$ each time, so the loss from delegation ends up being around $\beta(t)^{-1}t^{2/x}$.
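And the corresponding arithmetic for the delegation rounds:
\[
\underbrace{t^{2/x}}_{\text{delegation rounds}} \;\cdot\; \underbrace{\beta(t)^{-1}}_{\text{loss per delegation}} \;=\; \beta(t)^{-1} t^{2/x}.
\]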
Setting these two losses equal to each other (equivalently, minimizing the exponent on $t$ when they’re combined) gets you $x=3$. And then $\beta(t)$ must grow asymptotically faster than $t^{2/3}$ for the loss to shrink to 0. So that’s basically where the $2/3$ comes from: it comes from setting the delegation threshold to equalize the long-term losses from the AI acting on its own and from the human picking bad choices, as the time horizon $t$ goes to infinity.
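Written out, the exponent-balancing step is:
\[
\frac{x-1}{x} = \frac{2}{x} \;\Longrightarrow\; x = 3,
\qquad
\text{total loss} \;\sim\; \beta(t)^{-1} t^{2/3} \;\to\; 0
\iff \beta(t) \in \omega\!\left(t^{2/3}\right).
\]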