Asking as a complete novice to Go: did Kellin Pelrine beat a nerfed version of KataGo? At the top of the article you mention KataGo did 10 million visits per move, while the FAR article says Pelrine beat a version of KataGo that did 100K visits per move.
I feel like the implicit model of the world you are using here will have effect sizes that add up to much more than the actual variance at stake.
That’s not always the wrong thing to do: the counterfactual impacts of many actors often sum to more than their total combined impact. A simple example: if neither of two co-founders of an impactful company would have founded it without the other, then each one's counterfactual impact equals the full impact of the company, and the sum of their counterfactual impacts is twice the company's total impact.
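As a toy illustration of that arithmetic (the impact number and the all-or-nothing counterfactual are assumptions of mine, chosen only for illustration):

```python
# Toy illustration: counterfactual impacts can sum to more than total impact.
# Assumption (mine): the company would not exist at all without either co-founder.

company_impact = 100.0  # arbitrary units of total impact

def counterfactual_impact(actual: float, without_actor: float) -> float:
    """What happened minus what would have happened without this actor."""
    return actual - without_actor

founder_a = counterfactual_impact(company_impact, 0.0)  # 100.0
founder_b = counterfactual_impact(company_impact, 0.0)  # 100.0

print(founder_a + founder_b)  # 200.0, twice the company's actual total impact
```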
While I don’t have an opinion on this particular case, you could imagine that the additional AI investment might not have happened if either of the following were true:
1. The original RLHF proof of concept from OpenAI hadn’t happened, since Google’s leadership then wouldn’t have had the incentive for further investment.
2. Google’s leadership were different, since they might not have thought to invest more money in AI.
My intuition is that it should be small in most cases, but there are some scenarios where it could be important.
Let’s imagine we are training a reinforcement learning AGI agent that discounts rewards in time by some parameter d with 0 < d < 1 (so an expected reward r received n time steps from now is worth r*d^n at the current time step). Let’s further assume the wireheading problem has been solved (the AI can’t change the reward-calculating process and give itself, say, infinite reward), and that there is a maximum possible reward of M per time step. This maximum reward could be coded explicitly into the reward function, or could be implicitly calculated using physical limits.
If deployment is at least b time steps away, the maximum possible deployment reward, considering discounting (as calculated at the current time step), is M*d^b/(1-d). If any act of deception would sacrifice more than that amount of short-term reward, the AGI will not perform that deception. There seems to be a tradeoff here: the bigger the multiplier between the deployment reward and the training reward, the larger M becomes, and the smaller d needs to be in order to encourage the AGI to engage in non-deceptive behavior.
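As a quick numerical sketch of that bound and decision rule (the specific values of M, d, b, and the deception cost below are made up for illustration):

```python
# Sketch of the bound above: with per-step reward capped at M and deployment
# at least b steps away, the discounted deployment reward is at most
# M * d**b / (1 - d)  (a geometric series starting at step b).
# All numeric values below are made up purely for illustration.

def max_discounted_deployment_reward(M: float, d: float, b: int) -> float:
    """Upper bound on deployment reward, discounted back to the current step."""
    assert 0.0 < d < 1.0
    return M * d**b / (1.0 - d)

M = 10.0   # maximum possible reward per time step (assumed)
d = 0.99   # discount factor (assumed)
b = 1000   # minimum number of steps until deployment (assumed)

bound = max_discounted_deployment_reward(M, d, b)
print(f"discounted deployment bound: {bound:.4f}")  # ~0.043

# Decision rule from the text: deception can only possibly pay off if its
# short-term cost is smaller than this bound.
short_term_cost_of_deception = 5.0  # assumed
print("deception can possibly pay off:", short_term_cost_of_deception < bound)  # False
```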
If we want d to be larger, which would enable an aligned AI to make useful long-term plans, this argument says we should use a smaller multiplier between the deployment reward and the training reward so that an unaligned AI is incentivized not to deceive us. Alternatively, it says you should wait a long time between testing whether your model is deceptive and deploying it: running many time steps with 0 reward between the deception test and deployment increases b and thus shrinks M*d^b/(1-d), which should make deception much less attractive.
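To see how the waiting period helps, here is the same bound evaluated at a few values of b (same made-up M = 10 and d = 0.99 as in the sketch above):

```python
# The bound M * d**b / (1 - d) shrinks geometrically in b, so inserting
# zero-reward steps between the deception test and deployment rapidly
# reduces the maximum payoff deception could buy.
M, d = 10.0, 0.99  # same assumed values as above
for b in (0, 100, 500, 1000):
    print(b, M * d**b / (1 - d))
# b=0 -> 1000.0, b=100 -> ~366, b=500 -> ~6.6, b=1000 -> ~0.04
```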
This seems like it would raise the incentive for an AGI to be deceptive in its training environment. An unaligned AGI faces a choice between acting to maximize its goals in training, which earns a higher short-term reward, and deceptively pretending to be aligned in training, which earns a lower short-term reward. The benefit to the AGI of pretending to be aligned is that it increases the probability of being deployed, and thus of getting a higher long-term reward in deployment.
Thus the bigger the discrepancy between the reward an AGI would get in the deployment environment and in the training environment, the more important the long-term deployment reward becomes, and the stronger the incentive for the AGI to be deceptive during training.
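A toy expected-value comparison of the two strategies, under a simple two-phase model that I am assuming here (all of the specific numbers are mine, chosen only to illustrate the direction of the effect):

```python
# Toy two-phase model (my assumptions): T training steps, then possible
# deployment for D steps. Deception lowers per-step training reward but
# raises the probability of being deployed, where the per-step reward is
# 'multiplier' times the honest training reward.

def discounted_sum(per_step_reward: float, start: int, length: int, d: float) -> float:
    """Sum of per_step_reward * d**t for t in [start, start + length)."""
    return per_step_reward * d**start * (1 - d**length) / (1 - d)

d = 0.999
T, D = 1_000, 10_000           # training and deployment lengths (assumed)
honest_train_reward = 1.0      # per-step reward when pursuing its own goals in training
deceptive_train_reward = 0.7   # per-step reward sacrificed while pretending to be aligned
multiplier = 10.0              # deployment reward per step relative to training
p_deploy_honest, p_deploy_deceptive = 0.1, 0.9  # assumed deployment probabilities

def strategy_value(train_reward: float, p_deploy: float) -> float:
    training = discounted_sum(train_reward, 0, T, d)
    deployment = p_deploy * discounted_sum(multiplier * honest_train_reward, T, D, d)
    return training + deployment

print("honest:   ", strategy_value(honest_train_reward, p_deploy_honest))        # ~1000
print("deceptive:", strategy_value(deceptive_train_reward, p_deploy_deceptive))  # ~3750
# The larger the multiplier (the training/deployment reward gap), the more the
# deceptive strategy dominates, which is the incentive described above.
```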
I asked Sydney to reconstruct the board position at the 50th move of two different games, and saw what Simon predicted: a significant drop in performance. Here’s a link to the two games I tried using your prompt: https://imgur.com/a/ch9U6oZ
While there is some overlap, what Sydney thinks the games look like doesn’t bear much resemblance to the actual games.
I also repeatedly asked Sydney to continue the games using Stockfish (with a few slightly different prompts), but for some reason once the game description is long enough, Sydney refuses to do anything. It either says it can’t access Stockfish, or that using Stockfish would be cheating.