I do not believe that the chain of reasoning would be “3.5 years ago I predicted 5 years, and I also predicted 1.5 years to implement the best current idea, so it is time to implement the best current idea now.”
Why not?
Chiefly because this is walking face-first into a race-to-the-bottom dynamic on purpose. The chain of reasoning contains no causal information: it updates only on the passage of time, not on whether the original prediction still holds.
I should probably clarify that I don’t believe this would be the chain of reasoning among alignment-motivated people, but I can totally accept it from people who are alignment-aware-but-not-motivated. For example, this sort of seems like the thinking among the people who started OpenAI initially.
A similar chain of reasoning an alignment-motivated person might follow is: “3.5 years ago I predicted 5 years based on X and Y, and I observe that X and Y are on track. Since I also predicted 1.5 years to implement the best current idea, it is time to implement the best current idea now.”
The important detail is that this chain of reasoning rests on the factors X and Y, which I claim are also candidates for being strategically relevant.