I claim (with some confidence) that Updateless Decision Theory and Logical Induction don’t have much to do with understanding AlphaGo or OpenAI Five, and you are better off understanding those systems using standard AI/ML thinking.
Eh, this is true, but it’s also true that causal decision theory, game theory, and probability theory have a lot to do with how to build AlphaZero or OpenAI Five (and by extension, with understanding those systems themselves). I think the relevant question here must be whether you think the embedded agency program can succeed as much as the classical decision theory/probability theory program did, and whether, conditional on that success, it can be as influential (probably with a shorter lag between the program succeeding and its results influencing AI development).
Yeah, my second claim is intended to include that scenario as well. That is, if embedded agency succeeded and significantly influenced the development of the first powerful AI systems, I would consider my second claim to be false.
This scenario (of embedded agency influencing AI development) would surprise me conditional on short timelines. Conditional on long timelines, I’m not sure, and would want to think about it more.
Note also that in a world where you can’t build powerful AI without Agent Foundations, it’s not a big loss if you don’t work on Agent Foundations right now. The worry is in a world where you can build powerful AI without Agent Foundations, but it leads to catastrophe. I’m focusing on the worlds in which that is true and in which powerful AI is developed soon.
That is all sensible, I was just slightly annoyed by what I read as an implication that “AlphaGo doesn’t use UDT therefore advanced AI won’t” or something.