That’s true, but there’s a natural and historical relationship here with what was in the past termed “seed AI”, even if this is not an approach anyone is actively pursuing, which is the kind of thing I was hoping to point at without using that outmoded term.

I agree with Ben and Richard’s summaries; see https://www.lesswrong.com/posts/uKbxi2EJ3KBNRDGpL/comment-on-decision-theory:
We aren’t working on decision theory in order to make sure that AGI systems are decision-theoretic, whatever that would involve. We’re working on decision theory because there’s a cluster of confusing issues here (e.g., counterfactuals, updatelessness, coordination) that represent a lot of holes or anomalies in our current best understanding of what high-quality reasoning is and how it works.
[...] The idea behind looking at (e.g.) counterfactual reasoning is that counterfactual reasoning is central to what we’re talking about when we talk about “AGI,” and going into the development process without a decent understanding of what counterfactual reasoning is and how it works means you’ll to a significantly greater extent be flying blind when it comes to designing, inspecting, repairing, etc. your system. The goal is to be able to put AGI developers in a position where they can make advance plans and predictions, shoot for narrow design targets, and understand what they’re doing well enough to avoid the kinds of kludgey, opaque, non-modular, etc. approaches that aren’t really compatible with how secure or robust software is developed.
“The reason why I care about logical uncertainty and decision theory problems is something more like this: The whole AI problem can be thought of as a particular logical uncertainty problem, namely, the problem of taking a certain function f : Q → R and finding an input that makes the output large. To see this, let f be the function that takes the AI agent’s next action (encoded in Q) and determines how ‘good’ the universe is if the agent takes that action. The reason we need a principled theory of logical uncertainty is so that we can do function optimization, and the reason we need a principled decision theory is so we can pick the right version of the ‘if the AI system takes that action...’ function.”
The work you use to get to AGI presumably won’t look like probability theory, but it’s still the case that you’re building a system to do probabilistic reasoning, and understanding what probabilistic reasoning is is likely to be very valuable for doing that without relying on brute force and trial-and-error.
[...] Eliezer adds: “I do also remark that there are multiple fixpoints in decision theory. CDT does not evolve into FDT but into a weirder system Son-of-CDT. So, as with utility functions, there are bits we want that the AI does not necessarily generate from self-improvement or local competence gains.”
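As a concrete (if toy) illustration of the function-optimization framing in the excerpt above, here is a minimal sketch; the action names, scores, and function names are hypothetical stand-ins, not anything taken from the post:

```python
# Toy illustration (names and scores are hypothetical, not from the quoted post)
# of the "whole AI problem as function optimization" framing: given a function f
# that scores how 'good' the universe is if the agent takes a given action,
# search for an input (action) that makes the output large.

def universe_goodness(action):
    """Stand-in for f : Q -> R -- how 'good' the universe is if `action` is taken."""
    scores = {"do_nothing": 0.1, "write_paper": 0.7, "plant_trees": 0.9}
    return scores.get(action, 0.0)

def choose_action(candidate_actions):
    """Function optimization: pick the action with the largest f-value."""
    return max(candidate_actions, key=universe_goodness)

print(choose_action(["do_nothing", "write_paper", "plant_trees"]))  # -> plant_trees
```

On this framing, the hard part is of course not the argmax itself but having a principled account of the “if the AI system takes that action...” function, and of reasoning about it under logical uncertainty.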
When I think about key distinctions and branching points in alignment, I usually think about things like:
Does the approach require human modeling? Lots of risks can be avoided if the system doesn’t do human modeling, or if it only does small amounts of human modeling; but this constrains the options for value learning and learning-in-general.
Current ML is notoriously opaque. Different approaches try to achieve greater understanding and inspectability to different degrees and in different ways (e.g., embedded agency vs. MIRI’s “new research directions” vs. the kind of work OpenAI Clarity does), or try to achieve alignment without needing to crack open the black box.

Is the goal to make a task-directed AGI system, vs. an open-ended optimizer? When you say “there’s a natural and historical relationship here with what was in the past termed ‘seed AI’, even if this is not an approach anyone is actively pursuing”, it calls to mind for me the transition from MIRI thinking about open-ended optimizers to instead treating task AGI as the place to start.
I’m not actually sure what you mean. I think ‘seed AI’ means something like ‘first case in an iterative/recursive process’ of self-improvement, which applies pretty well to the iterated amplification setup (which is a recursively self-improving AI) and lots of other examples that Evan wrote about in his 11-examples post. It still seems to me to be a pretty general term.