I interpret agent foundations as being more about providing formal specifications of metaphilosophical competence, to [...] allow us to formally verify whether a computational process will satisfy desirable metaphilosophical properties
“Adding conceptual clarity” is a key motivation, but formal verification isn’t a key motivation.
The point of things like logical induction isn’t “we can use the logical induction criterion to verify that the system isn’t making reasoning errors”; as I understand it, it’s more “logical induction helps move us toward a better understanding of what good reasoning is, with a goal of ensuring developers aren’t flying blind when they’re actually building good reasoners”.
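For readers who haven’t encountered it, the logical induction criterion (from Garrabrant et al.’s 2016 “Logical Induction” paper) can be paraphrased roughly as follows; this is a loose restatement for orientation, not the paper’s exact formulation:

$$\overline{\mathbb{P}}\ \text{is a logical inductor over}\ \overline{D} \;\Longleftrightarrow\; \text{no efficiently computable trader exploits}\ \overline{\mathbb{P}}\ \text{relative to}\ \overline{D},$$

where “exploits” means the set of plausible values of the trader’s holdings over time is bounded below but unbounded above. Note that this characterizes what good reasoning under logical uncertainty looks like, rather than giving a procedure for verifying an arbitrary system’s reasoning.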
Daniel Dewey’s summary of the motivation behind HRAD (highly reliable agent design) is:
2) If we fundamentally “don’t know what we’re doing” because we don’t have a satisfying description of how an AI system should reason and make decisions, then we will probably make lots of mistakes in the design of an advanced AI system.
3) Even minor mistakes in an advanced AI system’s design are likely to cause catastrophic misalignment.
To which Nate replied at the time:
I think this is a decent summary of why we prioritize HRAD research. I would rephrase 3 as “There are many intuitively small mistakes one can make early in the design process that cause resultant systems to be extremely difficult to align with operators’ intentions.” I’d compare these mistakes to the “small” decision in the early 1970s to use null-terminated instead of length-prefixed strings in the C programming language, which continues to be a major source of software vulnerabilities decades later.
I’d also clarify that I expect any large software product to exhibit plenty of actually-trivial flaws, and that I don’t expect that AGI code needs to be literally bug-free or literally proven-safe in order to be worth running.
The position of the AI community is something like the position researchers would be in if they wanted to build a space rocket, but hadn’t developed calculus or orbital mechanics yet. Maybe with enough trial and error (and explosives) you’ll eventually be able to get a payload off the planet that way, but if you want things to actually work correctly on the first go, you’ll need to do some basic research to cover core gaps in what you know.
To say that calculus or orbital mechanics help you “formally verify” that the system’s parts are going to work correctly is missing where the main benefit lies, which is in knowing what you’re doing at all, not in being able to machine-verify everything you’d like to. You need to formalize how good reasoning works because even if you can’t always apply conventional formal methods, you still need to understand what you’re building if you want robustness properties.
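As an aside on the C-strings comparison above: here is a minimal sketch of the contrast, in case it’s unfamiliar. The `lp_string` type, `risky_copy`, and `lp_copy` are made-up names for illustration, not code from any real project:

```c
#include <stdio.h>
#include <string.h>

/* Null-terminated: the length is implicit, so every consumer has to trust
 * that a '\0' appears before the end of the destination buffer. This
 * compiles and "works" on friendly input, but overruns dst the moment
 * src is longer than expected -- the classic buffer overflow behind
 * decades of vulnerabilities. */
void risky_copy(char *dst, const char *src) {
    strcpy(dst, src);  /* no idea how big dst is, or how long src really is */
}

/* Length-prefixed: the size travels with the data, so a copy routine can
 * check capacity before writing a single byte. */
typedef struct {
    size_t len;
    char   data[8];
} lp_string;

int lp_copy(lp_string *dst, const char *src, size_t src_len) {
    if (src_len > sizeof dst->data) {
        return -1;  /* refuse, instead of silently corrupting memory */
    }
    memcpy(dst->data, src, src_len);
    dst->len = src_len;
    return 0;
}

int main(void) {
    char small[8];
    risky_copy(small, "ok");  /* fine today... */
    /* risky_copy(small, "much longer than eight bytes");
       ...but this call would silently write past the end of `small`. */

    lp_string s;
    const char *msg = "much longer than eight bytes";
    if (lp_copy(&s, msg, strlen(msg)) != 0) {
        printf("rejected oversized input instead of overflowing\n");
    }
    return 0;
}
```

The point is not that length-prefixing is free of trade-offs; it’s that an intuitively small representational choice made in the early 1970s committed every later consumer of a string to re-deriving its length correctly, every time.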