Perhaps this is a crux in this debate: if you think the ‘agent-agnostic perspective’ is useful, you also think a relatively steady state of ‘AI Safety via Constant Vigilance’ is possible. This would be a situation where systems that aren’t significantly inner misaligned (otherwise they’d have no reason to respond to governing systems, feedback, or other incentives) but are somewhat outer misaligned (so they are honestly and accurately aiming to maximise some complicated measure of profitability or approval, not directly aiming to do what we want them to do), can be kept in check by reducing competitive pressures, building the right institutions and monitoring systems, and ensuring we have a high degree of oversight.
Paul thinks that it’s basically always easier to just go in and fix the original cause of the misalignment, while Andrew thinks that there are at least some circumstances where it’s more realistic to build better oversight and institutions to reduce said competitive pressures. The agent-agnostic perspective is useful for the latter of these projects, which is why he endorses it.
I think that this scenario of Safety via Constant Vigilance is worth investigating—I take Paul’s later failure story to be a counterexample to such a thing being possible, as it’s a case where this solution was attempted and worked for a little while before catastrophically failing. This also means that the practical difference between the RAAP 1a-d failure stories and Paul’s story just comes down to whether there is an ‘out’ in the form of safety by vigilance.