I’m not aware of others explicitly trying to deduce our native algorithm for impact. No one was claiming that the ontological theories explain our intuitions, and they didn’t have the same “is this a big deal?” question in mind. However, we need to actually understand the problem we’re solving, and providing that understanding is one responsibility of an impact measure! Understanding our own intuitions is crucial not just for producing nice equations, but also for getting a feel for what a “low-impact” Frank would do.
I wish you’d expanded on this point a bit more. To me, it seems like to come up with “low-impact” AI, you should be pretty grounded in situations where your AI system might behave in an undesirably “high-impact” way, and generalise the commonalities between those situations into some neat theory (and maybe do some philosophy about which commonalities you think are important to generalise vs accidental), rather than doing analytic philosophy on what the English word “impact” means. Could you say more about why the test-case-driven approach is less compelling to you? Or is this just a matter of the method of exposition you’ve chosen for this sequence?
Most of the reason is indeed exposition: our intuitions about AU-impact are surprisingly clear-cut and lead naturally to the thing we want “low impact” AIs to do (not be incentivized to catastrophically decrease our attainable utilities, yet still execute decent plans). If our intuitions about impact were garbage and misleading, then I would have taken a different (and perhaps test-case-driven) approach. Plus, I already know that the chain of reasoning leads to a compact understanding of the test cases anyways.
I’ve also found that test-case-based discussion (without first knowing what we want) can lead to a blending of concerns: someone might think the low-impact agent should do X because agents who generally do X are safer (and they don’t see a way around that), or someone might secretly have a different conception of the problems that low-impact agency should solve, etc.