Primarily does not mean exclusively. Likewise, a lack of confidence in the implications between desiderata doesn’t imply a lack of confidence in opinions about how to modify impact measures, which in turn doesn’t imply a lack of opinions about how to modify impact measures.
People keep saying things like [‘it’s non-trivial to relax impact measures’], and that might be true. But on what data are we basing this?
This is according to my intuitions about which theories do which things, intuitions shaped by a good deal of learning mathematics, reading about algorithms in AI, and thinking about impact measures. This isn’t a rigorous argument, or even necessarily a very reliable method of ascertaining truth (I’m probably quite sub-optimal at converting experience into intuitions), but it’s still my impulse.
True, but avoiding lock-in seems value-laden for any approach that attempts it, which reduces back to the full problem: what “kinds of things” can change? And even if we knew that, who is allowed to change them? But this is the clinginess / scapegoating tradeoff again.
My sense is that we agree that this looks hard but shouldn’t be dismissed as impossible.