I’m pretty sure WFLL1 only applies in the case where AI is “responsible for” some very large fraction of the economy (I imagine >90%), for which we don’t really have much of a historical precedent.
And we could ask “But what about love, honor or justice? Will we forget about those unquantifiable things in the era of the algorithm?”
When I imagine WFLL1 that doesn’t turn into WFLL2, I usually imagine a world in which all existing humans lead great lives, but don’t have much control over the future. On a moment-to-moment basis, that world is better than the current world, but we don’t get to influence the future and make use of the cosmic endowment, and so from a total view we have lost >99% of the potential value of the future. Such a world can still include love, honor and justice among the humans who are still around.
On the other hand, the last time I mentioned this to a group of ~6 people, all of them at least interested in AI safety, not a single other person shared this impression; they instead found WFLL1 convincing as an example of a world that is moment-to-moment worse than the current world, yet still not WFLL2.
Objection 2: Absence of evidence
AI has a very minor economic impact right now, but even so, I’d argue that the concerns over fairness and bias in AI are evidence of WFLL1: we can’t measure the “fairness” of a classifier, so we end up optimizing the proxies we can measure rather than the thing we actually mean.
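To make the “can’t measure” point concrete, here is a toy sketch (the data, group labels, and loan-approval framing are made up for illustration, not taken from Paul’s post) of demographic parity difference, one of the standard measurable proxies for fairness. The proxy is trivial to compute and optimize against, but scoring well on it doesn’t mean the classifier is fair in the sense we care about.

```python
# Toy sketch of the "measure vs. mean" gap for classifier fairness.
# All data below is made up for illustration.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between two groups.

    This is one common, easily measured proxy for "fairness"; it says
    nothing about whether individual decisions were made for reasons
    we would endorse, which is the part we can't measure.
    """
    rates = {}
    for g in set(groups):
        group_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(group_preds) / len(group_preds)
    a, b = rates.values()
    return abs(a - b)

# Hypothetical loan-approval decisions (1 = approve, 0 = reject).
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(predictions, groups)
print(f"Demographic parity difference: {gap:.2f}")
```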
Objection 3: Why privilege this axis
Mostly that, for all the other axes you name, I expect deep learning to eventually become capable along those axes too. To be fair, I also think that deep learning models will be able to do what we mean rather than what we measure, but that seems like the axis most likely to fail. (I do find the dataset axis somewhat convincing, but even there I expect self-supervised learning to make that axis less important.)
I was uncertain about this interpretation (that in WFLL1 existing humans lead good lives moment-to-moment but lose their influence over the future, and with it most of its potential value), but it seems this is at least what Paul intended. From here, about WFLL1:
The availability of AI still probably increases humans’ absolute wealth. This is a problem for humans because we care about our fraction of influence over the future, not just our absolute level of wealth over the short term.