The agency literature is there to model real agency relations in the world. Those real relations no doubt contain plenty of “unawareness”. If models without unawareness were failing to capture and explain a big fraction of real agency problems, there would be plenty of scope for people to try to fill that gap via models that include it. The claim that this couldn’t work because such models are limited seems just arbitrary and wrong to me. So either one must claim that AI-related unawareness is of a very different type or scale from ordinary human cases in our world today, or one must implicitly claim that unawareness modeling would in fact be a contribution to the agency literature. It seems to me a mild burden of proof sits on advocates for this latter case to in fact create such contributions.
The claim that this couldn’t work because such models are limited seems just arbitrary and wrong to me.
The economists I spoke to seemed to think that, in agency models with unawareness, the conclusions follow pretty immediately from the assumptions, so the models don’t teach you much. It’s not that such models can’t represent real agency problems, just that you don’t learn much from them. Perhaps if we’d spoken to more economists there would have been more disagreement on this point.
We have lots of models, such as supply and demand, that are useful even when the conclusions follow pretty directly from the assumptions. The question is whether such models are useful, not whether they are simple.
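To make that point concrete, here is a minimal linear version of the supply-and-demand example; the functional forms and symbols are illustrative assumptions, not anything from the original exchange. With demand $Q_d = a - bp$ and supply $Q_s = c + dp$ (all parameters positive, $a > c$), setting $Q_d = Q_s$ gives

\[
p^* = \frac{a - c}{b + d}, \qquad Q^* = \frac{ad + bc}{b + d}.
\]

The equilibrium follows from one line of algebra, yet the comparative statics (for instance, an outward demand shift that raises $a$ raises both $p^*$ and $Q^*$) still organize predictions about real markets. The model’s value lies in that mapping to the world, not in the difficulty of the derivation.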
So either one must claim that AI-related unawareness is of a very different type or scale from ordinary human cases in our world today, or one must implicitly claim that unawareness modeling would in fact be a contribution to the agency literature.
I agree that the Bostrom/Yudkowsky scenario implies AI-related unawareness is of a very different scale from ordinary human cases. From an outside view perspective, this is a strike against the scenario. However, this deviation from past trends does follow fairly naturally (though not necessarily) from the hypothesis of a sudden and massive intelligence gap.