I’d consider those to be “in-scope” for the database, so it would include any such estimates that I was aware of and that weren’t too private to share.
If I recall correctly, some estimates in the database are decently related to that, e.g. are framed as “What % of the total possible moral value of the future will be realized?” or “What % of the total possible moral value of the future is lost in expectation due to AI risk?”
But I haven’t seen many estimates of that type, and I don’t remember seeing any that were explicitly framed as “What fraction of the accessible universe’s resources will be used in a way optimized for ‘the correct moral theory’?”
If you know of some, feel free to comment in the database to suggest they be added :)
Glad to hear that!
I do feel excited about this being used as a sort of “201 level” overview of AI strategy and what work it might be useful to do. And I’m aware of the report being included in the reading lists / curricula of two training programs for people getting into AI governance or related work, which was gratifying.
Unfortunately we ran this survey before ChatGPT and various other events that have since majorly changed the landscape of AI governance work to be done, e.g. by opening various policy windows. So I imagine people reading this report today may feel it has some odd omissions / vibes. But I still think it serves as a good 201-level overview despite that. Perhaps we’ll run a follow-up in a year or two to provide an updated version.