These look like good criteria, but I wonder how many organizations are satisfactory in this regard. My expectation would be ~0.
The only ones I can think of which are even cognizant of epistemic considerations at the executive level are places like the Federal Reserve and the CDC. I can think of more organizations that think about equilibria, for liberal interpretations of the word, but they are mainly dedicated to preventing us from falling into a worse one (national defense). Moral uncertainty seems like the hardest hurdle to clear; most organizations are explicit in either their amorality or the scope of their morality, and there is very little discretion to change. Happily, feedback mechanisms seem to do all right, though I come up short of examples where they improve things at the meta level.
All that aside, we can surely start with a simple case and build up from there. Suppose all of these criteria were met to your satisfaction, and a decision was made which was very risky for you personally. How would you think about this? What would you do?
I’ve been re-reading a sci-fi book which has an interesting existential-risk scenario where most people are going to die, but some may survive.
If you are a person on Earth in the book, you have the choice of helping other people and definitely dying, or trying desperately to be one of the ones who survive (even if you personally might not be the best person to help humanity survive).
In that situation I would definitely be in the “helping people better suited for surviving” camp: following orders, because the situation is too complex to keep in one person’s head, and the danger is fine because you are literally a dead person walking.
It becomes harder when the danger isn’t so clear and present. I’ll think about it a bit more.
The title of the book is frirarirf (rot13)