Sadly, I don’t have any really good answers for you.
Thanks, it’s actually very interesting and important information.
I don’t know of specific cases, but for example I think it is quite common for people to start studying meta-ethics out of frustration at being unable to find answers to questions in normative ethics.
I’ve noticed (and stated in the OP) that normative ethics seems to be an exception where it’s common to express uncertainty/confusion/difficulty. But I think, from both my inside and outside views, that this should be common in most philosophical fields (because e.g. we’ve been trying to solve them for centuries without coming up with broadly convincing solutions), and there should be a steady stream of all kinds of philosophers going up the meta ladder all the way to metaphilosophy. It recently dawned on me that this doesn’t seem to be the case.
Many of the philosophers I know who work on AI safety would love for there to be an AI pause, in part because they think alignment is very difficult. But I don’t know if any of us have explicitly called for one, in part because doing so seems useless and may have opportunity costs.
What seems useless: calling for an AI pause, or the AI pause itself? I have trouble figuring this out, because if it’s “calling for an AI pause”, what is the opportunity cost (it seems easy enough to write or sign an open letter), and if it’s “the AI pause itself”, then “seems useless” contradicts “would love”. In either case, this seems extremely important to openly discuss/debate! Can you please ask these philosophers to share their views on this on LW (or their preferred venue), and share your own views?
FTR I’d probably be up for helping out logistically with such an open letter (e.g. making the website and any other parts of it). I previously made this open letter.
Sorry for being unclear, I meant that calling for a pause seems useless because it won’t happen. I think calling for the pause has opportunity cost because of limited attention and limited signalling value; reputation can only be used so many times, so it’s better to channel pressure towards asks that could plausibly get done.
I think there’s a steady stream of philosophers getting interested in various questions in metaphilosophy; metaethics is just the most salient to me. One example is the recent trend towards conceptual engineering (https://philpapers.org/browse/conceptual-engineering). Metametaphysics has also gotten a lot of attention in the last 10-20 years (https://www.oxfordbibliographies.com/display/document/obo-9780195396577/obo-9780195396577-0217.xml). There is also some recent work in metaepistemology, but maybe less so because the debates tend to recapitulate previous work in metaethics (https://plato.stanford.edu/entries/metaepistemology/).
Thanks for this info and the references. I guess by “metaphilosophy” I meant something more meta than metaethics or metaepistemology, i.e., a field that tries to understand all philosophical reasoning in some unified or systematic way, including the reasoning used in metaethics and metaepistemology, and in metaphilosophy itself. (This may differ from standard academic terminology, in which case please let me know if there’s a preferred term for the concept I’m pointing at.) My reasoning is that metaethics itself seems like a hard problem that has defied solution for centuries, so why stop there instead of going even more meta?
I think you (and other philosophers) may be too certain that a pause won’t happen, but I’m not sure I can convince you (at least not easily). What about calling for it in a low-cost way, e.g., instead of doing something high-profile like an open letter (with perceived high opportunity costs), just writing a blog post or even a tweet saying that you wish for an AI pause, because …? What if many people privately prefer an AI pause, but nobody knows because nobody says anything? What if by keeping silent, you’re helping to keep society in a highly suboptimal equilibrium?
I think there are also good arguments for doing something like this from a deontological or contractualist perspective (i.e. you have a duty/obligation to honestly and publicly report your beliefs on important matters related to your specialization), which sidestep the “opportunity cost” issue, but I’m not sure if you’re open to that kind of argument. I think they should have some weight given moral uncertainty.