:) There’s something good about “common sense” that isn’t in “effective epistemics”, though—something about wanting not to lose the robustness of the ordinary, vetted-by-experience patterns of functioning. (Even though this is really hard, plausibly impossible, when we need to reach toward contexts far from those on which our experience was based.)
I nominate “Society of Effective Epistemics For AI Risk” or SEE-FAR for short.
This is the best idea I’ve heard yet.
It would be pretty confusing to people, and yet...