What I meant by #2 is “a crowd of people who are trying to be more X, but who, currently, aren’t any more X than you (or indeed very X at all, in the grand scheme of things)”, not that they’re already very X but are trying to be even more X.
Fair. Nevertheless, if the average of the group is around my own level, that’s good enough for me if they’re also actively trying. (Pretty much by definition of the average, really...)
Empirically, it seems rather hard, in fact.
Well, either that, or a whole lot of people seem to have some reason for pretending not to be able to tell…
… Okay, sorry, two-place function. I don’t seem to have much trouble distinguishing.
(And yes, you can reasonably ask how I know I’m right, and whether or not I myself am good enough at the relevant Xs to tell, etc., etc., but… well, at some point that all turns into wasted motions. Let’s just say that I am good enough at distinguishing to arrive at the extremely obvious answers, so I’m fairly confident I’ll at least not be easily misled.)
Suppose they don’t? I have at least one that, AFAICT, does nothing worse than take researchers/resources away from AI alignment in most bad ends, and even in the worst-case scenario “just” generates a paperclipper anyway. Which, to be clear, is bad, but no worse than the current timeline.
(Namely, actual literal time travel and outcome pumps. There is some reason to believe that an outcome pump with a sufficiently short time horizon is easier to safely extract hypercompute from than an AGI, and that a “time machine” that moves an electron back a microsecond is at least energetically within the bounds of near-term technology.
You are welcome to complain that time travel is completely incoherent if you like; I’m not exactly convinced myself. But so far, the laws of physics have avoided actually banning CTCs outright.)
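To gesture at why “hypercompute from an outcome pump” isn’t crazy: post-selecting on “the verification succeeded” turns plain guess-and-check into something like an NP oracle. This is purely my own illustrative sketch, not anything claimed above; the names and the toy SAT instance are made up, and the post-selection is simulated classically by brute force just to show the interface a real pump would expose in a single retrocausal shot.

```python
# Sketch: an outcome pump as post-selected guess-and-check.
# A real pump would return a verified guess in one shot; here we fake the
# post-selection by exhaustive search, purely to illustrate the idea.
import itertools
from typing import Callable, Iterable, Optional, Tuple

Assignment = Tuple[bool, ...]

def outcome_pump_postselect(
    candidates: Iterable[Assignment],
    verify: Callable[[Assignment], bool],
) -> Optional[Assignment]:
    """Return a candidate conditioned on verify(candidate) holding.
    Classically this is exponential search; with post-selection it is
    a single guess whose failure branches never 'happen'."""
    for guess in candidates:
        if verify(guess):
            return guess
    return None  # no branch survives post-selection

# Toy use: find a satisfying assignment for (x1 or x2) and (not x1 or x3).
def verify(assignment: Assignment) -> bool:
    x1, x2, x3 = assignment
    return (x1 or x2) and ((not x1) or x3)

print(outcome_pump_postselect(itertools.product([False, True], repeat=3), verify))
```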