I had a quick look for an online reference to link to before posting this, and couldn’t find anything. It’s not a particularly complicated theory, though: “purple” ideas are vague, intuitive, pre-theoretic; “orange” ones are explicable, describable and model-able. A lot of AI safety ideas are purple, which is why CFAR tells people not to simply ignore them the way they would in many technical contexts.
I’ll publish a follow-up post with arguments for and against realism about rationality.
Or you could say vague and precise.
Thanks for the explanation!