I think you have to present people with authoritative knowledge though… without it, you are forced to re-develop the entire history of science in one lifetime, which humans just aren’t smart enough to do. Maybe an ideal AI could do it, but we aren’t one, so we can’t.
I think a better plan is this: Give authoritative knowledge that tells you to distrust authoritative knowledge. This forces the mind into cognitive dissonance which then gets resolved by saying “Authoritative knowledge is useful—but not absolutely certain.”