Could rationalists be the worst people to put in charge of keeping an AI in the box? We can assume:
Smarter people can convince perfectly rational dumber people of anything, even against their best interests.
Evolution figured out it can’t be guaranteed to produce the smartest brain and needs to give everyone built-in safeguards.
I’ve seen AI-box transcripts in which the consensus was that the AI stayed in the box because the gatekeeper was too “dumb” to be swayed by the nuanced, rational arguments. Instead of attributing that stubbornness to a lack of intelligence, could it be a safeguard deliberately installed by evolution to defend against clever wordplay and manipulation? Is rationalism the ultimate “cult,” in that its defining feature is the voluntary removal of a safeguard meant to prevent one’s values and goals from being dominated by those with superior wordplay?
Are there loopholes in this mechanism specifically for one’s parents? Is that why the values your parents gave you, whether about religion or automobile brands, are so difficult to displace? If religion propagates primarily through a parental-trust loophole that is closed to others, that would be easy to test: is a preference for Ford or Chevy just as generationally robust?
I haven’t heard a nonrationalist say, “What you are saying sounds smart, but I don’t know enough about the topic to evaluate your argument, so I won’t let myself be convinced.” I have, however, heard exactly that from a rationalist (specifically, someone attending the LessWrong Community Weekend).
It’s not rational to let yourself be argued into anything when faced with a powerful and potentially manipulative actor.
If you haven’t read it yet, you might be very interested in Reason as memetic immune disorder.