I’m probably not processing evidence any differently from “rationalism”. But starting an argument with “your entire way of thinking is wrong” gets interpreted by the audience as “you’re stupid” and things go downhill from there.
There are definitely such people. The question is whether people who don’t want to learn to process evidence correctly (because the idea that they’ve been doing it the wrong way until now offends them) were ever going to contribute to AI alignment in the first place.
Fair point. My position is simply that, when trying to make the case for alignment, we should focus on object-level arguments. It’s not a good use of our time to try to reteach philosophy when the object-level arguments are the crux.
That’s generally true… unless both parties process the object-level arguments differently, because they have different rules for updating on evidence.