Is rationalism really necessary for understanding MIRI-type views on AI alignment? I personally find rationalism off-putting, and I don’t think it’s very persuasive to say “you have to accept a complex philosophical system and rewire your brain to process evidence and arguments differently in order to understand one little thing.” If that’s the deal, I don’t think you’ll find many takers outside of those already convinced.
In what way are you processing evidence differently from “rationalism”?
I’m probably not processing evidence any differently from “rationalism”. But starting an argument with “your entire way of thinking is wrong” gets interpreted by the audience as “you’re stupid” and things go downhill from there.
There are definitely such people. The question is whether people who don’t want to learn to process evidence correctly (because the idea that they’ve been doing it the wrong way until now offends them) were ever going to contribute to AI alignment in the first place.
Fair point. My position is simply that, when trying to make the case for alignment, we should focus on object-level arguments. It’s not a good use of our time to try to reteach philosophy when the object-level arguments are the crux.
That’s generally true… unless both parties process the object-level arguments differently, because they have different rules for updating on evidence.
EY originally blamed people’s failure to agree with his obviously correct arguments about AI on poor thinking skills, then set about correcting that. But other explanations are possible.
Yeah, that’s not a very persuasive story to skeptics.