The zero-one-infinity heuristic.
Interesting point. But that’s very weak evidence (because, as I said, the two known instances have significant differences). Also, this is a heuristic and produces many false positives.
At best it motivates me to remain open to arguments that there might be more kinds of ‘truth’, which I am. But the mere argument that there might be is not interesting, unless someone can provide an argument for a concrete example. Or even a suggestion of what a concrete example might be like.
You should study more history of ideas; once you see several examples of seemingly-unsolvable philosophical problems that were later solved by intellectual paradigm shifts, you become much less willing to believe that a particular problem is unsolvable simply because we currently don’t have any idea how to solve it.
I don’t believe a problem is unsolvable. I don’t see a problem in the first place. I don’t have any unsolved questions in my world model.
You keep saying I should be more open to new ideas and unsure of my existing ideas. But you do not suggest any concrete new idea. You also do not point to the need for a new idea, such as an unsolved problem. You’re not saying anything that isn’t fully general and applicable to everyone’s beliefs.
The physical anti-realist doesn’t see any problem in his world view either.
I’m not interested in dialogue with physical anti-realists. Certain mutual assumptions are necessary to hold a meaningful conversation, and some kind of physical realism is one of them. Another example is the Past Hypothesis: we must assume the past had lower entropy, otherwise we would believe that we are Boltzmann brains and be unable to trust our memories or senses. A third example is induction: believing the universe is more likely to be lawful than not, that our past experience is at least in principle a guide to the future.
If moral realists are on the same level as physical realists—if they have no meaningful arguments for their position based on shared assumptions, but rather say “we are moral realists first and everything else follows, and if it conflicts with any other epistemological principles so much the worse for them”—then I’m not interested in talking to them (about moral realism). And I expect a very large proportion of people who agree with LW norms on rational thinking would say the same.