What about all the angst people had over things like irrational numbers, infinitesimals, non-smooth functions, infinite cardinalities, and non-Euclidean geometries?
I think what you’re saying about needing some way to change our minds is a good point, though. And I certainly wouldn’t say that every single object-level belief I hold is more secure than every meta-level belief. I’ll even grant you that for certain decisions, like how to set public health policy, some sort of QALY-based, shut-up-and-calculate approach is the right way to go.
But I don’t think that’s the way to change our minds about something like how we deal with homosexuality, either on a descriptive or a normative level. Nobody read Bentham and said, “you know what, guys, I don’t think being gay actually costs any utils! I guess it’s fine.” And if they did, it would have been bad moral epistemology.

If you put yourself in the mind of an average Victorian, “don’t be gay” sits very securely in your web of belief. It’s bolstered by what you think about virtue, religion, deontology, and even health. And what you think about those things is more or less consistent with and confirmed by what you think about everything else. It’s like moral-epistemic PageRank. The “don’t be gay” node has strongly weighted edges from the strongest cluster of nodes in your belief system, and they all point at each other. Compared to those nodes, meta-level stuff like utilitarianism is in a distant and unimportant backwater region of the graph. If anything, an arrow from utilitarianism to “being gay is OK” looks to you like a reason not to take utilitarianism too seriously.

In order for you to change your mind about homosexuality, you need to change your mind about everything. You need to move all that moral PageRank to totally different regions of the graph. And picking a meta-theory to rule them all and assigning it a massive weight seems like a crazy, reckless way to do that. If you’re doing that, you’re basically saying you prioritize meta-ethical consistency over all the object-level things you actually care about. It seems to me the only sane way to update is to slowly alter the object-level stuff as you learn new facts, or discover inconsistencies in what you value, and to try to maintain as much reflective consistency as you can while you do it.
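(If the PageRank analogy feels hand-wavy, here’s a toy sketch of what I mean: plain power-iteration PageRank over a made-up belief graph. Every node name and edge weight is invented for illustration, not a claim about actual Victorian psychology.)

```python
# Toy illustration: weighted PageRank by power iteration over a
# hypothetical Victorian web of belief. All names and weights invented.

nodes = ["virtue", "religion", "deontology", "health",
         "dont_be_gay", "utilitarianism", "being_gay_is_ok"]

# edges[a][b] = w: belief a lends support of weight w to belief b.
edges = {
    "virtue":         {"religion": 1.0, "deontology": 1.0, "dont_be_gay": 2.0},
    "religion":       {"virtue": 1.0, "deontology": 1.0, "dont_be_gay": 2.0},
    "deontology":     {"virtue": 1.0, "religion": 1.0, "dont_be_gay": 2.0},
    "health":         {"dont_be_gay": 1.0},
    "dont_be_gay":    {"virtue": 1.0, "religion": 1.0, "deontology": 1.0},
    "utilitarianism": {"being_gay_is_ok": 1.0},
    "being_gay_is_ok": {},
}

def pagerank(nodes, edges, damping=0.85, iters=100):
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for a, outs in edges.items():
            total = sum(outs.values())
            if total == 0:  # dangling node: redistribute its rank evenly
                for v in nodes:
                    new[v] += damping * rank[a] / n
            else:
                for b, w in outs.items():
                    new[b] += damping * rank[a] * (w / total)
        rank = new
    return rank

for belief, score in sorted(pagerank(nodes, edges).items(),
                            key=lambda kv: -kv[1]):
    print(f"{belief:16} {score:.3f}")
```

Run it and “dont_be_gay” lands near the top while “being_gay_is_ok” sits at the bottom, which is roughly the picture I’m gesturing at: the mutually reinforcing cluster hoards the rank, and the lone edge out of utilitarianism just doesn’t have the link mass to move anything.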
PS. I guess I kind of made it sound like I believe the Whig theory of moral history, where modern Western values are clearly the true scion of Victorian values, and if we could just tell the Victorians what we know and walk them through the arguments, we could convince them that we were right, even by their own standards. I’m undecided on that, and I’ll admit it might be the case that we just fundamentally disagree on values, and that “moral progress” is a random walk. Or not. Or it’s a mix. I have no idea.