Indeed, the fact that there’s nothing resembling a consensus among professional philosophers about almost anything you’ve described as achievements [...]
Really? As far as I can tell, the consensus for Bayesian updating and expected utility maximization among professional philosophers is near total. Most of them haven’t heard of UDT yet, but on Less Wrong and at SIAI there also seems to be a consensus that UDT is, if not quite right, at least on the right track.
For many branches of learning, the key to success has been to mathematicize the areas.
But how do you mathematicize an area, except by doing philosophy? I mean real world problems do not come to you in the form of equations to be solved, or algorithms to be run.
Really? As far as I can tell, the consensus for Bayesian updating and expected utility maximization among professional philosophers is near total. Most of them haven’t heard of UDT yet, but on Less Wrong and at SIAI there also seems to be a consensus that UDT is, if not quite right, at least on the right track.
From my (anecdotal but varied) experience talking to professional philosophers about them, I’d (off-the-cuff) estimate 80% are not familiar with expected utility maximization (in the sense of multiplying the probability of an outcome by its utility) or Bayesian updating. Of the rest, a significant portion think that the Bayesian approach to probability is wrong or nonsensical, or that “expected utility maximization” is obviously wrongheaded because it sounds like Utilitarianism.
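For readers to whom the two terms above are unfamiliar, here is a minimal sketch (in Python, with made-up numbers) of what “Bayesian updating” and “expected utility maximization” amount to computationally. The hypotheses, acts, probabilities, and utilities below are purely illustrative.

```python
# Toy illustration only; none of these numbers are anyone's actual credences.

def bayes_update(prior, lik_given_h, lik_given_not_h):
    """Posterior P(H | E) from prior P(H) and the two likelihoods, via Bayes' theorem."""
    p_evidence = lik_given_h * prior + lik_given_not_h * (1 - prior)
    return lik_given_h * prior / p_evidence

def expected_utility(lottery):
    """Sum of probability * utility over an act's possible outcomes."""
    return sum(p * u for p, u in lottery)

# Bayesian updating: start with P(H) = 0.5, observe evidence E where
# P(E | H) = 0.8 and P(E | not-H) = 0.2; the posterior comes out to 0.8.
posterior = bayes_update(prior=0.5, lik_given_h=0.8, lik_given_not_h=0.2)

# Expected utility maximization: choose the act with the highest
# probability-weighted utility (here act_a, with EU 7 versus 6).
acts = {
    "act_a": [(0.8, 10), (0.2, -5)],   # (probability, utility) pairs
    "act_b": [(1.0, 6)],
}
best_act = max(acts, key=lambda name: expected_utility(acts[name]))
```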
“Utilitarianism” is a term for a specific concept hogging a perfectly good name that could be used for something more general: utility-based decision making.
I run into a fair number of epistemologists who are not keen on describing beliefs in terms of probabilities and want to use binary “believe” vs “not believe” terms, or binary “justification.” Bayesian updating and utility-maximization decision theory are pretty dominant among philosophers of probability and decision theorists, but not universal among philosophers.
I’m a philosophy grad student. While I agree that many epistemologists still think it is important to talk in terms of believe/not-believe and justified/non-justified, I find relatively few epistemologists who reject the notion of credence or think that credences shouldn’t be probabilities. Of those who think credences shouldn’t be probability functions, most would not object to using a weaker system of imprecise probabilities (reference: James M. Joyce (2005), “How Probabilities Reflect Evidence,” Philosophical Perspectives 19(1): 153–178). These people are still pretty much on team Bayesianism.
So, in a way, the Bayesian domination is pretty strong. In another way, it isn’t: few debates in traditional epistemology have been translated into Bayesian terms and solved (though doing so would probably settle very many of them). And many epistemologists doubt that Bayesianism will be genuinely helpful with respect to their concerns.
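To make the contrast between sharp credences and the weaker imprecise-probability framework Joyce discusses a bit more concrete, here is a rough sketch of one common formalization: an imprecise credence as a set of admissible priors (a credal set), each member of which is updated by Bayes’ rule. The numbers are made up.

```python
# Toy numbers again. A sharp Bayesian credence is a single prior; an imprecise
# credence is a set of admissible priors (a credal set). Conditioning on
# evidence updates every member, and the agent's credence in H becomes the
# resulting interval rather than a single number.

def bayes_update(prior, lik_given_h, lik_given_not_h):
    return lik_given_h * prior / (lik_given_h * prior + lik_given_not_h * (1 - prior))

sharp_prior = 0.5
credal_set = [0.3, 0.5, 0.7]   # several priors the agent regards as admissible

# Evidence E with P(E | H) = 0.8 and P(E | not-H) = 0.2, as in the earlier sketch.
sharp_posterior = bayes_update(sharp_prior, 0.8, 0.2)        # 0.8
updated_set = [bayes_update(p, 0.8, 0.2) for p in credal_set]
imprecise_posterior = (min(updated_set), max(updated_set))   # roughly (0.63, 0.90)
```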
Just skim the Stanford Encyclopedia of Philosophy articles on probability and see how uncontroversial philosophers in general consider Bayesian inference to be. I think you’ll see that they regard it as problematic and controversial.
Really? As far as I can tell, the consensus for Bayesian updating and expected utility maximization among professional philosophers is near total.
According to the PhilPapers Survey, 25.8% (ETA: wrong number; 23.6% is the correct value, I quoted from the wrong entry) of surveyed philosophers were consequentialists of some form. That makes it hard to argue for a consensus about maximizing expected utility.
But how do you mathematicize an area, except by doing philosophy? I mean real world problems do not come to you in the form of equations to be solved, or algorithms to be run.
This seems to run into SilasBarta’s inquiry above about what you mean by philosophy. I wouldn’t, for example, consider the work of people like Galileo and Newton to be philosophy, but they took physics and put it on solid mathematical grounding. Similar remarks apply to Lavoisier or many people in other fields.
According to the PhilPapers Survey, 25.8% of surveyed philosophers were consequentialists of some form. That makes it hard to argue for a consensus about maximizing expected utility.
There are a lot of philosophers who buy into maximizing expected utility but aren’t consequentialists. Proof: if you look at philosophers specializing in decision theory, only 58% buy into consequentialism (link). Yet the vast majority of decision theorists would go for something very close to expected utility maximization.
Part of this has to do with consequentialism not having a crisp definition that fits philosophers’ intuitive usage. Some think consequentialism must be agent-neutral and get off the boat there (but could still be EU maximizers). Others have preferences that could (if made more coherent) satisfy the axioms of decision theory, but don’t think that the utility function representing those preferences also orders outcomes in terms of goodness. I.e., these people want to be EU maximizers, but don’t want to maximize goodness (maybe they want to maximize some weighted combination of goodness and keeping their hands clean).
Valid point. The question asked was “Normative ethics: deontology, consequentialism, or virtue ethics?” (Note: I actually quoted from the wrong entry above; the correct value is 23.6%, but this makes little difference.) It seems fair to say that the vast majority of deontologists and virtue ethicists are not EU maximizers. So let’s include everyone who picked consequentialism or “other” as an option; this should presumably overestimate the fraction we care about for this purpose. That’s a total of 55.9%, only slightly over half. Is that a consensus?
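A quick check of the arithmetic, using only the figures quoted in this thread (the “other” share is inferred from them, not taken from the survey directly):

```python
# Figures as quoted above; "other" is inferred from them, not looked up.
consequentialism = 23.6                          # percent of surveyed philosophers
combined = 55.9                                  # consequentialism plus "other"
other = round(combined - consequentialism, 1)    # 32.3 percent implied
```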
From my (anecdotal but varied) experience talking to professional philosophers about them, I’d (off-the-cuff) estimate 80% are not familiar with expected utility maximization (in the sense of multiplying the probability of an outcome by its utility) or Bayesian updating [...]
That matches my anecdotal and varied experience, and as we know, the singular of anecdote is ‘update’ and the plural is ‘update more’.
Should I quote you for this one, or was it someone else originally?
So, in a way, the Bayesian domination is pretty strong. In another way, it isn’t: few debates in traditional epistemology have been translated into Bayesian terms and solved [...]
I mostly agree with this.