This post seems confused about utility maximisation.
It’s possible for an argument to fail to consider some evidence and so mislead, but this isn’t a problem with expected utility maximisation, it’s just assigning an incorrect distribution for the marginal utilities. Overly formal analyses can certainly fail for real-world problems, but half-Bayesian ad-hoc mathematics won’t help.
EDIT: The mathematical meat of the post is the linked-to analysis done by Dario Amodei. This is perfectly valid. But the post muddies the mathematics by comparing the unbiased measurement considered in that analysis with estimates of charities’ worth. The people giving these estimates will have already used their own priors, and so you should only adjust their estimates to the extent to which your priors differ from theirs.
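The adjustment described here can be sketched as the standard normal-normal conjugate update: an unbiased but noisy estimate is shrunk toward the prior mean, with weights set by the relative variances. This is only an illustration of the general technique; the function name and the numbers are mine, not taken from the linked analysis.

```python
def posterior_mean_and_var(prior_mean, prior_var, estimate, noise_var):
    """Conjugate update for a normal prior and an unbiased normal measurement."""
    precision = 1.0 / prior_var + 1.0 / noise_var
    post_var = 1.0 / precision
    post_mean = post_var * (prior_mean / prior_var + estimate / noise_var)
    return post_mean, post_var

# A very noisy estimate barely moves the posterior away from the prior:
# prior N(0, 1), measurement 3.0 with noise variance 9.0.
mean, var = posterior_mean_and_var(prior_mean=0.0, prior_var=1.0,
                                   estimate=3.0, noise_var=9.0)
# post_var = 0.9, post_mean = 0.9 * (3.0 / 9.0) = 0.3
```

The point about double-counting follows directly: if the estimator has already performed this update with their own prior, applying it again with yours is only warranted to the extent your prior differs from theirs.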
It’s possible for an argument to fail to consider some evidence and so mislead, but this isn’t a problem with expected utility maximisation, it’s just assigning an incorrect distribution for the marginal utilities. Certainly overly formal analyses can fail for real-world problems, but half-Bayesian ad-hoc mathematics won’t help.
This was exactly my initial reaction to Holden’s post. But either I or somebody else needs to explain this response in more detail.
This post seems confused about utility maximisation.
It’s possible for an argument to fail to consider some evidence and so mislead, but this isn’t a problem with expected utility maximisation, it’s just assigning an incorrect distribution for the marginal utilities.
As Bongo noted, the post doesn’t argue against expected utility maximization.
Certainly overly formal analyses can fail for real-world problems, but half-Bayesian ad-hoc mathematics won’t help
This is along the lines of the final section of the post titled “Generalizing the Bayesian approach.”
The people giving these estimates will have already used their own priors, and so you should only adjust their estimates to the extent to which your priors differ from theirs.
No. The people giving these estimates may be reasoning poorly and/or may have put insufficient time into thinking about the relevant issues, and consequently fail to fully utilize their Bayesian prior. (Of course, this characterization applies to everyone in some measure; it’s a matter of degree.)