Many of these issues seem related to Arrow’s impossibility theorem: if groups have genuinely different values, and we optimize for one set rather than another, ants get tiny apartments and people starve, or we destroy the world economy because we discount too much, and so on.
To clarify, I think LessWrong thinks most issues are simple because we know little about them; we want to just fix them. As an example, poverty isn’t solved, for good reasons: it’s hard to balance incentives and growth, to deal with heterogeneity, to work within the absolute limits on current wealth and our ability to move it around, and to reconcile the competing priorities of nations and individuals. It’s not unsolved because people are too stupid to give money to charities that feed the poor. We underestimate the rest of the world because we’re really good at one thing and think everyone is stupid for not being good at it; and even if we’re right, we’re not good at (or at understanding) many other things, and some of those things matter for fixing these problems.
Note: Arrow’s Impossibility Theorem is not actually a serious philosophical hurdle for a utilitarian (though related issues such as the Gibbard-Satterthwaite theorem may be). That is to say: it is absolutely trivial to create a social utility function which meets all of Arrow’s “impossible” criteria, if you simply allow cardinal instead of just ordinal utility. (Arrow’s theorem is based on a restriction to ordinal cases.)
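The cardinal workaround can be illustrated with a toy aggregator (the voter names and utility numbers below are hypothetical): sum each voter’s cardinal utilities and rank alternatives by the total. In the cardinal setting this rule is non-dictatorial, Pareto-efficient, and independent of irrelevant alternatives, the criteria Arrow proved jointly unsatisfiable for purely ordinal ballots.

```python
def social_ranking(utilities):
    """utilities: dict mapping voter -> {alternative: cardinal utility}.
    Returns alternatives sorted from highest to lowest total utility."""
    totals = {}
    for voter_utils in utilities.values():
        for alt, u in voter_utils.items():
            # Add this voter's cardinal utility for the alternative
            # to the running social total.
            totals[alt] = totals.get(alt, 0.0) + u
    return sorted(totals, key=totals.get, reverse=True)

# Hypothetical three-voter, three-alternative example:
votes = {
    "alice": {"A": 10, "B": 4, "C": 0},
    "bob":   {"A": 0,  "B": 9, "C": 1},
    "carol": {"A": 2,  "B": 8, "C": 5},
}
print(social_ranking(votes))  # B (21) beats A (12) beats C (6)
```

Note that the hard part, as the reply below points out, is not the aggregation rule itself but justifying the cardinal numbers: the sums are only meaningful if utilities are interpersonally comparable.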
Thank you for the clarification. Despite this, cardinal utility is difficult because it assumes that we care about different people’s preferences the same amount, or by definably different amounts; that is, it requires interpersonal comparisons of utility.
Unless there is a commodity that can adequately represent preferences (like money) and a fair redistribution mechanism, we still have problems maximizing overall welfare.
No argument here. It’s hard to build a good social welfare function in theory (i.e., even if you can assume away information limitations), and harder in practice (with people actively manipulating it). My point was just that it is a mistake to think Arrow showed it was impossible.
(Also: I appreciate the “thank you”, but it would feel more sincere if it came with an upvote.)
I had upvoted you. Also, I used Arrow as a shorthand for that class of theorem, since they all show that a class of group decision problem is unsolvable—mostly because I can never remember how to spell Satterthewaite.