As an ignorant child and young adult, I found that giving small amounts of money made me slightly happier. It made me feel good about myself, and I vastly overestimated the effect.
The moderate sums I later gave for explicitly utilitarian reasons didn’t buy me as much happiness as self-centered spending would have. I hope they bought more total happiness, though.
Giving nontrivial money is like swimming upstream for me, motivationally speaking. The motivation is further decreased by my knowledge of man-made inefficiencies, e.g. bans on taboo but Pareto-improving voluntary exchange (for example: for the right amount of money, I would gladly sell one of my kidneys).
Really effective causes tend to have speculative or otherwise indirect elements to them: for instance, veg*anism advocacy to reduce animal suffering, or research funding for high-stakes topics. These may make the most utilitarian sense, but they are less about allocating resources and more about putting faith into causal chains that may well be broken. You pay, you get ads, but from there there’s still a jump to actual reductions of actual suffering. You pay, you fund research institutions, but from there there’s still a jump to actual applications that improve lives, etc.
Try thinking of units of expected value as things you are purchasing with your giving?
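To make that framing concrete, here is a toy expected-value sketch of a donation whose impact depends on a chain of uncertain steps, as described above. All the numbers and the function name are hypothetical placeholders, not estimates for any real charity:

```python
# Toy model: a donation buys "attempts" (e.g. ad campaigns), each of
# which only does good if every step in an uncertain causal chain
# succeeds. All numbers below are made-up placeholders.
from math import prod


def expected_units(donation, cost_per_attempt, step_probs, units_per_success):
    """Expected 'units of good' purchased by a donation.

    donation:          amount given
    cost_per_attempt:  cost to fund one attempt
    step_probs:        success probability of each causal step
    units_per_success: good done when every step succeeds
    """
    attempts = donation / cost_per_attempt
    p_success = prod(step_probs)  # steps assumed independent
    return attempts * p_success * units_per_success


# $1000 buys twenty $50 campaigns; each reaches someone who changes
# behavior with 10% probability, which actually reduces suffering
# with 50% probability, worth 2 units per full success.
print(expected_units(1000, 50, [0.10, 0.50], 2))  # 2.0
```

The point of the exercise is that the "jump" you describe shows up explicitly as the probabilities in the chain, so you can see what you are buying in expectation even when any individual step may fail.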
Yes, I know. It’s less about somewhat realistic expectations and more about visceral motivation, especially when other people’s choices are involved (meat consumption, using research in the right ways, improvements that only pan out if people are somewhat rational and benevolent, etc.).
Technically, we can treat other people as systems to be manipulated, and I guess it even works. But psychologically, it feels dissatisfying. It also feels low-status, since it gives others the power to destroy the value of my money. This is true even for crucial research, whose applications can simply be banned, never used in benevolent ways, rejected by an irrational public for bad reasons, etc.
The methods for measuring probabilities and impacts are also somewhat unclear to me, as is the estimation of unintended consequences.