especially given that suffering-focused ethics seems to be somehow connected with distrust of philosophical deliberation
Can you elaborate on what you mean by this? People like Brian or others at FRI don’t seem particularly averse to philosophical deliberation to me...
This also seems like an attractive compromise more broadly: we all spend a bit of time thinking about s-risk reduction and taking the low-hanging fruit, and suffering-focused EAs do less stuff that tends to lead to the destruction of the world.
I support this compromise and agree not to destroy the world. :-)
Those of us who sympathize with suffering-focused ethics have an incentive to encourage others to think about their values now, at least in terms crude enough to take a stance on prioritizing the prevention of s-risks vs. making sure we reach a position where everyone can safely deliberate their values further and then have everything fulfilled. Conversely, if one (normatively!) thinks the downsides of bad futures are unlikely to be much worse than the upsides of good futures, then one is incentivized to promote caution about taking confident stances on anything population-ethics-related, and to value deeper philosophical reflection instead. The latter also has the upside of being good from a cooperation point of view: everyone can work on the same priority (building safe AI that helps with philosophical reflection) regardless of one’s inklings about how personal value extrapolation is likely to turn out.
(The situation becomes more interesting/complicated for suffering-focused altruists once we add considerations of multiverse-wide compromise via coordinated decision-making, which, in extreme versions at least, would call for being “updateless” about the direction of one’s own values.)
Can you elaborate on what you mean by this? People like Brian or others at FRI don’t seem particularly averse to philosophical deliberation to me...
People vary in what kinds of value change they would consider drift vs. endorsed deliberation. Brian has in the past publicly come down unusually far on the side of “change = drift”; I’ve encountered similar views on one other occasion from this crowd, and I had heard secondhand that this was relatively common.
Brian or someone more familiar with his views could speak more authoritatively to that aspect of the question, and I might be mistaken about the views of suffering-focused utilitarians more broadly.