Mm, k. I was trying more to say that I got the same sense from your post that Nick Bostrom seems to have gotten at the point where he worried about completely general and perfectly sterile analytic philosophy. Maxipok isn’t derived just from the astronomical-waste argument; it’s derived from pragmatic features of actual x-risk problems that lead to ubiquitous threshold effects defining “okayness”: most obviously Parfit’s point that extinguishing the last 1,000 people is much worse than extinguishing seven billion minus a thousand people, but also things like satisficing indirect normativity and unfriendly AIs going FOOM. The degree to which x-risk thinking has properly adapted to that pragmatic landscape, rather than just been derived from very abstract a priori considerations, is what gave me that worried sense of overabstraction while reading the OP; and that triggered my reflex to start throwing out concrete examples to see what happened to the abstract analysis in each case.
The post may well be overly abstract. I’m a philosopher by training and have a tendency toward over-abstraction (which I’m working on).
I agree that there are important possibilities involving threshold effects, such as extinction, and perhaps also your point about threshold effects with indirect normativity AIs. But I also think other scenarios, such as Robin Hanson’s scenario, other decentralized market/democracy set-ups, and scenarios we haven’t thought of, are live possibilities. More continuous trajectory changes may be very relevant in those other scenarios.
For what it’s worth, I loved this post and don’t think it was very abstract. Then again, my background is also in philosophy.