I endorse ESRogs’ answer. If the world were a singleton under the control of a few particularly benevolent and wise humans, with an AGI that obeys the intention of practical commands (in a somewhat naive way, say, so it’d be unable to help them figure out ethics), then I think argument 5 would no longer apply, but argument 4 would. Or, more generally: argument 5 is about how humans might behave badly under current circumstances and governmental structures in the short term, but makes no claim that this will be a systemic problem in the long term (we could probably solve it using a singleton + mass surveillance); argument 4 is about how we don’t know of any governmental (or psychological?) structures which are very likely to work well in the long term.
Having said that, your ideas were the main (but not sole) inspiration for argument 4, so if this isn’t what you intended, then I may need to rethink its inclusion.
I think this division makes sense on a substantive level, and I guess I was confused by the naming and the ordering between 4 and 5. I would define “human safety problems” to include both short-term and long-term problems (just as “AI safety problems” includes short-term and long-term problems), so I’d put both 4 and 5 under “human safety problems” instead of just 4. In my posts I mostly focused on long-term problems, since short-term problems have already been widely recognized, but as far as naming goes, it seems strange to exclude short-term problems from “human safety problems”. Also, you wrote “They are listed roughly from most specific and actionable to most general”, and 4 feels like a more general problem than 5 to me, although perhaps that’s arguable.