I’d argue no: even if some genetically engineered humans have misaligned goals and seek power, and even if they’re smarter and better-coordinated than non-genetically engineered humans, it’s still highly questionable whether they’d kill all the non-genetically engineered humans in pursuit of those goals.
1. Wanna spell out the reasons why? (a) They’d be resisted by good gengineered humans, (b) they might be misaligned but not in ways that make them want to kill everyone, (c) they might not be THAT much smarter, i.e. not smart enough to evade the system of laws and power distribution meant to stop small groups from killing everyone. Anything I missed?
2. Existential risk =/= everyone dead. That’s just the central example. Permanent dystopia is also an existential risk, as is sufficiently big (and unjustified, and irreversible) value drift.
I think Matthew’s view is mostly spelled out in this comment and also in a few more comments on his shortform on the EA forum.
TLDR: his view is that very powerful (and even coordinated) misaligned entities that want resources would end up with almost all the resources (e.g. >99%), but this likely wouldn’t involve violence.
Existential risk =/= everyone dead. That’s just the central example. Permanent dystopia is also an existential risk, as is sufficiently big (and unjustified, and irreversible) value drift.
I think the situation I described above (no violence, but >99% of resources owned by misaligned entities) would still count as an existential risk from a conventional longtermist perspective. Awkwardly, though, the definition of existential risk depends on a notion of value, in particular on what counts as “substantially curtailing potential goodness”.
Whether or not you think that humanity getting only 0.1% of resources counts as “substantially curtailing potential goodness” depends on other philosophical views.
I think it’s worth tabooing the term “existential risk” in this context for this reason.
(I disagree with Matthew about the chance of violence and also about how bad it is to cede 99% of resources.)