Do you have a fleshed-out version formulated somewhere? *tries to hide iron fireplace poker behind his back*
No. The “fleshed-out version” is rather complex, incomplete, and constantly changing, as it’s effectively the current compromise that’s been forged between the negative utilitarian, positive utilitarian, deontological, and purely egoist factions within my brain. It has plenty of inconsistencies, but I resolve those on a case-by-case basis as I encounter them. I don’t have a good answer to the doomsday machine, because I currently don’t expect to encounter a situation where my actions would have considerable influence on the creation of a doomsday machine, so I haven’t needed to resolve that particular inconsistency.
Of course, there is the question of x-risk mitigation work and the fact that e.g. my work for MIRI might reduce the risk of a doomsday machine, so I have been forced to give the question some consideration. My negative utilitarian faction would consider it a good thing if all life on Earth were eradicated, with the other factions strongly disagreeing. The current compromise is based on the suspicion that most kinds of x-risk would probably lead to massive suffering in the form of an immense death toll followed by a gradual reconstruction that would eventually bring Earth’s population back to its current levels, rather than all life on the planet going extinct. (Even for AI/Singularity scenarios there is great uncertainty and a non-trivial possibility of such an outcome.) All my brain-factions agree that this would be a Seriously Bad outcome, so there is currently an agreement that work aimed at reducing the probability of this scenario is good, even if it indirectly influences the probability of an “everyone dies” scenario in one way or another. The compromise is only possible because we are currently very unsure of what would have a very strong effect on the probability of an “everyone dies” scenario.
I am unsure of what would happen if we had good evidence that it really was possible to strongly increase or decrease the probability of an “everyone dies” scenario: given the current power balance, I expect that we’d just decide not to do anything either way, with the negative utilitarian faction being strong enough to veto attempts to save humanity, but not strong enough to override everyone else’s veto on attempts to destroy humanity. Of course, this assumes that humanity would basically go on experiencing its current levels of suffering after being saved: if saving humanity would also involve a positive Singularity after which it was near-certain that nobody would need to experience involuntary suffering anymore, then the power balance would shift very strongly in favor of saving humanity.