I agree with most of what you say here. Maybe it satisfactorily answers the questions raised in my post; I’ll spend some time brooding over this.
> For instance, considerations about large possible future populations/astronomical waste increase the expected value of any existential risk reduction, from asteroids to nukes to bio to AI. For any specific risk there are many different ways, direct and indirect, to try to address it.
Here it would be good to compile a list; I myself am very much at a loss as to what the available options are.
> Here it would be good to compile a list; I myself am very much at a loss as to what the available options are.
I have such lists, but by the logic of your post it sounds like you should gather them yourself so you worry less about selection bias.
I would love to study these lists! Would you mind sending them to me? (My email: myusername@gmx.de)