So if we interpret the doomsday argument as information about the danger of these advanced technologies—if pursuing them means we are overwhelmingly likely to die—then isn't the logical response simply to oppose them at every opportunity, rather than gambling that we can be clever enough about how we develop and deploy them?
This would make a lot of sense if there were any way to enforce it. As it stands, defecting would be far too easy and the incentives to do so far too strong. Worse, the people most likely to defect are exactly those we would least want deciding how new technology gets deployed.