Our ultimate fate may be doom, but it may also be exceedingly positive. Conceiving of bad possible outcomes does not negate conceiving of good possible outcomes, nor the other way around. Doomsaying (steel-dystopianizing?) and steel-utopianizing are therefore not productive activities.
There has never been a guarantee of safety of our or any other lifeform’s path through the evolutionary mists. Guaranteeing our path through the singularity to specific agreeable outcomes may not be possible even in a world where a positive singularity outcome is actually achieved later; that might be our world, for all we know. And even if such a guarantee is possible in every possible world, it’s not clear to me that working on theoretical (and practical) guarantees would yield more utility than working on other positive technology developments instead. For example, even where guarantees are possible in principle, it may not be possible to develop them in time for them to matter. And even where they could be developed before they are actually needed, focusing your work on them may still be suboptimal.
Some of those positive singularity outcomes may only be achievable in worlds where your followers and readers neglect the very things you are advocating they spend their time on. Nobody really knows, not with any certainty.
The OP is arguing that X is literally true. Framing it as a ‘steel-man’ of X is misleading; you may disagree with the claim, but engage with it as an actual attempt to describe reality, not as an attempt to steel-man or do the ‘well, maybe this thing will go wrong, we can’t be sure...’ thing.
There has never been a guarantee of safety of our or any other lifeform’s path through the evolutionary mists.
EY isn’t asking for a guarantee; see −2 in the preamble.