If that’s really the only thing he draws meaning from, and if he truly thinks that failure is now inevitable, then I guess he must be getting his meaning from striving to fail in the most dignified way possible.
But I’d guess that, like most humans, he probably also draws meaning from love and joy. You know, living well. The point of surviving was that a future where humans survive would have a lot of that in it.

If failure were truly inevitable (though I don’t personally think it is[1]), I’d recommend setting the work aside and making it your duty to just generate as much love and joy as you can with the time you have available. That’s how we lived for most of history, and how most people still live today. We can learn to live that way.
Reasons I don’t understand why anyone would have a P(Doom) higher than 75%:

- Governments are showing indications of taking the problem seriously.
- Inspectability techniques are getting pretty good, so misalignment is likely to be detectable before deployment, which means a sufficiently energetic government response could be possible; sub-AGI tech is sufficient for controlling the supply chain and buying additional time, and China isn’t suicidal.
- Major inner misalignment might just not really happen.
- Self-correcting from natural-language instructions to “be good, you know” could be enough.
- There are very deep, principled reasons to expect that having two opposing AGIs debate and check each other’s arguments works well (a toy sketch of that setup follows this list).
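For concreteness, here is a toy sketch of the two-debater-plus-judge loop that the debate proposal describes. The class, the function names, and the stub logic are all illustrative assumptions on my part, not anyone’s actual implementation; in a real setup each debater would be a strong model arguing opposite sides, and the judge a weaker verifier (a human or a small model) whose only job is to check which arguments survive cross-examination.

```python
# Toy sketch of the two-debater / judge structure behind "AI safety via debate".
# The model calls are stubbed out; every name here is an illustrative assumption.

from dataclasses import dataclass, field


@dataclass
class Debate:
    question: str
    transcript: list[str] = field(default_factory=list)

    def add_turn(self, debater: str, argument: str) -> None:
        self.transcript.append(f"{debater}: {argument}")


def debater_argument(side: str, debate: Debate, round_no: int) -> str:
    # Stub: a real debater model would read the full transcript and produce
    # the strongest argument (or rebuttal) available for its assigned side.
    return f"(round {round_no}) strongest available argument for the {side} position"


def judge(debate: Debate) -> str:
    # Stub: a real judge only has to decide which debater's claims held up
    # under cross-examination, which is meant to be easier than answering
    # the original question directly.
    return "PRO" if len(debate.transcript) % 2 == 0 else "CON"


def run_debate(question: str, rounds: int = 3) -> str:
    debate = Debate(question)
    for r in range(1, rounds + 1):
        debate.add_turn("PRO", debater_argument("pro", debate, r))
        debate.add_turn("CON", debater_argument("con", debate, r))
    return judge(debate)


if __name__ == "__main__":
    verdict = run_debate("Is this plan safe to deploy?")
    print("Judge's verdict:", verdict)
```

The hoped-for property is that the judge’s task (checking a debate) stays tractable even when the question itself is beyond the judge’s ability to answer unaided.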