I expect a lot of actually relevant stuff doesn’t seem relevant until you’ve studied it in connection with the problem for a few years. But maybe you don’t get that far, because it didn’t seem relevant :(
Friendly AI is a monster problem partly because nearly everything any human experiences, believes, wants to believe, or has any opinion at all on is potentially relevant. You could be forgiven for thinking maybe there isn’t a well-defined problem buried under all that mess after all. But there may be some useful sub-problems around the edges.
Personally, even if AI-that-goes-FOOM-catastrophically isn’t very likely, I don’t think we should need that reason to study what sort of life and environment would be optimal for humans. It doesn’t have to be about asking dangerous wishes of some technological genie in a bottle. We already have supra-human entities such as governments and corporations making decisions that carry non-zero existential risk, and we probably want them to be a bit friendlier if possible.