it is exceedingly unlikely that we will destroy life on earth, although we might see genetic life replaced by some new ai-generated technology that displaces the animal kingdom. do you really want to give up the one shot we have at making a better world for biological life?
“do you really want to give up the one shot we have at making a better world for biological life?” is a misleading argument because, as you know, humanity may well not create an AGI that makes the world better for life (biological or otherwise).
“it is exceedingly unlikely that we will destroy life on earth” is a valid objection if true, though.
I don’t see how we could possibly prevent an agi from making a world that is, by its own lights, as good as the world has ever been for life. I don’t think a paperclipper is a total failure: if humanity died I would of course be mortally angry (insofar as my preferences survive me at all), but those preferences would still have some sympathy for the ai’s interest in shiny objects, or whatever it values. similarly, if all intelligent life were wiped out, I would still prefer a world of plants to a barren rock with no cellular life at all. but I would far, far prefer that intelligent beings survive to coexist and help take care of each other.
more to the point: I think we’re going to solve safety, and doing so will involve changing the nature of ownership contracts so that capitalism can no longer take over all the capital. markets should be designed in ways that protect all their members, and ai safety is going to connect deeply to market design, especially ai-to-ai market design.
don’t give up yet!