As a separate point, people talk about AI friendliness as a safety precaution, but it's worth remembering that a truly friendly, self-improving AGI would probably be the greatest possible thing you could do for the world. It's possible that the risk of human destruction from the pursuit of FAI outweighs the potential upside, but once you account for an FAI's ability to mitigate other existential risks, I don't think that's the case.