I have updated upward on how important it is for Friendly AI (FAI) to succeed. I did this by changing the way I thought about the problem. I used to think in terms of the chance of Unfriendly AI (UFAI), which led me to assign probabilities to whether a fast, self-modifying AI, whether indifferent or Friendly, was possible at all.
Instead of thinking about the risk of UFAI, I started thinking about the risk of ~FAI. The more I think about it, the more I believe that a Friendly Singleton AI is the only way for us humans to survive. FAI mitigates other existential risks: risks from nature, unknowns, failures of human cooperation (Mutually Assured Destruction is too risky), and hostile intelligences, both human and self-modifying transhuman. My credence that, without FAI, existential risks will destroy humanity within 1,000 years is 99%.
Is this flawed? If not, then I'm probably quite late to this idea, but I thought I would mention it because it has taken considerable time for me to see it this way. And if I were to explain the AI problem to someone uninitiated, I would be tempted to lead with "~FAI is bad" rather than "UFAI is bad." Why? Because intuitively, the dangers of UFAI feel "farther away" than those of ~FAI. First people have to consider whether such an AI is even possible, and then consider why UFAI would be bad; it reads as a future problem. Whereas ~FAI is a problem now: it feels nearer, it is already happening. We have come close to annihilating ourselves before, and technology is only getting better at accidentally killing us, so let's work on FAI urgently.
FAI mitigates other existential risks: risks from nature, unknowns, failures of human cooperation (Mutually Assured Destruction is too risky), and hostile intelligences, both human and self-modifying transhuman. My credence that, without FAI, existential risks will destroy humanity within 1,000 years is 99%.
I find it unlikely that you are well calibrated when you put your credence at 99% for a 1,000-year forecast.
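To put that number in perspective, here is a rough calculation under a constant-hazard assumption (my simplification, not something the original claim specifies): a 99% chance of extinction within 1,000 years works out to roughly a 0.46% chance of extinction every single year for a millennium.

```python
# Rough sketch: the annual extinction probability implied by
# "99% chance of doom within 1,000 years", assuming a constant
# hazard rate (a simplifying assumption, not part of the original claim).
p_doom_total = 0.99        # stated credence of extinction within the horizon
horizon_years = 1_000

p_survive_annual = (1 - p_doom_total) ** (1 / horizon_years)
p_doom_annual = 1 - p_survive_annual

print(f"Implied annual extinction probability: {p_doom_annual:.3%}")
# -> about 0.459% per year, every year, for a thousand years
```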
Human culture changes over time, and it is very difficult to predict how humans in the future will think about specific problems. In less than 100 years we went from criminalizing homosexual acts to lawful same-sex marriage.
Could you imagine everyone adopting your morality in 200 or 300 years? If so, do you think that would prevent humanity from being doomed?
If you don’t think so, I would suggest you evaluate your own moral beliefs in detail.
So you want a god to watch over humanity—without it we’re doomed?
As of right now, yes. However, I could be persuaded otherwise.