Eliezer,
I have a few practical questions for you. If you don’t want to answer them in this thread, that’s fine, but I am curious:
1) Do you believe humans have a chance of achieving uploading without the use of a strong AI? If so, where do you place the odds?
2) Do you believe that uploaded human minds might be capable of improving themselves/increasing their own intelligence within the framework of human preference? If so, where do you place the odds?
3) Do you believe that increased-intelligence-uploaded humans might be able to create an fAI with more success than us meat-men? If so, where do you place the odds?
4) Where do you place the odds of you/your institute creating an fAI faster than 1-3 occurring?
5) Where do you place the odds of someone else creating an unfriendly AI faster than 1-3 occurring?
Thank you!!!