I should also add:
6) Where do you place the odds of you/your institute creating an unfriendly AI in an attempt to create a friendly one?
7) Do you have any external validation for this estimate (i.e., from someone unassociated with your institute and not currently worshiping you), or does it come exclusively from calculations you made yourself?