To be more precise: you can't tell concerned AI researchers to read through hundreds of posts of marginal importance. You have to have some brochure for experts and educated laymen, a summary of the big picture that includes precise and compelling methodologies they can follow to arrive at their own estimates of the likelihood of existential risks posed by superhuman artificial general intelligence. If the decision procedure gives them a different probability due to differing priors and values, then you can tell them to read further material to update those priors and values accordingly.
I'm content with your answer, then. I would personally welcome an overhaul of the presentation of the AI material too. Still, I think Eliezer's FAI views are a lot more structured, comprehensive, and accessible than the impression you give in your relevant posts.