Questions about strange scenarios that people may regard as unlikely or remote should be more explicitly phrased as hypotheticals, asking respondents to describe the “what if” rather than to comment on the plausibility of the assumptions. For example, you have this question:
Q5: How important is it to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to build AI that is good enough at general reasoning (including science, mathematics, engineering and programming) to undergo radical self-modification?
This could be rephrased as follows:
Q5′: Suppose, hypothetically, that in 100 years it becomes possible to build an AI that is good enough at general reasoning (including science, mathematics, engineering and programming) to improve its own competence to levels far surpassing human ability. How important would it be to figure out (by that time) how to make this hypothetical AI provably friendly to us and our values (non-dangerous), before actually building (running) it?
Additionally, the questions about the probability of hypotheticals could be tweaked so that each hypothetical is addressed in two questions: one asking about the probability of its assumptions being met, and another about the implications of their being met.
Questions about strange scenarios that people may regard as unlikely or remote should be more explicitly phrased as hypotheticals, asking respondents to describe the “what if”
Q: Suppose hypothetically that in 100 years it would be possible to build a dangerous AI. How important would it be to figure out how to make it non-dangerous?
Expert: How important is your life to you?
Formulated like that, it would be a suggestive question posed to yield the desired answer. What is in dispute is the hypothesis itself, not the implications of its possible correctness.
Your original question already asked about this particular possibility. If you want to gauge how likely this possibility is seen to be, ask about that directly, without mixing it with the question of value. And previous responses show that the answer is not determined by my variant of the question: three popular responses are “It’s going to be fine by default” (wrong), “It’s not possible to guarantee the absence of danger, so why bother?” (because of the danger), and “If people had worried so much about the absence of danger, they wouldn’t have useful things X, Y, Z” (those things weren’t existential risks).