Yes, and weak AGIs are dangerous in the same sense that Moore’s law is: by probably bringing the construction of strong AGI a little closer, and thus contributing to the eventual existential risk, while probably not being directly dangerous in themselves.
Yes, but each step in that direction also provides insights into the nature of AI and can therefore help with designing friendly AI. My assumption was that such uncertainties are incorporated into any estimate of the dangers posed by contemporary AI research. How much does the increased understanding outweigh the dangers?
This was my guess for the first 1.5 years or so. The problem is that FAI is necessarily a strong AGI, but if you learn how to build a strong AGI, you are in trouble. You don’t want that knowledge around unless you know where to get the goals from, and studying efficient AGIs doesn’t help with that. The harm is greater than the benefit, and it’s entirely plausible that one could succeed in building a strong AGI without getting the slightest clue about how to define a Friendly goal, so it’s not a given that there is any benefit whatsoever.