To say that a system of any design is an “artificial intelligence”, we mean that it has goals which it tries to accomplish by acting in the world.
I cannot disagree with the paper based on that definition of what an “artificial intelligence” is. If you have all of this, goals, planning and foresight, then you’re already at the end of a very long and hard journey peppered with failures. I’m aware of the risks associated with such agents and support the SIAI, including donations. The intention of this thread was to show that contemporary AGI research is much more likely to lead to other outcomes, not that there will be no danger once you already have an AGI with the ability for unbounded self-improvement. But I believe there are many AGI designs that lack this characteristic, which is why I concluded that it is more likely than not that they won’t be a danger. I see now that my definition of AGI is considerably weaker than yours. So of course, given your definition, what I said is not compelling. I believe that we’ll arrive at your definition only after a long chain of earlier weak AGIs incapable of substantial self-improvement, and that by the time we figure out how to create the seed for that kind of potential, we will also be much more knowledgeable about the risks and challenges such advanced AGIs might pose.
Yes, and weak AGIs are dangerous in the same sense Moore’s law is: by probably bringing the construction of strong AGI a little closer, and thus contributing to the eventual existential risk, while probably not being directly dangerous in themselves.
Yes, but each step in that direction also provides insights into the nature of AI and can therefore help in designing Friendly AI. My idea was that such uncertainties should be incorporated into any estimate of the dangers posed by contemporary AI research. How much does the increased understanding outweigh its dangers?
This was my guess for the first 1.5 years or so. The problem is that FAI is necessarily a strong AGI, but if you learn how to build a strong AGI, you are in trouble. You don’t want that knowledge around unless you know where to get the goals from, and studying efficient AGIs doesn’t help with that. The harm is greater than the benefit, and it’s entirely plausible that one could succeed in building a strong AGI without getting the slightest clue about how to define a Friendly goal, so it’s not a given that there is any benefit whatsoever.
Again, I recommend The Basic AI Drives.
Yes, I’ll read it now.