I thought they were talking specifically about an AGI that is capable of recursive self-improvement. This does not encompass all possible AGIs, but the non-self-improving ones are not likely to be very smart, as far as I understand, and thus aren’t a concern.
OK, now I am confused.
This whole thread started because you said:
[SIAI] are assuming that any AGI will inevitably (plus or minus epsilon) self-improve itself to transhuman levels.
and I asked why you believed that, as distinct from “...any AGI has a non-negligible chance of self-improving itself to transhuman levels, and the cost of that happening is so vast that it’s worth devoting effort to avoid even if the chance is relatively low”?
Now you seem to be saying that SI doesn’t believe that any AGI will inevitably (plus or minus epsilon) self-improve itself to transhuman levels, but rather that it is primarily concerned with those that do.
I agree with that entirely; it was my point in the first place.
Were we in agreement all along, have you changed your mind in the course of this exchange, or am I really really confused about what’s going on?
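To make the shape of that weaker claim explicit (the symbols here are illustrative shorthand, not anything either commenter wrote): if $p$ is the probability that a given AGI self-improves to transhuman levels, $C$ is the cost if that happens, and $E$ is the cost of the precautionary effort, the weaker claim only needs
$$ p \cdot C > E, $$
which can hold even for a small $p$ as long as $C$ is vast; it does not require believing that $p \approx 1$.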
Sorry, I think I am guilty of misusing terminology. I have been using AI and AGI interchangeably, but that’s obviously not right. As far as I understand, “AGI” refers to a general intelligence that can solve (or, at least, attempt to solve) any problem, whereas “AI” refers to any kind of artificial intelligence, including the specialized kind. There are many AIs that already exist in the world (for example, Google’s AdSense algorithm), but SIAI is not concerned about them (as far as I know), because they lack the capacity to self-improve.
My own hidden assumption, which I should’ve recognized and voiced earlier, is that an AGI (as contrasted with non-general AI) would most likely be produced through a process of recursive self-improvement; it is highly unlikely that an AGI could be created from scratch by humans writing lines of code. As far as I understand, the SIAI agrees with this statement, but again, I could be wrong.
Thus, it is unlikely that a non-general AI will ever be smart enough to warrant concern. It could still do some damage, of course, but then, so could a busted water main. On the other hand, an AGI will most likely arise as the result of recursive self-improvement, and will therefore be capable of further self-improvement, boosting itself to transhuman levels very quickly unless that self-improvement is arrested by some mechanism.
OK, I think I understand better now.
Yeah, I’ve been talking throughout about what you’re labeling “AI” here. We agree that these won’t necessarily self-improve. Awesome.
With respect to what you’re labeling “AGI” here, you’re saying the following:
1) given that X is an AGI developed by humans, the probability that X has thus far been capable of recursive self-improvement is very high, and
2) given that X has thus far been capable of recursive self-improvement, the probability that X will continue to be capable of recursive self-improvement in the future is very high.
3) SIAI believes 1) and 2).
Yes? Have I understood you?
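(Restated in conditional-probability notation, purely as a paraphrase of the two numbered claims above; the notation is mine, not the commenter’s:
$$ P(\text{X can recursively self-improve} \mid \text{X is a human-built AGI}) \text{ is very high}, $$
$$ P(\text{X remains able to recursively self-improve} \mid \text{X has been able to so far}) \text{ is very high}, $$
with (3) being the further claim that SIAI endorses both.)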
Yes, with the caveats that (a) as far as I know, no such X currently exists, and (b) my confidence in (3) is much lower than my confidence in (1) and (2).