[SIAI] are assuming that any AGI will inevitably (plus or minus epsilon) self-improve itself to transhuman levels.
Can you clarify your reasons for believing this, as distinct from “...any AGI has a non-negligible chance of self-improving itself to transhuman levels, and the cost of that happening is so vast that it’s worth devoting effort to avoid even if the chance is relatively low”?
That’s a good point, but, from reading what Eliezer and Luke are writing, I formed the impression that my interpretation is correct. In addition, the SIAI FAQ seems to be saying that intelligence explosion is a natural consequence of Moore’s Law; thus, if Moore’s Law continues to hold, intelligence explosion is inevitable.
FWIW, I personally disagree with both statements, but that’s probably a separate topic.
Huh. The FAQ you cite doesn’t seem to be positing inevitability to me. (shrug)
You’re right; I just re-read it and it doesn’t mention Moore’s Law. Either it did at some point and was later changed, or I saw that argument somewhere else. Still, the FAQ does seem to suggest that the only thing that can stop the Singularity is total human extinction (well, that, or the existence of souls, which IMO we can safely discount); that’s pretty close to inevitability as far as I’m concerned.
Note that the section you’re quoting is no longer talking about the inevitable ascension of any given AGI, but rather the inevitability of some AGI ascending.
I thought they were talking specifically about an AGI that is capable of recursive self-improvement. This does not encompass all possible AGIs, but the non-self-improving ones are not likely to be very smart, as far as I understand, and thus aren’t a concern.
OK, now I am confused.
This whole thread started because you said:
“[SIAI] are assuming that any AGI will inevitably (plus or minus epsilon) self-improve itself to transhuman levels.”
and I asked why you believed that, as distinct from “...any AGI has a non-negligible chance of self-improving itself to transhuman levels, and the cost of that happening is so vast that it’s worth devoting effort to avoid even if the chance is relatively low”?
Now you seem to be saying that SI doesn’t believe that any AGI will inevitably (plus or minus epsilon) self-improve itself to transhuman levels, but that it is primarily concerned with those that do.
I agree with that entirely; it was my point in the first place.
Were we in agreement all along, have you changed your mind in the course of this exchange, or am I really really confused about what’s going on?
Sorry, I think I am guilty of misusing terminology. I have been using AI and AGI interchangeably, but that’s obviously not right. As far as I understand, “AGI” refers to a general intelligence that can solve (or, at least, attempt to solve) any problem, whereas “AI” refers to any kind of artificial intelligence, including the specialized kind. There are many AIs that already exist in the world—for example, Google’s AdSense algorithm—but SIAI is not concerned about them (as far as I know), because they lack the capacity to self-improve.
My own hidden assumption, which I should’ve recognized and voiced earlier, is that an AGI (as contrasted with non-general AI) would most likely be produced through a process of recursive self-improvement; it is highly unlikely that an AGI could be created from scratch by humans writing lines of code. As far as I understand, the SIAI agrees with this statement, but again, I could be wrong.
Thus, it is unlikely that a non-general AI will ever be smart enough to warrant concern. It could still do some damage, of course, but then, so could a busted water main. On the other hand, an AGI will most likely arise as the result of recursive self-improvement, and will therefore be capable of further self-improvement, boosting itself to transhuman levels very quickly unless that self-improvement is arrested by some mechanism.
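To make the “boosting itself to transhuman levels very quickly unless arrested” claim concrete, here is a minimal toy sketch of that feedback loop. The update rule, the gain parameter, and the cap are illustrative assumptions of mine, not a model taken from SIAI or the FAQ under discussion.

```python
# Toy model of a capability feedback loop: each step, the system improves
# itself in proportion to its current capability, so better systems improve
# faster. The rule and all numbers here are illustrative assumptions only.

def simulate(steps=20, gain=0.1, cap=None):
    capability = 1.0
    for _ in range(steps):
        capability *= 1.0 + gain * capability  # growth rate scales with capability
        if cap is not None:
            capability = min(capability, cap)  # an external "arresting" mechanism
    return capability

print(f"unarrested after 20 steps: {simulate():.3g}")              # on the order of 1e104
print(f"arrested at 5.0 after 20 steps: {simulate(cap=5.0):.3g}")  # stays at 5
```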
OK, I think I understand better now.
Yeah, I’ve been talking throughout about what you’re labeling “AI” here. We agree that these won’t necessarily self-improve. Awesome.
With respect to what you’re labeling “AGI” here, you’re saying the following:
1) given that X is an AGI developed by humans, the probability that X has thus far been capable of recursive self-improvement is very high, and
2) given that X has thus far been capable of recursive self-improvement, the probability that X will continue to be capable of recursive self-improvement in the future is very high.
3) SIAI believes 1) and 2).
Yes? Have I understood you?
Yes, with the caveats that (a) as far as I know, no such X currently exists, and (b) my confidence in (3) is much lower than my confidence in (1) and (2).
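For reference, claims (1)–(3) above can be restated in conditional-probability notation; the symbols and the 1 − ε threshold standing in for “very high” are illustrative notation of mine, not something from the exchange.

```latex
% Illustrative formalization; 1 - \epsilon is an assumed stand-in for "very high".
\begin{align*}
\text{(1)}\quad & P\big(\text{$X$ has been capable of recursive self-improvement} \,\big|\, \text{$X$ is a human-developed AGI}\big) \ge 1 - \epsilon \\
\text{(2)}\quad & P\big(\text{$X$ remains capable of recursive self-improvement} \,\big|\, \text{$X$ has been capable so far}\big) \ge 1 - \epsilon \\
\text{(3)}\quad & \text{SIAI endorses both (1) and (2).}
\end{align*}
```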