OK, I think I understand better now.
Yeah, I’ve been talking throughout about what you’re labeling “AI” here. We agree that these won’t necessarily self-improve. Awesome.
With respect to what you’re labeling “AGI” here, you’re saying the following:
1) given that X is an AGI developed by humans, the probability that X has thus far been capable of recursive self-improvement is very high, and
2) given that X has thus far been capable of recursive self-improvement, the probability that X will continue to be capable of recursive self-improvement in the future is very high.
3) SIAI believes 1) and 2).
Yes? Have I understood you?
Yes, with the caveats that (a) as far as I know, no such X currently exists, and (b) my confidence in (3) is much lower than my confidence in (1) and (2).