I don’t remember hearing that last bit as a generic warning sign, but I might well have missed it. I do remember hearing that if systems became capable of self-improvement (sooner than expected?), that could be a big update towards believing that fast take-off is more likely (as mentioned in your next point).
He mentioned the self-improvement part twice, so you probably missed the first instance.
I remember both these claims as being significantly more uncertain/hedged.
Yes, all the (far) future claims were more hedged than I express here.
I remembered this as being a forecast for ~transformative AI, and as explicitly not being "AI that can do anything that humans can do", which could take quite a bit longer. (Your description of AGI is sort of in between those, so it's hard to tell whether it's inconsistent with my memory.)
I think the difference between "transformative AI" and "AI that can do most economically useful tasks" is not that big? But because of his expectation of very gradual improvement (plus, I guess, a different abilities profile compared to humans), the "when will AGI happen?" question didn't fit very well into his framework. I think he said something like "taking the question as intended", and he did mention a definition along the lines of "AI that can do x tasks y well", so I think his definition of AGI was a bit all over the place.
I was a bit confused about this answer in the Q&A, but I would not have summarized it like this. I remember claims that some degree of merging with AI is likely to happen conditional on a good outcome, and maybe a claim that BCI was the most likely path towards merging.
Yes, I think that’s more precise. I guess I shortened it a bit too much.
Thanks! All of this seems reasonable, except possibly:
Merging (maybe via BCI) most likely path to a good outcome.
Which in my mind still carries connotations like ~”merging is an identifiable path towards good outcomes, where the most important thing is to get the merging right, and that will solve many problems along the way”. Which is quite different from the claim “merging will likely be a part of a good future”, analogous to e.g. “pizza will likely be a part of a good future”. My interpretation was closer to the latter (although, again, I was uncertain how to interpret this part).
Yeah, I see what you mean. And I agree that he meant “conditional on a good outcome, merging seems quite likely”.