It’s not my fire alarm (in part because I don’t think that’s a good metaphor). But it has caused me to think about updating timelines.
My initial reaction was to update timelines, but on reflection this achievement seems less impressive than I first thought. It doesn’t seem to represent an advance in capabilities; rather, it is (another) surprising result of existing capabilities.
Isn’t yet another surprising result of existing capabilities evidence that general intelligence is itself a surprising result of existing capabilities?
That is too strong a statement. I do think it is evidence that general intelligence may be easier to achieve than commonly thought. But the last couple of years have already given us plenty of evidence to that effect, and I am not sure this adds much on top of it.
On one hand, I agree that nothing really special or novel is happening in Cicero. On the other hand, something about it makes me feel like it’s important. I think it’s the plan -> communicate -> alter plan -> repeat cycle taking place partly in English that intuitively makes me think “oh shit, that’s basically how I think. If we scale this up, it’ll do everything I can do”. I don’t know how true this actually is. (A toy sketch of that loop is below.)

I vaguely recall feeling something like this when a general-purpose model learned to play all the Atari games. But I’m feeling it a lot more now. Maybe it’s the fact that if you showed the results of this to pre-GPT-2 me, I’d have called it an AGI, with zero doubt or hesitation.
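For what it’s worth, here is a toy sketch of the plan -> communicate -> alter plan -> repeat loop I have in mind. To be clear, this is a hypothetical illustration, not Cicero’s actual architecture: every function name here (propose_plan, draft_message, read_reply, revise_plan) is made up, and the real system couples a strategic planner to a dialogue model rather than anything this simple.

```python
import random

# Toy illustration of a plan -> communicate -> alter plan -> repeat loop.
# All names and logic are invented for this sketch; Cicero's real planner
# and dialogue model are far more sophisticated.

def propose_plan(state):
    # Pick an initial move; a real planner would search over game outcomes.
    return random.choice(state["legal_moves"])

def draft_message(plan):
    # Turn the current plan into a natural-language proposal.
    return f"I intend to play {plan}. Will you support me?"

def read_reply(message):
    # Stand-in for another player's response to the proposal.
    return random.choice(["agree", "refuse"])

def revise_plan(plan, reply, state):
    # Keep the plan if the reply supports it; otherwise try another move.
    if reply == "agree":
        return plan
    alternatives = [m for m in state["legal_moves"] if m != plan]
    return random.choice(alternatives) if alternatives else plan

def negotiation_turn(state, max_rounds=3):
    plan = propose_plan(state)                      # plan
    for _ in range(max_rounds):
        reply = read_reply(draft_message(plan))     # communicate in English
        new_plan = revise_plan(plan, reply, state)  # alter plan
        if new_plan == plan:
            break                                   # converged on a plan
        plan = new_plan                             # repeat
    return plan

# Example: three legal Diplomacy-style orders for an army in Munich.
state = {"legal_moves": ["A MUN-BER", "A MUN-SIL", "A MUN H"]}
print(negotiation_turn(state))
```

The point of the sketch is just the shape of the loop: the “thinking” happens as an alternation between a private plan and public English messages, which is the part that feels uncomfortably close to how I reason.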