Great reply!
On episodic memory:
I’ve been watching Claude play Pokemon recently and came away with the impression: “Claude is overqualified but suffering from Memento-like memory limitations. The agent scaffold probably also has some easy room for improvement (though it’s already better than Post-it notes and tattooing sentences on your body).”
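To gesture at what I mean by “easy room for improvement,” here is a minimal sketch (in Python) of the kind of scaffold-level fix I have in mind: keep a verbatim buffer of recent events and periodically compress the oldest ones into summaries that get prepended to the prompt. To be clear, this is my own illustration of the general idea, not how the actual Claude-plays-Pokemon harness works; `call_model` and the summarization prompt are hypothetical placeholders.

```python
from collections import deque

def call_model(prompt: str) -> str:
    """Stand-in for the harness's LLM call (hypothetical, not a real API).
    A real scaffold would send `prompt` to the model and return its reply."""
    return "SUMMARY: " + prompt.splitlines()[-1][:80]  # crude placeholder

class EpisodicMemory:
    """Rolling verbatim log of recent events, plus compressed older history."""

    def __init__(self, max_recent: int = 20):
        self.recent: deque[str] = deque(maxlen=max_recent)
        self.summaries: list[str] = []

    def record(self, event: str) -> None:
        # Before the verbatim buffer overflows, compress its oldest half
        # into a summary, so older events still inform behavior without
        # eating the whole context window.
        if len(self.recent) == self.recent.maxlen:
            old = [self.recent.popleft() for _ in range(self.recent.maxlen // 2)]
            self.summaries.append(call_model(
                "Summarize these game events in two sentences:\n" + "\n".join(old)
            ))
        self.recent.append(event)

    def as_context(self) -> str:
        # Prepended to every model call, so the agent "remembers" what it did.
        return ("Earlier (summarized):\n" + "\n".join(self.summaries)
                + "\n\nRecent events:\n" + "\n".join(self.recent))
```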
I don’t know much about neuroscience or ML, but how hard can it be to make the AI remember what it did a few minutes ago? Sure, that isn’t all that stands between Claude and TAI, but given that Claude is now within the human expert range on so many tasks, and given how fast progress has been recently, how can anyone not take short timelines seriously?
People who largely rule out 1-5 year timelines seem not to have updated at all on how much recent AI progress has presumably surprised them.
(If someone had assigned a decent likelihood to transfer learning and PhD-level research understanding shortly before those breakthroughs happened, and had then predicted a long gap afterward, I’d be more open to updating towards their intuitions. My guess, however, is that the people who hold long TAI timelines now also held confident, now-falsified long timelines for breakthroughs like transfer learning, and so, from my perspective, they arguably haven’t made the update that whatever their brain is doing when it produces timeline forecasts isn’t working very well.)