I think it does, thank you! In your model, does a squirrel perform better than ChatGPT at practical problem solving simply because it was "trained" on practical problem-solving examples, and ChatGPT perform better on language tasks because it was trained on language? Or is there something fundamentally different between them?
I suspect ChatGPT 4's weaknesses come from several sources, including:
It’s effectively amnesiac, in human terms.
If you look at the depth of the neural networks and the speed with which they respond, they have more in common with human reflexes than with deliberate thought. It's basically an actor doing a real-time improvisation exercise, not a writer mulling over each word. The fact that it's as good as it is honestly terrifies me on some level.
It has never actually lived in the physical world, or had to solve practical problems. Everything it knows comes from text or images.
Most people’s first reaction to ChatGPT is to overestimate it. Then they encounter various problems, and they switch to underestimating it. This is because we’re used to interacting with humans. But ChatGPT is very unlike a human brain. I think it’s actually better than us at some things, but much worse at other key things.
Thank you for your answers!