Making errors is part of human intelligence; in that sense, errors are a sign of human-like intelligence.
You could imagine an AGI that doesn’t make any mistakes, but the presence of errors is no argument against its achieving human-like performance.
It’s interesting that you completely ignored the question about what you believe will be the likely capabilities of near-future technology like Gato.