Recognising dogs by ML classification is different to recognising dogs using cells in your brain and eyes, and this makes using the word "recognise" for AI as though it were exactly identical to the human action of recognising things somewhat inappropriate. Sorting integers is similar, actually. But the difference is that no one confuses the computer sorting integers with the process people use for sorting integers; it's a much dumber concept, so the word "sorting" is appropriate to use. On the other hand, when you invoke pop sci to say an AI is "recognising emotions", then yes, it can easily confuse people into thinking they are identical processes. No, it's not because one is sacred and the other is not; you've confused sacredness with varying degrees of complexity. It's really just a matter of conveying the right information to readers based on what you assume they understand about computers. Or you could continue to say AI feels things and be no better than a pop sci opinion piece; it's up to you.
Recognising dogs by ML classification is different to recognising dogs using cells in your brain and eyes
Yeah, and the way that you recognise dogs is different from the way that cats recognise dogs. Doesn’t seem to matter much.
as though it were exactly identical
Two processes don’t need to be exactly identical to do the same thing. My calculator adds numbers, and I add numbers. Yet my calculator isn’t the same as my brain.
when you invoke pop sci
Huh?
No, it's not because one is sacred and the other is not; you've confused sacredness with varying degrees of complexity.
What notion of complexity do you mean? People are quite happy to accept that computers can perform tasks with high k-complexity (Kolmogorov complexity) or t-complexity (time complexity). It is mostly "sacred" things (in the Hansonian sense) that people are unwilling to accept.
or you could continue to say AI feels things
Nowhere in this article do I address AI sentience.
There are differences, but they are usually quantitative, not binary.
The major differences are compute, energy, algorithms (sometimes), and currently memorylessness (though PaLM-E might be changing this).
Can an AI recognize emotions right now? IDK, I haven't heard of any results on it.
Can it learn to recognize emotions to X% accuracy? I’d say yes, but how useful that ability is depends highly on how accurate it can be.
We could easily train an AI to be 70th percentile in recognising human emotions, but (as far as I know) no one has bothered to do this because there is ~0 tangible benefit, so it wouldn't justify the cost.
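For concreteness, here's a rough sketch of what "train an AI to recognise emotions to X% accuracy" would amount to in practice: fine-tune a pretrained image classifier on a labelled facial-expression dataset and measure held-out accuracy. The dataset layout, model choice (ResNet-18), and hyperparameters below are illustrative assumptions on my part, not anything claimed in the thread.

```python
# Illustrative sketch only: fine-tune a pretrained classifier on a
# facial-expression dataset (e.g. FER2013 exported as folders of images).
# The directory layout, class names, and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # FER-style images are grayscale
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects a directory like emotions/train/{angry,happy,sad,...}/*.png
train_set = datasets.ImageFolder("emotions/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

# Start from an ImageNet-pretrained backbone, replace the final layer
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a few epochs is enough for a rough baseline
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# Accuracy on a held-out split is what "X% accuracy" would be measured against.
```

The "70th percentile" question then just comes down to whether that held-out number beats the median human on the same test, which is the quantitative, not binary, comparison described above.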