The idea that explaining how brains manifest consciousness requires a new understanding of physics seems as implausible to me as the idea that explaining how brains manifest the Chinese language does.
You seem to be treating “assuming X allows me to make reliable predictions” and “some people behave as though X were true” as equivalent assertions. I agree with you that some people behave as though automated voice systems were people, but I don’t believe that assumption helps them make more reliable predictions than they otherwise could.
I continue to think that when assuming a computer program is conscious allows me to make reliable predictions about it (or, more precisely, more reliable predictions than assuming the opposite would), I'll make that assumption. At that point, arguments that computer programs lack various attributes brains have, and that those attributes must therefore explain why brains are conscious and computer programs aren't, will just seem absurd.