I think psychologist-scientists should have an unusually good imagination for the potential inner workings of other minds, which many ML engineers probably lack.
That’s not clear to me, given that AI systems are so unlike human minds.
I was talking about psychologist-scientists, not psychologist-therapists. I think psychologist-scientists should have an unusually good imagination for the potential inner workings of other minds, which many ML engineers probably lack. I think it’s in principle possible for psychologist-scientists to understand all the mechanistic interpretability papers being published in ML at the necessary level of detail; developing that kind of imagination in ML engineers could be harder.
That being said, since the de facto only scientifically grounded “part” of psychology has converged with neuroscience as neuropsychology, “AI psychology” probably shouldn’t be a wholly separate field from the outset, but rather a research sub-methodology within the larger field of “interpretability”.
Thanks.