“People on the autistic spectrum may also have the experience of understanding other people better than neurotypicals do.”
I think this casts doubt on the alignment benefit. It seems a priori likely that an AI, lacking the relevant evolutionary history, will be in an exaggerated version of the autistic person’s position. The AI will need an explicit model. If in addition the AI has superior cognitive abilities to the humans it’s working with—or expects to become superior—it’s not clear why simulation would be a good approach for it. Yes, simulation works for humans, with their hardware accelerators for it and their clunky explicit modeling, but...
I read you as saying that simulation is what makes preference satisfaction a natural thing to do. If I misread, please clarify.
I think this casts doubt on the alignment benefit. It seems a priori likely that an AI, lacking the relevant evolutionary history, will be in an exaggerated version of the autistic person’s position.
Maybe. On the other hand, AIs have recently been getting quite good at things that we previously thought to require human-like intuition, like playing Go, understanding language, and making beautiful art. It feels like a natural continuation of these trends would be for an AI to develop a superhuman ability for intuitive social modeling as well.