Sorry, my bad. When I said “critical to national security”, I meant that the US and China probably already see psychology AI as critical to state survival. That’s not to say it’s a good thing for this tech to be developed (I don’t know what Bostrom/FHI was thinking when VWH was written in 2019); it’s just that the US and China are already in a Moloch-style trap: each worries that the other (and Russia) will use psychology AI, which already exists, to hack public opinion and pull the rug out from under the enemy regime. The NSA and the CCP can’t resist developing psychological-warfare/propaganda applications for SOTA AI systems, because psychology AI is also needed defensively, to neutralize or mitigate successful public-opinion influence operations after they get through and turn millions of people (especially elites). As a result, it seems to me that the AI safety community should pick different battles than opposing psychological AI.
I don’t see how psychology-focused AI would develop better theory of mind than an AI with tons of books in its training set. At the level where inner misalignment kills everyone, it seems like even something as powerful as the combination of social media posts and scrolling data would yield a dimmer awareness of humans than the combination of physics, biology, evolution, and history textbooks. I’d be happy to understand your thinking better, since I don’t know much about the technical details of inner alignment or how psych AI connects to them.