Yes, I thought for years that the research should be private. As it turns out, though, most people in policy are pretty robustly uninterested in anything that sounds like “mind control,” and the math is hard to explain. So if this stuff ends up causing a public scandal that damages the US’s position in international affairs, it probably won’t originate from here — more likely it would get popular elsewhere first, like the AI surveillance pipeline — so AI safety might as well be the people who profit from it by open-sourcing it early.
It’s actually statistical induction. When you have enough human behavioral data in one place, you can use gradient descent to steer people in measurable directions, provided they remain in the controlled interactive environment the data came from (and social media news feeds are surprisingly well optimized to be exactly that environment). More psychologists means better-quality data labeling, which means people can be steered more precisely.