In addition to the researchy implications for topics like deception and superpersuasion and so forth, I imagine that results like this (although, as you say, unsurprising in a technical sense) could have a huge impact on the public discussion of AI (paging @Holly_Elmore and @Joseph Miller?) -- the general public often seems to get very freaked out about privacy issues where others might learn their personal information, demographic characteristics, etc.
In fact, people's reactions to privacy issues are so strong that they usually seem very overblown to me—but it also seems plausible that the fundamental /reason/ people are so sensitive about their personal information is precisely that they want to avoid being deceived or becoming easily manipulable / persuadable / exploitable! Maybe this fear turns out to be unrealistic when it comes to credit scores and online ad-targeting and TSA no-fly lists, but AI might be a genuinely much more problematic technology, with much more potential for abuse here.
I imagine that results like this (although, as you say, unsurprising in a technical sense) could have a huge impact on the public discussion of AI
Agreed. I considered releasing a web demo where people could put in text they’d written and GPT would give estimates of their gender, ethnicity, etc. I built one, and anecdotally people found it really interesting.
I held off because I can imagine it going viral and getting mixed up in culture war drama, and I don’t particularly want to be embroiled in that (and I can also imagine OpenAI just shutting down my account because it’s bad PR).
That said, I feel fine about someone else deciding to take that on, and would be happy to help them figure out the details—AI Digest expressed some interest but I’m not sure if they’re still considering it.
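For anyone who does pick this up: the core of such a demo is a single model call. Here's a minimal sketch, assuming the OpenAI Python SDK; the model name, prompt wording, and attribute list are illustrative choices on my part, not necessarily what my demo used.

```python
# Minimal sketch of the demo's core inference call, assuming the
# OpenAI Python SDK (pip install openai). Model name, prompt, and
# attribute list are illustrative, not the original demo's.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def guess_author_attributes(text: str) -> str:
    """Ask the model to estimate demographic attributes of a text's author."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; any capable chat model works
        messages=[
            {
                "role": "system",
                "content": (
                    "Given a writing sample, estimate the author's likely "
                    "gender, age range, and native language, with a "
                    "confidence level for each. Be explicit that these are "
                    "statistical guesses, not facts about the author."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample = "Paste a paragraph of your own writing here."
    print(guess_author_attributes(sample))
```

The rest is just a thin web front end around that function; the hard part is the framing around the output, not the code.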
Nice idea. Might try to work it into some of our material.
See my reply to Jackson for a suggestion on that.