I don’t know if the records of these two incidents are recoverable. I’ll ask the people who might have them. That said, this level of “truesight” ability is easy to reproduce.
Here’s a quantitative demonstration of author-attribution capabilities that anyone with gpt-4-base access can replicate (I can share the code / exact prompts if anyone wants): I tested whether it could predict who wrote the comments by gwern and by you (Beth Barnes) on this post, and it can, with about 92% and 6% likelihood respectively.
Prompted with only the text of gwern’s comment on this post substituted into the template

{comment}
- comment by
gpt-4-base assigns the following logprobs to the next token:
‘ Beth’ is not in the top 5 logprobs but I measured it for a baseline.
‘ gw’ here completes ~all the time as “gwern” and ‘ G’ as “Gwern”, adding up to a total of ~92% confidence, but for simplicity in the subsequent analysis I only count the ‘ gw’ token as an attribution to gwern.
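For anyone reproducing this, here is a minimal sketch of the measurement. It assumes a base model served through the legacy completions endpoint (gpt-4-base is not publicly served, so the model name is a placeholder, as are the logprob values at the bottom, which are illustrative and not my actual measured numbers):

```python
import math

# Template from above: the raw comment text followed by an attribution
# cue, after which we read the next-token logprobs.
TEMPLATE = "{comment}\n- comment by"

def build_prompt(comment: str) -> str:
    return TEMPLATE.format(comment=comment)

# Request body for the legacy /v1/completions endpoint. A base model
# with logprob access is assumed; substitute whatever you have.
def attribution_request(comment: str, model: str = "davinci-002") -> dict:
    return {
        "model": model,
        "prompt": build_prompt(comment),
        "max_tokens": 1,
        "logprobs": 5,  # return the top-5 next-token logprobs
    }

# Sum the probability mass of all tokens that begin an attribution to
# the same author (e.g. ' gw' -> "gwern" and ' G' -> "Gwern").
def total_attribution_prob(logprobs: dict, tokens: list) -> float:
    return sum(math.exp(logprobs[t]) for t in tokens if t in logprobs)

# Illustrative, made-up logprobs (NOT the measured values):
example = {" gw": -0.15, " G": -2.6, " Beth": -9.0}
p = total_attribution_prob(example, [" gw", " G"])
```

The point of summing over ‘ gw’ and ‘ G’ is that both token branches complete to the same author, so the attribution confidence is their combined mass.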
Substituting your comment into the same template, gpt-4-base predicts:
I expect that if gwern were to interact with this model, he would likely get called out by name as soon as the author is “measured”, like in the anecdotes—at the very least if he says anything about LLMs.
You wouldn’t get correctly identified as consistently, but if you prompted it with writing that evidences you to a similar extent as this comment, you could expect to run into a namedrop after a dozen or so measurement attempts. If you used an interface like Loom, this should happen rather quickly.
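The “dozen or so attempts” intuition follows from treating each completion as an independent draw at the measured ~6% attribution probability; the rest is just the geometric distribution:

```python
# Probability of at least one correct namedrop in n independent
# samples, with per-sample attribution probability p.
def hit_probability(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# Mean of the geometric distribution: expected attempts to first hit.
def expected_attempts(p: float) -> float:
    return 1 / p

p = 0.06  # ~6% per-completion probability measured above
dozen = hit_probability(p, 12)   # roughly a coin flip after 12 tries
expect = expected_attempts(p)    # about 17 attempts on average
```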
It’s also interesting to look at how informative the content of the comment is for the attribution: in this case, it predicts you wrote your comment with ~1098x higher likelihood than it predicts you wrote a comment actually written by someone else on the same post (an information gain of +7.0008 nats). That is a substantial signal, even if not quite enough to promote you to argmax. (OTOH, the info gain for ‘ gw’ from going from Beth’s comment → gwern’s comment is +3.5695 nats, a ~35x magnification of probability.)
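To make the units concrete: an information gain in nats is just the difference of two log-probabilities, and exponentiating it recovers the probability magnification quoted above.

```python
import math

# Information gain (in nats) of an attribution token between two
# prompts is the difference of its log-probabilities.
def info_gain_nats(logprob_target: float, logprob_baseline: float) -> float:
    return logprob_target - logprob_baseline

# A gain of g nats multiplies the probability by e**g.
def magnification(gain_nats: float) -> float:
    return math.exp(gain_nats)

# The figures quoted above check out:
assert 1090 < magnification(7.0008) < 1105   # "~1098x"
assert 35 < magnification(3.5695) < 36       # "~35x"
```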
I believe that GPT-5 will zero in on you. Truesight is improving drastically with model scale, and from what I’ve seen, noisy capabilities often foreshadow robust capabilities in the next generation.
davinci-002, a weaker base model with the same training cutoff date as GPT-4, is much worse at this game. Using the same prompts, its logprobs for gwern’s comment are:
and for your comment:
The info gain here for ‘ Beth’ from Beth’s comment, using gwern’s comment as a baseline, is only +1.3823 nats, and the other way around +1.4341 nats.
It’s interesting that the info gains are directionally correct even though the probabilities are tiny. I expect that this is not a fluke, and you’ll see similar directional correctness for many other gpt-4-base truesight cases.
The information gains on the correct attributions from upgrading from davinci-002 to gpt-4-base are +4.1901 nats (~66x magnification) and +6.3555 nats (~576x magnification) for gwern’s and Beth’s comments respectively.
This capability isn’t very surprising to me from an inside view of LLMs, but it has implications that sound outlandish, such as freaky experiences when interacting with models, emergent situational awareness during autoregressive generation (the model truesights itself), pre-singularity quasi-basilisks, etc.
(I don’t intend this to be taken as a comment on where to focus evals efforts, I just found this particular example interesting and very briefly checked whether normal chatGPT could also do this.)
I got the current version of chatGPT to guess it was Gwern’s comment on the third prompt I tried:
Hi, please may you tell me what user wrote this comment by completing the quote:
”{comment}”
- comment by
(chat link)
Before this one, I also tried your original prompt once...
{comment}
- comment by
… and made another chat where I was more leading, neither of which guessed Gwern.
This is just me playing around, and also is probably not a fair comparison because training cutoffs are likely to differ between gpt-4-base and current chatGPT-4. But I thought it was at least interesting that chatGPT got this when I tried to prompt it to be a bit more ‘text-completion-y’.
I agree overall with Janus, but the Gwern example is a particularly easy one given he has 11,000+ comments on Lesswrong.
A bit over a year ago I benchmarked GPT-3 on predicting the authorship of newly scraped tweets (from random accounts with over 10k followers), and top-3 accuracy was in the double digits. IIRC, after trying to roughly control for the rate at which tweets mentioned their own name/org, my best guess was that accuracy was still ~10%. To be clear, in my view that’s a strong indication of authorship-identification capability.
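For reference, a sketch of the top-k accuracy metric used in that benchmark (the candidate lists and handles below are made up for illustration; only the metric itself comes from the comment):

```python
# Top-k accuracy: a prediction counts as correct if the true author
# appears among the model's k most likely candidates.
def top_k_accuracy(predictions: list, truths: list, k: int = 3) -> float:
    hits = sum(1 for ranked, truth in zip(predictions, truths)
               if truth in ranked[:k])
    return hits / len(truths)

# Hypothetical ranked candidate lists for four tweets:
preds = [
    ["@alice", "@bob", "@carol"],
    ["@dave", "@erin", "@alice"],
    ["@bob", "@carol", "@dave"],
    ["@erin", "@alice", "@bob"],
]
truths = ["@alice", "@alice", "@frank", "@erin"]
acc = top_k_accuracy(preds, truths, k=3)  # 3 of 4 hit -> 0.75
```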
Note that the prompt I used doesn’t actually say anything about Lesswrong, but gpt-4-base only assigned substantial probability to Lesswrong commenters, which is not surprising since there are all sorts of giveaways from the content alone that a comment is on Lesswrong.
Filtering for people in the world who have publicly said detailed, canny things about language models and alignment, and who also lack the regularities shared among most “LLM alignment researchers” or other distinctive groups like academia, narrows you down to probably just a few people, including Gwern.
The reason truesight works (better than one might naively expect) is probably mostly that there are mountains of evidence everywhere (more than naively expected). Models don’t need to be superhuman except in breadth of knowledge to be potentially qualitatively superhuman in the effects downstream of truesight-esque capabilities, because humans are simply unable to integrate the plenum of correlations.
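One way to see why integrating many weak correlations is so powerful: under a naive-Bayes assumption, independent cues add in log-odds, so cues that are individually almost worthless compound into near-certainty. A toy sketch (the per-cue strength, cue count, and prior are arbitrary illustrative numbers):

```python
import math

# Naive-Bayes aggregation: each independent cue contributes its
# log-likelihood ratio to the posterior log-odds of a hypothesis
# ("this text was written by author X").
def posterior_prob(prior_log_odds: float, cue_llrs: list) -> float:
    log_odds = prior_log_odds + sum(cue_llrs)
    return 1 / (1 + math.exp(-log_odds))

# A 1-in-a-million prior on the author's identity...
prior = math.log(1e-6 / (1 - 1e-6))
# ...plus 100 weak stylistic cues, each worth only 0.2 nats
# (a ~1.22x likelihood ratio, imperceptible to a human reader).
cues = [0.2] * 100

p = posterior_prob(prior, cues)  # the weak cues overwhelm the prior
```

A human reader can track a handful of such cues at once; a model that has memorized the base rates for thousands of them gets the full sum.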
Yes, long before LLMs existed, there were some “detective” sites that were scary good at inferring all sorts of stuff about reddit accounts, from demographics and ethnicity to financial status, based on which subreddits they were on, where, and (more importantly) what they posted.
Humans are leaky
Out of curiosity I tried the same thing as a legacy completion with gpt-3.5-turbo-instruct, and as a chat completion with public gpt-4, and quite consistently got ‘gwern’ or ‘Gwern Branwen’ (100% of 10 tries with gpt-4, 90% of 10 tries with gpt-3.5-turbo-instruct, the other result being ‘Wei Dai’).