I am not sure I understand. Are you saying that GPT thinks the text is genuinely from the future (i.e., the distribution that it is modeling contains text from the future), or that it doesn’t think so? The sentence you quote is intended to mean that it does not think the text is genuinely from the future.
I agree that it doesn’t think the text is from the future. I am nitpicking a technical detail because the text I was line-commenting on seemed confused. (How vexing to find this thread below the post, in the place for louder discussions!)
Instead of conjecturing that it doesn’t think the text is from the future, you should conjecture that it thinks the text is from the training data, because:
1. The latter implies the former.
2. We have technical reasons to believe the latter (see the sketch below).
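A minimal sketch of the technical reason, assuming standard maximum-likelihood pretraining (my assumption, not something stated upthread): the parameters are fit to the training corpus alone,

$$\theta^{*} = \arg\min_{\theta}\, \mathbb{E}_{x \sim p_{\text{train}}}\big[-\log p_{\theta}(x)\big],$$

so $p_{\theta}$ approximates $p_{\text{train}}$, and nothing in the objective refers to genuinely future text. On this view, whatever the model outputs for a future-dated prompt is an extrapolation from the training distribution, which is what "it thinks the text is from the training data" cashes out to.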