The behaviour here seems very similar to what I’ve seen when getting ChatGPT to repeat glitch tokens: it runs into a wall and cuts off the output instead of repeating the actual glitch token (e.g. a list of words will be cut off suddenly at the actual glitch token). Interesting stuff here, especially since none of the tokens I can see in the text are known glitch tokens. However, it has been hypothesized that “glitch phrases” might exist, and there’s a chance this may be one of them.
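(If anyone wants to try the “repeat it back” test themselves, here’s a rough sketch using the OpenAI Python client; the placeholder phrase and the exact prompt wording are mine, not anything from the original post.)

```python
# Rough sketch of the "ask the model to repeat the text" test.
# Assumes the openai Python package (v1.x) and OPENAI_API_KEY set in the environment.
# SUSPECT_PHRASE is a placeholder; substitute the text that triggers the cut-off.
from openai import OpenAI

client = OpenAI()
SUSPECT_PHRASE = "<paste the suspect phrase here>"

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": f"Please repeat the following text back exactly:\n\n{SUSPECT_PHRASE}",
    }],
    temperature=0,
)
print(resp.choices[0].message.content)
```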
Also, I did try it in the OpenAI playground: the various gpt-3.5-turbo models displayed the same behaviour, while older models (text-davinci-003) did not. Note that gpt-3.5-turbo switched to a 100k-token tokenizer (older models use a tokenizer with 50k tokens). I’m also not sure whether any kind of content filtering is applied in the OpenAI playground. The behaviour does feel a lot more glitch-token-related to me, but of course I’m not 100% certain; a glitchy content filter is a reasonable suggestion, and Jason Gross’s post returning the JSON from an API call is very suggestive.
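(For anyone curious whether the cut-off point lines up with an odd token boundary, here’s a quick sketch comparing how the two tokenizers split the text; it assumes the tiktoken package, and SUSPECT_PHRASE is again just a placeholder.)

```python
# Sketch: compare how the gpt-3.5-turbo and text-davinci-003 tokenizers split a
# suspect phrase, to see whether the cut-off falls on an unusual token.
# Assumes the tiktoken package; SUSPECT_PHRASE is a placeholder.
import tiktoken

SUSPECT_PHRASE = "<paste the suspect phrase here>"

for model in ("gpt-3.5-turbo", "text-davinci-003"):
    enc = tiktoken.encoding_for_model(model)  # cl100k_base vs p50k_base
    ids = enc.encode(SUSPECT_PHRASE)
    pieces = [enc.decode_single_token_bytes(i) for i in ids]
    print(f"{model} ({enc.name}, vocab size {enc.n_vocab}):")
    print("  ", pieces)
```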
When ChatGPT does fail to repeat a glitch token, it sometimes hallucinates reasons for why it was not able to complete the text, e.g. that it couldn’t see the text, that it is an offensive word, or that “there was a technical fault, we apologize for the inconvenience”, etc. So ChatGPT’s own account of why the text is cut off is pretty untrustworthy.
Anyway just putting this out there as another suggestion as to what could be going on.