It’s currently at −003 and not the new ChatGPT 3.5 endpoint because when I dropped in the chat model name, the code errored out—apparently it’s under a chat/ path and so the installed OA Py library errors out. I haven’t bothered to debug it any further (do I need to specify the engine name as chat/turbo-gpt-3, or do I need to upgrade the library to some new version, or what?). I haven’t even tried GPT-4 - I have the API access, just been too fashed and busy with other site stuff. (Technical-wise, we’ve been doing a lot of Gwern.net refactoring and cleanup and belated documentation—I’ve written like 10k words the past month or two just explaining the link icon history, redirect & link archiving system, and the many popup system iterations and what we’ve learned.)
The better models do require using the chat endpoint instead of the completion endpoint. They are also, as you might infer, much more strongly RL trained for instruction following and the chat format specifically.
I definitely think it’s worth the effort to try upgrading to gpt-3.5-turbo, and I would say even gpt-4, but the cost is significantly higher for the latter. (I think 3.5 is actually cheaper than davinci.)
If you’re using the library, you need to switch from Completion to ChatCompletion, and the API is slightly different—I’m happy to provide sample code if it would help, since I’ve been playing with it myself, but to be honest it all came from GPT-4 itself (using ChatGPT Plus). If you just describe what you want (at least for fairly small snippets) and ask GPT-4 to code it for you, directly in ChatGPT, you may be pleasantly surprised.
(As far as how to structure the query, I would suggest something akin to starting with a “user” chat message of the form “please complete the following:” followed by whatever completion prompt you were using before. Better instructions will probably get better results, but that will probably get something workable immediately.)
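To make the migration concrete, here is a minimal sketch of what that switch looks like in the pre-1.0 `openai` Python library. The model name and the role/message format are the standard ones; the two helper functions are hypothetical names just for illustration, and the "please complete the following:" wrapper is the structuring suggested above:

```python
# Sketch: migrating from the old Completion call to ChatCompletion
# (pre-1.0 `openai` Python library). The helper functions here are
# hypothetical wrappers, not part of the library itself.

def completion_prompt_to_chat(prompt):
    """Wrap an old-style completion prompt as a single 'user' chat message."""
    return [{"role": "user",
             "content": "Please complete the following:\n\n" + prompt}]

def build_chat_request(prompt, model="gpt-3.5-turbo", max_tokens=256):
    """Build the keyword arguments for openai.ChatCompletion.create()."""
    return {"model": model,
            "messages": completion_prompt_to_chat(prompt),
            "max_tokens": max_tokens}

# The actual API call then changes from:
#   resp = openai.Completion.create(model="text-davinci-003", prompt=prompt, ...)
#   text = resp["choices"][0]["text"]
# to:
#   resp = openai.ChatCompletion.create(**build_chat_request(prompt))
#   text = resp["choices"][0]["message"]["content"]
```

Note that the response shape also changes: the generated text moves from `choices[0]["text"]` to `choices[0]["message"]["content"]`, which is the other spot an old completion-based script usually needs patching.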
Yeah, I will at some point, but frontend work with Said always comes first. If you want to patch it yourself, I’d definitely try it.
https://github.com/gwern/gwern.net/pull/6
It would be exaggerating to say I patched it; I would say that GPT-4 patched it at my request, and I helped a bit. (I’ve been doing a lot of that in the past ~week.)