Cool, I haven’t been able to play with the API yet.
Yeah, it has its challenges. Personally, my template prompt gets the card format right almost all of the time with GPT-4 (only sometimes with GPT-3.5). I asked it to return the cards using “Front:” and “Back:” because it would often default to that format (or “Q:” and “A:”), and it’s easy to clean it up with a script afterward.
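For what it’s worth, the cleanup script can be very short. Here’s a rough sketch of the kind of thing I mean (the exact prompt output varies, so this just assumes each card is a “Front:” line followed by a “Back:” line, with “Q:”/“A:” accepted as a fallback since the model sometimes defaults to that):

```python
import re

def parse_cards(text):
    """Split model output into (front, back) pairs.

    Assumes each card is a "Front:" label followed by a "Back:" label,
    as requested in the prompt; "Q:"/"A:" is accepted as a fallback
    because the model sometimes defaults to that format.
    """
    pattern = re.compile(
        r"(?:Front|Q):\s*(.*?)\s*(?:Back|A):\s*(.*?)(?=(?:Front|Q):|\Z)",
        re.DOTALL,
    )
    return [(f.strip(), b.strip()) for f, b in pattern.findall(text)]

output = """Front: What is spaced repetition?
Back: Reviewing material at increasing intervals.
Q: What is active recall?
A: Retrieving information from memory rather than rereading."""

for front, back in parse_cards(output):
    print(front, "->", back)
```

From there it’s trivial to dump the pairs into a TSV that Anki can import directly.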
As you’ve seen yourself, it’s very difficult to force it to be concise, and as you found, it also tends to ignore more specific instructions. I suggest putting the examples at the end of the prompt instead of the middle. I’ve noticed that later cards tend to inherit the formatting of the first ones, so examples placed at the end might play that anchoring role.
Personally, I’m quite happy with how the Rationality deck turned out, but I’m hesitant about using it for more complicated topics (in my case, mostly math-heavy stuff).
In any case, I would likely not have spent the time writing the cards by hand, so this is always better than nothing.