Is it possible that these failures are an issue of model performance and will resolve themselves?
Maybe. The most interesting thing about this approach is the possibility that improved GPT performance might make it better.
No, I would not allow this prompt to be sent to the superintelligent AI chatbot. My reasoning is as follows
Unfortunately, we ordered the prompt the wrong way round, so anything after the “No” is just a postiori justification of “No”.
Maybe. The most interesting thing about this approach is the possibility that improved GPT performance might make it better.
Unfortunately, we ordered the prompt the wrong way round, so anything after the “No” is just a postiori justification of “No”.