Most of the functionality limits OpenAI has put on the public demos have proven quite easy to work around with simple prompt engineering—mostly by telling it to play-act.
It seems that there are two kinds of limitations. One is where ChatGPT simply refuses to answer you. The other is where the text gets marked red and you are told that it might have violated the terms of service.
I think there’s a good chance that if you use the professional API you won’t get warnings about possible terms-of-service violations. Instead, those violations get counted in the background, and if there are too many, your account gets blocked—either automatically or after a human reviews them.
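The speculated mechanism—count violations silently, then block past a threshold—could be sketched roughly as follows. This is purely illustrative; the class, names, and threshold are assumptions, not anything OpenAI has documented:

```python
from dataclasses import dataclass, field

@dataclass
class ViolationTracker:
    """Hypothetical background counter for terms-of-service violations.

    Nothing here reflects OpenAI's actual implementation; it only
    illustrates the 'silent counting with a block threshold' idea.
    """
    threshold: int = 5                       # assumed cutoff before blocking
    counts: dict = field(default_factory=dict)

    def record_violation(self, account_id: str) -> bool:
        """Increment the account's count; return True once it crosses the threshold."""
        self.counts[account_id] = self.counts.get(account_id, 0) + 1
        return self.counts[account_id] >= self.threshold


tracker = ViolationTracker(threshold=3)
tracker.record_violation("acct-1")           # 1st violation, no block
tracker.record_violation("acct-1")           # 2nd violation, no block
blocked = tracker.record_violation("acct-1") # 3rd violation crosses the threshold
```

In a real system, crossing the threshold would presumably queue the account for automatic blocking or human review rather than block it inline.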
I would expect that a system built to accomplish bigger tasks will need a lot of human supervision in the beginning, to be taught how to transform tasks into subtasks. That supervised data can then be used as training data. I think it’s unlikely that you will get an agent capable of more general, high-complexity tasks without that intermediate step of human supervision producing more training data.
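The supervision loop described above—model proposes a decomposition, a human corrects it, the corrected pair is logged as training data—could look something like this minimal sketch. All function names and the toy stand-ins for the model and the human reviewer are assumptions for illustration:

```python
# Sketch of human-supervised task decomposition, where each approved
# (task, subtasks) pair is collected as future fine-tuning data.
# Everything here is hypothetical; no real model API is involved.

training_data = []  # accumulates supervised (task, subtasks) examples

def decompose_with_supervision(task, propose, review):
    """Have the model propose subtasks, let a human correct them,
    and log the approved decomposition as a training example."""
    proposed = propose(task)            # model's attempted decomposition
    approved = review(task, proposed)   # human edits or approves it
    training_data.append({"task": task, "subtasks": approved})
    return approved

# Toy stand-ins: a "model" that guesses two steps, and a "human"
# who notices a missing step and adds it.
def toy_model(task):
    return [f"draft an outline for: {task}", f"execute: {task}"]

def toy_human(task, subtasks):
    return subtasks + [f"review the result of: {task}"]

approved = decompose_with_supervision("write a report", toy_model, toy_human)
```

Once enough corrected decompositions accumulate in `training_data`, they could be used to fine-tune the model so that later decompositions need less and less human correction—the bootstrapping step the paragraph above argues is hard to skip.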