I hope so—most of them seem aimed at making trouble. But at the rate transformer models are improving, it doesn't seem like it will be long before they can handle them. It's not quite AGI, but it's close enough to be worrisome.
Most of the functionality limits OpenAI has put on the public demos have proven quite easy to work around with simple prompt engineering—mostly telling it to play-act. Combine that with the ability to go out onto the Internet and (a) you've got a powerful (or soon-to-be-powerful) tool, but (b) you've got something that already has a lot of potential for making mischief.
Even without the enhanced abilities rumored for GPT-4.
It seems that there are two kinds of limitations. One is where ChatGPT simply replies that it isn't willing to answer. The other is where the text gets flagged in red and you're told it may have been a violation of the terms of service.
I think there's a good chance that if you use the professional API you won't get warnings that you might have violated the terms of service. Instead, those violations get counted in the background, and if there are too many, your account gets blocked—either automatically or after a human reviews the violations.
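The speculated mechanism—silent tallying of flagged requests, with a block once some threshold is crossed—can be sketched as a toy in a few lines. To be clear, this is my guess at the logic, not OpenAI's actual implementation; the names and the threshold are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical limit before an account is blocked (or queued for human review).
BLOCK_THRESHOLD = 3

@dataclass
class Account:
    violations: int = 0
    blocked: bool = False

def record_violation(account: Account) -> None:
    """Tally a flagged request in the background; block past the threshold.

    The user sees no warning—the count just accumulates server-side.
    """
    account.violations += 1
    if account.violations >= BLOCK_THRESHOLD:
        account.blocked = True  # in practice this might trigger human review instead

acct = Account()
for _ in range(3):
    record_violation(acct)
print(acct.blocked)  # blocked after three flagged requests
```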
I would expect that if you create a system for accomplishing bigger tasks, it will need a lot of human supervision in the beginning to be taught how to break tasks into subtasks. Afterward, that supervised data can be used as training data. I think it's unlikely you'll get an agent that can do more general, high-complexity tasks without that intermediate step of human supervision producing more training data.
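The supervision loop described above—model proposes a decomposition, a human corrects it, the corrected pair becomes a training example—might look something like this. All the functions here are hypothetical placeholders, not a real agent framework:

```python
# Toy sketch of a human-in-the-loop decomposition pipeline.
# propose_subtasks stands in for the model; human_review stands in for
# a person correcting its output. Both are stubs for illustration.

def propose_subtasks(task: str) -> list[str]:
    # Placeholder for the model's first attempt at decomposition.
    return [f"{task}: step 1", f"{task}: step 2"]

def human_review(task: str, proposal: list[str]) -> list[str]:
    # Placeholder for a human editing the proposal, e.g. adding a
    # verification step the model missed.
    return proposal + [f"{task}: verify result"]

# Each (task, corrected_subtasks) pair is logged for later fine-tuning.
training_data: list[tuple[str, list[str]]] = []

def supervised_decompose(task: str) -> list[str]:
    proposal = propose_subtasks(task)
    corrected = human_review(task, proposal)
    training_data.append((task, corrected))
    return corrected

subtasks = supervised_decompose("book a trip")
```

Once enough corrected pairs accumulate, the supervision step can be phased out: the logged data becomes the training set for the next model, which is the bootstrapping step the comment argues you can't skip.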