I think that, memetically, we’ll be selecting hardest for the cases where it is most difficult to see how much work the human is doing by pushing it out into the context, which makes GPT-3 look most impressive. I think only a little of that is happening here; I’m just saying it because this is what sparked the thought.
I agree. Coming up with the right prompts was not trivial; I almost quit several times. Still, there is a science to this, and I think it’ll become more important to turn our focus away from the spectacle aspects of GPT and toward reproducibility. More so if the way forward is via interrelated instances of GPT.
As an aside, critique seems much easier than generation. I’m cautiously optimistic about prompting GPT instances to “check” output.
Similar to sharp google-fu today but much deeper.
Interesting to think about how this will evolve. Over time, humans will have to do less of the work, and the combined system will be able to do more. (Though the selection pressure that you mention will continue to be there.)
It seems to me that we might not be too far away from “natural language programming”. With some combination of the above approach, plus the program synthesis examples where you just specify a comment, plus some extra tricks, it seems like you could end up specifying your programs via an algorithm description in English.
You’d want to set it up so that it alerted you when it thought things were ambiguous, and that it auto-generated test cases for different possible interpretations and showed you the results.
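As a rough sketch of what that could look like (everything here is hypothetical, not an existing tool): suppose the system proposed two candidate interpretations of an ambiguous spec like “sort the words by length”. It could then auto-generate inputs and surface the cases where the interpretations disagree:

```python
def interp_a(words):
    # Interpretation A of the ambiguous spec: shortest first.
    return sorted(words, key=len)

def interp_b(words):
    # Interpretation B: longest first.
    return sorted(words, key=len, reverse=True)

def divergent_cases(test_inputs):
    # Flag the auto-generated inputs on which the two
    # interpretations produce different results, so the user
    # can be alerted to the ambiguity.
    return [t for t in test_inputs if interp_a(t) != interp_b(t)]
```

Showing the user only the divergent cases keeps the alert focused on where the ambiguity actually matters.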
I’ve personally started using TabNine in the last few weeks, and I’d say it’s just barely over the edge of being useful. But I can imagine next-gen versions of these things pretty radically transforming the process of programming.
Another interesting direction to go with this—can you get it to do a sort of distillation step, where you first get amplified-GPT to implement some algorithm, a la the recursion dialogue above, and then you get it to generate code that implements the same algorithm?
Possible startup idea—design an IDE from the ground up to take advantage of GPT-like abilities.
I think just using the next-gen version of TabNine will be powerful, and I expect all major IDEs’ autocomplete features to improve a lot in the coming years, but I also suspect that if you designed an IDE to really take advantage of what these systems can do, you might end up designing something rather different from just today’s IDEs + better autocomplete.
How are you actually doing this in AI Dungeon? I have Dragon mode enabled, everything else default.
I start a new Single player game and choose Custom mode (6). Then at the prompt I just paste (using Say mode):
Q: Say I want to sum the items in a list. How would I do this recursively? The answer involves two steps.
and I get
Q: Say I want to sum the items in a list. How would I do this recursively? The answer involves two steps. First, I need to know how many items there are in total. Second, I need to find out which item is at the top of that list.
A: You could use recursive_sum().
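For comparison, the two-step recursion the question is gesturing at (a base case plus a recursive case) can be written out directly; the name recursive_sum is just borrowed from GPT-3’s answer above:

```python
def recursive_sum(items):
    # Step 1 (base case): an empty list sums to zero.
    if not items:
        return 0
    # Step 2 (recursive case): the first item plus the sum of the rest.
    return items[0] + recursive_sum(items[1:])
```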
Similarly, when I tried to reproduce stuff from https://old.reddit.com/r/slatestarcodex/comments/hrx2id/a_collection_of_amazing_things_gpt3_has_done/ I didn’t get anything nearly as impressive. Also, the responses get increasingly confused: if I ask it to translate something to French or Romanian, it will randomly translate later prompts as well.
Is there some basic tutorial for how you seed these AI Dungeon interactions?
You could prompt with “Q:” + (content) and then “A:”
I use the default settings for temperature, but I do cut it off after it finishes an answer. However, you likely won’t get my exact results unless you literally copy the instances. Moreover, if you gave up after the first response, I think you might’ve given up too quickly. You can respond to it and communicate more information, as I did. The above really was what I got on the first try. It’s not perfect, but that’s the point. You can teach it. It’s not “it works” or “it doesn’t work”.
I don’t think there are tutorials, but perhaps in due time someone (maybe me) will get to that. I also feel like ‘trying’ to get it to do something might be a sub-optimal approach. This is a subtle difference, but my intent here was to get it to confirm it understood what I was asking by answering questions.
The approach I’ve been using (for different things, but I suspect the principle is the same) is:

1. If you want it to do X, give it about four examples of X in the question-answer format as a prompt (as in, commands from the human plus answers from the AI).
2. Repeat about three times: give it another such question, and reroll until it produces a good answer (this might take a lot of rolls).

At that point it is much better than one where you prompted everything to begin with.
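The recipe above can be sketched as a prompt builder. This is a minimal illustration assuming the “Q:”/“A:” format mentioned earlier; build_prompt is a hypothetical helper, and the reroll loop is whatever your interface provides:

```python
def build_prompt(examples, new_question):
    # examples: roughly four (question, answer) pairs demonstrating the task X.
    blocks = [f"Q: {q}\nA: {a}" for q, a in examples]
    # End with the new question and a bare "A:" so the model completes it.
    blocks.append(f"Q: {new_question}\nA:")
    return "\n\n".join(blocks)
```

Each good rerolled answer can then be appended to `examples`, so later questions are conditioned on an increasingly strong demonstration set.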
I also have difficulties replicating results with AI Dungeon. I think it has a weaker version of GPT-3 than the API.
Yeah, they have two different models. “Griffin” is GPT-2 and free of charge; “Dragon” is GPT-3 and I believe costs $5/month.
I have paid account.
Your formatting makes it hard for me to tell which parts are your prompting vs. GPT-3. One common format is “bold = you”, but the opening line wasn’t bold, so I was confused about what’s going on there.
Thanks! I forgot to do this. Luckily I can go back through the run and put it in. There is ambiguity whenever it auto-completes, but I hope I did a decent job of noting where this happens.