I was thinking of some kind of prompt that would lead to GPT trying to do something as “environment agent-y” as trying to end a story and start a new one—i.e., behaviour from some class that has an expected pattern on the prior and then deviates from it pretty hard. There’s probably an analogue in something like the output of random Turing machines, but for the specific thing I was pointing at, this seemed like a cleaner example.