Elon Musk is an interesting person, so I liked this simulation too. :) That said, this simulated Elon doesn't know much about himself.
I am confused... so is this an action GPT-3 actually took? I have no idea whether it has an option to quit.
On the other hand, how did you produce the simulated Lsusr responses? This simulated Lsusr sounds exactly like you.
Yes. This is an actual thing GPT-3 did, including the italicization (via markdown). GPT-3 can do whatever it wants as long as the output is text and I choose to publish it.
GPT-3 doesn’t have an option to quit. It would have kept outputting text forever if I had asked it to. I felt that was a good stopping point.
I forgot to use the stop sequence option. I manually truncated the output at the end of a statement by Simulated Elon. Without my manual truncation, GPT-3 would continue printing dialog back and forth including lines written for “Lsusr”. Most of the time I preferred the lines I wrote myself but sometimes the lines it generated for me were good enough to keep.
That is much clearer. Thank you.