ChatGPT cannot run a real Python interpreter. If it appears to execute the function, it is because it has a fairly strong approximate understanding of how a Python interpreter behaves. Perhaps its counting skill is least noisy when the recent context implies that a perfect counting machine is the target to mimic?
Yeah, it was probably just by chance that it got it correct 2 or 3 times after writing a function.
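For anyone who wants to test this themselves, here's a minimal sketch of the kind of counting function being discussed (the function name and test string are made up for illustration, not taken from this thread). Running it in a real interpreter gives you ground truth to compare against whatever output ChatGPT claims the code produces:

```python
# A minimal sketch of the kind of counting task under discussion;
# the function name and test string are hypothetical examples.

def count_letter(text: str, letter: str) -> int:
    """Count how many times a single character appears in a string."""
    return sum(1 for ch in text if ch == letter)

if __name__ == "__main__":
    # A real Python interpreter always prints 3 here. ChatGPT, asked to
    # "execute" the same function, is only predicting what that output
    # would look like, so it can plausibly get it wrong.
    print(count_letter("banana", "a"))  # -> 3
```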
It seems to run code about as well as I do in my head. That's pretty damned impressive, since it does this in seconds, and it has even been able to emulate a shell session.
My guess is that there is a difference in how it was trained on code vs. general text. It's like a different mode of thinking/computing. When you put a problem in terms of code, you engage that more mathematical mode of thinking. When you are just conversing, it's pretty happy to give you plausible bullshit.
I'm curious how we can engage these different modes of thinking, assuming that my idea is more than plausible bullshit.