Did you tell your friend the premise behind this interaction out of band?
FWIW, I don’t think that matters; in my experience interactions like this arise naturally as well, and humans usually perform much as Friend did here.
In particular, ChatGPT here seems to completely fail at tracking its interlocutor’s competence in the domain at hand. A human asked with no context might give you the complete recipe up front, just as ChatGPT tried, but any follow-up question would immediately signal to them that more hand-holding is needed. (And ChatGPT was explicitly asked to “walk me through one step at a time”, which should make this blatantly obvious; no human would respond by simply repeating the full instructions.)
I asked a group of friends for “someone to help me with an AI experiment”, then gave this particular friend the context that I wanted her help guiding me through a task via text message, and that she should be in front of her phone in some room that was not the kitchen.