FWIW I don’t think that matters; in my experience interactions like this arise naturally as well, and humans usually perform similarly to how Friend did here.
In particular, it seems that here ChatGPT completely fails at tracking the competence of its interlocutor in the domain at hand. A human asked with no context might also give the complete recipe at first, just like ChatGPT did, but any follow-up question would immediately signal to them that more hand-holding is necessary. (And ChatGPT was explicitly asked to “walk me through one step at a time”, which should have made this blatantly obvious; no human would respond to that by simply repeating the instructions.)