Let’s see what happens if I tweak the language: … Neat! It’s picked up on a lot of nuance implied by saying “important” rather than “matters”.
Cool idea!
One note about this:
Don’t forget that people trying to extrapolate from your five words have not seen any alternate wordings you were considering. The LLM could more easily pick up on the nuance there because it was shown both wordings and asked to contrast them. So if you actually want to use this technique to figure out what someone will take away from your five words, maybe ask the LLM about each possible wording in a separate sandbox rather than a single conversation.
Yep, this is actually how I used Claude in all the above experiments. I started a new chat with it each time, and I believe separate chats don’t share context with each other. Putting everything in the same chat seemed likely to risk contamination from earlier context, so I wanted it to come at each task fresh.
But the screenshot says “if i instead say the words...”. This seems like it has to be in the same chat with the “matters” version.
Yes, you’re right: in exactly one screenshot above I did follow up within the chat. But you should be able to see that all the other ones are new, separate chats. There was just one case where it made more sense to follow up rather than ask a fresh question.
Yes, but that exact case is the one where you say “This would be useful for trying out different variations on a phrase to see what those small variations change about the implied meaning” — and it’s precisely where the result can be misleading, because the LLM is contrasting the new wording with the previous version, which the humans reading or hearing the final version never saw.
So it would be more useful for that purpose to use a new chat.
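The distinction the thread keeps circling can be made concrete in code. Below is a minimal sketch in Python, where `ask_llm` is a hypothetical stand-in for any chat-completion API call (a real one, e.g. via the Anthropic or OpenAI SDK, would return the model’s actual reply; this stub just reports how much context it was shown). The only difference between the two setups is whether each wording gets a brand-new message list or is appended to one growing conversation:

```python
# Hypothetical stand-in for a chat-completion API call. The model's
# "context" is exactly the message list it receives, nothing more.
def ask_llm(messages):
    return f"reply based on {len(messages)} message(s) of context"

wordings = [
    "This is really important",
    "This really matters",
]

# Contaminated setup: one growing conversation. By the second wording,
# the model has already seen the first one (plus its own earlier reply)
# and will naturally contrast the two -- which your real audience can't do.
conversation = []
for w in wordings:
    conversation.append({"role": "user", "content": f"What does '{w}' imply?"})
    conversation.append({"role": "assistant", "content": ask_llm(conversation)})

# Fresh setup: a brand-new message list per wording, so the model reacts
# to each phrase the way a reader who never saw the alternatives would.
fresh_replies = [
    ask_llm([{"role": "user", "content": f"What does '{w}' imply?"}])
    for w in wordings
]
```

In the contaminated loop, the reply to the second wording is built on three messages of history; in the fresh setup, every wording is judged against exactly one message — which is the situation your actual audience is in.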