Note for posterity: “Let’s think step by step” is a joke.
I downvoted this and I feel the urge to explain myself—the LLMism in the writing is uncanny.
The combination of “Let’s think step by step”, “First…” and “Not so fast…” gives me a subtle but dreadful impression that a highly valued member of the community is being finetuned on model output in real time. This emulation of the “Wait, but!” pattern is a bit too much for my comfort.
My comment doesn’t have much to do with the content; it’s more about how unsettled I feel. I don’t think LLM outputs are all necessarily infohazardous, but I am beginning to see the potential failure modes that people have been gesturing at for a while.
I assume “let’s think step by step” is a joke/on purpose. The “first” and “not so fast” on their own don’t seem that egregious to me.
“Let’s think step by step” was indeed a joke/on purpose. Everything else was just my stream of consciousness… my “chain of thought”, shall we say. I more or less wrote down thoughts as they came to me. Perhaps I’ve been influenced by reading LLM CoTs, though I haven’t done very much of that. Or perhaps this is just what thinking looks like when you write it down?
I’ve spent enough time staring at LLM chains of thought that, when I started thinking about a thing for work, I found my thoughts taking the shape of an LLM working out how to approach its problem. That actually felt like a useful, systematic way of approaching the problem, so I started writing out that chain of thought as if I were an LLM, and it genuinely helped me stay focused.
Of course, I had to amuse myself by starting the chain-of-thought with “The user has asked me to...”