No, I’ve only tried it with Claude so far. I did think about trying other models to compare, but Claude gave me enough of a sense that doing this in chat is unlikely to be useful. My takeaway: in theory, teaching LLMs to meditate is probably not a useful thing to do, but if it is, then it needs to happen as part of training.
Part of the problem seemed to be that Claude can’t think and speak separately. When I’ve been instructed in meditation, the instructor guided me to focus on my breath, or on some sound or set of syllables: something repetitive, without words.
With inference-time compute, I wonder whether it would be possible for the system to use a mantra, and maybe try to reply with a random response unrelated to the prompt. I’m going to experiment, but I’m a novice myself.