Here are all of my interactions with Claude related to writing blog posts or comments in the last four days:
I asked Claude for a couple back-of-the-envelope power output estimations (running, and scratching one’s nose). I double-checked the results for myself before alluding to them in the (upcoming) post. Claude’s suggestions were generally in the right ballpark, but more importantly Claude helpfully reminded me that metabolic power consumption = mechanical power + heat production, and that I should be clear on which one I mean.
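For concreteness, here is the kind of arithmetic involved, as a quick sketch with numbers I've assumed for illustration (a 70 kg runner at an easy jog, the standard ~1 kcal/kg/km rule of thumb, ~25% muscular efficiency); none of these figures come from Claude or from the post:

```python
# Back-of-the-envelope: metabolic power = mechanical power + heat.
# Muscles are only roughly 20-25% efficient, so the two figures
# differ by about a factor of four; which one you mean matters.

MASS_KG = 70            # assumed runner mass
SPEED_M_S = 3.0         # ~11 km/h, an easy jog
COST_J_PER_KG_M = 4.2   # ~1 kcal/kg/km, a common rule of thumb for running
EFFICIENCY = 0.25       # assumed muscular efficiency

metabolic_w = MASS_KG * SPEED_M_S * COST_J_PER_KG_M  # total metabolic power
mechanical_w = metabolic_w * EFFICIENCY              # useful work output
heat_w = metabolic_w - mechanical_w                  # the rest is heat

print(f"metabolic ~{metabolic_w:.0f} W, "
      f"mechanical ~{mechanical_w:.0f} W, heat ~{heat_w:.0f} W")
```

With these assumptions the metabolic figure lands near 900 W while the mechanical figure is closer to 200 W, which is why being explicit about which sense of "power output" you mean is worth a sentence.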
“There are two unrelated senses of ‘energy conservation’, one being physics, the other being ‘I want to conserve my energy for later’. Is there some different term I can use for the latter?” — Claude had a couple good suggestions; I think I wound up going with “energy preservation”.
“how many centimeters separate the preoptic nucleus of the hypothalamus from the arcuate nucleus?” — Claude didn’t really know but its ballpark number was consistent with what I would have guessed. I think I also googled, and then just to be safe I worded the claim in a pretty vague way. It didn’t really matter much for my larger point in even that one sentence, let alone for the important points in the whole (upcoming) post.
“what’s a typical amount that a 4yo can pick up? what about a national champion weightlifter? I’m interested in the ratio.” — Claude gave an answer and showed its work. Seemed plausible. I was writing this comment, and after reading Claude’s guess I changed a number from “500” to “50”.
“Are there characteristic auditory properties that distinguish the sound of someone talking to me while facing me, versus talking to me while facing a different direction?” — Claude said some things that were marginally helpful. I didn’t wind up saying anything about that in the (upcoming) post.
“what does ‘receiving eye contact’ mean?” — I was trying to figure out if readers would understand what I mean if I wrote that in my (upcoming) post. I thought it was a standard term but had a niggling worry that I had made it up. Claude got the right answer, so I felt marginally more comfortable using that phrase without defining it.
“what’s the name for the psychotic delusion where you’re surprised by motor actions?” — I had a particular thing in mind, but was blanking on the exact word. Claude was pretty confused but after a couple tries it mentioned “delusion of control”, which is what I wanted. (I googled that term afterwards.)
Somewhat following this up: I think not using LLMs is going to be fairly similar to “not using google.” Google results are not automatically true – you have to use your judgment. But, like, it’s kinda silly to not use it as part of your search process.
I do recommend perplexity.ai for people who want an easier time checking up on where the AI got some info (it does a search first and provides citations, while packaging the results in a clearer overall explanation than Google).
I in fact don’t use Google very much these days, and don’t particularly recommend that anyone else do so, either.
(If by “google” you meant “search engines in general”, then that’s a bit different, of course. But then, the analogy here would be to something like “carefully select which LLM products you use, try to minimize their use, avoid the popular ones, and otherwise take all possible steps to ensure that LLMs affect what you see and do as little as possible”.)