If there is anything you missed about an LLM’s ability to transform ideas, then everything you just said is bunk. Your concept of this is far too linear, but that’s a common misconception, especially among certain varieties of ’tistics.
But if I may correct you: when I talk about naturally adept systems engineers, I’m talking about the ADHDers, particularly the cases severe enough to get excluded by inefficient communication and unnecessary flourish. You don’t have to believe me. You can rationalize it away with guesses about how much data you think I have. But the reality is, you didn’t look into it. The reality is, it’s a matter of survival, so you’re not going to be able to argue it away. You’re trying to convince a miner that the canary doesn’t die first.
An LLM does far more than “simplify” for me—it translates. I think you transmit information extremely inefficiently and waste TONS of cognitive resources with unnecessary flourish. I also think that’s why this community holds such strong beliefs about intellectual gatekeeping. It’s a terrible system if you think about it, because we’re at a time in history where we can’t afford to waste cognitive resources.
I’m going to assume you’ve heard of Richard Feynman. Probably kind of a jerk in person, but he was famously a master of ELI5.
Try being concise.
It’s harder than it looks. It takes more intelligence than you think, and it conveys the same information more efficiently. Who knows what else you could do with the cognitive resources you free up?
TBH, I’m not really interested in opinions or arguments about the placebo effect. I’m interested in data, and I’ve seen enough of that to invalidate what you just shared. I just can’t remember where I saw it, so you’re going to have to do your own searching. But that’s okay; it’ll be good for your algorithm.
If there were a way of prompting that harnessed the human brain’s natural social instincts to enhance LLM outputs and transform information in unexpected ways, would you want to know?
If everything you thought you knew about the world was gravely wrong, would you want to know?
I do not think there is anything I have missed, because I have spent immense amounts of time interacting with LLMs and believe myself to know them better than you do. I also have ADHD, and can report firsthand that your claims are bunk there too. I explained myself in detail because you did not strike me as being able to infer my meaning from less information.
I don’t believe that you’ve seen data I would find convincing. I think you should read both posts I linked, because you are clearly overconfident in your beliefs.