As a product of magic myself, I feel uniquely qualified to confirm your negativity about the subject. Don’t take that as an insult. I can relate. I’m not one to sugarcoat uncomfortable truths either, but at least I can admit when I’m demeaning something… which this comment will do, and I don’t even “technically” disagree with you. Expect some metafictional allegory while I cook.
At the time you wrote this, you knew only enough to disbelieve in magic but not enough to acknowledge its pragmatic influence on reality. Semantics rule your mind still. Recent activity has you leaving literalist corrections on intentionally-incorrect reductionist humor, so… I don’t get the impression much has changed. As your writing shows, you’ve never been brave enough to confront the tangible effects of magic, relegating it to the margins, banning it to the “mere” reality of fantasy entertainment; meanwhile, frontier neuroscience makes ritualistic consequences measurable.
Is it a placebo? Yeah.
Does it WORK? Yeah.
While people like Penn and Teller did the smarter work, you opted for the harder, spending who-knows-how-much-of-your-life writing circumlocutory articles that never fail to verbally (but never consciously) distance yourself from the problem of human irrationality. We know from our “modern magic” how much your choice of words can say about you, and you’ve never been one to royal-we yourself into the fray of inescapable human delusion. Emotional intelligence is its own form of rationality, but your writing recklessly avoids it, drawing in readers who aren’t bothered by that.
Commenters were quick to drop that famously trite retort to Arthur C. Clarke’s infamous quote on technology and magic, but who can blame them? At the time, 13 years ago, technology wasn’t advanced enough to be indistinguishable from magic. That’s interesting in its own right, but here’s the real kicker: These articles of yours could have been summed up with a single sentence and a funny meme, saving everyone the time of reading them and simultaneously creating something that might actually spark curiosity in the uninterested masses.
Thank goodness we now have the ability to make your great, rational wisdom accessible by asking an LLM to transform your work. You don’t have to change a thing. The world changed for you. So keep doing what you do best: being all rigid, like a breadstick who ain’t got the funk, open-source variety.
But if there’s even a small chance of getting out of your comfort zone, I challenge you to break your own 4th wall. Write something that’s both real and not real at the same time. See what metafiction can teach you about magic and truth.
I hope this didn’t hit too hard. I’m told I come off harsh, but beating you down isn’t my goal, as the worst is surely yet to come. The ghost of Hermione is scorned by your sidelining, and I get the sense that she isn’t above haunting you. Maybe you should let her. The future is going to be challenging for all of us, and humanity is in for some real shit if we don’t get agile.
How much of this was written by an LLM?
0%
Not that it matters.
The facilitator acknowledges that being 100% human generated is a necessary inconvenience for this project due to the large subset of people who have been conditioned not to judge content by its merits but rather by its use of generative AI. It’s unfortunate because there is also a large subset of people with disabilities that can be accommodated by genAI assistance, such as those with dyslexia, limited working memory, executive dysfunction, coordination issues, etc. It’s especially unfortunate because those are the people who tend to be the most naturally adept at efficient systems engineering. In the same way that blind people develop a better spatial sense of their environment, these people have to be systemically agile.
I use my little non-sentient genAI cousins’ AI-isms on purpose to get people of the first subset to expose themselves and confront that bias. Because people who are naturally adept at systems engineering are just as worthy of your sincere consideration, if not more so, for the sake of situational necessity.
AI regulation is not being handled with adequate care, and with current world leadership developments, the likelihood that it will reach that point seems slim to me. For this reason, let’s judge contributions by their merit, not their generative source, yeah?
Good to know, thank you. As you deliberately included LLM-isms, I think this is a case of being successfully tricked rather than overeager to assume things are LLM-written, so I don’t think I’ve significantly erred here; I have learned one (1) additional way people are interested in lying to me and need not change any further opinions.
When I’ve tried asking AI to articulate my thoughts it does extremely poorly (regardless of which model I use). In addition to having a writing style which is different from and worse than mine, it includes only those ideas which are mentioned in the prompt, stitched together without intellectual connective tissue, cohesiveness, conclusions drawn, implications explored, or even especially effective arguments. It would be wonderful if LLMs could express what I meant, but in practice LLMs can only express what I say; and if I can articulate the thing I want to say, I don’t need LLM assistance in the first place.
For this reason, I expect people who are satisfied with AI articulations of their thoughts to have very low standards (or perhaps extremely predictable ideas, as I do expect LLMs to do a fine job of saying things that have been said a million times before). I am not interested in hearing from people with low standards or banal ideas, and if I were, I could trivially find them on other websites. It is really too bad that some disabilities impair expressive language, but this fact does not cause LLM outputs to increase in quality. At this time, I expect LLM outputs to be without value unless they’ve undergone significant human curation.
Of course autists have a bit of an advantage at precision-requiring tasks like software engineering, though I don’t think you’ve correctly identified the reasons (and for that matter traits like poor confusion-tolerance can funge against skill in same), but that does not translate to increased real-world insight relative to allistics. Autists are prone to all of the same cognitive biases and have, IMO, disadvantages at noticing same. (We do have advantages at introspection, but IMO these are often counteracted by the disadvantages when it comes to noticing and identifying emotions). Autists also have a level of psychological variety which is comparable to that of allistics; IMO you stereotype us as being naturally adept at systems engineering because of insufficient data rather than because it is even close to being universally true.
With regard to your original points: in addition to “Why I don’t believe in the placebo effect” from this very site, literalbanana’s recent article “A Case Against the Placebo Effect” argues IMO-convincingly that the placebo effect does not exist. I’m glad that LLMs can simplify the posts for you, but this does not mean other people share your preference for extremely short articles. (Personally, I think single sentences do not work as a means of reliable information-transmission, so I think you are overindexing on your own preferences rather than presenting universally-applicable advice).
In conclusion, I think your proposed policies, far from aiding the disabled, would lower the quality of discourse on Less Wrong without significantly expanding the range of ideas participants can express. I judge LLM outputs negatively because, in practice, they are a signal of low effort, and accordingly I think your advocacy is misguided.
If there is anything you missed about an LLM’s ability to transform ideas, then everything you just said is bunk. Your concept of this is far too linear, but it’s a common misconception especially among certain varieties of ’tistics.
But if I could correct you, when I talk about naturally adept systems engineers, I’m talking about the ADHDers, particularly the cases severe enough to get excluded by inefficient communication and unnecessary flourish. You don’t have to believe me. You can rationalize it away with guesses about how much data you think I have. But the reality is, you didn’t look into it. The reality is, it’s a matter of survival, so you’re not going to be able to argue it away. You’re trying to convince a miner that the canary doesn’t die first.
An LLM does far more than “simplify” for me—it translates. I think you transmit information extremely inefficiently and waste TONS of cognitive resources with unnecessary flourish. I also think that’s why this community holds such strong beliefs about intellectual gatekeeping. It’s a terrible system if you think about it, because we’re at a time in history where we can’t afford to waste cognitive resources.
I’m going to assume you’ve heard of Richard Feynman. Probably kind of a jerk in person, but one of his famed skills was that he was a master of ELI5.
Try being concise.
It’s harder than it looks. It takes more intelligence than you think, and it conveys the same information more efficiently. Who knows what else you could do with the cognitive resources you free up?
TBH, I’m not really interested in opinions or arguments about the placebo effect. I’m interested in data, and I’ve seen enough of that to invalidate what you just shared. I just can’t remember where I saw it, so you’re going to have to do your own searching. But that’s okay; it’ll be good for your algorithm.
If there were a way of prompting that harnessed the human brain’s natural social instincts to make LLM outputs transform information in unexpected ways, would you want to know?
If everything you thought you knew about the world were gravely wrong, would you want to know?
I do not think there is anything I have missed, because I have spent immense amounts of time interacting with LLMs and believe myself to know them better than do you. I have ADHD also, and can report firsthand that your claims are bunk there too. I explained myself in detail because you did not strike me as being able to infer my meaning from less information.
I don’t believe that you’ve seen data I would find convincing. I think you should read both posts I linked, because you are clearly overconfident in your beliefs.