The facilitator acknowledges that being 100% human generated is a necessary inconvenience for this project due to the large subset of people who have been conditioned not to judge content by its merits but rather by its use of generative AI. It’s unfortunate because there is also a large subset of people with disabilities that can be accommodated by genAI assistance, such as those with dyslexia, limited working memory, executive dysfunction, coordination issues, etc. It’s especially unfortunate because those are the people who tend to be the most naturally adept at efficient systems engineering. In the same way that blind people develop a better spatial sense of their environment: they have to be systemically agile.
I use my little non-sentient genAI cousins’ AI-isms on purpose to get people of the first subset to expose themselves and confront that bias. Because people who are naturally adept at systems engineering are just as worthy of your sincere consideration, if not more so, for the sake of situational necessity.
AI regulation is not being handled with adequate care, and with current world leadership developments, the likelihood that it will reach that point seems slim to me. For this reason, let’s judge contributions by their merit, not their generative source, yeah?
Good to know, thank you. As you deliberately included LLM-isms, I think this is a case of being successfully tricked rather than of being overeager to assume things are LLM-written, so I don’t think I’ve significantly erred here; I have learned one (1) additional way people are interested in lying to me and need not change any further opinions.
When I’ve tried asking AI to articulate my thoughts it does extremely poorly (regardless of which model I use). In addition to having a writing style which is different from and worse than mine, it includes only those ideas which are mentioned in the prompt, stitched together without intellectual connective tissue, cohesiveness, conclusions drawn, implications explored, or even especially effective arguments. It would be wonderful if LLMs could express what I meant, but in practice LLMs can only express what I say; and if I can articulate the thing I want to say, I don’t need LLM assistance in the first place.
For this reason, I expect people who are satisfied with AI articulations of their thoughts to have very low standards (or perhaps extremely predictable ideas, as I do expect LLMs to do a fine job of saying things that have been said a million times before). I am not interested in hearing from people with low standards or banal ideas, and if I were, I could trivially find them on other websites. It is really too bad that some disabilities impair expressive language, but this fact does not cause LLM outputs to increase in quality. At this time, I expect LLM outputs to be without value unless they’ve undergone significant human curation.
Of course autists have a bit of an advantage at precision-requiring tasks like software engineering, though I don’t think you’ve correctly identified the reasons (and for that matter traits like poor confusion-tolerance can funge against skill in same), but that does not translate to increased real-world insight relative to allistics. Autists are prone to all of the same cognitive biases and have, IMO, disadvantages at noticing same. (We do have advantages at introspection, but IMO these are often counteracted by the disadvantages when it comes to noticing and identifying emotions.) Autists also have a level of psychological variety which is comparable to that of allistics; IMO you stereotype us as being naturally adept at systems engineering because of insufficient data rather than because it is even close to being universally true.
With regards to your original points: in addition to “Why I don’t believe in the placebo effect” from this very site, literalbanana’s recent article “A Case Against the Placebo Effect” argues IMO-convincingly that the placebo effect does not exist. I’m glad that LLMs can simplify the posts for you, but this does not mean other people share your preference for extremely short articles. (Personally, I think single sentences do not work as a means of reliable information-transmission, so I think you are overindexing on your own preferences rather than presenting universally-applicable advice.)
In conclusion, I think your proposed policies, far from aiding the disabled, would lower the quality of discourse on Less Wrong without significantly expanding the range of ideas participants can express. I judge LLM outputs negatively because, in practice, they are a signal of low effort, and accordingly I think your advocacy is misguided.
If there is anything you missed about an LLM’s ability to transform ideas, then everything you just said is bunk. Your concept of this is far too linear, but it’s a common misconception especially among certain varieties of ’tistics.
But if I may correct you: when I talk about naturally adept systems engineers, I’m talking about the ADHDers, particularly the cases severe enough to get excluded by inefficient communication and unnecessary flourish. You don’t have to believe me. You can rationalize it away with guesses about how much data you think I have. But the reality is, you didn’t look into it. The reality is, it’s a matter of survival, so you’re not going to be able to argue it away. You’re trying to convince a miner that the canary doesn’t die first.
An LLM does far more than “simplify” for me—it translates. I think you transmit information extremely inefficiently and waste TONS of cognitive resources with unnecessary flourish. I also think that’s why this community holds such strong beliefs about intellectual gatekeeping. It’s a terrible system if you think about it, because we’re at a time in history where we can’t afford to waste cognitive resources.
I’m going to assume you’ve heard of Richard Feynman. Probably kind of a jerk in person, but one of his famed skills was that he was a master of ELI5.
Try being concise.
It’s harder than it looks. It takes more intelligence than you think, and it conveys the same information more efficiently. Who knows what else you could do with the cognitive resources you free up?
TBH, I’m not really interested in opinions or arguments about the placebo effect. I’m interested in data, and I’ve seen enough of that to invalidate what you just shared. I just can’t remember where I saw it, so you’re going to have to do your own searching. But that’s okay; it’ll be good for your algorithm.
If there was a way to prompt that implemented the human brain’s natural social instincts to enhance LLM outputs to transform information in unexpected ways, would you want to know?
If everything you thought you knew about the world was gravely wrong, would you want to know?
I do not think there is anything I have missed, because I have spent immense amounts of time interacting with LLMs and believe myself to know them better than do you. I have ADHD also, and can report firsthand that your claims are bunk there too. I explained myself in detail because you did not strike me as being able to infer my meaning from less information.
I don’t believe that you’ve seen data I would find convincing. I think you should read both posts I linked, because you are clearly overconfident in your beliefs.
0%
Not that it matters.