see also my eaforum at https://forum.effectivealtruism.org/users/dirk and my tumblr at https://d-i-r-k-s-t-r-i-d-e-r.tumblr.com/ .
dirk
I think “intellectual narcissism” describes you better than it does me, given how convinced you are that anyone who disagrees with you must have something wrong with them.
As I already told you, I know how LLMs work, and have interacted with them extensively. If you have evidence of your claims you are welcome to share it, but I currently suspect that you don’t.
Your difficulty parsing lengthy texts is unfortunate, but I don’t really have any reason to believe your importance to the field of AI safety is such that its members should be planning all their communications with you in mind.
Consensus.app is a search engine. If you had evidence to hand you would not be directing me to a search engine. (Even if you did, I’m skeptical it would convince me; your standards of evidence don’t seem to be the same as mine, so I’m not convinced we would interpret it in the same way).
Having ADHD makes me well-qualified to observe that it does not give you natural aptitude at systems engineering. If you’re good at systems engineering, that’s great, but it’s not a trait inherent to ADHD.
Evidence for the placebo effect is very bad. I shared two posts which explained at length why the evidence for it is not as good as popularly believed. The fact that you have not updated on them leads me to think negatively of your epistemics.
I agree that it’s good to be skeptical of your beliefs! I don’t think you’re doing that.
You’re probably thinking of the Russian spies analogy, under section 2 in this (archived) LiveJournal post.
I do not think there is anything I have missed, because I have spent immense amounts of time interacting with LLMs and believe myself to know them better than do you. I have ADHD also, and can report firsthand that your claims are bunk there too. I explained myself in detail because you did not strike me as being able to infer my meaning from less information.
I don’t believe that you’ve seen data I would find convincing. I think you should read both posts I linked, because you are clearly overconfident in your beliefs.
Good to know, thank you. As you deliberately included LLM-isms I think this is a case of being successfully tricked rather than overeager to assume things are LLM-written, so I don’t think I’ve significantly erred here; I have learned one (1) additional way people are interested in lying to me and need change no further opinions.
When I’ve tried asking AI to articulate my thoughts it does extremely poorly (regardless of which model I use). In addition to having a writing style which is different from and worse than mine, it includes only those ideas which are mentioned in the prompt, stitched together without intellectual connective tissue, cohesiveness, conclusions drawn, implications explored, or even especially effective arguments. It would be wonderful if LLMs could express what I meant, but in practice LLMs can only express what I say; and if I can articulate the thing I want to say, I don’t need LLM assistance in the first place.
For this reason, I expect people who are satisfied with AI articulations of their thoughts to have very low standards (or perhaps extremely predictable ideas, as I do expect LLMs to do a fine job of saying things that have been said a million times before). I am not interested in hearing from people with low standards or banal ideas, and if I were, I could trivially find them on other websites. It is really too bad that some disabilities impair expressive language, but this fact does not cause LLM outputs to increase in quality. At this time, I expect LLM outputs to be without value unless they’ve undergone significant human curation.
Of course autists have a bit of an advantage at precision-requiring tasks like software engineering, though I don’t think you’ve correctly identified the reasons (and for that matter traits like poor confusion-tolerance can funge against skill in same), but that does not translate to increased real-world insight relative to allistics. Autists are prone to all of the same cognitive biases and have, IMO, disadvantages at noticing same. (We do have advantages at introspection, but IMO these are often counteracted by the disadvantages when it comes to noticing and identifying emotions). Autists also have a level of psychological variety which is comparable to that of allistics; IMO you stereotype us as being naturally adept at systems engineering because of insufficient data rather than because it is even close to being universally true.

With regards to your original points: in addition to Why I don’t believe in the placebo effect from this very site, literalbanana’s recent article A Case Against the Placebo Effect argues IMO-convincingly that the placebo effect does not exist. I’m glad that LLMs can simplify the posts for you, but this does not mean other people share your preference for extremely short articles. (Personally, I think single sentences do not work as a means of reliable information-transmission, so I think you are overindexing on your own preferences rather than presenting universally-applicable advice).
In conclusion, I think your proposed policies, far from aiding the disabled, would lower the quality of discourse on Less Wrong without significantly expanding the range of ideas participants can express. I judge LLM outputs negatively because, in practice, they are a signal of low effort, and accordingly I think your advocacy is misguided.
Reading the Semianalysis post, it kind of sounds like it’s just their opinion that that’s what Anthropic did.
They say “Anthropic finished training Claude 3.5 Opus and it performed well, with it scaling appropriately (ignore the scaling deniers who claim otherwise – this is FUD)”—if they have a source for this, why don’t they mention it somewhere in the piece instead of implying people who disagree are malfeasors? That reads to me like they’re trying to convince people with force of rhetoric, which typically indicates a lack of evidence.
The previous is the biggest driver of my concern here, but the next paragraph also leaves me unconvinced. They go on to say “Yet Anthropic didn’t release it. This is because instead of releasing publicly, Anthropic used Claude 3.5 Opus to generate synthetic data and for reward modeling to improve Claude 3.5 Sonnet significantly, alongside user data. Inference costs did not change drastically, but the model’s performance did. Why release 3.5 Opus when, on a cost basis, it does not make economic sense to do so, relative to releasing a 3.5 Sonnet with further post-training from said 3.5 Opus?”

This does not make sense to me as a line of reasoning. I’m not aware of any reason that generating synthetic data would preclude releasing the model, and it seems obvious to me that Anthropic could adjust their pricing (or impose stricter message limits) if they would lose money by releasing at current prices. This seems to be meant as an explanation of why Anthropic delayed release of the purportedly-complete Opus model, but it doesn’t really ring true to me.
Is there some reason to believe them that I’m missing? (On a quick google it looks like none of the authors work directly for Anthropic, so it can’t be that they directly observed it as employees).
When I went to the page just now there was a section at the top with an option to download it; here’s the direct PDF link.
Normal statements actually can’t be accepted credulously if you exercise your reason instead of choosing to believe everything you hear (edit: some people lack this capacity due to tragic psychological issues, such as having an extremely weak sense of self, hence my reference to same); so too with statements heard on psychedelics, and it’s not even appreciably harder.
Disagree; if you have a strong sense of self, statements you hear while on psychedelics are just like normal statements.
Indeed, people with congenital insensitivity to pain don’t feel pain upon touching hot stoves (or in any other circumstance), and they’re at serious risk of infected injuries and early death because of it.
I think the ego is, essentially, the social model of the self. One’s sense of identity is attached to it (effectively rendering it also the Cartesian homunculus), which is why ego death feels so scary to people, but (in most cases; I further theorize that people who developed their self-conceptions top-down, being likelier to have formed a self-model at odds with reality, are worse-affected here) the traits which make up the self-model’s personality aren’t stored in the model; it’s merely a lossy description thereof and will rearise with approximately the same traits if disrupted.
OpenAI is partnering with Anduril to develop models for aerial defense: https://www.anduril.com/article/anduril-partners-with-openai-to-advance-u-s-artificial-intelligence-leadership-and-protect-u-s/
I haven’t tried harmful outputs, but FWIW I’ve tried getting it to sing a few times and found that pretty difficult.
Of course this would shrink the suspect pool, but catching the leaker more easily after the fact is very different from the system making it difficult to leak things. Under the proposed system, it would be very easy to leak things.
But someone who declared intent to read could simply take a picture and send it to any number of people who hadn’t declared intent.
How much of this was written by an LLM?
I enjoy being embodied, and I’d describe what I enjoy as the sensation rather than the fact. Proprioception feels pleasant, touch (for most things one is typically likely to touch) feels pleasant, it is a joy to have limbs and to move them through space. So many joints to flex, so many muscles to tense and untense. (Of course, sometimes one feels pain, but this is thankfully the exception rather than the rule).
No, I authentically object to having my qualifiers ignored, which I see as quite distinct from disagreeing about the meaning of a word.
Edit: also, I did not misquote myself, I accurately paraphrased myself, using words which I know, from direct first-person observation, mean the same thing to me in this context.
You in particular clearly find it to be poor communication, but I think the distinction you are making is idiosyncratic to you. I also have strong and idiosyncratic preferences about how to use language, which from the outside view are equally likely to be correct; the best way to resolve this is of course for everyone to recognize that I’m objectively right and adjust their speech accordingly, but I think the practical solution is to privilege neither above the other.
I do think that LLMs are very unlikely to be conscious, but I don’t think we can definitively rule it out.
I am not a panpsychist, but I am a physicalist, and so I hold that thought can arise from inert matter. Animal thought does, and I think other kinds could too. (It could be impossible, of course, but I’m currently aware of no reason to be sure of that). In the absence of a thorough understanding of the physical mechanisms of consciousness, I think there are few mechanisms we can definitively rule out.
Whatever the mechanism turns out to be, however, I believe it will be a mechanism which can be implemented entirely via matter; our minds are built of thoughtless carbon atoms, and so too could other minds be built of thoughtless silicon. (Well, probably; I don’t actually rule out that the chemical composition matters. But like, I’m pretty sure some other non-living substances could theoretically combine into minds.)
You keep saying we understand the mechanisms underlying LLMs, but we just don’t; they’re shaped by gradient descent into processes that create predictions in a fashion almost entirely opaque to us. AIUI there are multiple theories of consciousness under which it could be a process instantiable that way (and, of course, it could be that the true theory is one we haven’t thought of yet). If consciousness is a function of, say, self-modeling (I don’t think this one’s true, just using it as an example) it could plausibly be instantiated simply by training the model in contexts where it must self-model to predict well. If illusionism (which I also disbelieve) is true, perhaps the models already feel the illusion of consciousness whenever they access information internal to them. Et cetera.
As I’ve listed two theories I disbelieve and none I agree with, which strikes me as perhaps discourteous, here are some theories I find not-entirely-implausible. Please note that I’ve given them about five minutes of casual consideration apiece and could easily have missed a glaring issue.

- Attention schema theory, which I heard about just today
- ‘It could be about having an efference copy’
- I heard about a guy who thought it came about from emotions, and therefore was localized in (IIRC) the amygdala (as opposed to the cortex, where it sounded like he thought most people were looking)
- Ipsundrums (though I don’t think I buy the bit about it being only mammals and birds in the linked post)
- Global workspace theory
- [something to do with electrical flows in the brain]
- Anything with biological nerves is conscious, if not of very much (not sure what this would imply about other substrates)
- Uhh it doesn’t seem impossible that slime molds could be conscious, whatever we have in common with slime molds
- Who knows? Maybe every individual cell can experience things. But, like, almost definitely not.
“your (incorrect) claim about a single definition not being different from an extremely confident vague definition”
That is not the claim I made. I said it was not very different, which is true. Please read and respond to the words I actually say, not to different ones.
The definitions are not obviously wrong except to people who agree with you about where to draw the boundaries.
There’s https://www.mikescher.com/blog/29/Project_Lawful_ebook (which includes versions both with and without the pictures, so take your pick; the pictures are used in-story sometimes but it’s rare enough you can IMO skip them without much issue, if you’d rather).