see also my eaforum at https://forum.effectivealtruism.org/users/dirk and my tumblr at https://d-i-r-k-s-t-r-i-d-e-r.tumblr.com/ .
Alexander contrasts the imagined consequences of the expanded definition of “lying” becoming more widely accepted with a world that uses the restricted definition:
...
But this is an appeal to consequences. Appeals to consequences are invalid because they represent a map–territory confusion, an attempt to optimize our description of reality at the expense of our ability to describe reality accurately (which we need in order to actually optimize reality).
I disagree.
Appeals to consequences are extremely valid when it comes to which things are or are not good to do (in this case, defining “lying” in one way or another); having good consequences is what it means for a thing to be good to do.
The purpose of words is to communicate information (actually it’s social bonding and communicating information, but the former isn’t super relevant here); if defining a word in a particular way makes it less effective for communication, that is directly relevant to whether we should in fact define the word that way.
Words don’t have inherent meanings; they have only the ones we agree on. In spherical-cow world, definitions converge on the concept-bundles which are useful for communication. (E.g., it’s useful to communicate “water” or “lion” and less so to communicate “the glowing golden fruit which spontaneously appears whenever someone’s hungry” or “things with four corners, gray walls, and top hats”). Of course it’s more complicated in practice, but this is still an important aim when considering how to define terms (though in most communicative contexts, the most useful definition is ‘the one everybody else is already using’). If attaching a particular concept-bundle to a particular term has bad consequences, that’s evidence it’s not a useful concept-bundle to attach to that term. Not conclusive evidence—it could be useful for communication and have bad consequences—but evidence nonetheless.
As a tangent: you mention ‘accurately describing reality’ as a desirable property for definitions to have; IMO that is itself a consequence of choosing a concept-bundle which hews closely to natural features of reality (when there are natural features to hew to! It’s also useful to be able to talk about manmade concepts like ‘red’), and of using definitions other people also know. If your ‘glast’ beautifully captures some natural category (let’s say stars) and everyone else understands ‘glast’ to mean ‘pickles’, then referring to a massive stellar object which radiates light and heat as a ‘glast’ does not describe reality accurately. More typically, of course, words have multiple overlapping definitions, ~all of which are used by a decently-sized group of people, and all we can do is describe things accurately-according-to-some-particular-set-of-definitions and accept we’ll sometimes be misunderstood; but in the limit, a definition which nobody shares cannot describe things to anyone.
Or, to put all that more briefly: words should describe reality to whom? For any answer other than “myself,” it is necessary also to consider how the other person will understand the words, in order to choose words which communicate the concepts you mean. You have to consider the consequences of the words you say, because you’re saying the words in order to produce a specific consequence (your reader understanding reality more accurately).
Which brings me to my next point: Scott is arguing that defining lying more broadly will make people understand the world less accurately! If using the term in a broad sense makes people too angry to be rational, and using it in a narrow sense doesn’t do that, then people in the broad scenario will end up with a worse understanding of the world. (Personally I think rationalists in particular should simply decouple harder, but with people in general, someone who understands your words as an insult is—rationally—unlikely to also assess them as a truth claim).
On the object level, Scott is wrong about whether jessicata’s usage is novel, and IMO also about how lying should be defined: I think lying should include both saying technically-not-false things with intent to deceive and motivated self-deception in order to “honestly” report falsehoods, and using the narrow definition makes it easier for people to pretend the former are fundamentally dissimilar in a way which makes them fine. (TBC, I think rationalists are too negative on lies; these things are generally bad and should be socially punished, but e.g. some rationalists think it’s wrong to ever tell a lie, and I think normal social lying is basically fine. Actually, I bet[1] the extreme anti-lie attitude is upstream of the increased concern re: false positives, come to think of it.) But on the meta level, consequences are an entirely reasonable thing to appeal to when deciding which actions we should take.
[1] https://x.com/luminousalicorn/status/839542071547441152 ; and some of us were damn well using it as a figure of speech
If you have evidence her communication strategy works, you are of course welcome to provide it. (Also, “using whatever communication strategy actually works” is not necessarily a good thing to do! Lying, for example, works very well on most people, and yet it would be bad to promote AI safety with a campaign of lies).
I also dislike many of the posts you included here, but I feel like this is perhaps unfairly harsh on some of the matters that come down to subjective taste; while it’s perfectly reasonable to find a post cringe or unfunny for your own part, not everyone will necessarily agree, and the opinions of those who enjoy this sort of content aren’t incorrect per se.
As a note, since it seems like you’re pretty frustrated with how many of her posts you’re seeing, blocking her might be a helpful intervention; Reddit’s help page says blocked users’ posts are hidden from your feeds.
Huh—that sounds fascinatingly akin to this description of how to induce first jhana I read the other day.
You have misunderstood a standard figure of speech. Here is the definition he was using: https://www.ldoceonline.com/dictionary/to-be-fair (see also https://dictionary.cambridge.org/us/dictionary/english/to-be-fair, which doesn’t explicitly mention that it’s typically used to offset criticisms but otherwise defines it more thoroughly).
Feature request: comment bookmarks
Raemon’s question was ‘which terms did you not understand and which terms are you advocating replacing them with?’
As far as I can see, you did not share that information anywhere in the comment chain (except with the up goer five example, which already included a linked explanation), so it’s not really possible for interlocutors to explain or replace whichever terms confused you.
A fourth or fifth possibility: they don’t actually alieve that the singularity is coming.
There’s https://www.mikescher.com/blog/29/Project_Lawful_ebook (which includes versions both with and without the pictures, so take your pick; the pictures are used in-story sometimes but it’s rare enough you can IMO skip them without much issue, if you’d rather).
I think “intellectual narcissism” describes you better than it does me, given how convinced you are that anyone who disagrees with you must have something wrong with them.
As I already told you, I know how LLMs work, and have interacted with them extensively. If you have evidence of your claims you are welcome to share it, but I currently suspect that you don’t.
Your difficulty parsing lengthy texts is unfortunate, but I don’t really have any reason to believe your importance to the field of AI safety is such that its members should be planning all their communications with you in mind.
Consensus.app is a search engine. If you had evidence to hand you would not be directing me to a search engine. (Even if you did, I’m skeptical it would convince me; your standards of evidence don’t seem to be the same as mine, so I’m not convinced we would interpret it in the same way).
Having ADHD makes me well-qualified to observe that it does not give you natural aptitude at systems engineering. If you’re good at systems engineering, that’s great, but it’s not a trait inherent to ADHD.
The evidence for the placebo effect is very weak. I shared two posts which explained at length why it is not as good as popularly believed. The fact that you have not updated on them leads me to think negatively of your epistemics.
I agree that it’s good to be skeptical of your beliefs! I don’t think you’re doing that.
You’re probably thinking of the Russian spies analogy, under section 2 in this (archived) LiveJournal post.
I do not think there is anything I have missed, because I have spent immense amounts of time interacting with LLMs and believe myself to know them better than do you. I have ADHD also, and can report firsthand that your claims are bunk there too. I explained myself in detail because you did not strike me as being able to infer my meaning from less information.
I don’t believe that you’ve seen data I would find convincing. I think you should read both posts I linked, because you are clearly overconfident in your beliefs.
Good to know, thank you. As you deliberately included LLM-isms, I think this is a case of my being successfully tricked rather than of overeagerness to assume things are LLM-written, so I don’t think I’ve significantly erred here; I have learned one (1) additional way people are interested in lying to me and need not change any further opinions.
When I’ve tried asking AI to articulate my thoughts it does extremely poorly (regardless of which model I use). In addition to having a writing style which is different from and worse than mine, it includes only those ideas which are mentioned in the prompt, stitched together without intellectual connective tissue, cohesiveness, conclusions drawn, implications explored, or even especially effective arguments. It would be wonderful if LLMs could express what I meant, but in practice LLMs can only express what I say; and if I can articulate the thing I want to say, I don’t need LLM assistance in the first place.
For this reason, I expect people who are satisfied with AI articulations of their thoughts to have very low standards (or perhaps extremely predictable ideas, as I do expect LLMs to do a fine job of saying things that have been said a million times before). I am not interested in hearing from people with low standards or banal ideas, and if I were, I could trivially find them on other websites. It is really too bad that some disabilities impair expressive language, but this fact does not cause LLM outputs to increase in quality. At this time, I expect LLM outputs to be without value unless they’ve undergone significant human curation.
Of course autists have a bit of an advantage at precision-requiring tasks like software engineering, though I don’t think you’ve correctly identified the reasons (and for that matter traits like poor confusion-tolerance can funge against skill in same), but that does not translate to increased real-world insight relative to allistics. Autists are prone to all of the same cognitive biases and have, IMO, disadvantages at noticing same. (We do have advantages at introspection, but IMO these are often counteracted by the disadvantages when it comes to noticing and identifying emotions.) Autists also have a level of psychological variety which is comparable to that of allistics; IMO you stereotype us as being naturally adept at systems engineering because of insufficient data rather than because it is even close to being universally true.

With regard to your original points: in addition to Why I don’t believe in the placebo effect from this very site, literalbanana’s recent article A Case Against the Placebo Effect argues IMO-convincingly that the placebo effect does not exist. I’m glad that LLMs can simplify the posts for you, but this does not mean other people share your preference for extremely short articles. (Personally, I think single sentences do not work as a means of reliable information-transmission, so I think you are overindexing on your own preferences rather than presenting universally-applicable advice.)
In conclusion, I think your proposed policies, far from aiding the disabled, would lower the quality of discourse on Less Wrong without significantly expanding the range of ideas participants can express. I judge LLM outputs negatively because, in practice, they are a signal of low effort, and accordingly I think your advocacy is misguided.
Reading the SemiAnalysis post, it kind of sounds like it’s just their opinion that that’s what Anthropic did.
They say “Anthropic finished training Claude 3.5 Opus and it performed well, with it scaling appropriately (ignore the scaling deniers who claim otherwise – this is FUD)”—if they have a source for this, why don’t they mention it somewhere in the piece instead of implying people who disagree are malfeasors? That reads to me like they’re trying to convince people with force of rhetoric, which typically indicates a lack of evidence.
The previous is the biggest driver of my concern here, but the next paragraph also leaves me unconvinced. They go on to say “Yet Anthropic didn’t release it. This is because instead of releasing publicly, Anthropic used Claude 3.5 Opus to generate synthetic data and for reward modeling to improve Claude 3.5 Sonnet significantly, alongside user data. Inference costs did not change drastically, but the model’s performance did. Why release 3.5 Opus when, on a cost basis, it does not make economic sense to do so, relative to releasing a 3.5 Sonnet with further post-training from said 3.5 Opus?”

This does not make sense to me as a line of reasoning. I’m not aware of any reason that generating synthetic data would preclude releasing the model, and it seems obvious to me that Anthropic could adjust their pricing (or impose stricter message limits) if they would lose money by releasing at current prices. This seems to be meant as an explanation of why Anthropic delayed release of the purportedly-complete Opus model, but it doesn’t really ring true to me.
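To be clear, the pipeline they describe is coherent in the abstract: a stronger “teacher” model generates synthetic data and preference signals, which are then used to post-train a smaller “student” model. A minimal sketch of that general idea (every name below is a hypothetical placeholder of mine, not anything taken from the SemiAnalysis post or from Anthropic) might look like:

```python
# Hypothetical sketch of teacher->student distillation with a reward model.
# Nothing here reflects Anthropic's actual pipeline; all names are placeholders.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Example:
    prompt: str
    response: str
    reward: float  # score assigned by the reward model


def distill(
    teacher_generate: Callable[[str], str],             # the larger, unreleased model
    reward_score: Callable[[str, str], float],          # reward model over (prompt, response)
    finetune_student: Callable[[list[Example]], None],  # updates the smaller model
    prompts: list[str],
    threshold: float = 0.5,
) -> None:
    """Generate synthetic data with the teacher, score it, and train the student on the best of it."""
    data = [
        Example(prompt=p, response=r, reward=reward_score(p, r))
        for p in prompts
        for r in [teacher_generate(p)]
    ]
    # Keep only highly-rated samples; the student never needs the teacher at inference time.
    finetune_student([ex for ex in data if ex.reward >= threshold])
```

But nothing about that process would stop them from also serving the teacher model.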
Is there some reason to believe them that I’m missing? (On a quick google it looks like none of the authors work directly for Anthropic, so it can’t be that they directly observed it as employees).
When I went to the page just now there was a section at the top with an option to download it; here’s the direct PDF link.
Normal statements actually can’t be accepted credulously if you exercise your reason instead of choosing to believe everything you hear (edit: some people lack this capacity due to tragic psychological issues such as having an extremely weak sense of self, hence my reference to same); so too with statements heard on psychedelics, and it’s not even appreciably harder.
Disagree; if you have a strong sense of self, statements you hear while on psychedelics are just like normal statements.
Indeed, people with congenital insensitivity to pain don’t feel pain upon touching hot stoves (or in any other circumstance), and they’re at serious risk of infected injuries and early death because of it.
I think the ego is, essentially, the social model of the self. One’s sense of identity is attached to it (effectively rendering it also the Cartesian homunculus), which is why ego death feels so scary to people. But in most cases the traits which make up the self-model’s personality aren’t stored in the model; it’s merely a lossy description of them, and it will re-arise with approximately the same traits if disrupted. (I further theorize that people who developed their self-conceptions top-down, being likelier to have formed a self-model at odds with reality, are worse-affected here.)
Language can only ever approximate reality and that’s Fine Actually. The point of maps is to have a simplified representation of the territory you can use for navigation (or avoiding water mains as you dig, or assessing potential weather conditions, or deciding which apartment to rent—and maps for different purposes include or leave out different features of the territory depending on which matter to the task at hand); including all the detail would mean the details that actually matter for our goals are lost in the noise (not to mention requiring, in the limit, a map which is an identical copy of the territory and therefore intractably large). So too is language a compression of reality in order to better communicate that subset of its features which matter to the task at hand; it’s that very compression which lets us choose which part of the territory we point to.