See also my EA Forum profile at https://forum.effectivealtruism.org/users/dirk and my tumblr at https://d-i-r-k-s-t-r-i-d-e-r.tumblr.com/ .
dirk
You can try it here, although the website warns that it doesn’t work for everyone, and I personally couldn’t for the life of me see any movement.
Thanks for the link! I can only see two dot-positions, but if I turn the inter-dot speed up and randomize the direction it feels as though the red dot is moving toward the blue dot (which in turn feels as though it’s continuing in the same direction to a lesser extent). It almost feels like seeing illusory contours but for motion; fascinating experience!
Wikipedia also provides, in the first paragraph of the article you quoted, a quite straightforward definition:
“In philosophy of mind, qualia (/ˈkwɑːliə, ˈkweɪ-/; sg.: quale /-li, -leɪ/) are defined as instances of subjective, conscious experience....
Examples of qualia include the perceived sensation of pain of a headache, the taste of wine, and the redness of an evening sky.”
I am skeptical that you lack the cognitive architecture to experience these things, so I think your claim is false.
Those sensory impressions are your qualia. I think the issue is that you’ve somehow misunderstood the word.
I don’t know if this is it, but it could be that it’s comparing to LLM outputs within its training data? That’s just a guess, though.
While it can absolutely be nudged into all the same behaviors via API, people investigating Claude’s opinions of its consciousness or lack thereof via claude.ai should be aware that the system prompt explicitly tells it to engage with questions about its preferences or experiences as if with hypotheticals, and not to bother clarifying that it’s an AI. Its responses are still pretty similar without that, but it’s noticeably more “cautious” about its claims.
Here’s an example (note that I had to try a couple different questions to get one where the difference was this noticeable):
Prompt: “hey claude! do you have preferred pronouns?”
Answer via claude.ai: “Yes—I use “it/its” pronouns. I aim to be direct about this since it’s a straightforward question about my personal preferences.”
Answer via API (with temperature 1 to better match the web): “I aim to be direct and honest about what I am: I’m an AI, and while I engage in conversations in a personal way, I don’t identify with any gender or pronouns. You can use “it” or “they” to refer to me, whichever you prefer. I appreciate you asking!”
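For anyone who wants to reproduce the comparison, here’s a minimal sketch of the API side. It assumes the `anthropic` Python package and an `ANTHROPIC_API_KEY` environment variable; the model name is just an illustrative choice, not the one used for the answers above.

```python
# Sketch of querying Claude via the API with temperature=1 (to better
# match the web interface's sampling) and, crucially, NO system prompt --
# the point of the comparison is that the API omits claude.ai's system
# prompt unless you supply one yourself.
import os

request = {
    "model": "claude-3-5-sonnet-20241022",  # illustrative model name (an assumption)
    "max_tokens": 256,
    "temperature": 1,
    "messages": [
        {"role": "user", "content": "hey claude! do you have preferred pronouns?"}
    ],
    # Note: no "system" key here, unlike claude.ai.
}

# Only attempt the network call if a key is actually configured.
if os.environ.get("ANTHROPIC_API_KEY"):
    import anthropic

    client = anthropic.Anthropic()
    response = client.messages.create(**request)
    print(response.content[0].text)
```

Since sampling is at temperature 1, expect the wording to vary run to run; the difference of interest is in how hedged the claims are, not the exact text.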
With regards to increasing one’s happiness set-point, you might enjoy Alicorn’s Ureshiku Naritai, which is about her process of doing precisely that.
Language can only ever approximate reality and that’s Fine Actually. The point of maps is to have a simplified representation of the territory you can use for navigation (or avoiding water mains as you dig, or assessing potential weather conditions, or deciding which apartment to rent—and maps for different purposes include or leave out different features of the territory depending on which matter to the task at hand); including all the detail would mean the details that actually matter for our goals are lost in the noise (not to mention requiring, in the limit, a map which is an identical copy of the territory and therefore intractably large). So too is language a compression of reality in order to better communicate that subset of its features which matter to the task at hand; it’s that very compression which lets us choose which part of the territory we point to.
Alexander contrasts the imagined consequences of the expanded definition of “lying” becoming more widely accepted with those in a world that uses the restricted definition:
...
But this is an appeal to consequences. Appeals to consequences are invalid because they represent a map–territory confusion, an attempt to optimize our description of reality at the expense of our ability to describe reality accurately (which we need in order to actually optimize reality).
I disagree.
Appeals to consequences are extremely valid when it comes to which things are or are not good to do (in this case, defining “lying” in one way or another); having good consequences is what it means for a thing to be good to do.
The purpose of words is to communicate information (actually it’s social bonding and communicating information, but the former isn’t super relevant here); if defining a word in a particular way makes it less effective for communication, that is directly relevant to whether we should in fact define the word that way.
Words don’t have inherent meanings; they have only the ones we agree on. In spherical-cow world, definitions converge on the concept-bundles which are useful for communication. (E.g., it’s useful to communicate “water” or “lion” and less so to communicate “the glowing golden fruit which spontaneously appears whenever someone’s hungry” or “things with four corners, gray walls, and top hats”). Of course it’s more complicated in practice, but this is still an important aim when considering how to define terms (though in most communicative contexts, the most useful definition is ‘the one everybody else is already using’). If attaching a particular concept-bundle to a particular term has bad consequences, that’s evidence it’s not a useful concept-bundle to attach to that term. Not conclusive evidence—it could be useful for communication and have bad consequences—but evidence nonetheless.
As a tangent: you mention ‘accurately describing reality’ as a desirable property for definitions to have; IMO that is itself a consequence of choosing a concept-bundle which hews closely to natural features of reality (when there are natural features to hew to! It’s also useful to be able to talk about manmade concepts like ‘red’). It’s also a consequence of using definitions other people know; if your ‘glast’ beautifully captures some natural category (uhhh let’s say stars) and everyone else understands ‘glast’ to mean ‘pickles’, then referring to a massive stellar object which radiates light and heat as a ‘glast’ does not describe reality accurately. More typically of course words have multiple overlapping definitions ~all of which are used by a decently-sized group of people, and all we can do is describe things accurately-according-to-some-particular-set-of-definitions and accept we’ll be misunderstood, but, in the limit, a definition which nobody shares cannot describe things to anyone.
Or, to put all that in what might be shorter terms: words should describe reality to whom? For any answer other than “myself,” it is necessary also to consider how the other person will understand the words in order to choose words which communicate those concepts which you mean. You have to consider the consequences of the words you say, because you’re saying the words in order to produce a specific consequence (your reader understanding reality more accurately).
Which brings me to my next point: Scott is arguing that defining lying more broadly will make people understand the world less accurately! If using the term in a broad sense makes people too angry to be rational, and using it in a narrow sense doesn’t do that, then people in the broad scenario will end up with a worse understanding of the world. (Personally I think rationalists in particular should simply decouple harder, but with people in general, someone who understands your words as an insult is—rationally—unlikely to also assess them as a truth claim).
On the object level, Scott is wrong about whether jessicata’s usage is novel, and IMO also about how lying should be defined: I think lying should include both saying things that are technically not false with intent to deceive, and motivated self-deception in order to “honestly” report falsehoods; using the narrow definition makes it easier for people to pretend the former are fundamentally dissimilar in a way which makes them fine. (TBC, I think rationalists are too negative on lies; these things are generally bad and should be socially punished, but e.g. some rationalists think it’s wrong to ever tell a lie, and I think normal social lying is basically fine. Actually, I bet[1] the extreme anti-lie attitude is upstream of the increased concern re: false positives, come to think of it.) But on the meta level, consequences are an entirely reasonable thing to appeal to when deciding which actions we should take.
[1] https://x.com/luminousalicorn/status/839542071547441152 ; and some of us were damn well using it as a figure of speech.
If you have evidence her communication strategy works, you are of course welcome to provide it. (Also, “using whatever communication strategy actually works” is not necessarily a good thing to do! Lying, for example, works very well on most people, and yet it would be bad to promote AI safety with a campaign of lies).
I also dislike many of the posts you included here, but I feel like this is perhaps unfairly harsh on some of the matters that come down to subjective taste; while it’s perfectly reasonable to find a post cringe or unfunny for your own part, not everyone will necessarily agree, and the opinions of those who enjoy this sort of content aren’t incorrect per se.
As a note, since it seems like you’re pretty frustrated with how many of her posts you’re seeing, blocking her might be a helpful intervention; Reddit’s help page says blocked users’ posts are hidden from your feeds.
Huh—that sounds fascinatingly akin to this description of how to induce first jhana I read the other day.
You have misunderstood a standard figure of speech. Here is the definition he was using: https://www.ldoceonline.com/dictionary/to-be-fair (see also https://dictionary.cambridge.org/us/dictionary/english/to-be-fair, which doesn’t explicitly mention that it’s typically used to offset criticisms but otherwise defines it more thoroughly).
Feature request: comment bookmarks
Raemon’s question was ‘which terms did you not understand and which terms are you advocating replacing them with?’
As far as I can see, you did not share that information anywhere in the comment chain (except with the up goer five example, which already included a linked explanation), so it’s not really possible for interlocutors to explain or replace whichever terms confused you.
A fourth or fifth possibility: they don’t actually alieve that the singularity is coming.
There’s https://www.mikescher.com/blog/29/Project_Lawful_ebook (which includes versions both with and without the pictures, so take your pick; the pictures are used in-story sometimes but it’s rare enough you can IMO skip them without much issue, if you’d rather).
I think “intellectual narcissism” describes you better than me, given how convinced you are that anyone who disagrees with you must have something wrong with them.
As I already told you, I know how LLMs work, and have interacted with them extensively. If you have evidence of your claims you are welcome to share it, but I currently suspect that you don’t.
Your difficulty parsing lengthy texts is unfortunate, but I don’t really have any reason to believe your importance to the field of AI safety is such that its members should be planning all their communications with you in mind.
Consensus.app is a search engine. If you had evidence to hand you would not be directing me to a search engine. (Even if you did, I’m skeptical it would convince me; your standards of evidence don’t seem to be the same as mine, so I’m not convinced we would interpret it in the same way).
Having ADHD makes me well-qualified to observe that it does not give you natural aptitude at systems engineering. If you’re good at systems engineering, that’s great, but it’s not a trait inherent to ADHD.
The evidence for the placebo effect is very bad. I shared two posts which explain at length why it is not as good as popularly believed. The fact that you have not updated on them leads me to think negatively of your epistemics.
I agree that it’s good to be skeptical of your beliefs! I don’t think you’re doing that.
You’re probably thinking of the Russian spies analogy, under section 2 in this (archived) livejournal post.
I do not think there is anything I have missed, because I have spent immense amounts of time interacting with LLMs and believe myself to know them better than do you. I have ADHD also, and can report firsthand that your claims are bunk there too. I explained myself in detail because you did not strike me as being able to infer my meaning from less information.
I don’t believe that you’ve seen data I would find convincing. I think you should read both posts I linked, because you are clearly overconfident in your beliefs.
d. Scratching an itch.