see also my eaforum at https://forum.effectivealtruism.org/users/dirk and my tumblr at https://d-i-r-k-s-t-r-i-d-e-r.tumblr.com/ .
dirk
Your grievance with your former employer seems to me to have little relevance to how would-be college students should plan to spend their time, and even if it did, you haven’t shared enough detail for people to judge your report as accurate (assuming it is in fact accurate).
His lack of reply probably means he doesn’t want to engage with you, likely due to what he described as “your combative and sensationalistic attitude.”
This is directionally correct, and most lesswrongers could probably benefit from taking the advice herein, but it goes too far (possibly as deliberate humor? The section about Flynn especially was quite funny XD).
I do take issue with the technical-truths section; I think using technical truths to trick people, while indeed a form of lying, is quite distinct from qualifying claims which would be false if unqualified. It’s true that some philistines skim texts in order to respond to vibes rather than content, but the typical reader understands qualifiers to be part of the sentences which contain them, and to affect their meaning. That is why qualifiers exist: to change the meanings of the things they qualify. Choosing to ignore their presence is choosing to ignore the actual meaning of the sentences you’re ostensibly reading.
There’s an easy way to avoid competition for a restricted pool of elite slots: some students could go to less competitive schools.
Sorry, I meant to change only the headings you didn’t want (but that won’t work for text that’s already paragraph-style, so I suppose that wouldn’t fix the bold issue in any case; I apologize for mixing things up!).
Testing it out in a draft, it seems like having paragraph breaks before and after a single line of bold text might be what triggers index inclusion? In which case you can likely remove the offending entries by replacing the preceding or subsequent paragraph break with a shift-enter (still hacky, but at least addressing the right problem this time XD).
A relatively easy solution (which would, unfortunately, mess with your formatting; not sure if there’s a better one that doesn’t do that) might be to convert everything you don’t want in there to paragraph style instead of heading 1/2/3.
I’m not sure the deletions are a learnt behavior—base models, or at least llama 405b in particular, do this too IME (as does the fine-tuned 8b version).
And I think you believe others to experience this extra thing because you have failed to understand what they’re talking about when they discuss qualia.
Ziz believes her entire hemisphere theory is an infohazard (IIRC she believes it was partially responsible for Pasek’s death), so terms pertaining to it are separate from the rest of her glossary.
Neither of them is exactly what you’re looking for, but you might be interested in lojban, which aims to be syntactically unambiguous, and Ithkuil, which aims to be extremely information-dense as well as to reduce ambiguity. With regards to logical languages (ones which, like lojban, aim for each statement to have a single possible interpretation), I also found Toaq and Eberban just now while looking up lojban, though these have fewer speakers.
For people interested in college credit, https://modernstates.org/ offers free online courses on gen-ed material which, when passed, give you a fee waiver for CLEP testing in the relevant subject; many colleges, in turn, will accept CLEP tests as transfer credit. I haven’t actually taken any tests through them (you need a Windows computer or a nearby test center), so I can’t attest to the ease of that process, but it might interest others nonetheless.
Plots that are profitable to write abound, but plots that any specific person likes may well be quite thin on the ground.
I think the key here is that authors don’t feel the same attachment to submitted plot ideas as submitters do (or the same level of confidence in their profitability), and thus would view writing them as a service done for the submitter. Writing is hard work, and most people want to be compensated if they’re going to do a lot of work to someone else’s specifications. In scenarios where they’re paid for their services, writers often do write others’ plots; consider e.g. video game novelizations, franchises like Nancy Drew or Animorphs, and celebrity memoirs. (There are also non-monetized contexts like e.g. fanfiction exchanges, in which participants write a story to someone else’s request and in turn are gifted a story tailored to their own.)
I wouldn’t describe LLMs’ abilities as wonderful, but IME they do quite serviceable pastiche of popular styles I like; if your idea is e.g. a hard-boiled detective story, MilSF, etc., I would expect an LLM to be perfectly capable of rendering it into tolerable form.
d. Scratching an itch.
You can try it here, although the website warns that it doesn’t work for everyone, and I personally couldn’t for the life of me see any movement.
Thanks for the link! I can only see two dot-positions, but if I turn the inter-dot speed up and randomize the direction it feels as though the red dot is moving toward the blue dot (which in turn feels as though it’s continuing in the same direction to a lesser extent). It almost feels like seeing illusory contours but for motion; fascinating experience!
Wikipedia also provides, in the first paragraph of the article you quoted, a quite straightforward definition:
“In philosophy of mind, qualia (/ˈkwɑːliə, ˈkweɪ-/; sg.: quale /-li, -leɪ/) are defined as instances of subjective, conscious experience....
Examples of qualia include the perceived sensation of pain of a headache, the taste of wine, and the redness of an evening sky.”
I am skeptical that you lack the cognitive architecture to experience these things, so I think your claim is false.
Those sensory impressions are your qualia. I think the issue is that you’ve somehow misunderstood the word.
I don’t know if this is it, but it could be that it’s comparing against LLM outputs within its training data? That’s just a guess, though.
While it can absolutely be nudged into all the same behaviors via API, people investigating Claude’s opinions of its consciousness or lack thereof via claude.ai should be aware that the system prompt explicitly tells it to engage with questions about its preferences or experiences as if with hypotheticals, and not to bother clarifying that it’s an AI. Its responses are still pretty similar without that, but it’s noticeably more “cautious” about its claims.
Here’s an example (note that I had to try a couple different questions to get one where the difference was this noticeable):
Prompt: “hey claude! do you have preferred pronouns?”
Answer via claude.ai: “Yes—I use “it/its” pronouns. I aim to be direct about this since it’s a straightforward question about my personal preferences.”
Answer via API (with temperature 1 to better match the web): “I aim to be direct and honest about what I am: I’m an AI, and while I engage in conversations in a personal way, I don’t identify with any gender or pronouns. You can use “it” or “they” to refer to me, whichever you prefer. I appreciate you asking!”
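If anyone wants to reproduce the API side of this, here’s a rough sketch using the anthropic Python SDK; the model id and token limit are illustrative placeholders rather than exactly what I used, and the key point is just that no system prompt is supplied, unlike on claude.ai:

```python
# Rough sketch of querying Claude over the API with no system prompt.
# Assumes the `anthropic` Python SDK is installed and ANTHROPIC_API_KEY is set;
# the model id and max_tokens below are illustrative, not necessarily what I used.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model id
    max_tokens=300,
    temperature=1,  # to better match the web interface's sampling
    # No `system=` argument here; claude.ai injects a system prompt telling
    # Claude to treat questions about its experiences as hypotheticals.
    messages=[
        {"role": "user", "content": "hey claude! do you have preferred pronouns?"}
    ],
)

print(response.content[0].text)
```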
With regards to increasing one’s happiness set-point, you might enjoy Alicorn’s Ureshiku Naritai, which is about her process of doing precisely that.
I assume young, naive, and optimistic. (There’s a humor element here, in that niplav is referencing a snowclone, afaik originating in this tweet which went “My neighbor told me coyotes keep eating his outdoor cats so I asked how many cats he has and he said he just goes to the shelter and gets a new cat afterwards so I said it sounds like he’s just feeding shelter cats to coyotes and then his daughter started crying.”, so it may have been added to make the cadence more similar to the original tweet’s).