I don’t doubt that LLMs could do this, but has this exact thing actually been done somewhere?
Adele Lopez
The “one weird trick” to getting the right answers is to discard all stuck, fixed points. Discard all priors and posteriors. Discard all aliefs and beliefs. Discard worldview after worldview. Discard perspective. Discard unity. Discard separation. Discard conceptuality. Discard map, discard territory. Discard past, present, and future. Discard a sense of you. Discard a sense of world. Discard dichotomy and trichotomy. Discard vague senses of wishy-washy flip floppiness. Discard something vs nothing. Discard one vs all. Discard symbols, discard signs, discard waves, discard particles.
All of these things are Ignorance. Discard Ignorance.
Is this the same principle as “non-attachment”?
Make a letter addressed to Governor Newsom using the template here.
For convenience, here is the template:
September [DATE], 2024
The Honorable Gavin Newsom
Governor, State of California
State Capitol, Suite 1173
Sacramento, CA 95814
Via leg.unit@gov.ca.gov
Re: SB 1047 (Wiener) – Safe and Secure Innovation for Frontier Artificial Intelligence Models Act – Request for Signature
Dear Governor Newsom,
[CUSTOM LETTER BODY GOES HERE. Consider mentioning:
Where you live (this is useful even if you don’t live in California)
Why you care about SB 1047
What it would mean to you if Governor Newsom signed SB 1047
SAVE THIS DOCUMENT AS A PDF AND EMAIL TO leg.unit@gov.ca.gov
]
Sincerely,
[YOUR NAME]
This matches my memory as well.
I have no idea, but I wouldn’t be at all surprised if it’s a mainstream position.
My thinking is that long-term memory requires long-term preservation of information, and evolution “prefers” to repurpose things rather than starting from scratch. And what do you know, there’s this robust and effective infrastructure for storing and replicating information just sitting there in the middle of each neuron!
The main problem is writing new information. But apparently, there’s a protein evolved from a retrotransposon (genetic elements that copy themselves into new spots in the genome, using machinery similar to what retroviruses use to insert their RNA into a host’s DNA) which is important to long-term memory!
And I’ve since learned of an experiment with snails which also suggests this possibility. Based on that article, it looks like this is maybe a relatively new line of thinking.
It’s good news for cryonics if this is the primary way long term memories are stored, since we “freeze” sperm and eggs all the time, and they still work.
Do you know if fluid preservation preserves the DNA of individual neurons?
(DNA is on my shortlist of candidates for where long-term memories are stored)
Consider finding a way to integrate Patreon or similar services into the LW UI then. That would go a long way towards making it feel like a more socially acceptable thing to do, I think.
Yeah, that’s not what I’m suggesting. The thing I want to encourage is basically just being more reflective, on the margin, about disgust-based reactions (when they concern other people). I agree it would be bad to throw disgust out unilaterally, and that it’s probably not a good idea for most people to silence or ignore it. At the same time, I think it’s good to treat appeals to disgust with suspicion in moral debates, which was the main point I was trying to make. Disgust in particular seems to be a more “contagious” emotion, for reasons that make sense in the context of infectious diseases but usually not beyond it, which makes appeals to it more “dark arts-y”.
As far as the more object-level debate on whether disgust is important for things like epistemic hygiene, I expect it to be somewhere where people will vary, so I think we probably agree here too.
I meant wrong in the sense of universal human morality (to the extent that’s a coherent thing). But yes, on an individual level your values are just your values.
I see that stuff as, at best, an unfortunate crutch for living in a harsher world, and otherwise as a blemish on morality. I agree that it is a major part of what many people consider to be morality, but I think people who still consider it important are just straightforwardly wrong.
I don’t think disgust is important for logic / reflectivity. Personally, it feels more like an “unsatisfactory” feeling. A bowl with a large crack and a bowl with mold in it are both unsatisfactory in this sense, but only the latter is disgusting. Additionally, people who are good at logic/math/precise thinking seem to care less about disgust (as morality), and highly reflective people seem to care even less about it.
ETA: Which isn’t to say I’d be surprised if some people do use their disgust instinct for logical/reflective reasoning. I just think that if we lived in the world where that was the main thing going on, people good at that kind of stuff would tend to be more bigoted (in a reflectively endorsed way), and religious fundamentalism would not be as strong an attractor as it apparently is.
That doesn’t seem right to me. My thinking is that disgust comes from the need to avoid things which cause and spread illness. On the other hand, things I consider more central to morality seem to have evolved for different needs [these are just off-the-cuff speculations for the origins]:
Love—seems to be generalized from parental nurturing instincts, which address the need to ensure your offspring thrive
Friendliness—seems to have stemmed from the basic fact that cooperation is beneficial
Empathy—seems to be a side-effect of the way our brains model conspecifics (the easiest way to model someone else is to emulate them with your own brain, which happens to make you feel things)
These all seem to be part of a Cooperation attractor which is where the pressure to generalize/keep these instincts comes from. I think of the Logic/reflectivity stuff as noticing this and developing it further.
Disgust seems unsavory to me because it dampens each of the above feelings (including making the logic/reflectivity stuff more difficult). That’s not to say I think it’s completely absent from human morality; it just doesn’t seem like the place morality comes from.
(As far as Enforcement goes, it seems like Anger and Fear are much more important than Disgust.)
This is fascinating and I would love to hear about anything else you know of a similar flavor.
Caloric Vestibular Stimulation seems to be of a similar flavor, in case you haven’t heard of it.
It decreases the granularity of the actions to which it applies. In other words, where before you had to solve a Sudoku puzzle to go to work, now you’ve got to solve a puzzle to get dressed, a puzzle to get in the car, a puzzle to drive, and a puzzle to actually get started working. Before all of those counted as a single action - ‘go to work’ - now they’re counted separately, as discrete steps, and each requires a puzzle.
This resonates strongly with my experience, though when I noticed this pattern I thought of it as part of my ADHD and not my depression. Maybe this is something like the mechanism via which ADHD causes depression.
Anyway, I’ve had mild success at improving productivity simply by trying to deliberately think of possible actions in coarser chunks. Plausibly this technique can be refined and improved–which I’d love to hear about if anyone figures this out!
I imagine some of it is due to this part of the blog post UI making people feel like they might as well use some quickly generated images as an easy way to boost engagement. Perhaps worth rewording?
When I’m trying to understand a math concept, I find that it can be very helpful to try to invent a better notation for it. (As an example, this is how I learned linear logic: http://adelelopez.com/visual-linear-logic)
I think this is helpful because it gives me something to optimize for in what would otherwise be a somewhat rote and often tedious activity. I also think it makes me engage more deeply with the problem than I otherwise would, simply because I find it more interesting. (And sometimes, I even get a cool new notation from it!)
This principle likely generalizes: tedious activities can be made more fun and interesting by having something to optimize for.
Thanks for the rec! I’ve been trying it out for the last few days, and it does seem to have noticeably less friction compared to LaTeX.
Sanskrit scholars worked for generations to make Sanskrit better for philosophy
That sounds interesting, do you know a good place to get an overview of what the changes were and how they approached it?
(To be clear, no I am not at all afraid of this specific thing, but the principle is crucial. But also, as Kevin Roose put it, perhaps let’s avoid this sort of thing.)
There are no doubt people already running literal cartoon supervillain characters on these models, given the popularity of these sorts of characters on character.ai.
I’m not worried about that with Llama-3.1-405B, but I believe this is an almost inevitable consequence of open source weights. Another reason not to do it.
What do we do, if the people would not choose The Good, and instead pick a universe with no value?
I agree this would be a pretty depressing outcome, but the experiences themselves still have quite a bit of value.
Well, I’m very forgetful, and I notice that I do happen to be myself so… :p
But yeah, I’ve bitten this bullet too, in my case, as a way to avoid the Boltzmann brain problem. (Roughly: “you” includes lots of information generated by a lawful universe. Any specific branch has small measure, but if you aggregate over all the places where “you” exist (say your exact brain state, though the real thing that counts might be more or less broad than this), you get more substantial measure from all the simple lawful universes that only needed 10^X coincidences to make you instead of the 10^Y coincidences required for you to be a Boltzmann brain.)
I think that what anthropically “counts” is most likely somewhere between conscious experience (I’ve woken up as myself after anesthesia), and exact state of brain in local spacetime (I doubt thermal fluctuations or path dependence matter for being “me”).