Note to bounty hunters, since it’s come up twice: An “approximately deterministic function of X” is one where, conditional on X, the entropy of F(X) is very small, though possibly nonzero. I.e., once you learn X, you have very nearly pinned down the value that F(X) takes on. For conceptual intuition on the graphical representation: X approximately mediating between F(X) and F(X) (two random variables which always take on the same value as each other) means, as always, that once one knows the value of X, there is approximately no further update on the value of either copy given the other (despite each being perfectly informative about the other). See this for more.
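In symbols (my compact restatement of the above, using standard conditional entropy and conditional mutual information, with F₁, F₂ the two always-equal copies of F(X)):

$$H(F(X) \mid X) \approx 0 \quad \text{(“approximately deterministic”)}$$

$$I(F_1 ; F_2 \mid X) \approx 0 \quad \text{(“X approximately mediates”)}$$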
Well but also kind of yes? Like agreed with what you said, but also the hypothesis is that there’s a certain kind of depression-manifestation which is somewhat atypical and that we’ve seen bupropion work magic on.
*And that this sounds a lot like that manifestation. So it might be particularly good at giving John (and me, and others) the Wizard spirit back.
Disclaimer: I am not a doctor and this is not medical advice. Do your own research.
In short: I experienced something similar. Garrett and I call it “Rat(ionalist) Depression.” It manifested as something like a loss/lessening of Will To Wizard Power as John uses the term here. Importantly: I wasn’t “sad”, or pessimistic about the future (AI risk aside), or showing most other classical signs of depression; I was considered pretty well emotionally put-together by myself and my friends (throughout, and this has never stopped being true). But at some point, for reasons unclear to me, I became listless. The many projects of a similar flavor to the things John points at above, which I used to do in spades, lost their visceral appeal (though they kept their cognitive/aesthetic/non-visceral appeal, and so compelled me to force myself now and then, to some success but also some discomfort and cognitive dissonance). And it happened gradually, so that it seemed like a natural development over a year or two.
My girlfriend, who is on Bupropion for regular physician-recognized depression, encouraged me to try it just to see. So I did. And it worked.
And it kicks in very quickly. There was a honeymoon phase during the first ~8 days it takes for all of the long-half-life active metabolites to reach equilibrium concentrations, during which I (and others I know) have reported feeling mild euphoria along with the other benefits. After that subsides, it’s a background thing: mostly you look back on your day/week and realize you just got things done, and did more of them. And it’s been consistently helpful ever since (4-6 months for me, ~7 years for my girlfriend, years for some family members, and somewhat less time so far for others I know personally).
Oh and my social battery is way larger. I used to get introvert-exhaustion in a way that ~basically doesn’t happen anymore. Parties are more often fun than not, now.
Further nice-to-haves:
It’s not an SSRI, it’s an NDRI, so it doesn’t do the terrible SSRI things. Side effects may include decreased mental fog, increased libido, decreased appetite, and a renewed will to Wizard Power.
You’ll “feel it” right away (~same day) even though it takes a ~week to settle into equilibrium concentrations (and, anecdotally from others, possibly up to a month to reach its final form?)
It’s fairly easy to get. Go to your psychiatrist and ask for it (XR, extended release, to be taken in the morning), or trade money for time/convenience and go to a site like Nurx.com: if, upon completing their intake survey, they consider you to have mild depression (not severe, or you’ll scare them off), they’ll start mailing you bupropion once a month!
It doesn’t work for literally everyone. If you have bad anxiety, or if you have mania, be warned. But the large handful of people around me who are now on it have reported fast and significant positive effects, including at least one other “Rat Depression” case.
That’s most of the pitch.
🫡 I have pitched him. (Also agreed strongly on point 1. And tentatively agree on your point about the primary bottleneck.)
Sounds plausible. Is that 50% of coding work that the LLMs replace of a particular sort, and the other 50% a distinctly different sort?
My impression is that they are getting consistently better at coding tasks of a kind that would show up in the curriculum of an undergrad CS class, but much more slowly improving at nonstandard or technical tasks.
I do use LLMs for coding assistance every time I code now, and I have in fact noticed improvements in the coding abilities of the new models, but I basically endorse this. I mostly make small asks of the sort that sifting through docs or Stack Overflow would normally answer. When I feel tempted to make big asks of the models, I end up spending more time trying to get the LLMs to get the bugs out than I’d have spent writing it all myself. Having the LLM produce code which is “close but not quite, and possibly buggy, and possibly subtly so,” which I then have to understand and debug, could maybe save time; but I haven’t really tried, because it’s more annoying than just doing it myself.
If someone has experience using LLMs to substantially accelerate things of a similar difficulty/flavor to (a) transpilation of a high-level torch module into a functional, JIT-able form in JAX which produces numerically close outputs, or (b) implementation of a JAX/numpy-based renderer of a traversable grid of lines, borrowing only the window logic from, for example, pyglet (no GLSL calls; rasterize from scratch), with consistent screen-space pixel width and fade-on-distance logic, I’d be interested in seeing how you do your thing. I’ve done both of these, with and without LLM help, and I think leaning hard on the LLMs took me more time rather than less. File I/O and other such ‘mundane’ boilerplate-y tasks work great right off the bat, but getting the details right on less common tasks still seems pretty hard to elicit from LLMs. (And breaking a task down into pieces small enough for them to get right is very time-consuming and unpleasant.)
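For concreteness, here’s a minimal sketch (my illustration for this comment, not the actual project code) of what the target of task (a) looks like: a tiny torch-style module rewritten as a pure function of explicit parameters so that jax.jit can trace and compile it, plus the kind of numerical-closeness check I mean. The module, shapes, and tolerance are all made up for the example.

```python
# Minimal sketch of the torch -> JAX "transpilation" flavor described above.
# Illustrative only: the real modules involved were much larger and hairier.
import jax
import jax.numpy as jnp

def mlp_apply(params, x):
    # Pure function of (params, x): no hidden module state, so it is
    # safely traceable and compilable by jax.jit.
    h = jnp.maximum(params["w1"] @ x + params["b1"], 0.0)  # ReLU
    return params["w2"] @ h + params["b2"]

mlp_apply_jit = jax.jit(mlp_apply)

if __name__ == "__main__":
    k1, k2 = jax.random.split(jax.random.PRNGKey(0))
    params = {
        "w1": jax.random.normal(k1, (16, 8)),
        "b1": jnp.zeros(16),
        "w2": jax.random.normal(k2, (4, 16)),
        "b2": jnp.zeros(4),
    }
    x = jnp.ones(8)
    # "Numerically close": jitted vs. un-jitted outputs (a stand-in for
    # comparing against the original torch module's forward pass).
    assert jnp.allclose(mlp_apply(params, x), mlp_apply_jit(params, x), atol=1e-5)
```

The toy is trivial on purpose; the pain is in doing this faithfully for large modules, which is exactly where leaning on the LLMs cost me time.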
I think that “getting good” at the “free association” game consists in finding the sweet spot / negotiation between full freedom of association and directing toward your own interests, probably ideally with a skew toward what the other person is interested in. If you’re both “free associating” with a bias toward your own interests and an additional skew toward perceived overlap, updating on that understanding along the way, then my experience says you’ll have a good chance of ending up chatting about something that interests you both. (I.e., finding a spot of conversation which becomes much more directed than vibey free association.) Conditional on following something like that strategy, I find it ends up being just a question of your relative+combined ability at this, and the extent of overlap (or lack thereof) in interests.
So short model is: Git gud at free association (+sussing out interests) → gradient ascend yourselves to a more substantial conversation interesting to you both.
wiggitywiggitywact := fact about the world which requires a typical human to cross a large inferential gap.
wact := fact about the world
mact := fact about the mind
aact := fact about the agent more generally
vwact := value assigned by some agent to a fact about the world
Seems accurate to me. This has been an exercise in the initial step(s) of CCC, which indeed consist of “the phenomenon looks this way to me. It also looks that way to others? Cool. What are we all cottoning on to?”
Wait. I thought that was crossing the is-ought gap. As I think of it, the is-ought gap refers to the apparent type-clash and unclear evidential entanglement between facts-about-the-world and values-an-agent-assigns-to-facts-about-the-world. And also as I think of it, “should be” is always shorthand for “should be according to me”, though it possibly means some kind of aggregated thing, which would also ground out in subjective shoulds.
So “how the external world is” does not tell us “how the external world should be” … except insofar as the external world has become causally/logically entangled with a particular agent’s ‘true values’. (Punting on what an agent’s “true values” are, as opposed to the much easier “motivating values” or possibly “estimated true values.” For the purposes of this comment, it’s sufficient to assume that they depend on some readable property (or logical consequence of readable properties) of the agent itself.)
We have at least one jury-rigged idea! Conceptually. Kind of.
Yeeeahhh… But maybe it’s just awkwardly worded rather than deeply confused. Like: “The learned algorithms which an adaptive system implements may not necessarily accept, output, or even internally use data(structures) which have any relationship at all to some external environment.” “Also, what the hell is ‘reference’?”
Seconded. I have extensional ideas about “symbolic representations” and how they differ from… non-representations… but I would not trust this understanding with much weight.
Seconded. Comments above.
Indeed, our beliefs-about-values can be integrated into the same system as all our other beliefs, allowing for e.g. ordinary factual evidence to become relevant to beliefs about values in some cases.
Super unclear to the uninitiated what this means. (And therefore threateningly confusing to our future selves.)
Maybe: “Indeed, we can plug ‘value’ variables into our epistemic models (like, for instance, our models of what brings about reward signals) and update them as a result of non-value-laden facts about the world.”
But clearly the reward signal is not itself our values.
Ahhhh
Maybe: “But presumably the reward signal does not plug directly into the action-decision system.”?
Or: “But intuitively we do not value reward for its own sake.”?
It does seem like humans have some kind of physiological “reward”, in a hand-wavy reinforcement-learning-esque sense, which seems to at least partially drive the subjective valuation of things.
Hrm… If this compresses down to “Humans are clearly compelled at least in part by what ‘feels good’,” then I think it’s fine. If not, then this is an awkward sentence and we should discuss.
I’m very glad you’re in a better place now! It sounds like there was a lot going on for you, and I agree that, in circumstances like yours, bupropion is probably not the right starting point.