mesaoptimizer
My thought is that I don’t see why a pivotal act needs to be that.
Okay. Why do you think Eliezer proposed that, then?
Note that I agree with your sentiment here, although my concrete argument is basically what LawrenceC wrote as a reply to this post.
Ryan, this is kind of a side-note but I notice that you have a very Paul-like approach to arguments and replies on LW.
Two things that come to notice:
You have a tendency to reply to certain posts or comments with “I don’t quite understand what is being said here, and I disagree with it,” or “It doesn’t track with my views,” or equivalent replies that seem not very useful for understanding your object-level arguments. (Although I notice that in the recent comments I see, you usually follow this with some elaboration on your model.)
In the comment I’m replying to, you use a strategy of black-box-like abstraction modeling of a situation to try to argue for a conclusion, one that usually involves numbers such as multipliers or percentages. (I have the impression that Paul uses this a lot; one concrete example that comes to mind is the takeoff speeds essay.) I usually consider such arguments invalid when they seem to throw away information we already have, or seem to use a set of abstractions that don’t feel appropriate to the information I believe we have.
I just found this interesting and plausible enough to highlight to you. It would be a moderate investment of my time to dig up examples from your comment history to illustrate all these instances, but writing this comment still seemed valuable.
This is a really well-written response. I’m pretty impressed by it.
If your acceptable lower limit for basically anything is zero, you won’t be allowed to do anything, really anything. You have to name some quantity of capabilities progress that’s okay to do before you’ll be allowed to talk about AI in a group setting.
Okay, I just read the entire thing. Have you looked at Eric Drexler’s CAIS proposal? It seems to have played some role as a precursor to the davidad / Evan OAA proposal, and it involves the use of composable narrow AI systems.
but I’m a bit disappointed that x-risk-motivated researchers seem to be taking the “safety”/”harm” framing of refusals seriously
I’d say a more charitable interpretation is that it is a useful framing: both in terms of a concrete thing one could use as scaffolding for alignment-as-defined-by-Zack research progress, and also a thing that is financially advantageous to focus on since frontier labs are strongly incentivized to care about this.
Haven’t read the entire post, but my thoughts on seeing the first image: pretty sure this is priced into the Anthropic / Redwood / OpenAI cluster of strategies, where you use an aligned, boxed (or ‘mostly aligned’) generative LLM-style AGI to help you figure out what to do next.
e/acc is not a coherent philosophy and treating it as one means you are fighting shadows.
Landian accelerationism at least is somewhat coherent. “e/acc” is a bundle of memes that support the self-interest of the people supporting and propagating it, both financially (VC money, dreams of making it big) and socially (the non-Beff e/acc vibe is one of optimism and hope and to do things—to engage with the object level—instead of just trying to steer social reality). A more charitable interpretation is that the philosophical roots of “e/acc” are founded upon a frustration with how bad things are, and a desire to improve things by yourself. This is a sentiment I share and empathize with.
I find the term “techno-optimism” to be a more accurate description of the latter, and perhaps “Beff Jezos philosophy” a more accurate description of what you have in mind. I’d use “e/acc” mainly to describe the community and its coordinated attempts at steering the world towards outcomes that the people within the community perceive as benefiting them.
I use GreaterWrong as my front-end to interface with LessWrong, the AlignmentForum, and the EA Forum. It is significantly less distracting and also doesn’t make my ~decade-old laptop scream in agony when multiple LW tabs are open in my browser.
The main part of the issue was actually that I was not aware I had internal conflicts. I just mysteriously felt less emotions and motivation.
Yes, I believe that one can learn to entirely stop even considering certain potential actions as available to oneself. I don’t really have a systematic solution for this right now, aside from some form of Noticing practice (I believe a more refined version of this practice is called Naturalism, but I don’t have much experience with it).
What do you think antidepressants would be useful for?
In my experience, I’ve gone months through a depressive episode while remaining externally functional and convincing myself (and the people around me) that I wasn’t going through a depressive episode. Another thing I’ve noticed is that with medication (whether anxiolytics, antidepressants, or ADHD medication), I regularly underestimate the level at which I was ‘blocked’ by some mental issue; after taking the medication, the block would be gone, and I would only realize it had existed because of the (positive) changes in my behavior and cognition.
Essentially, I’m positing that you may be in a similar situation.
Have you considered antidepressants? I recommend trying them out to see if they help. In my experience, antidepressants can have non-trivial positive effects that can be hard-to-put-into-words, except you can notice the shift in how you think and behave and relate to things, and this shift is one that you might find beneficial.
I also think that slowing down and taking care of yourself can be good—it can help build a generalized skill of noticing the things you didn’t notice before that led to the breaking point you describe.
Here’s an anecdote that might be interesting to you: there’s a core mental shift I made over the past few months that I haven’t tried to elicit and describe to others until now. In essence, it involves an understanding that the kind of self-sacrifice usually involved in working as hard as possible leads to globally unwanted outcomes, not just locally unwanted ones. (Of course, we can talk about hypothetical isolated thought experiments and my feelings might change, but I’m talking about a holistic way of relating to the world here.)
Here’s one argument for this, although I don’t think it captures the entire source of my feelings: when parts of someone are in conflict, and they are regularly rejecting a part of them that wants one thing (creature comforts) to privilege the desires of another part that wants another thing (to work more), I expect their effectiveness in navigating and affecting reality to be lower than if they took the time to integrate the desires and beliefs of the parts in conflict. In extreme circumstances, it makes sense for someone to ‘override’ other parts (which is how I model the fight-flight-freeze-fawn response, for example), but this seems unsustainable and potentially detrimental when it comes to navigating a reality where sense-making is extremely important.
This is a very interesting paper, thanks.
What was the requirement? Seems like this was a deliberate effect instead of a side effect.
which I know you object to
Buck, could you (or habryka) elaborate on this? Would Buck call the set of things that ARC theory and METR (formerly known as ARC evals) do “AI control research”?
My understanding is that while Redwood clearly does control research, METR evals seem more of an attempt to demonstrate dangerous capabilities than help with control. I haven’t wrapped my head around ARC’s research philosophy and output to confidently state anything.
If you haven’t read CEV, I strongly recommend doing so. It resolved some of my confusions about utopia that were unresolved even after reading the Fun Theory sequence.
Specifically, I had an aversion to the idea of being in a utopia because “what’s the point, you’ll have everything you want”. The concrete pictures that Eliezer gestures at in the CEV document do engage with this confusion, and gesture at the idea that we can have a utopia where the AI does not simply make things easy for us, but perhaps just puts guardrails onto our reality, such that we don’t die, for example, but we do have the option to struggle to do things by ourselves.
Yes, the Fun Theory sequence tries to communicate this point, but it didn’t make sense to me until I could conceive of an ASI singleton that could actually simply not help us.
I dropped the book within the first chapter. For one, I found the way Bostrom opened the chapter very defensive and self-conscious. I imagine that even Yudkowsky wouldn’t start a hypothetical 2025 book with fictional characters caricaturing him. Next, I felt like I didn’t really know what the book was covering in terms of subject matter, and I didn’t feel convinced it was interesting enough to continue down the meandering path Nick Bostrom seems to have laid out before me.
Eliezer’s CEV document and the Fun Theory sequence were significantly more pleasant experiences, based on my memory.
we don’t give a shit about morality. Instead, we care about social norms that we can use to shame other people, masquerading under the banner of morality.
I think that’s basically all of moral cognition, actually.
Caring about others to me seems to be entirely separate from moral cognition. (Note that this may be a controversial statement and it is on my to-do list to make a detailed argument for this claim.)
Could you elaborate on how Tata Industries is relevant here? Based on a DDG search, the only news I find involving Tata and AI infrastructure is one where a subsidiary named TCS is supposedly getting into the generative AI gold rush.