Here you go: https://chatgpt.com/share/67b31788-32b0-8013-8bbf-a4100abf0457
Jay Bailey
I bought a month of Deep Research and am open to running queries if people have a few but don’t want to spend 200 bucks for them. Will spend up to 25 queries in total.
A paragraph or two of detail is good—you can send me supporting documents via wnlonvyrlpf@tznvy.pbz (ROT13) if you want. Offer is open publicly or via PM.
Having reflected on this decision more, I have decided I no longer endorse those feelings in point B of my second-to-last paragraph. In fact, I’ve decided that “I donated roughly 1k to a website that provided way more expected value than that to me over my lifetime, and also if it shut down I think that would be a major blow to one of the most important causes in the world” is something to be proud of, not embarrassed by, and something worthy of being occasionally reminded of.
So if you’re still sending them out I’d gladly take one after all :)
I’ve been procrastinating on this, but I heard it was the last day to do this, so here I am. I’ve utilised LessWrong for years, but am also a notoriously cheap bastard. I’m working on this. That said, I feel I should pay something back, for what I’ve gotten out of it.
When I was 20 or so, I was rather directionless, and didn’t know what I wanted to do in life, bouncing between ideas, never finishing them. I was reading LessWrong at the time. At some point, a LessWrong-ism popped into my head—“Jay—this thing you’re doing isn’t working. Your interests change faster than you can commit to a career. Therefore, you need a career strategy that does not rely on your interests.” This last sentence definitely would not have occurred to me without LessWrong. It felt like a qualitative shift in thinking, that I had finally truly learned a new pattern. Nowadays it seems obvious, and it would be obvious to many of my friends...but back then, I remember that flash of insight, and I’ve never forgotten it.
I came up with a series of desiderata—something I’d be good at, not hate, and get to work indoors for a reasonable salary. I decided to be an accountant, which is evidence for this whole “One-shot the problem” thing being hard, but wisely pivoted into pursuing a software engineering degree a year later.
While EA was what got me into AI safety, even ignoring the effect LessWrong has had on EA, the skills I decided to learn thanks to LessWrong principles are potentially the only reason I have much of a say in the future at all. Not to mention I’ve made a pretty solid amount of money out of it.
Considering the amount of value I’ve gotten out of LessWrong, I’m far too cheap to donate an amount that would be truly “fair”, but I wanted to donate a solid amount anyway—an amount that at least justifies the years of use I’ve gotten out of the site. I talked myself into donating $1,000, but then I realised that A) I didn’t want a shirt to affect my donation decisions, and B) I’d be a bit embarrassed to have a shirt that symbolises how I donated four figures to a website that has helped me think good. I feel like I’ll forget the money easily once I donate it, and it won’t affect my day to day life at all. Unless, of course, I have a physical reminder of it.
Thus, I have donated $999 USD to the cause.
Hi Giorgi,
Not an expert on this, but I believe the idea is that over time the agent will learn to assign negligible probabilities to actions that don’t do anything. For instance, imagine a game where the agent can move in four directions, but if there’s a wall in front of it, moving forward does nothing. The agent will eventually learn to stop moving forward in this circumstance. So you could probably make it work, even if it’s a bit less efficient, by having the environment do nothing when an invalid action is selected.
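As a minimal sketch of the “invalid actions are no-ops” idea (the gridworld, action encoding, and all names here are illustrative assumptions, not any particular library’s API):

```python
# Toy gridworld where walking into a wall simply does nothing.
# An agent trained with e.g. Q-learning gets no reward for these wasted
# steps, so the learned policy assigns them vanishing probability.

GRID_W, GRID_H = 4, 4
MOVES = {0: (0, 1), 1: (0, -1), 2: (1, 0), 3: (-1, 0)}  # up, down, right, left

def step(pos, action):
    """Apply a move; if it would leave the grid (an invalid action), do nothing."""
    dx, dy = MOVES[action]
    nx, ny = pos[0] + dx, pos[1] + dy
    if 0 <= nx < GRID_W and 0 <= ny < GRID_H:
        return (nx, ny)
    return pos  # invalid action: the environment is unchanged

print(step((0, 0), 2))  # valid move right -> (1, 0)
print(step((0, 0), 3))  # blocked by the wall -> still (0, 0)
```

The agent still has to burn some exploration steps discovering which actions are no-ops, which is the efficiency cost mentioned above.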
Thanks for this! I’ve changed the sentence to:
The target network gets to see one more step than the Q-network does, and thus is a better predictor.
Hopefully this prevents others from the same confusion :)
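To make the sentence concrete, here is a hedged sketch of the standard one-step DQN target; the function and argument names are illustrative, and `next_q_values` stands in for the target network’s output on the *next* state, one step further along than the state the Q-network is being trained on:

```python
# One-step TD target as used in DQN. The target network evaluates the
# successor state, so it "sees one more step" of real reward than the
# Q-network whose prediction is being trained toward this value.

def td_target(reward, next_q_values, done, gamma=0.99):
    if done:
        return reward  # no successor state to bootstrap from
    return reward + gamma * max(next_q_values)

print(td_target(1.0, [1.0, 2.0], done=False, gamma=0.5))  # 1.0 + 0.5 * 2.0 = 2.0
print(td_target(1.0, [1.0, 2.0], done=True))              # terminal: just 1.0
```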
pandas is a good library for this—it takes CSV files and turns them into Python objects you can manipulate. plotly / matplotlib let you visualise data, which is also useful. GPT-4 / Claude could help you with this. I would recommend starting by getting a language model to help you create plots of the data according to relevant subsets. For example, if you think that the season matters for how much gold is collected, give the model a couple of examples of the data format and simply ask it to write a script to plot gold per season.
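A sketch of what that workflow looks like; the `season` and `gold` column names are invented for illustration (in practice you’d load your real file with `pd.read_csv`):

```python
import pandas as pd

# Toy stand-in for a CSV loaded via pd.read_csv("data.csv");
# the column names here are assumptions for illustration.
df = pd.DataFrame({
    "season": ["winter", "winter", "summer", "spring"],
    "gold":   [10, 5, 7, 3],
})

# Total gold collected per season.
gold_per_season = df.groupby("season")["gold"].sum()
print(gold_per_season)

# With matplotlib installed, one more line gives you the plot:
# gold_per_season.plot(kind="bar")
```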
To provide the obvious advice first:
1. Attempt a puzzle.
2. If you didn’t get the answer, check the comments of those who did.
3. Ask yourself how you could have thought of that, or what common principle that answer has (e.g., “I should check for X and Y”).
4. Repeat.
I assume you have some programming experience here—if not, that seems like a prerequisite to learn. Or maybe you can get away with using LLM’s to write the Python for you.
I don’t know about the first one—I think you’ll have to analyse each job and decide about that. I suspect the answer to your second question is “basically nil”. Unless you are working on state-of-the-art advances in A) frontier models or B) agent scaffolds (maybe), you are not speeding up the knowledge required to automate people.
I guess my way of thinking of it is—you can automate tasks, jobs, or people.
Automating tasks seems probably good. You’re able to remove busywork from people, but their job comprises many more things than that task, so people aren’t at risk of losing their jobs. (Unless you only need 10 units of productivity, and each person is now producing 1.25 units, so you end up with 8 people instead of 10 - but a lot of teams could also put 12.5 units of productivity to good use.)
Automating jobs is...contentious. It’s basically the tradeoff I talked about above.
Automating people is bad right now. Not only are you eliminating someone’s job, you’re eliminating most other things this person could do at all. This person has had society pass them by, and I think we should either not do that or make sure this person still has sufficient resources and social value to thrive in society despite being automated out of an economic position. (If I was confident society would do this, I might change my tune about automating people)
So, I would ask myself—what type of automation am I doing? Am I removing busywork, replacing jobs entirely, or replacing entire skillsets? (Note: You are probably not doing the last one. Very few, if any, are. The tech does not seem there atm. But maybe the company is setting themselves up to do so as soon as it is, or something)
And when you figure out what type you’re doing, you can ask how you feel about that.
I think that there are two questions one could ask here:
- Is this job bad for x-risk reasons? I would say that the answer to this is “probably not”—if you’re not pushing the frontier but are only commercialising already-available technology, your contribution to x-risk is negligible. Maybe you’re very slightly adding to the generative AI hype, but that ship’s somewhat sailed at this point.
- Is this job bad for other reasons? That seems like something you’d have to answer for yourself based on the particulars of the job. It also involves some philosophical/political priors that are probably pretty specific to you. Like—is automating away jobs good most of the time? Argument for yes—it frees up people to do other work, and it advances the amount of stuff society can do in general. Argument for no—it takes away people’s jobs, disrupts lives, and some people can’t adapt to the change.
I’ll avoid giving my personal answer to the above, since I don’t want to bias you. I think you should ask how you feel about this category of thing in general, and then decide how picky or not you should be about these AI jobs based on that category of thing. If they’re mostly good, you can just avoid particularly scummy fields and other than that, go for it. If they’re mostly bad, you shouldn’t take one unless you have a particularly ethical area you can contribute to.
It seems to me that either:
- RLHF can’t train a system to approximate human intuition on fuzzy categories. This includes glitches, and this plan doesn’t work.
- RLHF can train a system to approximate human intuition on fuzzy categories. This means you don’t need the glitch hunter—just apply RLHF to the system you want to train directly. All the glitch hunter does is make it cheaper.
I was about 50/50 on it being AI-made, but when I saw that “Thought That Faster” was a song, I became much more sure: that post went up only a couple of weeks ago, I believe, and if the song were human-made I’d assume it would take longer to go from post to full song. Then I read this post.
In Soviet Russia, there used to be something called a Coke party. You saved up money for days to buy a single can of contraband Coca-Cola. You got all of your friends together and poured each of them a single shot. It tasted like freedom.
I know this isn’t the point of the piece, but this got to me. However much I appreciate my existence, it never quite seems to be enough to be calibrated to things like this. I suddenly feel both a deep appreciation and vague guilt. Though it does give me a new gratitude exercise—imagine the item I am about to enjoy is forbidden in my country and I have acquired a small sample at great expense.
I notice that this is a standard pattern I use, and I had forgotten how non-obvious it is, since you do have to imagine yourself in someone else’s perspective. If you’re a man dating women on dating apps, you also have to imagine a very different perspective than your own—women tend to have many more options of significantly lower average quality. It’s unlikely you’d imagine yourself giving up on a conversation because it required mild effort to continue, since you have fewer of them in the first place and invest more effort in each one.
The level above that one, by the way, is going from being “easy to respond to” to “actively intriguing”, where your messages contain some sort of hook that is not only an easy conversation-continuer, but actually makes them want to either find out more (because you’re interesting) or keep talking (because the topic is interesting).
Worth noting: I don’t have enough samples of this strategy to know how good it is. That said, the reason I don’t have enough samples is that I wound up saturated on new relationships a couple of weeks after starting this strategy, so for a small n it was definitely quite useful.
What I’m curious about is how you balance this with the art of examining your assumptions.
Puzzle games are a good way of examining how my own mind works, and I often find that I go through an algorithm like:
1. Do I see the obvious answer?
2. What are a few straightforward things I could try?
Then Step 3 I see as similar to your maze-solving method:
What are the required steps to solve this? What elements constrain the search space?
But I often find that for difficult puzzles, a fourth step is required:
What assumptions am I making, that would lead me to overlook the correct answer if the assumption was false?
For instance, I may think a lever can only be pulled, and not pushed—or I may be operating under a much harder to understand assumption, like “In this maze, the only thing that matters are visual elements” when it turns out the solution to this puzzle actually involved auditory cues.
Concrete feedback signals I’ve received:
- I don’t find myself excited about the work. I’ve never been properly nerd-sniped by a mechanistic interpretability problem, and I find the day-to-day work to be more drudgery than excitement, even though the overall goal of the field seems like a good one.
- When left to do largely independent work, after doing the obvious first thing or two (“obvious” at the level of “these techniques are in Neel’s demos”), I find it hard to figure out what to do next, and hard to motivate myself to do more things if I do think of them, because of the above drudgery.
- I find myself having difficulty backchaining from the larger goal to the smaller one. I think this is a combination of a motivational issue and having less grasp on the concepts.
By contrast, in evaluations, none of this is true. I am able to solve problems more effectively, I find myself actively interested in problems (both the ones I’m working on and the ones I’m not), and I’m better able to reason about how those problems matter for the bigger picture.
I’m not sure how much of each is a contributor, but I suspect that if I was sufficiently excited about the day-to-day work, all the other problems would be much more fixable. There’s a sense of reluctance, a sense of burden, that saps a lot of energy when it comes to doing this kind of work.
As for #2, I guess I should clarify what I mean, since there’s two ways you could view “not suited”.
1. I will never be able to become good enough at this for my funding to be net-positive. There are fundamental limitations to my ability to succeed in this field.
2. I should not be in this field. The amount of resources required to make me competitive in this field is significantly larger than for other people who would do equally good work, and this is not true for other subfields in alignment.
I view my use of “I’m not suited” more like 2 than 1. I think there’s a reasonable chance that, given enough time with proper effort and mentorship in a proper organisational setting (being in a setting like this is important for me to reliably complete work that doesn’t excite me), I could eventually do okay at this field. But I also think that there are other people who would do better, faster, and be a better use of an organisation’s money than me.
This doesn’t feel like the case in evals. I feel like I can meaningfully contribute immediately, and I’m sufficiently motivated and knowledgeable that I can understand the difference between my job and my mission (making AI go well) and feel confident that I can take actions to succeed in both of them.
If Omega came down from the sky and said “Mechanistic interpretability is the only way you will have any impact on AI alignment—it’s this or nothing” I might try anyway. But I’m not in that position, and I’m actually very glad I’m not.
Significant Digits is (or was, a few years ago) considered the best one, to my recollection.