My least favorite thing

Epistemic status: Anger. Not edited.

TL;DR The hamster wheel is bad for you. Rationalists often see participation in the hamster wheel as instrumentally good. I don’t think that is true.
Meet Alice. Alice is a bright high school student with a mediocre GPA and a very high SAT score. She has had the opportunity to learn many skills in her school years. She doesn’t particularly enjoy school, and has no real interest in engaging in the notoriously soul-crushing college admissions treadmill.
Meet Bob. Bob understands that AGI is an imminent existential threat. Bob thinks AI alignment is not only urgent and pressing but also tractable. Bob is a second-year student at Ivy League U studying computer science.
Meet Charlie. Charlie is an L4 engineer at Google. He works on applied machine learning for the Maps team. He is very good at what he does.
Each of our characters has approached you for advice. Their terminal goals might be murky, but they all empathize deeply with the AI alignment problem. They’d like to do their part in decreasing X-risk.
You give Alice the following advice:
It’s statistically unlikely that you’re the sort of genius who’d be highly productive without at least undergraduate training. At a better college, you will not only receive better training and have better peers; you will also have access to opportunities and signalling advantages that will make you much more useful.
I understand your desire to change the world, and it’s a wonderful thing. If you’d just endure the boredom of school for a few more years, you’ll have much more impact.
Right now, MIRI wouldn’t even hire you. I mean, look at the credentials most AI researchers have!
Statistically, you are not Eliezer.
You give Bob the following advice:
Graduating is a very good signal. An Ivy League U degree carries a lot of signalling value! Have you gotten an internship yet? It’s great that you are looking into alignment work, but it’s also important that you take care of yourself.
It’s only your second year. If the college environment does not seem optimal to you, you can certainly change that. Do you want study tips?
Listen to me. Do not drop out. All those stories you hear about billionaires who dropped out of college might be somewhat relevant if you actually wanted to be a billionaire. If you’re optimizing for social impact, you do not do capricious things like that.
Remember, you must optimize for expected value. Seriously consider grad school, since it’s a great place to improve your skills at AI alignment work.
You give Charlie the following advice:
Quit your job and go work on AI alignment. I understand that Google is a fun place to work, but seriously, you’re not living your values.
But it is too late, because Charlie has already been injected with a deadly neurotoxin which removes his soul from his skeleton. He is now a zombie, only capable of speaking to promo committees.
--
You want geniuses, yet you despise those who attempt to attain genius.
It seems blatantly obvious to you that the John von Neumanns and Paul Erdőses of the world do not beg for advice on internet forums. They must have already built a deep confidence in their capabilities from fantastical childhood endeavors.
And even if Alice wrote a working C++ compiler in Brainfuck at 15 years old, it’s unlikely that she can solve such a momentous problem alone.
Better to keep your head down. Follow the career track. Deliberate. Plan. Optimize.
So with your reasonable advice, Alice went to Harvard and Bob graduated with honors. Both of them wish to incrementally contribute to the important project of building safe AI.
They’re capable people now. They understand jargon like prosaic alignment and myopic models. They’re good programmers, though paralyzed whenever they are asked the Hamming questions. They’re not too far off from a job at MIRI or FHI or OpenPhil or Redwood. They made good, wise decisions.
--
I hate people like you.
You say things like, “if you need to ask questions like this, you’re likely not cut out for it. That’s ok, I’m not either.”
I want to grab you by your shoulders, shake you, and scream. Every time I hear the phrase “sphere of competence,” I want to cry. Are you so cynical as to assume that people cannot change their abilities? Do you see people rigid as stone, grey as granite?
Do I sound like a cringey, irrational liberal for my belief that people are not stats sheets? Is this language wishful and floaty and dreamy? Perhaps I am betraying my young age, and reality will set in.
Alternatively, perhaps you have Goodharted. You saw cold calculation and wistful “acceptance” as markers of rationality and adopted them. In your wise, raspy voice you killed dreams with jargon.
Don’t drop out. Don’t quit your job. Don’t get off the hamster wheel. Don’t rethink. Don’t experiment. Optimize.
You people hate fun. I’d like to package this in nice-sounding mathematical terms, but I have nothing for you. Nothing except for a request that you’d be a little less fucking cynical. Nothing except, reflect on what Alice and Bob could’ve accomplished if you hadn’t discouraged them from chasing their dreams.