Abundance and scarcity; working forwards and working backwards
Epistemic status: I was bet $100 that I would not write this. I'm processing drastic opportunities; this is an extremely decision-relevant post, and if my friends are right, millions of dollars are riding on it.
Managing expectations: I am not writing a guide to hacking the mindset; I'm journaling my progress through reasoning about what hacking the mindset would look like.
Citational status: I am not naming names in this post, but if you recognize yourself in it, I'm sending you much thanks and love.
Content warning: cash, rich people problems.
I'm not great at "shut up and multiply"-ing my life. I have a scarcity mindset, probably from the years I was so poor that the (literal) man on the street (literally) stopped me to tell me I looked too skinny, that he could tell I wasn't eating enough. In 2019, when I was managing teaching assistants for a code bootcamp at $17.50/hr, I felt rich, in such a way that my bones don't feel much richer now that I've 6x'd that. I don't think philanthropic goals are really in my bones. Sure, I'll donate at least $20k this year, just because not doing that would feel stupid, but when someone tells me about an opportunity to 2x or 10x from where I'm at now, my bones don't immediately say "ah yes, for those people I'd help with my donations"; my response—my intuitive, gut response—is still "what? what for? I'm already rich". Yes, there are other reasons to save: runway for self-funded alignment research, funding other alignment researchers, buying a house, etc. System 2, post hoc, I can list a variety of reasons why more money is good, but I can't get System 1 to believe them. My System 1 just associates abundance with trivially/mindlessly getting takeout every meal because I don't like to invest time in food, and returns "we won". (I still don't get delivery, because I think that's too decadent, but that's another story.)
But takeout every meal is not the story of an abundance mindset, and in this post I will try to explain the difference.
I have a job that pays me in the 96th income percentile of one of the richest countries, and I get massive intellectual satisfaction at work. Having been cash poor, and having been in stupid jobs that I hated, I've become rather conservative. My brain thinks that from a "just like a capitalism" perspective (where you might nominally be creating nonzero prosocial value, but your emphasis is on just scamboozling the economy to pay rent and donate), it can't get better than this (my cynicism about cash as a proxy for prosocial value probably has to do with my inside view of the blockchain engineering market, lol). When I explained this to someone, they told me that this is a scarcity mindset, insofar as it's keeping me from startups or alignment research.
To be just a hair more cunning: I think I'm building up nontrivial career capital (for a variety of reasons that are too in the weeds for this post), and I'm acting in a low-replaceability mode. These are reasonable reasons to stay in a job. But are they reasonable reasons to keep yourself out of entrepreneurship or alignment research?
My current guess at the optimal time for me to bail for either a startup (I'd really like to do something in epistemic public goods (EPGs) or improving institutional decision making (IIDM)) or full-time alignment research (see below) is in 6-12 months. Two friends are adamantly telling me that now, not in 6-12 months, is the time to jump on a particular startup (one that doesn't excite me very much; my friends just aggravatingly have a good point about my comparative advantage). It would not be as prosocial as an EPG or IIDM project, but the idea is that it would make me rich (i.e. 7-8 figures). See, it's nearly a moral intuition of mine that prioritizing getting rich over happiness or prosocial impact is a bad move, but in the words of a friend, "if you have more money you can afford more morals".
Back and forth
There's a great Aaron Swartz piece called "Theory of Change". In it, he argues that working backwards strictly outperforms working forwards.
A theory of change is the opposite of a theory of action — it works backwards from the goal, in concrete steps, to figure out what you can do to achieve it. To develop a theory of change, you need to start at the end and repeatedly ask yourself, “Concretely, how does one achieve that?”
This can be expressed in terms of instrumental vs. terminal values. You form a picture of yourself eating chocolate, so you form a picture of yourself at the store, and so on until you form a picture of yourself grabbing your keys, wallet, and mask. Line up the pictures, and pull them into reality.
For the opposite, consider my writer friend: every day, she works on whatever she feels like working on, based on what she's thinking about at the time. She reports that she never says "I shall tell the most devastating story about x that leverages my perspective on y, and it shall be a screenplay". Even when her TV-producer friend asks her for a pilot, she does not form a picture of the most awesome TV show and write backwards from it; even the increments of working backwards that come from the request to deliver a pilot do not massively change her process, she tells me. Another friend compared this to hill climbing, which I found very useful.
I'm definitely a nonzero work-backwards guy, but I could be much more of one. When I look back on big wins, I think I was passively seizing opportunities I stumbled upon rather than building opportunities, which sounds like hill climbing. In some sense, I find this reliance on luck unacceptable.
Hacking the motivation
I'm not a super high performer; I just get nerd sniped when I'm lucky. I flunked out of the first startup I worked at because I didn't do anything all day, because I thought the product was stupid. My boss and colleagues at my current role report that I'm very valued and productive, and I believe them. I'm happy and productive in a Coq job via the obscene luck of getting a Coq job without a PhD (I was just hanging out in a group chat when some guy wrote "need proofs"). I think being productive only when nerd sniped—my ailment—is like working forwards (or, in Aaron's terms, a theory of action).
There's a quote somewhere of unknown attribution that goes something like "you are what you can't stop yourself from doing". Leveraging intrinsic motivation is critical for any optimizer. How do you square that with working backwards?
A friend even tried the following on me: “Every day you spend not getting super rich is a murder of a person you would have helped in worlds where you were super rich”. Besides being manipulative and annoying, I think this is ineffective. I’m so bad at piloting my own motivation that I would simply murder the people, every day.
Under what circumstances should you attempt something that you can stop yourself from doing?
This is unclear to me. Open problem.
Let's go back to basics. I am the recipient of several birth lotteries, I have a comparative advantage in a niche domain, and it's remotely plausible that I'm a multimillionaire in some worlds. What does it look like to focus on those worlds and escort them into this particular timeline? I claim there is only a working-backwards approach to this. And if you were to ask me honestly, I'm not enough of a working-backwards guy to think I can pull it off. I'm talking about failing, not in the gold-rush/slot-machine sense of failure, but in the plays-World-of-Warcraft sense of failure. Not failing because I delivered a product that didn't make me rich, but failing because I emotionally divested from the product.
Bottlenecks I have with working backwards totally aside, I'd have to be able to buy the picture of myself as a multimillionaire, and get emotionally invested in that picture, for working backwards to even be on the table. It took me 6 months of making $85/hour to get over the sticker shock of my own invoices and actually grok that the money was being deposited in my account. This is beyond an argument about the utility of being a multimillionaire. It's obvious that I could help more people with their projects if I just scaled up my notion of abundance. Aaron Sorkin being a guilty pleasure of mine, we recall when Sean Parker said "a million isn't cool. You know what's cool? A billion." Takeout every meal isn't cool. You know what's cool? Funding projects. System 2 can buy it, sure, but how do I scamboozle my System 1 into craving for this world/picture to enter my particular timeline?
Scarcity and abundance
Being conservative because you feel like the move is to cling to what you have is a scarcity mindset. One of the great injustices of poverty, as we know, is that it dismantles ambition. Even just accepting that I'm at the Pareto frontier of "just like a capitalism" jobs reveals an underlying assumption of scarcity; a more ambitious person would more naturally imagine worlds with improvements. An abundance mindset would look at the opportunity I lucked into and say "either it's gonna happen again, something better will happen, or you'll build something better yourself".
Abundance mindset tolerates risk, because it knows everything is going to work out.
Regarding hero license
I do reasonably well in the virtue of hero license. For example, in late 2016, when I was improvising in Philly's music scene and writing film scores and so on, all funded by delivering food on my bike, I somehow began seriously reading the Sequences, made it to intelligence.org/research-guide back when it was current, and bought some of the introductory textbooks listed there. Because when I saw it, I was like "huh. They're saying they're looking for more people. I can do that. Let me just learn CS, wrap up this alignment silliness, and get back to music in a few years". Lmao: CS ended up being both difficult and beautiful, outperforming music in every way, and alignment ended up being harder than I thought. But the point about 2016 Quinn is "he a little confused but he got the spirit". The arrogance seems like an asset that it would be a shame to lose. But last night someone told me that they don't like hero license as a framework, because it doesn't capture the quadrants of what society thinks of you crossed with what you think of yourself. I think there's a corollary to abundance mindset, which is to cultivate orthogonality between these things. Friends from Discord may recall me talking about a hubris budget: the idea that you can invest or spend or hoard some finite sum of risk tolerance, capacity to ignore the haters, and ambition. (I was going to explain this in a dedicated post, but I abandoned it because I didn't feel credible enough to write it. I'm detecting some irony here.)
I think permission is a good way of thinking about scarcity mindset. Sometimes, bestowing upon someone, by the power vested in you by nothing in particular, permission to be awesome creates a lot of value. Or perhaps one's belief that they don't need permission may outperform a stranger granting them fiat permission. Perhaps it does not occur to the abundance mindset that permission is even a thing. But scarcity mindsets can get bogged down in "well, would anyone look at the outside view and conclude that I'm the guy to attempt this thing?" in a way that abundance mindsets may not. (I recall tutoring a really hopeless case in discrete math at the community college; he was way out of his league and struggled a lot. I finally asked him what the point was, while in my head cursing the academic advisors for putting him in the harder degree path rather than the easier one, because some IT majors don't have to take the same discrete class as the math majors. Well, it turns out he had just told the academic advisors that he liked modding video games in Lua, so they put him in the programmer pipeline even though he didn't have the math background for the coursework. His takeaway from that conversation with them was that the discrete course was a gatekeeper: he was convinced that if he failed at discrete he couldn't write games. Which might be true in some sense. But you're damn right I told him "Eff discrete. Just write a game", and he was surprised not just because he was at the time unaware of Khan Academy's JavaScript IDE; he was surprised because I was an institutional voice telling him to factor out the institution. I'm deeply proud of that moment.)
The way I’m thinking about alignment research
A stipulation of the bet that I wouldn't write this was that I include some object-level stuff about the approach to alignment I'd like to take if I were a full-time researcher, so here are some quick notes:
I am bullish on the computational social choice theory angle as prodded by Critch.
I'd like to find a pipeline for scaling outer alignment proposals to multi-stakeholder scenarios, and I'd like to build out some kind of epistemic infrastructure that reduces the friction of generalizing single-single projects.
I have one low-level hard CS problem in this space that has to do with looking at preference aggregation from a zero-knowledge perspective.
My undelivered/failed SERI summer 2021 project was a meta/prioritization piece in this space, meant to say "if you believe x about takeoff, you should decide y regarding research priorities", emphasizing multipolar worlds and multi-stakeholder scenarios. The draft needs a lot of work, and I'm uncertain how valuable it'd be to me or to the literature to finish it, but thinking about it was valuable to me.
This is a very rambly post. I think that the crux here is: Quinn, like possibly a decently large number of people who would have the ability to do AI alignment, is earning a lot of money doing not-too-prosocial crypto stuff. He plans to eventually transition to AI alignment (or to software for better institutional decision goods/epistemics), and his happy price to do this is like a quarter of what he is earning now. But the longer he stays in crypto, the more his comparative advantage will shift.
Do you have any recommendations for what would make it less rambly?
An editor
Yeah, the bet pressured me to post it a little early.
I’d be interested in elaboration of your view of comparative advantage shifting. You mean shifting more toward lucrative E2G opportunities? Shifting more away from capacity to make lucrative alignment contributions?