So, I Want to Be a “Thinkfluencer”
Epistemic Status
Rough and unpolished. I might refine it later.
Introduction
I want to optimise something like "expected positive impact on a brighter world". The best way for me to do this is probably through direct work rather than earning to give. I think I have a substantial intellectual endowment (more on that later), and I'm quite privileged (more on that later as well). So I should choose the career plan that maximises the expected value of my positive contribution to a brighter world, conditional on the space of possible people I can become.
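Somewhat more formally (a loose sketch of the idea in my own ad hoc notation, not anything rigorous):

$$c^* = \arg\max_{c \,\in\, C} \; \mathbb{E}\big[\text{positive impact on a brighter world} \;\big|\; c,\ P\big]$$

where $C$ is the set of career plans available to me and $P$ is the space of possible people I can become.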
Near term (within the next 30 years), I want to pursue a career trajectory as an AI safety researcher and a "radical transhumanist thinkfluencer". For the AI safety research part, I'm currently learning abstract maths (currently category theory) [maybe I'll write another post motivating that, but I basically want to try my hand at Agent Foundations style research, and I think I have the intellectual endowment for this to have considerable positive value in expectation]. I'll be starting a CS Master's at the end of September (if my student visa is granted), then I'll probably take a gap year (likely for some intensive learning project) before pursuing a PhD in CS/AI/computational neuroscience (not mathematics; I'm under the impression I can autodidact any mathematics I find myself compelled to learn).
I’m 24 now, so I’m hoping to start my career proper at 32 (8 years forms a natural/compelling Schelling point [at 24 I’m a quite different person from the person I was at 16, and he is in turn a remarkably different person from the person I was at 8; I can thus expect to be a pretty different person at 32. “Who is the person I want to be at 32? What do I need to do to become that person?”]).
I quit my job as a web developer at the end of July. I don’t plan to return to software development (I found it frustrating, and I think it’s neither my absolute nor comparative advantage).
Interlude
To disambiguate what I mean by “radical transhumanist thinkfluencer” a bit, I want to help sell the following ideas:
The current state of the world is very suboptimal
Vastly better world states are possible
We can take actions that would make us significantly more likely to reach those vastly better states
We should do this
I’d like to paint concrete and coherent visions for a much brighter future (not concrete utopias, but general ways that we can make the world much better off)
Paretopian outcomes
I want to get people excited about such a future as something we should aspire to and work towards.
Here are things we can do to reach towards that future
I’d especially like to convince people who are positioned to have a large positive influence on the world, or to attain the leverage to have such an influence.
Background
I’ve discovered that I basically can’t effectively study maths for more than 4-6 hours a day (I’ve somewhat slacked on this over the past month or two, but I’ve not abandoned the project of studying maths [it may even be the case that the reason I’m slacking is that 3-4 hours is too much for my natural mental stamina for doing maths]).
My mental stamina for learning non-maths stuff seems to draw from a mostly different reserve and to be 2x-3x larger (I may have done 8+ hours of audiobooks/podcasts over the past couple of days? This wasn’t due to a deliberate target; I basically listen to music all my waking hours, and I’ve decided to swap out my music for informational audio [at least until I exhaust my mental stamina]).
So I have a lot of time I can use for learning but can’t redirect towards learning more maths (the two reserves don’t seem fungible), so I might as well use it to try and build intellectual capital for becoming a radical transhumanist thinkfluencer.
Thus, I decided to start a new project: building a comprehensive, rich and coherent world model of human civilisation (and of the world we inhabit).
Motivations
I’m quite economically privileged (relative to others in my age range in my country). My parents are pretty well off (and can afford to fund me through a Master’s program at a Russell Group university). I can somewhat afford to leech off them for a while longer; it is not the case that I need to start a career anytime soon to survive. Leeching off them is distasteful and annoying, but the costs seem worth it.
I’m very epistemically privileged. I’ve been in the rationalist community since 2017 and have absorbed very good epistemic memes, and I know what to do to get even better epistemics. I think I can become someone with exceptional epistemics.
I am intellectually privileged. I have high quantitative and verbal aptitude, and I was excellent at mathematics in high school. I’ve let my aptitude rust in the 8 years since, but I’ve started learning mathematics again, and I think I basically have the ability to learn any mathematics I put my mind to. This is mostly relevant for the agent foundations style AI safety research I want to do, but the quantitative aptitude will also be useful for making sense of the world.
I expect that I can become a prolific writer. I have been a prolific writer at various points in the past (I just don’t think that writing was valuable, so I won’t link it here; it’s probably worth learning enough that such writing becomes very valuable).
I think there’s a chronic undersupply of people with:
Rich and comprehensive world models
High quantitative aptitude
Exceptionally good epistemics
Prolific writing output
I believe such people provide considerable value to the world (and specifically to the project of improving the world).
I think that I am unusually positioned to become such a person. The main thing that might prevent me is burnout/losing motivation, but posting about it here makes me more likely to follow through (I don’t want to disappoint people who believe in me, and their encouragement provides motivation to push forwards [I do have intrinsic motivation, but supplementing it with extrinsic motivation seems good?]).
Approach
Pick a new important topic each month
Potentially 2-3 months, depending on:
How important the topic is to me
How much study is required to attain the level of understanding I want
How much free time I have that month
This month’s topic is existential security.
Existential security is pretty important, so depending on how much valuable writing there is on it, I may extend it to more than a month.
On the other hand, I’m unemployed and have not yet resumed formal education this month and (most of) next month, so I have abnormally much free time with which to devour information.
Find a particularly good audiobook on the topic and listen to it at least twice (potentially more); one listen will be devoted to detailed note-taking.
I’m listening to “The Precipice” for this month.
Supplement the audiobook with podcasts, audio papers and audio blog posts.
[80,000 Hours] Toby Ord on The Precipice and Humanity’s Potential Futures
[Radio Bostrom] The Vulnerable World Hypothesis (2019)
[Radio Bostrom] Existential Risk Prevention as Global Priority (2012)
[80,000 Hours] Dr David Denkenberger on how to feed all 8 billion people through an asteroid/nuclear winter
[80,000 Hours] Holden Karnofsky on the most important century
Recommendations welcome!
Prepare a comprehensive report on the topic
This will probably occupy the last 20-33% of the period.
Benefits
Help refine my knowledge of the topic and my broader world model
Improve my writing
Build career capital towards becoming a thinkfluencer.
I plan to post my reports on LessWrong and the Effective Altruism forum
Topics
This is a non-exhaustive list of topics I’d hope to cover at some point for the purpose of becoming a thinkfluencer. Things I’d be learning primarily to do AI safety research won’t be covered here.
Other mathematics/computer science/statistics I’ll be learning for other reasons also won’t be covered here (I have a pretty extensive list and I think I already know what I need to learn here).
Less Quantitative
Existential security
Moral philosophy
Moral uncertainty
Longtermism
Metaethics
Hinginess
Anthropology
Macrohistory
Psychology
Cognitive
Evolutionary
Evolutionary biology
History and philosophy of science
History and philosophy of technological innovation
Progress studies more generally
Political theory
Memetics (in the Dawkins sense of “meme”)
How ideas spread
More Quantitative
Epistemics
Anthropics
Forecasting
Decision and game theory
Micro- and macroeconomics
Behavioural economics
Causality
Statistical thinking/modeling/analysis
Complex systems
Chaos theory
Physics
Chemistry
I expect to spend more time on these topics in general, because quantitative fields require more effort from me. But I doubt I’ll spend more than 3 months on any of them.
Conclusions
I’d appreciate feedback on my general plans/approach and on particular topics that you think I should add to my list (or remove from it).