I’m Jose. I’m 20. This is a comment many years in the making.
I grew up in India, in a school that (almost) made up for the flaws in Indian academia, as a kid with some talent in math and debate. I largely never tried to learn math or science outside what was taught at school back then. I started using the internet in 2006, and eventually started to feel very strongly about what I thought was wrong with the institutions of the world, from schools to religion. I spent a lot of time then trying to make these thoughts coherent. I didn’t really think about what I wanted to do, or about the future, in anything more than abstract terms until I was 12 and a senior at my school recommended HPMOR.
I don’t remember what I thought the first time I read it, up to where it had reached at the time (I think it was chapter 95). I do remember that on my second read, by the time it had reached chapter 101, I stayed up the night before one of my finals to read it. That was around the time I started to actually believe I could do something to change the world (there may have been a long phase where I phrased it as wanting to rule the universe). But apart from an increased tendency in my thoughts at the time toward refining my belief systems, nothing changed much, and Rationality: From AI to Zombies remained on my TBR until early 2017, which is when I first lurked LessWrong.
I had promised myself at the time that I would read all the Sequences properly regardless of how long it took, and so it wasn’t until late 2017 that I finally finished them. It was a long and arduous process, much of which came from inner conflicts I noticed for the first time. Some of the ideas were ones I had tried to express long ago, far less coherently. It was epiphany and turmoil at every turn. I graduated school in 2018; I’d eventually realize this wasn’t nearly enough, though, and it was pure luck that I chose a computer science undergrad because of vague thoughts about AI, despite not yet having decided what I really wanted to do.
Over my first two years in college, I tried to actually think about that question. By this point, I had read enough about FAI to know it to be the most important thing to work on, and that anything I did would have to come back to that in some way. Despite that, I still stuck to some old wish to do something that I could call mine, and shoved the idea of direct work in AI Safety in the pile where things that you consciously know and still ignore in your real life go. Instead, I thought I’d learned the right lesson and held off on answering direct career questions until I knew more, because I had a long history of overconfidence in those answers (not that that’s a misguided principle, but there was more I could have seen at that point with what I knew).
Fast forward to late 2020. I had still been lurking on LW, reading about AI Safety, and generally immersing myself in the whole shindig for years. I even applied to the MIRIx program early that year, but held off on starting operations after March. I don’t remember exactly what made me start to rethink my priors, but one day I was shaken by the realization that I wasn’t doing anything the way I should have been if my priorities were actually what I claimed they were: to help the most people. I thought of myself as very driven by my ideals, and being wrong only on the level where you don’t notice difficult questions wasn’t comforting. I went into existential panic mode, trying to seriously recalibrate everything about my real priorities.
In early 2021, I was still confused about a lot of things, not least because being from my country sort of limits the options one has to directly work in AI Alignment, or at least makes them more difficult. That was a couple of months ago. I found that after I took a complete break from everything for a month to study for subjects I hadn’t touched in a year, all those cached thoughts that bred my earlier inner conflicts had mostly disappeared. I’m not entirely settled yet, though; it’s been a weird few months. I’m trying to catch up on a lot of lost time: learning math (I’m working through MIRI’s research guide), focusing my attention a lot more on specific areas of ML (I lucked out again there and did spend a lot of time studying it broadly earlier), and generally trying to get better at things. I’ll hopefully post infrequently here. I really hope this comment doesn’t feel like four years.
Welcome! It’s people like you (and perhaps literally you) on whom the future of the world depends. :)
Wait… you started using the internet in 2006? Like, when you were 5???
Thanks! 2006 is what I remember, and what my older brother says too. I was 5 though, so the most I got out of it was learning how to torrent movies and Pokemon ROMs until like 2008, when I joined Facebook (at the time to play an old game called FarmVille).
Very cool, this sounds a lot like my own story too. Welcome to the club!
I think FAI research as a whole is mostly bottlenecked by funding; there are many smart people who will work in any field that has funding available (in my model of the world). So unless you’re someone who does not need funding or can fund others, you might not be part of the bottleneck.
I am really quite confident that the space is not bottlenecked by funding. Maybe we have different conceptions of what we mean by funding, but there really is a lot of money (~$5-10 billion USD) that is ready to be deployed towards promising AI Alignment opportunities; there just aren’t any that seem very promising and aren’t already funded. It really seems to me that funding is very unlikely to be the bottleneck for the space.
I am just speaking from general models and I have no specific model for FAI, so I was/am probably wrong.
I still don’t understand the bottleneck. You say there aren’t promising projects to get funded; isn’t this just another way of saying that the problem is hard, that most research attempts will be futile, and thus that to accelerate progress, unpromising projects need to be funded? I.e., what is the bottleneck if it’s not funding? “Brilliant ideas” are not under our direct control, so they cannot be part of our operating bottleneck.
Solution space is really high-dimensional, so just funding random points has basically no chance of getting you much closer to a functioning solution. There aren’t even enough people who understand what the AI Alignment problem is to fund all of them, and frequently funding people can have downsides. Two common downsides of funding people:
They have an effect on the social context in which work happens; if they don’t do good work, they scare away other contributors or worsen the methodology of your field.
If you give away money like candy, you attract lots of people who will pretend to do the work you want done and just take your money. There are definitely enough people who just want your money to exhaust $10B in financial resources (or really any reasonable amount of resources). In a lemons market, you need to maintain some level of vigilance; otherwise you can easily lose all of your resources at almost any level of wealth.
One good example of what funding can do is nanotech. https://www.lesswrong.com/posts/Ck5cgNS2Eozc8mBeJ/a-review-of-where-is-my-flying-car-by-j-storrs-hall describes how strong funding killed off the nanotech industry by getting people to compete for that funding.
80,000 Hours’ data suggests that people are the bottleneck, not funding. Could you tell me why you think otherwise? It’s possible that there’s even more available funding in AI research and similar fields that are likely sources for FAI researchers.
(First read my comment on the sister comment: https://www.lesswrong.com/posts/hKNJSiyzB5jDKFytn/open-and-welcome-thread-may-2021?commentId=iLrAts3ghiBc37X3j )
I looked at the 80k page again, and I still don’t get their model; they say the bottleneck is people who have PhDs from top schools (an essentially supply-gated resource) and can geographically work in the FAI labs (a constant-ish fraction of said PhD holders). It seems to me that the main lever for increasing the number of top-school PhD graduates is to increase funding, and thus positions, in AI-related fields. (Of course, this lever might still take years to show its effects, but I do not see how individual decisions can be the bottleneck here.)
As I said, I am probably wrong, but I would like to understand this.
They say the bottleneck is people who have PhDs from top schools (an essentially supply-gated resource),

They write no such thing. They do say:

Might you have a shot of getting into a top 5 graduate school in machine learning? This is a reasonable proxy for whether you can get a job at a top AI research centre, though it’s not a requirement.
They use it as a proxy for cognitive ability. It’s possible for a person who writes insightful AI alignment forum posts to be hired into an AI research role. It’s just very hard to develop the ability to write insightful things about AI alignment, and the kind of person who can is also the kind of person who can get into a top 5 graduate school in machine learning.
As for increasing the number of AI PhDs: that can accelerate AI development in general, so it’s problematic from the perspective of AI risk.
[Deleted]
They don’t speak about having a PhD but about the ability to get into a top 5 graduate program. Many people who have the ability to get into a top 5 program don’t actually do so and pursue other directions instead.
The number of people with that ability level is not directly dependent on the number of PhDs that are given out.
[Deleted]