[Question] What should I do? (long term plan about starting an AI lab)
I was listening to this Dwarkesh podcast with Leopold Aschenbrenner where they talk about AGI, superintelligence, and how things might unfold. All I want to say about it is that it created a sense of concreteness and urgency when considering my plans for the future.
A bit of context about myself: since I was a teenager, I’ve always been fascinated by computers and intelligence. I did CS studies, which took away the mystery about computers (to my great satisfaction). But the more I read about intelligence, brains, neuroscience, and machine learning, the clearer it became that we don’t know how it works. I took a job as a web/database engineer after getting my CS master’s because I had to make a living, but I kept reading on the side. With my interest in intelligence getting stronger and no good answers, I made a plan to quit my job and study on my own while living on my savings, with the hope of landing a research engineer position at DeepMind or Google Brain. One year into this self-learning journey, it was clear that this would be challenging. So I turned to plan B (prompted by a bitcoin run-up to almost $20,000 in 2017): I would salvage my newly acquired ML skills to trade those markets, then fund my own lab, and in the process get better at applied ML and prove to myself that I can do ML.
Fast forward 6 years: the plan has worked (to everyone’s surprise, myself included). The project has grown into a 10-person organization. I recently stepped down, after transferring my skills and knowledge to the team, which is now more competent than I am to run it. Now is the time to activate the next step of the plan.
But things have changed. In 2018, concerns like alignment, controllability, and misuse felt very theoretical and distant. Not anymore. The big change is my belief that the ML/AI field as a whole has a very high chance of achieving AGI, followed by superintelligence, whether I get involved or not.
This of course adds to all the other questions about starting an AI lab: should I first study on my own to get better? Partner with other labs? Start recruiting now or later?
With AI safety being more important now, what I’m telling myself is that the best way to approach it is to be able to train good models, so I should work on AI capabilities regardless. Researching AI safety in a vacuum is much harder if you don’t have AI capability expertise. But I wonder if I’m being fully honest with myself when I think that.
Back to the original question: given this nice situation, where I have lots of funding, some confidence that I can at least do applied ML, and my strong curiosity about intelligence still intact, what should I do?
I see two parts to this question:
First, should I re-think the plan and focus on AI safety, or other things that I’m better positioned to do?
Second, if I stick to the plan, how best to approach starting an AI lab? (I didn’t talk about my research interests, but very briefly: probabilistic programming, neurosymbolic programming, world models, self-play, causality.)
I’m happy to react to comments and provide more info/context if needed.