Andrej is a great teacher; he has created a lot of very useful pedagogical materials for people who want to learn AI. I have used some of them to my significant benefit: I even cite his famous 2015 essay "The Unreasonable Effectiveness of Recurrent Neural Networks" in some of my texts, and I have used his minGPT and nanoGPT to improve my understanding of decoder-only Transformers and to experiment with them.
He is also a very strong practitioner, with an impressive track record, and I expect his new org will be successful in creating novel education-oriented AI.
Safety-wise, I think Andrej cares a lot about routine AI safety matters (being in charge of AI for Tesla Autopilot for many years makes one care about routine safety in a very visceral sense). I don't have a feel for his position on X-risk. I think he tends to be skeptical of AI regulation efforts.
The plan for their future AI course I link above does not seem to have any safety-oriented content whatsoever, but perhaps this might change if people who can create that kind of content eventually join the effort.
This will likely be a very good AI capability-oriented course, approximately along these lines:
https://github.com/karpathy/LLM101n