I’m a bit late to the party. I postponed writing this because it felt so important that I wanted to reply properly. I sincerely hope my enthusiasm will not be mistaken for egotism. Forgive my chutzpah!
Let me introduce myself. After a background in the hard sciences, I am currently a medical student in a European country (hint: the president’s name sounds like a pastry, and he loves nuclear energy). I am confident that I will become a psychiatrist (i.e., I am still a few years from the final exam where you choose your specialty, but it’s fairly easy to reach), and my love for computer science burns as strong as ever. Hence, I have a strong motivation toward the field of computational psychiatry. I am particularly captivated by AI and think I am a fairly competent coder.
When people ask me what I want to do, I have several possible answers: understanding psychiatric pathologies and helping patients (this also has the benefit of helping me get along with my medical peers), understanding consciousness (this is my existential goal), and playing a role in creating artificial consciousness (that’s my personal motivation). But I am of course interested in all things AI-related, including alignment research.
Hence, it is with strong enthusiasm that I read the term “AI psychologist”, knowing that I have introduced myself several times as a “wannabe AI psychiatrist”. These passions of mine are intertwined, as I’m convinced that (as Feynman put it) if you want to understand the mind, you have to build it.
You said:
(And it wouldn’t shock me if “AI psychologist” turns out to be an economically important occupation in the future, and if you got a notable advantage from having a big head start on it.) I think this is especially likely to be a good fit for analytically strong people who love thinking about language and are interested in AI but don’t love math or computer science.
I recognize myself in this paragraph, although I do love math and computer science.
Having to juggle medical school and programming, I don’t have the bandwidth to be as competent and experienced in ML as I’d like, but I think interpretability research is a sweet spot where my transdisciplinary skills would be useful. By the way, if anyone has specific courses, books, or other material on interpretability, I would be delighted to hear about them!
I am writing this to signal that people like this exist. Unfortunately, I am still about 5 to 10 years away from completing my (currently planned) full curriculum (this includes medical and computer science degrees as well as PhDs), but I hope there will still be low-hanging fruit by then :)
By the way, I am also a LW meetup organizer in my country. If you ever come to Europe in the coming years, we could definitely have a chat. Otherwise, don’t hesitate to reach out to me, even (or especially) years from now, as I’m still in the learning phase.
Note that I have subscribed to your comments as well as to the comments on this page, so I can see the advances you publish in this field. I will also check the news section of the Redwood Research website every month.
THANK YOU