Could you talk about your graduate work in AI? Also, out of curiosity, did you weight possible contribution towards a positive singularity heavily in choosing your subfield/projects?
(I am trying to figure out whether it would be productive for me to become familiar with AI in mainstream academia and/or apply for PhD programs eventually.)
I work on computationally bounded statistical inference. Most theoretical paradigms don’t have a clean way of handling computational constraints, and I think it’s important to address this since the computational complexity of exact statistical inference scales extremely rapidly with model complexity (see the sketch below). I have also recently started working on applications in program analysis, both because I think it provides a good source of computationally challenging problems, and because it seems like a domain that will force us into using models with high complexity.
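To make the scaling point concrete, here is a minimal illustrative sketch (not from the work described above; the model and function names are hypothetical): exact marginal inference over n binary variables by brute-force enumeration requires summing 2**n terms, so the cost explodes as the model grows.

```python
# Illustrative sketch only (hypothetical names, not actual research code):
# exact marginal inference by brute-force enumeration. The sums below
# range over all 2**n assignments, which is the rapid scaling with model
# complexity referred to in the answer.
import itertools
import math

def exact_marginal(log_joint, n, query_var):
    """Return P(x[query_var] = 1) by summing over all 2**n assignments."""
    total = 0.0
    hit = 0.0
    for x in itertools.product([0, 1], repeat=n):
        p = math.exp(log_joint(x))  # unnormalized probability of x
        total += p
        if x[query_var] == 1:
            hit += p
    return hit / total

# A toy fully connected pairwise model; exact enumeration is already
# infeasible well before n reaches a few dozen variables.
def log_joint(x):
    return 0.3 * sum(xi * xj for i, xi in enumerate(x) for xj in x[i + 1:])

print(exact_marginal(log_joint, n=12, query_var=0))  # 4096 terms
```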
Singularity considerations were a factor when choosing to work on AI, although I went into the field because AI seems like a robustly game-changing technology across a wide variety of scenarios, whether or not a singularity occurs. I certainly think that software safety is an important issue more broadly, and this partially influences my choice of problems, although I am more guided by the problems that seem technically important (and indeed, I think this is mostly the right strategy even if you care about safety to a fair degree).
Learning more about mainstream AI has greatly shaped my beliefs regarding AGI, so it’s something that I would certainly recommend. Going to grad school shaped my beliefs even further, even though I had already read many AI papers prior to arriving at Stanford.