Hi. I’ve been a distant LW lurker for a while now; I first encountered the Sequences sometime around 2009, and have been an avid HP:MOR fan since mid-2011.
I work in computer security with a fair bit of software verification as flavoring, so the AI confinement problem is of interest to me, particularly in light of recent stunts like arbitrary computation in zero CPU instructions via creative abuse of the MMU trap handler. I’m also interested in applying instrumental rationality to improve the quality and utility of my research in general. I flirt with some other topics as well, including capability security, societal iterated game theory, trust (e.g., PKI), and machine learning; a meta-goal is to figure out how to organize my time so that I can do more applied work in these areas.
Apart from that, lately I’ve become disillusioned with my usual social media circles, in part due to a perceived* uptick in terrible epistemology and in part due to facing the fact that I use them as procrastination tools. I struggle with akrasia, and am experiencing less of it since quitting my previous haunts cold turkey, but things could still be better and I hope to improve them by seeking out positive influences here.
*I haven’t measured this. It’s entirely possible I’ve simply become more sensitive to bad epistemology, or that something else has lowered my tolerance for it.