Hi everyone!
I’m John Ku. I’ve been lurking on lesswrong since its beginning. I’ve also been following MIRI since around 2006 and attended the first CFAR mini-camp.
I became very interested in traditional rationality when I used analytic philosophy to think my way out of a very religious upbringing in what many would consider to be a cult. After I became an atheist, I set about rebuilding my worldview and focusing especially on metaethics to figure out what remains of ethics without God.
This process landed me in the University of Michigan’s Philosophy PhD program, during which time I read Kurzweil’s The Singularity Is Near. It struck me as very important, and I quickly followed a chain of references and searches to discover what was to become MIRI and the lesswrong community. Partly due to lesswrong’s influence, I dropped out of my PhD program to become a programmer and entrepreneur, and I now live in Berkeley and work as CTO of an organic growth startup.
I have, however, continued my philosophical research in my spare time, focusing largely on metaethics, psychosemantics and metaphilosophy. I believe I have worked out a decent initial overview of how to formalize a friendly utility function. The major pieces include:
adapting David Chalmers’ theory of when a physical system instantiates a computation,
formalizing a version of Daniel Dennett’s intentional stance to determine when and which decision algorithm is implemented by a computation, and
modelling how we decide how to value by positing (possibly rather thin and homuncular) higher-order decision algorithms, which, according to my metaethics, are what ethical facts reduce to.
Since I think much of philosophy boils down to conceptual analysis, and since I’ve also largely worked out how to assign an intensional semantics to a decision algorithm, I think my research also has the resources to metaphilosophically validate that the various philosophical propositions involved are correct. I hope to fill in many remaining details in my research and find a way to communicate them better in the not-too-distant future.
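To give a very rough sense of how the pieces listed above might fit together, here is a toy sketch in Python. It is purely illustrative scaffolding of my own naming, not the actual formalism: the instantiation check is a brute-force, finite-state caricature of Chalmers’ criterion, and the interpretive steps are left as unimplemented stubs.

```python
from dataclasses import dataclass
from itertools import product
from typing import Callable, Dict, FrozenSet, Hashable, Optional

State = Hashable  # any hashable label will do for this toy version


@dataclass(frozen=True)
class Computation:
    """An abstract finite-state computation: a state set plus a step function."""
    states: FrozenSet[State]
    step: Callable[[State], State]


@dataclass(frozen=True)
class PhysicalSystem:
    """A toy stand-in for a physical system: a finite state set plus dynamics."""
    states: FrozenSet[State]
    evolve: Callable[[State], State]


def instantiates(system: PhysicalSystem, comp: Computation) -> bool:
    """Brute-force caricature of a Chalmers-style criterion: the system
    instantiates the computation if some mapping f from physical states to
    computational states commutes with the dynamics, i.e.
    f(evolve(p)) == step(f(p)) for every physical state p."""
    phys = sorted(system.states, key=repr)
    for image in product(sorted(comp.states, key=repr), repeat=len(phys)):
        f: Dict[State, State] = dict(zip(phys, image))
        if all(
            system.evolve(p) in f and f[system.evolve(p)] == comp.step(f[p])
            for p in phys
        ):
            return True
    return False


@dataclass
class DecisionAlgorithm:
    """Beliefs and utilities that best rationalize a computation's behaviour."""
    beliefs: Dict
    utilities: Dict


def intentional_stance(comp: Computation) -> Optional[DecisionAlgorithm]:
    """Stub for a formalized intentional stance: which decision algorithm,
    if any, is this computation best interpreted as implementing?"""
    raise NotImplementedError


@dataclass
class HigherOrderDecisionAlgorithm:
    """A (possibly thin and homuncular) algorithm for revising first-order
    values; on the metaethics sketched above, ethical facts reduce to facts
    about this level."""
    revise: Callable[[DecisionAlgorithm], DecisionAlgorithm]
```

The interesting constraints in Chalmers’ and Dennett’s actual accounts (counterfactual sensitivity, input/output structure, interpretive charity) are deliberately omitted here; the point is only to show where each piece would plug in.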
Compared to others, I think of myself as having focused more on object-level concerns than on meta-level instrumental rationality improvements. But I would like to thank everyone for their help, which I’m sure I’ve absorbed over time through lesswrong and the community. And if any attempts to help have backfired, I would assume it was due to my own mistakes.
I would also like to ask for any anonymous feedback, which you can submit here. Of course, I would greatly appreciate any non-anonymous feedback as well; an email to ku@johnsku.com would be the preferred method.
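I am especially hoping to receive any information that may help out with some confusing memories I have.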
You are welcome! And Don’t Be Afraid of Asking Personally Important Questions of Less Wrong.
I understand that you might not want to give details but I’m unclear what information I might provide. Maybe you could drop a few hints. You might also look at the Baseline of my opinion on LW topics.
You’re right that I was being intentionally vague. For what it’s worth, I was trying to drop some hints targeted at some who might be particularly helpful. If you didn’t notice them, I wouldn’t worry about it. This is especially true if we haven’t met in person and you don’t know much about me or my situation.