Hi! I’ve actually already made a few comments on this site but had neglected to introduce myself, so I thought I’d do so now. I’m a PhD candidate in computer science at the University of Maryland, Baltimore County. My research interests are in AI and machine learning; specifically, my dissertation topic is generalization in reinforcement learning (policy transfer and function approximation).
Given this, AI is obviously my biggest interest, and my study of AI has led me to apply the same concepts to human life and reasoning. Lately, I’ve also been thinking more about systems of morality and how an agent should reach rational moral conclusions. My knowledge of existing work in ethics is not profound, but my impression is that most systems operate at too high a level to make concrete (my metric is whether we could implement a system in an AI; if we cannot, then it’s probably too high-level for us to reason rigorously with it ourselves). Even desirism, which I’ve examined at least somewhat, seems a bit too high-level, though it is perhaps closer to the mark than others (to be fair, I may just not know enough about it). In response to these observations, I’ve been developing my own system of morality, which I’d like to share here in the near future to receive input.