Strong agree. I don’t personally use (much) math when I reason about moral philosophy, so I’m pessimistic about being able to somehow teach an AI to use math in order to figure out how to be good.
If I could reduce my own morality to a formula and feel confident that I personally would remain good by blindly obeying that formula, then sure, that seems like a thing to teach the AI. However, I know my morality relies on fuzzy feature-recognition encoded in population vectors, which cannot efficiently be compressed into simple math. Thus, since such a formula wouldn’t even work for my own decisions, I don’t expect it to work for the AI.