Hah. Well, yes. I don’t exactly have a working AI in my pocket, even an unFriendly one.
I do think getting an AI to do things we value is a good deal harder than just making it do things, though, even if they’re both out of my grasp right now.
There’s some good stuff on this floating around this site; try searching for “complexity of value” to start off. There are likely to be dependencies, though; you might want to read through the Sequences, daunting as they are.
That seems to imply we understand our rationality...
More research…
Gerd Gigerenzer’s views on heuristics in moral decision-making are very interesting, though.