Well, if you can build the damn thing, it should be better equipped than we are, being superintelligent and all.
Having only the “disadvantages” of no emotions of its own, and an outside view...
...but if we build an intelligence based on the only template we have, our own, it’s likely to be emotional. That seems to be the easy way.
That’s why I specified superintelligent; a human-level mind would fail hilariously. On the other hand, we are human minds ourselves; if we want to program our emotional values into an AI, we’ll need to understand them using our own rationality, which is sadly lacking, I fear.
That seems to imply we understand our rationality...
More research…
Gerd Gigerenzer’s views on heuristics in moral decision-making are very interesting, though.
Hah. Well, yes. I don’t exactly have a working AI in my pocket, even an unFriendly one.
I do think getting an AI to do things we value is a good deal harder than just making it do things, though, even if they’re both out of my grasp right now.
There’s some good stuff on this floating around this site; try searching for “complexity of value” to start off. There are likely to be dependencies, though; you might want to read through the Sequences, daunting as they are.