In my experience homo sapiens does not come ‘out of a box.’ Are you a MacBook Pro? :-)
But seriously, I have seen some interestingly flawed ‘decision-making systems’ in Psych Wards. And I think Reason (whatever it is thought to be) matters. Reason and Emotion are a tag team in decision making in ethical domains. They do their best work together. I don’t think Reason alone (however you construe it) is up to the job of Friendly AI.
Of course, bringing Emotion into ethics has issues. Who is to say whose Emotions are ‘valid’ or ‘correct’?
“Reason and Emotion are a tag team in decision making in ethical domains. They do their best work together.”
That statement is too strong. I can think of several instances where certain emotions, especially negative ones, can impair decision making. It is reasonable to assume that impaired decision making extends to ethical decisions as well.
The first page of the paper linked below provides a good summary of when emotions, and what emotions, can be helpful or harmful in making decisions. I do acknowledge that some emotions can be helpful in certain situations. Perhaps you should modify your statement.
http://www.cognitive-neuroscience.ro/pdf/11.%20Anxiety%20impairs%20decision-making.pdf
A thousand sci-fi authors would agree with you that AIs are not going to have emotion. One prominent AI researcher would disagree.
Certainly, our desires are emotional in nature; “reason” is merely how we achieve them. But wouldn’t it be better to have a Superintelligent AI deduce our emotions itself, rather than programming them in ourselves? Introspection is hard.
Have you read the Metaethics Sequence? It’s pretty good at this sort of question.
Would it be easier?
Especially about other people.
Well, if you can build the damn thing, it should be better equipped than we are, being superintelligent and all.
Having only the disadvantages of no emotions itself, and an outside view...
...but if we build an Intelligence based on the only template we have, our own, it’s likely to be emotional. That seems to be the easy way.
That’s why I specified superintelligent; a human-level mind would fail hilariously. On the other hand, we are human minds ourselves; if we want to program our emotional values into an AI, we’ll need to understand them using our own rationality, which is sadly lacking, I fear.
That seems to imply we understand our rationality...
More research…
Gerd Gigerenzer’s views on heuristics in moral decision making are very interesting, though.
Hah. Well, yes. I don’t exactly have a working AI in my pocket, even an unFriendly one.
I do think getting an AI to do things we value is a good deal harder than just making it do things, though, even if they’re both out of my grasp right now.
There’s some good stuff on this floating around this site; try searching for “complexity of value” to start off. There’s likely to be dependencies, though; you might want to read through the Sequences, daunting as they are.