Hi Less Wrong,
My name is Sean Welsh. I am a graduate student at the University of Canterbury in Christchurch NZ. I was most recently a Solution Architect working on software development projects for telcos. I have decided to take a year off to do a Master’s. My topic is Ethical Algorithms: Modelling Moral Decisions in Software. I am particularly interested in questions of machine ethics & robot ethics (obviously).
I would say at the outset that I think ‘the hard problem of ethics’ remains unsolved. Until it is solved, the prospects for any benign or friendly AI seem remote.
I can’t honestly say that I identify as a rationalist. I think the Academy puts far too much faith in its technological marvel of ‘Reason.’ However, I have a healthy and robustly expressed disregard for all forms of bullshit—be they theist or atheist.
As Confucius said: Shall I teach you the meaning of knowledge? If you know a thing, to know that you know it. And if you do not know, to know that you do not know. THAT is the meaning of knowledge.
Apart from working in software development, I have also been an English teacher, a taxi driver, a tourism industry operator, an online travel agent, and a media adviser to a Federal politician (i.e. a spin doctor).
I don’t mind a bit of biff—but generally regard it as unproductive.
Welcome!
Not sure why you link rationality with “Academy” (academia?). Consider scanning through the sequences to learn what is generally considered rationality on this forum and how Eliezer Yudkowsky treats metaethics. Whether you agree with him or not, you are likely to find a lot of insights into machine (and human) ethics, maybe even helpful in your research.
Pirsig calls the Academy “the Church of Reason” in Zen and the Art of Motorcycle Maintenance. I think there is much evidence to suggest academia has been strongly biased toward ‘Reason’ for most of its recorded history. It is only very recently that research is highlighting the role of Emotion in decision making.
Let’s not get started on the medical profession’s bias towards health… maybe it’s just their job to teach reason. Have you ever met someone who couldn’t do emotional/System 1 decision-making right out of the box?
In my experience homo sapiens does not come ‘out of a box.’ Are you a MacBook Pro? :-)
But seriously, I have seen some interestingly flawed ‘decision-making systems’ in Psych Wards. And I think Reason (whatever it is taught to be) matters. Reason and Emotion are a tag team in decision making in ethical domains. They do their best work together. I don’t think Reason alone (however you construe it) is up to the job of friendly AI.
Of course, bringing Emotion into ethics has issues. Who is to say whose Emotions are ‘valid’ or ‘correct’?
“Reason and Emotion are a tag team in decision making in ethical domains. They do their best work together.”
That statement is too strong. I can think of several instances where certain emotions, especially negative ones, impair decision making, and it is reasonable to assume that this impairment extends to ethical decisions.
The first page of the paper linked below provides a good summary of when emotions, and what emotions, can be helpful or harmful in making decisions. I do acknowledge that some emotions can be helpful in certain situations. Perhaps you should modify your statement.
http://www.cognitive-neuroscience.ro/pdf/11.%20Anxiety%20impairs%20decision-making.pdf
A thousand sci-fi authors would agree with you that AIs are not going to have emotion. One prominent AI researcher will disagree.
Certainly, our desires are emotional in nature; “reason” is merely how we achieve them. But wouldn’t it be better to have a Superintelligent AI deduce our emotions itself, rather than programming them in ourselves? Introspection is hard.
Have you read the Metaethics Sequence? It’s pretty good at this sort of question.
Would it be easier?
Especially about other people.
Well, if you can build the damn thing, it should be better equipped than we are, being superintelligent and all.
Having only the disadvantages of no emotions itself, and an outside view...
…but if we build an Intelligence based on the only template we have, our own, it’s likely to be emotional. That seems to be the easy way.
That’s why I specified superintelligent; a human-level mind would fail hilariously. On the other hand, we are human minds ourselves; if we want to program our emotional values into an AI, we’ll need to understand them using our own rationality, which is sadly lacking, I fear.
That seems to imply we understand our rationality...
More research…
Gerd Gigerenzer’s views on heuristics in moral decision making are very interesting though.
Hah. Well, yes. I don’t exactly have a working AI in my pocket, even an unFriendly one.
I do think getting an AI to do things we value is a good deal harder than just making it do things, though, even if they’re both out of my grasp right now.
There’s some good stuff on this floating around this site; try searching for “complexity of value” to start off. There are likely to be dependencies, though; you might want to read through the Sequences, daunting as they are.
I don’t think I’m parsing this correctly. Could you expand on it a bit?
Well, you’ll find plenty of agreement here, for certain definitions of “unsolved”.
You need the Sith parser :-)
I guess the point I am making is that Reason alone is not enough, and a lot of what we call Reason is technology derived from the effect on brains of being able to write. There is some interesting research on how cognition and reasoning differ between literate and preliterate people. I think Emotion plays a critical role in decision making. I am not going out to bat for Faith except in the Taras Bulba sense: “I put my faith in my sword and my sword in the Pole!” (The Polish were the enemy of the Cossack Taras Bulba in the old Yul Brynner flick I am quoting from.)
Whee, references!
We create technologies to help us do stuff better—Taras’ sword being only one example. Why not a technology to help us think better? Heck, there are plenty of “mental technologies” besides Rationality—a great example would be the Memory Palace visualization technique (it’s featured in an episode of Sherlock, for bonus reference points, but it’s not portrayed very well; Google it instead.)