If I can’t decide which will be best, I’ll just choose one.
Elliot seems to have problems valuing things—not surprising, since the frontal lobes make it possible to associate abstract ideas and the valences of preference, among other things.
It seems to me that he would have made decisions based on his feelings, and now that his feelings can no longer be associated with states, the decision process no longer terminates.
Think rather of people with “flattened affect”. That’s what we should be aiming for. Think Mr. Data.
Yeah, see, figuring things out from first principles, rigorously applying your values, calculating the best option given a multi-dimensional array of preferences in various categories and doing a weighted sum on them to determine an appropriate course is a good thing. People should definitely know how to do that. I’m glad I have whatever basic grasp on the functions involved that I do have.
But it’s not actually how human beings think.
Even determining what heuristic you’d use to judge a situation’s utility on that multidimensional scale would be a monumental undertaking. It’d take an age.
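To make the "weighted sum over a multi-dimensional array of preferences" idea concrete, here's a toy sketch. The categories, weights, and scores are all made up for illustration — the point is just how mechanical (and how dependent on having already chosen weights) the explicit procedure is:

```python
# Toy version of explicit weighted-sum decision-making.
# All categories, weights, and scores below are hypothetical.

def weighted_utility(scores, weights):
    """Weighted sum of per-category scores (higher = better)."""
    return sum(weights[c] * scores[c] for c in weights)

# Where do these weights come from? That's the monumental part.
weights = {"cost": 0.5, "enjoyment": 0.3, "convenience": 0.2}

options = {
    "restaurant_a": {"cost": 0.4, "enjoyment": 0.9, "convenience": 0.6},
    "restaurant_b": {"cost": 0.8, "enjoyment": 0.5, "convenience": 0.7},
}

# Pick the option with the highest weighted utility.
best = max(options, key=lambda name: weighted_utility(options[name], weights))
print(best)  # restaurant_b
```

The calculation itself is trivial; the hard part, as the paragraph above says, is deciding the weights and the scoring heuristic in the first place.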
What actual human beings do is let their subconscious brain do all that tricky heuristic-based summing and weighting and determining which aspects are important and then signal it up to consciousness via a wooly, fuzzy, sometimes vague, often powerful, emotional response.
Data wanted to be human, but he could never be human because his physiology just wasn't wired that way.
You may determine that you want to be an android, dispassionately calculating every move from first principles and values. But you just ain’t wired that way. You won’t have time to run a wetware program to emulate it. Not one that doesn’t take advantage of your emotions anyway.
God knows how an emotionless person would fare trying to predict or influence another human being's actions.
Better to learn to hear your emotions, to understand the message they’re giving you, learn to fine-tune them if they’re lying or wrong, than to ignore them.