I guess what I mean to say is—if killing smart people is the solution, then the outcome you are after, almost by definition, cannot be an improvement. Maybe in theory there are scenarios where it could be, but those will be few and far between.
Suppose that you are a member of a political party, and you are told that the less educated a person is, the more likely they are to vote for you (and vice versa). If this were me, I would feel morally obliged to disassociate myself from such a party immediately, for it is all but impossible that what it stands for is a Good Thing.
If you come up with a political plan and realize that the way to achieve it is to kill the smart people, the only prudent conclusion must surely be that your plan is fundamentally misguided and wrong, and that its goal cannot possibly be worth pursuing.
What are your thoughts on the Orthogonality Thesis?
Right, OK, I see what you’re getting at. And I guess it would therefore be reasonable to make some sort of… allowance for seemingly sub-optimal/unintelligent behavior in the pursuit of some goals… but this gets tricky when the behavior is especially devious or deadly ;)
Though I am not entirely sure the Orthogonality Thesis is applicable in the context of our thread here. If millions of voters, with millions of motivations, reasons, points of view and convictions, become less and less likely to vote for you the more educated they are, it is a fairly good guess that, in the main, your party program is not likely to result in significant improvements for the majority of the population when implemented. It might, of course; perhaps you are misunderstood, or your strategy is high-risk, high-gain… but it would be unreasonable to assume so. I try to live by the rule “If you think the whole world doesn’t get something, but you do, you’re almost certainly ignorant” ;) It often appears together with “If the solution you came up with to an intractable, complex problem basically comes down to ‘If only [affected groups] would do [single action], then [solution]!’, you’re 99% certainly wrong.”
And—as an aside, we are now also entering the territory of intent, and how much, if at all, it should influence our assessment of a given action. I’ll be honest—I am really tired of thinking about intent. This is not meant to suggest I don’t appreciate your suggestion and comment, by the way… just that I have deliberated on this quite a few times, and it is one of those things where human beings really can be frustrating and ridiculously irrational.
Like… why is it that someone who helps you cross the street because he wants to make sure you’re safe is OK, but someone doing the exact same thing because he thinks you’re a loser who needs to be protected from your own incompetence is totally not OK? The utility of both interventions is identical, and the personal opinion of a random stranger about you as a person is very close to completely irrelevant… yet many people would refuse the help, even if they actually need it, if they knew it was motivated by the belief that they are a loser. Being human myself, I of course understand this, but it remains one of those things I’ll never truly want to accept.
Even in the context of justice and punishment, I can’t say I am unequivocally supportive of how important intent is, although there it at least has some justification in deciding the severity of punishment. But it works in mysterious ways. Suppose that one day an AGI tells us, with 90% certainty, that if it weren’t for Hitler starting WW2 in 1939, we would have suffered a nuclear war at the end of the forties that killed ten times as many people and left vast tracts of Europe uninhabitable for many centuries. Since Adolf did not start the war for the express purpose of preventing this tragedy, he gets zero credit.
On the other hand, if I do something because I claim to sincerely want to improve the situation of my countrymen, but my plans actually cause mass crop failures and famines that kill a couple of hundred thousand people, it is very probable that, actually, it won’t be such a big deal, and I can have another go at it next year. When you think about it, this is seriously weird. But anyhow, TL;DR ;)
-------------------
Now—I think you and I can both agree that it is highly probable that, almost regardless of the ultimate goal, pursuing a nationwide strategy of “putting away” thousands of intellectuals (and doing so by “methods” as crude as rounding up anyone who wears glasses, say...) is virtually guaranteed to be a dumb idea. Quietly getting rid of a few noisy dissenters with a radically different agenda so you can have the field free to implement your ideas—okay, I wouldn’t condone it, but this could perhaps be a dark chapter in an otherwise successful book, however deplorable.
I do not agree. Morality (ethics) is orthogonal to intelligence (strategy).
Mmmyes, well, while your theoretical framework may be sound, such an outcome is almost certainly not. I’d be willing to agree that it is not impossible that such a step might, in the most uncommon of circumstances, be part of an otherwise sound strategy/goal. Intelligence without morality is neither.