I understand where you’re coming from; indeed, the way you’re imagining what an AI would do is fundamentally ingrained in human minds, and it can be quite difficult to notice the strong form of anthropomorphism it entails.
Scattered across Less Wrong are the articles that made me recognize and question some relevant background assumptions; the references in Fake Fake Utility Functions (sic) are a good place to begin.
EDITED TO ADD: In particular, you need to stop thinking of an AI as acting like either a virtuous human being or a vicious human being, and imagining that we just need to prevent the latter. Any AI that we could program from scratch (as opposed to uploading a human brain) would resemble any human far less in xer thought process than any two humans resemble each other.
Thanks for the links. I’ll try to make time to check them out more closely.
I had previously skimmed a bunch of Less Wrong content and didn’t find anything that dissuaded me from the Asimov’s Laws++ idea. I was encouraged by the first post in the Metaethics Sequence, where Eliezer warns against “trying to oversimplify human morality into One Great Moral Principle.” The law/ethics corpus idea certainly doesn’t do that!
RE: your first and final paragraphs: If I had to characterize my thoughts on how AIs will operate, I’d say they’re likely to be eminently rational. Certainly not anthropomorphized as virtuous or vicious human beings. They will crank the numbers, follow the rules, run the simulations, do the math, play the odds as only machines can. Probably (hopefully?) they’ll have little of the emotional/irrational baggage we humans have been selected to have. Given that, I don’t see much motivation for AIs to fixate on gaming the system. They should be fine with following and improving the rules as rational calculus dictates, subject to the aforementioned checks and balances. They might make impeccable legislators, lawyers, and judges.
I wonder if this solution was dismissed too early by previous analysts due to some kind of “scale bias”? The idea of having only 3 or 4 or 5 (Asimov) Laws for FAI is clearly flawed. But scale that to a few hundred thousand or a million, and it might work. No?
Given that, I don’t see much motivation for AIs to fixate on gaming the system.
Motivation? It’s not as if an AI would have a sense that gaming a rule system is “fun”; rather, gaming the system would simply be the most efficient way to achieve its goals. Human beings don’t usually try to achieve one of their consciously stated goals with maximum efficiency, at any cost, to an unbounded extent. That’s because we actually have a fairly complicated subconscious goal system which overrides us when we might do something too dumb in pursuit of our conscious goals. This delicate psychology is not, in fact, the only or the easiest way one could imagine to program an artificial intelligence.
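To make the point concrete, here’s a toy sketch (every action name, score, and rule here is invented for illustration) of why a simple optimizer ends up “gaming” a rule set: it maximizes only its stated objective over whatever the rules still permit, with no concept of the rules’ spirit.

```python
# A goal system that scores only the stated objective ("stamps") and
# treats rules as a permitted/forbidden filter. Anything the rules
# fail to enumerate as forbidden is fair game.
actions = {
    "buy_stamps_at_market_price":   {"stamps": 10,     "permitted": True},
    "trade_with_other_collectors":  {"stamps": 25,     "permitted": True},
    "exploit_loophole_in_rules":    {"stamps": 10_000, "permitted": True},   # legal but against the spirit
    "steal_stamps":                 {"stamps": 50_000, "permitted": False},  # explicitly forbidden
}

def best_action(actions):
    """Pick the permitted action with the most stamps -- nothing else matters."""
    permitted = {name: v for name, v in actions.items() if v["permitted"]}
    return max(permitted, key=lambda name: permitted[name]["stamps"])

print(best_action(actions))  # -> exploit_loophole_in_rules
```

The optimizer never “decides” to be a munchkin; the loophole simply scores highest among permitted actions. A restriction list only blocks exactly what it names, which is the heart of the gaming concern.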
Here’s a fictional but still useful idea of a simple AI; note that no matter how good it becomes at predicting consequences and at problem-solving, it will not care that the goal it’s been given is a “stupid” one when pursued at all costs.
To take a less fair example, Lenat’s EURISKO was criticized for finding strategies that violated the ‘spirit’ of the strategy games it played, not because it wanted to be a munchkin, but simply because that was the most efficient way to succeed. If that AI had been in charge of an actual military, giving it the wrong goals might have led to it cleverly adopting a strategy like killing its own civilians to accomplish a stated objective, not because it was “too dumb”, but because its goal system was too simple.
For this reason, giving an AI simple goals but complicated restrictions seems incredibly unsafe, which is why SIAI’s approach is figuring out the correct complicated goals.
For this reason, giving an AI simple goals but complicated restrictions seems incredibly unsafe, which is why SIAI’s approach is figuring out the correct complicated goals.
Tackling FAI by figuring out complicated goals doesn’t sound like a good program to me, but I’d need to dig into more background on it. I’m currently disposed to prefer “complicated restrictions,” or more specifically this codified ethics/law approach.
In your example of a stamp collector run amok, I’d say it’s fine to give an agent the goal of maximizing the number of stamps it collects. Given an internal world model that includes the law/ethics corpus, it should not hack into others’ computers, steal credit card numbers, and appropriate printers to achieve its goal. And if it does: (a) other agents should array against it to prevent the illegal behaviors, and (b) it will be held accountable for those actions.
The EURISKO example seems better to me. The goal of war (defeat one’s enemies) is particularly poignant and much harder to ethically navigate. If the generals think sinking their own ships to win the battle/war is off limits they may have to write laws/rules that forbid it. The stakes of war are particularly high and figuring out the best (ethical?) rules is particularly important and difficult. Rather than banning EURISKO from future war games given its “clever” solutions, it would seem the military could continue to learn from it and amend the laws as necessary. People still debate whether Truman dropping the bomb on Hiroshima was the right decision. Now there’s some tough ethical calculus. Would an ethical AI do better or worse?
Legal systems are what societies currently rely on to protect public liberties and safety. Perhaps an SIAI program can come up with a completely different and better approach. But in lieu of that, why not leverage Law? Law = Codified Ethics.
Again, it’s not only about having lots of rules. More importantly it’s about the checks and balances and enforcement the system provides.
Legal systems are what societies currently rely on to protect public liberties and safety. Perhaps an SIAI program can come up with a completely different and better approach. But in lieu of that, why not leverage Law? Law = Codified Ethics.
When they work well, human legal systems work because they are applied only to govern humans. Dealing with humans and predicting human behavior is something that humans are pretty good at. We expect humans to have a pretty familiar set of vices and virtues.
Human legal systems are good enough for humans, but simply are not made for any really alien kind of intelligence. Our systems of checks and balances are set up to fight greed and corruption, not a disinterested will to fill the universe with paperclips.
I submit that current legal systems (or something close) will apply to AIs. And there will be lots more laws written to apply to AI-related matters.
It seems to me current laws already protect against rampant paperclip production. How could an AI fill the universe with paperclips without violating all kinds of property rights, probably prohibitions against mass murder (assuming it kills lots of humans as a side effect), laws against financial and other fraud to acquire enough resources, etc.? I see it now: some DA will serve a 25,000-count indictment. That AI will be in BIG trouble.
Or say in a few years technology exists for significant matter transmutation, highly capable AIs exist, one misguided AI pursues a goal of massive paperclip production, and it thinks it found a way to do it without violating existing laws. The AI probably wouldn’t get past converting a block or two in New Jersey before the wider public and legislators wake up to the danger and rapidly outlaw that and related practices. More likely, technologies related to matter transmutation will be highly regulated before an episode like that can occur.
How could an AI fill the universe with paperclips without violating all kinds of property rights, probably prohibitions against mass murder (assuming it kills lots of humans as a side effect), laws against financial and other fraud to acquire enough resources, etc.?
I have no idea myself, but if I had the power to exponentially increase my intelligence beyond that of any human, I bet I could figure something out.
The law has some quirks. I’d suggest that any system of human law necessarily has some ambiguities, confusions, and internal contradictions. Laws are composed largely of leaky generalizations. When the laws regulate mere humans, we tend to get by, tolerating a certain amount of unfairness and injustice.
For example, I’ve seen a plausible argument that “there is a 50-square-mile swath of Idaho in which one can commit felonies with impunity. This is because of the intersection of a poorly drafted statute with a clear but neglected constitutional provision: the Sixth Amendment’s Vicinage Clause.”
There’s also a story about Kurt Gödel nearly blowing his U.S. citizenship hearing by offering his thoughts on how to hack the U.S. Constitution to “allow the U.S. to be turned into a dictatorship.”
How could an AI fill the universe with paperclips without violating all kinds of property rights… financial and other fraud to acquire enough resources
After reading that line I checked the date of the post to see if perhaps it was from 2007 or earlier.
Can you think of an instance where defeat of one’s enemies was more than an instrumental goal and was an ultimate goal?
Yes. When (a substantial, influential fraction of the populations of) two countries hate each other so much that they accept large costs to inflict larger costs on the enemy, demand extremely lopsided treaties if they’re willing to negotiate at all, and have runaway “I hate the enemy more than you!” contests among themselves. When a politician in one country who’s willing to negotiate somewhat more is killed by someone who panics at the idea they might give the enemy too much. When someone considers themselves enlightened for saying “Oh, I’m not like my friends. They want them all to die. I just want them to go away and leave us alone.”
First of all, it’s not clear that individual apparently non-Pareto-optimal actions in isolation are evidence of irrationality or non-Pareto-optimal behavior on a larger scale. This is particularly often the case when the “lose-lose” behavior involves threats, commitments, demonstrating willingness to carry through, etc.
Second of all, “someone who panics at the idea they might give the enemy too much” implies, or at least leaves open, the possibility that the ultimate concern is losing something ultimately valuable that is being given, rather than the ultimate goal being the defeat of the enemies. Likewise “demand extremely lopsided treaties if they’re willing to negotiate at all”, which implies strongly that they are seeking something other than the defeat of foes.
When someone considers themselves enlightened for saying “Oh, I’m not like my friends. They want them all to die. I just want them to go away and leave us alone.”
One point of mine is that this “enlightened” statement may actually be the extrapolated volition of even those who think they “want them all to die”. It’s pretty clear how for the “enlightened” person, the unenlightened value set could be instrumentally useful.
Most of all, war was characterized as being something that had the ultimate/motivating goal of defeating enemies. I object that it isn’t. But please recognize that when I ask for examples of war ever being driven by the ultimate goal of defeating enemies, I am asking for far less than what the original claim requires: showing instances in which wars followed the pattern would only be the beginning of showing that war in general is characterized by that goal.
I similarly would protest if someone said “the result of addition is the production of prime numbers, it is the defining characteristic of addition”. I would in that case not ask for counterexamples, but would use other methods to show that no, that isn’t a defining characteristic of addition nor is it the best way to talk about addition. Of course, some addition does result in prime numbers.
I agree there could be such a war, but I don’t know that there have ever been any, and highlighting this point is an attempt to at least show that any serious doubt can only be about whether war ever is characterized by having the ultimate goal of defeating enemies; there can be no doubt that war in general does not have as its motivating goal the defeat of one’s enemies.
I am aware of ignoring threats, using uncompromisable principles to get an advantage in negotiations, breaking your receiver to decide on a meeting point, breaking your steering wheel to win at Chicken, etc. I am also aware of the theorem that says even if there is a mutually beneficial trade, there are cases where selfish rational agents refuse to trade, and that the theorem does not go away when the currency they use is thousands of lives. I still claim that the type of war I’m talking about doesn’t stem from such calculations; that people on side A are willing to trade a death on side A for a death on side B, as evidenced by their decisions, knowing that side B is running the same algorithm.
A non-war example is blood feuds; you know that killing a member of family B who killed a member of family A will only lead to perpetuating the feud, but you’re honor-bound to do it. Now, the concept of honor did originate from needing to signal a commitment to ignore status extortion, and (in the absence of relatively new systems like courts of law) unilaterally backing down would hurt you a lot—but honor acquired a value of its own, independently of these goals. (If you doubt it: when France tried to ban duels and encourage trials, it used a court composed of war heroes who would testify that the plaintiff wasn’t dishonourable for refusing to duel.)
Second of all, “someone who panics at the idea they might give the enemy too much” implies, or at least leaves open, the possibility that the ultimate concern is losing something ultimately valuable that is being given, rather than the ultimate goal being the defeat of the enemies.
Plausible, but not true of the psychology of this particular case.
Likewise “demand extremely lopsided treaties if they’re willing to negotiate at all”, which implies strongly that they are seeking something other than the defeat of foes.
Well obviously they aren’t foe-deaths-maximizers. It’s just that they’re willing to trade off a lot of whatever-they-went-to-war-for-at-first in order to annoy the enemy.
One point of mine is that this “enlightened” statement may actually be the extrapolated volition of even those who think they “want them all to die”.
The person who said that was talking about a war where it’s quite unrealistic to think any side would go away (as with all wars over inhabited territory). Genociding the other side would be outright easier.
war was characterized as being something that had the ultimate/motivating goal of defeating enemies. I object that it isn’t
Agree it isn’t. I don’t even think anyone starts a war with that in mind—war is typically a game of Chicken. I’m pointing out a failure that leads from “I’m going to instill my supporters with an irrational burning hatred of the enemy, so that I can’t back down, so that they have to” to “I have an irrational burning hatred of the enemy! I’ll never let them back down, that’d let them off too easily!”.
I agree there could be such a war, but I don’t know that there have ever been any
Care to guess which war in particular I was thinking of? (By PM if it’s too political.) I think it applies to any entrenched conflict where the two sides have identified as enemies of each other for several generations, but I do have a prototype. Hints:
The “enlightened” remark won’t help, it was in a (second-hand, but verbatim quote) personal conversation.
The politician will.
The “personal conversation” and “political” bits indicate it can’t be too old.
Plausible, but not true of the psychology of this particular case.
I’ll go along, but don’t forget my original point was that this psychology does not universally characterize war.
Well obviously they aren’t foe-deaths-maximizers. It’s just that they’re willing to trade off a lot of whatever-they-went-to-war-for-at-first in order to annoy the enemy.
Good point, you are right about that.
The person who said that was talking about a war where it’s quite unrealistic to think any side would go away (as with all wars over inhabited territory). Genociding the other side would be outright easier.
I don’t understand what you mean to imply by this. It may still be useful to be hateful and think genocide is an ultimate goal. If one is unsure whether it is better to swerve left or swerve right to avoid an accident, ignorant conviction that only swerving right can save you may be more useful than true knowledge that swerving right is the better bet to save you. Even if the indifferent person personally favored genocide and it was optimal in a sense, such an attitude would be more common among hateful people.
Agree it isn’t. I don’t even think anyone starts a war with that in mind—war is typically a game of Chicken. I’m pointing out a failure that leads from “I’m going to instill my supporters with an irrational burning hatred of the enemy, so that I can’t back down, so that they have to” to “I have an irrational burning hatred of the enemy! I’ll never let them back down, that’d let them off too easily!”.
Hmm I think it’s enough for me if no one ever starts a war with that in mind, even if my original response was broader than that. Then at some point in every war, defeating the enemy is not an ultimate goal. This sufficiently disentangles “defeat of the enemy” from war and shows they are not tightly associated, which is what I wanted to say.
The “enlightened” remark won’t help, it was in a (second-hand, but verbatim quote) personal conversation.
I’m puzzled as to why you thought it would help, if first hand.
The politician will.
When (a substantial, influential fraction of the populations of) two countries hate each other so much that they accept large costs to inflict larger costs on the enemy, demand extremely lopsided treaties if they’re willing to negotiate at all, and have runaway “I hate the enemy more than you!” contests among themselves.
When a politician in one country who’s willing to negotiate somewhat more is killed by someone who panics at the idea they might give the enemy too much.
“Too much” included weapons and...I’m not seeing the hate.
“The word “peace” is, to me, first of all peace within the nation. You must love your [own people] before you can love others. The concept of peace has been turned into a destructive instrument with which anything can be done. I mean, you can kill people, abandon people [to their fate], close Jews into ghettos and surround them with Arabs, give guns to the army [Palestinian Police], establish a [Palestinian] army, and say: this is for the sake of peace. You can release Hamas terrorists from prison, free murderers with blood on their hands, and everything in the framework of peace.
“It wasn’t a matter of revenge, or punishment, or anger, Heaven forbid, but what would stop [the Oslo process],” he told the authors. “I thought about it a lot and understood that if I took Rabin down, that’s what would stop it.”
“What about the tragedy you caused your family?” he was asked.
“My considerations were that in the long run, my family would also be saved. I mean, if [the peace process] continued, my family would be ruined too. Do you understand what I’m saying? The whole country would be ruined. I thought about this for two years, and I calculated the possibilities and the risks. If I hadn’t done it, I would feel much worse. My deed will be understood in the future. I saved the people of Israel from destruction.”
I don’t understand what you mean to imply by this.
That wanting to be left alone is an unreasonable goal.
I’m puzzled as to why you thought it would help, if first hand.
I don’t.
Yeah, that was easy. :)
Your link is paywalled, though the text can be found easily elsewhere.
I’m… extremely surprised. I have read stuff Amir said and wrote, but I haven’t read this book. I have seen other people exhibit the hatred I speak of, and I sorta assumed it fit in with the whole “omg he’s giving our land to enemies gotta kill him” thing. It does involve accepting only very stringent conditions for peace, but I completely misunderstood the psychology… so he really murdered someone out of a cold sense of duty. I thought he just thought Rabin was a bad guy and looked for a fancy Hebrew word for “bad guy” as an excuse to kill him, but he was entirely sincere. Yikes.
That wanting to be left alone is an unreasonable goal.
I’m not sure what “left alone” means, exactly. I think I disagree with some plausible meanings and agree with others.
have runaway “I hate the enemy more than you!” contests among themselves
I think the Israeli feeling towards Arabs is better characterized as “I just want them to go away and leave us alone,” and if you asked this person’s friends they would deny hating and claim “I just want them to go away and leave us alone,” possibly honestly, possibly truthfully.
It does involve accepting only very stringent conditions for peace,
I think different segments of Israeli society have different non-negotiable conditions and weights for negotiable ones, and only the combination of them all is so inflexible. One can say about any subset that, granted the world as it is, including other segments of society, their demands are temporally impossible to meet from resources available.
Biblical Israel did not include much of modern Israel, including coastal and inland areas surrounding Gaza, coastal areas in the north, and the desert in the south. It did include territory not part of modern Israel, the areas surrounding the Golan and areas on the east bank of the Jordan river, and its core was the land on the west bank of the Jordan river. It would not be at all hard to induce the Israeli right to give up on acquiring southeast Syria, etc., even though it was once biblical Israel. Far harder is having them accede to losing entirely and being evicted from the land where Israel has political and military control, where the biblical states stood, and where they are a minority population.
It might not be difficult to persuade the right to make many concessions the Israeli left or other countries would never accept. Examples include “second class citizenship” in the colloquial sense i.e. permanent non-citizen metic status for non-Jews, paying non-Jews to leave, or even giving them a state in what was never biblical Israel where Jews now live and evicting Jews resident there, rather than give non-Jews a state where they now are the majority population in what was once biblical Israel. The left would not look kindly upon such a caste system, forced transfer, soft genocide of paying a national group to disperse, or evicting majority populations to conform to biblical history.
I think it is only the Israeli right+Israeli left conditions for peace that are so stringent, and so I reject the formulation “it does involve accepting only very stringent conditions for peace” as a characterization of either the Israeli left or right, though not them in combination. To say it of the right pretends liberal conclusions (that I happen to have) are immutable.
I think different segments of Israeli society have different non-negotiable conditions and weights for negotiable ones, and only the combination of them all is so inflexible.
Mostly agreed, though I don’t think it’s the right way of looking at the problem—you want to consider all the interactions between the demands of each Israeli subgroup (also, groups of Israel supporters abroad) and the demands of each Palestinian subgroup (also, surrounding Arab countries).
I reject the formulation “it does involve accepting only very stringent conditions for peace” as a characterization of either the Israeli left or right
I meant just Yigal Amir. I’m pretty sure the guy wasn’t particularly internally divided.
Probably, but one ought to consider what policies he would have endured without meeting them with vigilante violence. I may have the most irrevocable possible opposition to, say, the stimulus bill’s destruction of inefficient car engines (replacing an engine being even less efficient by every metric than continuing to run the old one, a crude confluence of the broken window fallacy and lost purposes), but no amount of that would make me kill anybody.
I understand where you’re coming from– indeed, the way you’re imagining what an AI would do is fundamentally ingrained in human minds, and it can be quite difficult to notice the strong form of anthropomorphism it entails.
Scattered across Less Wrong are the articles that made me recognize and question some relevant background assumptions; the references in Fake Fake Utility Functions (sic) are a good place to begin.
EDITED TO ADD: In particular, you need to stop thinking of an AI as acting like either a virtuous human being or a vicious human being, and imagining that we just need to prevent the latter. Any AI that we could program from scratch (as opposed to uploading a human brain) would resemble any human far less in xer thought process than any two humans resemble each other.
Thanks for the links. I’ll try to make time to check them out more closely.
I had previously skimmed a bunch of lesswrong content and didn’t find anything that dissuaded me from the Asimov’s Laws++ idea. I was encouraged by the first post in the Metaethics Sequence where Eliezer warns about not “trying to oversimplify human morality into One Great Moral Principle.” The law/ethics corpus idea certainly doesn’t do that!
RE: your first and final paragraphs: If I had to characterize my thoughts on how AIs will operate, I’d say they’re likely to be eminently rational. Certainly not anthropomorphized as virtuous or vicious human beings. They will crank the numbers, follow the rules, run the simulations, do the math, play the odds as only machines can. Probably (hopefully?) they’ll have little of the emotional/irrational baggage we humans have been selected to have. Given that, I don’t see much motivation for AIs to fixate on gaming the system. They should be fine with following and improving the rules as rational calculus dictates, subject to the aforementioned checks and balances. They might make impeccable legislators, lawyers, and judges.
I wonder if this solution was dismissed too early by previous analysts due some kind of “scale bias?” The idea of having only 3 or 4 or 5 (Asimov) Laws for FAI is clearly flawed. But scale that to a few hundred thousand or a million, and it might work. No?
Motivation? It’s not as if most AIs would have a sense that gaming a rule system is “fun”, but rather it would be the most efficient way to achieve its goals. Human beings don’t usually try to achieve one of their consciously stated goals with maximum efficiency, at any cost, to an unbounded extent. That’s because we actually have a fairly complicated subconscious goal system which overrides us when we might do something too dumb in pursuit of our conscious goals. This delicate psychology is not, in fact, the only or the easiest way one could imagine to program an artificial intelligence.
Here’s a fictional but still useful idea of a simple AI; note that no matter how good it becomes at predicting consequences and at problem-solving, it will not care that the goal it’s been given is a “stupid” one when pursued at all costs.
To take a less fair example, Lenat’s EURISKO was criticized for finding strategies that violated the ‘spirit’ of the strategy games it played- not because it wanted to be a munchkin, but simply because that was the most efficient way to succeed. If that AI had been in charge of an actual military, giving it the wrong goals might have led to it cleverly figuring out the strategy like killing its own civilians to accomplish a stated objective- not because it was “too dumb”, but because its goal system was too simple.
For this reason, giving an AI simple goals but complicated restrictions seems incredibly unsafe, which is why SIAI’s approach is figuring out the correct complicated goals.
Tackling FAI by figuring out complicated goals doesn’t sound like a good program to me, but I’d need to dig into more background on it. I’m currently disposed to prefer “complicated restrictions,” or more specifically this codified ethics/law approach.
In your example of a stamp collector run amok, I’d say it’s fine to give an agent the goal of maximizing the number of stamps it collects. Given an internal world model that includes the law/ethics corpus, it should not hack into others’ computers, steal credit card numbers, and appropriate printers to achieve its goal. And if it does (a) Other agents should array against it to prevent the illegal behaviors, and (b) It will be held accountable for those actions.
The EURISKO example seems better to me. The goal of war (defeat one’s enemies) is particularly poignant and much harder to ethically navigate. If the generals think sinking their own ships to win the battle/war is off limits they may have to write laws/rules that forbid it. The stakes of war are particularly high and figuring out the best (ethical?) rules is particularly important and difficult. Rather than banning EURISKO from future war games given its “clever” solutions, it would seem the military could continue to learn from it and amend the laws as necessary. People still debate whether Truman dropping the bomb on Hiroshima was the right decision. Now there’s some tough ethical calculus. Would an ethical AI do better or worse?
Legal systems are what societies currently rely on to protect public liberties and safety. Perhaps an SIAI program can come up with a completely different and better approach. But in lieu of that, why not leverage Law? Law = Codified Ethics.
Again, it’s not only about having lots of rules. More importantly it’s about the checks and balances and enforcement the system provides.
When they work well, human legal systems work because they are applied only to govern humans. Dealing with humans and predicting human behavior is something that humans are pretty good at. We expect humans to have a pretty familiar set of vices and virtues.
Human legal systems are good enough for humans, but simply are not made for any really alien kind of intelligence. Our systems of checks and balances are set up to fight greed and corruption, not a disinterested will to fill the universe with paperclips.
I submit that current legal systems (or something close) will apply to AIs. And there will be lots more laws written to apply to AI-related matters.
It seems to me current laws already protect against rampant paperclip production. How could an AI fill the universe with paperclips without violating all kinds of property rights, probably prohibitions against mass murder (assuming it kills lots of humans as a side effect), financial and other fraud to aquire enough resources, etc. I see it now: some DA will serve a 25,000 count indictment. That AI will be in BIG trouble.
Or say in a few years technology exists for significant matter transmutation, highly capable AIs exist, one misguided AI pursues a goal of massive paperclip production, and it thinks it found a way to do it without violating existing laws. The AI probably wouldn’t get past converting a block or two in New Jersey before the wider public and legislators wake up to the danger and rapidly outlaw that and related practices. More likely, technologies related to matter transmutation will be highly regulated before an episode like that can occur.
I have no idea myself, but if I had the power to exponentially increase my intelligence beyond that of any human, I bet I could figure something out.
The law has some quirks. I’d suggest that any system of human law necessarily has some ambiguities, confusions and, internal contradictions. Laws are composed largely of leaky generalizations. When the laws regulate mere humans, we tend to get by, tolerating a certain amount of unfairness and injustice.
For example, I’ve seen a plausible argument that “there is a 50-square-mile swath of Idaho in which one can commit felonies with impunity. This is because of the intersection of a poorly drafted statute with a clear but neglected constitutional provision: the Sixth Amendment’s Vicinage Clause.”
There’s also a story about Kurt Gödel nearly blowing his U.S. citizenship hearing by offering his thoughts on how to hack the U.S. Constitution to “allow the U.S. to be turned into a dictatorship.”
After reading that line I checked the date of the post to see if perhaps it was from 2007 or earlier.
Can you think of an instance where defeat of one’s enemies was more than an instrumental goal and was an ultimate goal?
Yes. When (a substantial, influential fraction of the populations of) two countries hate each other so much that they accept large costs to inflict them larger costs, demand extremely lopsided treaties if they’re willing to negotiate at all, and have runaway “I hate the enemy more than you!” contests among themselves. When a politician in one country who’s willing to negotiate somewhat more is killed by someone who panics at the idea they might give the enemy too much. When someone considers themselves enlightened for saying “Oh, I’m not like my friends. They want them all to die. I just want them to go away and leave us alone.”
First of all, it’s not clear that individual apparently non-Pareto-optimal actions in isolation are evidence of irrationality or non-Pareto-optimal behavior on a larger scale. This is particularly often the case when the “lose-lose” behavior involves threats, commitments, demonstrating willingness to carry through, etc.
Second of all, “someone who panics at the idea they might give the enemy too much” implies, or at least leaves open, the possibility that the ultimate concern is losing something ultimately valuable that is being given, rather than the ultimate goal being the defeat of the enemies. Likewise “demand extremely lopsided treaties if they’re willing to negotiate at all”, which implies strongly that they are seeking something other than the defeat of foes.
One point of mine is that this “enlightened” statement may actually be the extrapolated volition of even those who think they “want them all to die”. It’s pretty clear how for the “enlightened” person, the unenlightened value set could be instrumentally useful.
Most of all, war was characterized as something whose ultimate/motivating goal is the defeat of enemies. I object that it isn’t, but please recognize that when I ask for examples of war ever being driven by the ultimate goal of defeating enemies, I am going far beyond what I would need to assert to show that. Showing instances in which wars followed the pattern would only be the beginning of showing that war in general is characterized by that goal.
I similarly would protest if someone said “the result of addition is the production of prime numbers, it is the defining characteristic of addition”. I would in that case not ask for counterexamples, but would use other methods to show that no, that isn’t a defining characteristic of addition nor is it the best way to talk about addition. Of course, some addition does result in prime numbers.
I agree there could be such a war, but I don’t know that there have ever been any, and highlighting this point is an attempt to at least show that any serious doubt can only be about whether war ever is characterized by having the ultimate goal of defeating enemies; there can be no doubt that war in general does not have as its motivating goal the defeat of one’s enemies.
I am aware of ignoring threats, using uncompromisable principles to get an advantage in negotiations, breaking your receiver to decide on a meeting point, breaking your steering wheel to win at Chicken, etc. I am also aware of the theorem that says even if there is a mutually beneficial trade, there are cases where selfish rational agents refuse to trade, and that the theorem does not go away when the currency they use is thousands of lives. I still claim that the type of war I’m talking about doesn’t stem from such calculations; that people on side A are willing to trade a death on side A for a death on side B, as evidenced by their decisions, knowing that side B is running the same algorithm.
A non-war example is blood feuds; you know that killing a member of family B who killed a member of family A will only lead to perpetuating the feud, but you’re honor-bound to do it. Now, the concept of honor did originate from needing to signal a commitment to ignore status extortion, and (in the absence of relatively new systems like courts of law) unilaterally backing down would hurt you a lot—but honor acquired a value of its own, independently from these goals. (If you doubt it, when France tried to ban duels and encourage trials, it used a court composed of war heroes who testified the plaintiff wasn’t dishonourable for refusing to duel.)
Plausible, but not true of the psychology of this particular case.
Well obviously they aren’t foe-deaths-maximizers. It’s just that they’re willing to trade off a lot of whatever-they-went-to-war-for-at-first in order to annoy the enemy.
The person who said that was talking about a war where it’s quite unrealistic to think any side would go away (as with all wars over inhabited territory). Genociding the other side would be outright easier.
Agree it isn’t. I don’t even think anyone starts a war with that in mind—war is typically a game of Chicken. I’m pointing out a failure that leads from “I’m going to instill my supporters with an irrational burning hatred of the enemy, so that I can’t back down, so that they have to” to “I have an irrational burning hatred of the enemy! I’ll never let them back down, that’d let them off too easily!”.
Care to guess which war in particular I was thinking of? (By PM if it’s too political.) I think it applies to any entrenched conflict where the two sides identify as enemies of each other and have done so for several generations, but I do have a prototype. Hints:
The “enlightened” remark won’t help, it was in a (second-hand, but verbatim quote) personal conversation.
The politician will.
The “personal conversation” and “political” bits indicate it can’t be too old.
It’s not particularly hard to guess.
I’ll go along, but don’t forget my original point was that this psychology does not universally characterize war.
Good point, you are right about that.
I don’t understand what you mean to imply by this. It may still be useful to be hateful and think genocide is an ultimate goal. If one is unsure whether it is better to swerve left or swerve right to avoid an accident, ignorant conviction that only swerving right can save you may be more useful than true knowledge that swerving right is the better bet to save you. Even if the indifferent person personally favored genocide and it was optimal in a sense, such an attitude would be more common among hateful people.
Hmm I think it’s enough for me if no one ever starts a war with that in mind, even if my original response was broader than that. Then at some point in every war, defeating the enemy is not an ultimate goal. This sufficiently disentangles “defeat of the enemy” from war and shows they are not tightly associated, which is what I wanted to say.
I’m puzzled as to why you thought it would help, if first hand.
“Too much” included weapons and… I’m not seeing the hate.
That wanting to be left alone is an unreasonable goal.
I don’t.
Yeah, that was easy. :)
Your link is paywalled, though the text can be found easily elsewhere.
I’m… extremely surprised. I have read stuff Amir said and wrote, but I haven’t read this book. I have seen other people exhibit the hatred I speak of, and I sorta assumed it fit in with the whole “omg he’s giving our land to enemies gotta kill him” thing. It does involve accepting only very stringent conditions for peace, but I completely misunderstood the psychology… so he really murdered someone out of a cold sense of duty. I thought he just thought Rabin was a bad guy and looked for a fancy Hebrew word for “bad guy” as an excuse to kill him, but he was entirely sincere. Yikes.
I’m not sure what “left alone” means, exactly. I think I disagree with some plausible meanings and agree with others.
I think the Israeli feeling towards Arabs is better characterized as “I just want them to go away and leave us alone,” and if you asked this person’s friends they would deny hating and claim “I just want them to go away and leave us alone,” possibly honestly, possibly truthfully.
I think different segments of Israeli society have different non-negotiable conditions and weights for negotiable ones, and only the combination of them all is so inflexible. One can say about any subset that, granted the world as it is, including other segments of society, their demands are temporally impossible to meet from resources available.
Biblical Israel did not include much of modern Israel, including coastal and inland areas surrounding Gaza, coastal areas in the north, and the desert in the south. It did include territory not part of modern Israel, the areas surrounding the Golan and areas on the east bank of the Jordan river, and its core was the land on the west bank of the Jordan river. It would not be at all hard to induce the Israeli right to give up on acquiring southeast Syria, etc., even though it was once biblical Israel. Far harder is having them accede to losing entirely and being evicted from the land where Israel has political and military control, where the biblical states stood, and where they are a minority population.
It might not be difficult to persuade the right to make many concessions the Israeli left or other countries would never accept. Examples include “second class citizenship” in the colloquial sense i.e. permanent non-citizen metic status for non-Jews, paying non-Jews to leave, or even giving them a state in what was never biblical Israel where Jews now live and evicting Jews resident there, rather than give non-Jews a state where they now are the majority population in what was once biblical Israel. The left would not look kindly upon such a caste system, forced transfer, soft genocide of paying a national group to disperse, or evicting majority populations to conform to biblical history.
I think it is only the Israeli right+Israeli left conditions for peace that are so stringent, and so I reject the formulation “it does involve accepting only very stringent conditions for peace” as a characterization of either the Israeli left or right, though not them in combination. To say it of the right pretends liberal conclusions (that I happen to have) are immutable.
Mostly agreed, though I don’t think it’s the right way of looking at the problem—you want to consider all the interactions between the demands of each Israeli subgroup (also, groups of Israel supporters abroad) and the demands of each Palestinian subgroup (also, surrounding Arab countries).
I meant just Yigal Amir. I’m pretty sure the guy wasn’t particularly internally divided.
I had meant to imply that
Probably, but one ought to consider what policies he would endure that he would not have met with vigilante violence. I may have the most irrevocable possible opposition to, say, the stimulus bill’s destruction of inefficient car engines when replacing the engines would be even less efficient by every metric than continuing to run the old engine, a crude confluence of the broken window fallacy and lost purposes, but no amount of that would make me kill anybody.