Yep. But the universe is huge, and it will be around for a long time, which, in my mind, is an even stronger reason to get technological progress right and not destroy ourselves. That’s why I consider conflict avoidance to be a higher priority than speed of technological advance.
Indeed; however, this depends on one’s utility function—many people value the people who are alive now to an extent that cannot be compensated for by future lives, even if there may be many orders of magnitude more people in the future. If everyone could co-ordinate and decide to develop disruptive technologies slowly, then the future would be a lot safer, but realistically this is unlikely to happen in most cases.
AGI might be an exception, as it might be such a hard problem that anyone who might solve it will understand the danger it poses. There’s no first-mover advantage to being the first to develop Clippy. But genetic engineering is far simpler and far safer, and since some actor is bound to develop it, it’s in each actor’s interest to develop it first.
So I see what you mean in principle, but in practice I think the co-ordination problem is too hard.
Sure. One thing I might mention to someone with that utility function is that if humanity gets destroyed by an enhanced psychopath, that will probably happen right around the same time that enhanced scientists would be working to speed technological progress. So even someone with a relatively myopic utility function will in many cases favor caution.
Clearly there are a lot of people very interested in the ethics of genetic enhancement. The current consensus among the scientific community in the West seems to be that enhancing kids is totally unethical, and gene modification techs should only be used to fix genetic diseases. In other words, currently in the West at least, there is a very strong (and effective, within the West) attempt being made to enforce coordination on this problem.
I think the current coordination strategy is a fairly hopeless one, for reasons I outlined in my post. All I’m trying to do is improve on it. Do you think I’ve succeeded there? Can you think of an even better coordination strategy than mine? The thing I like about my idea is that it doesn’t require total coordination. It just requires that some things get discovered before other things, which is something that individual scientists can affect.
I agree that affecting the future is hard. But from my perspective (and the perspective of many other people who do think future lives are very important), it’s worth attempting even if it’s hard. If you’re the kind of person who gives up when faced with hard challenges, that’s fine; I guess we’re just different in that way. “Shut up and do the impossible” and all that—the logic is similar to that of FAI. (Challenges can be exciting; easy video games aren’t always very fun.) And in some cases things can be surprisingly possible (for example, it’s surprisingly easy to find the email addresses of prominent scientists online, and they also have office hours).
I appreciate specific criticisms but if you’re just going to be generically demoralizing, I don’t usually find myself getting a lot out of that.
Why are you calling your suggestions a “coordination strategy”? As far as I can see you are suggesting top-down policies enforced by the usual state enforcement mechanisms. You are talking in the language of “require”, “forbid”, “regulate”—that’s not coordination, that’s the usual command-and-control.
Connotations again...
If the cooperative thing to do is to have a nice medium-height kid, and the selfish thing to do is to have a mean tall one, then in principle you can “command-and-control” people to cooperate. Standard prisoner’s dilemma scenario.
I didn’t think about the legal part very hard; it was an off-the-cuff idea. Feel free to come up with something better or explain why laws are unnecessary.
For example, maybe people will choose the benevolence of their kid in far mode and make them nice because that’s socially desirable and an easier job for them as a parent.
LW is a biased sample but it’s better than nothing. I would prefer to have a kid that’s...
[pollid:963]
No. You can force people to do something you want. That’s not cooperation at all, that’s just plain-vanilla coercion.
I’m using the word “cooperate” in the technical sense of “cooperate in a prisoner’s dilemma”. In this sense it’s possible for an outside force to coerce cooperation, in the same way that e.g. the government forces your neighbor to cooperate rather than defect and steal your stuff, or anti-doping agencies force athletes to cooperate in the prisoner’s dilemma of whether to use performance-enhancing drugs.
For the technical sense of “cooperate in a prisoner’s dilemma” you need to have a prisoner’s dilemma situation to start with. Once you coerce cooperation you have effectively changed the payoffs in the matrix—the “defect” cell now has a huge negative number in it, that’s what coercion means. It’s not a prisoner’s dilemma any more.
Huh? Why do you think I’m in a prisoner’s dilemma situation with my neighbour?
If you make your child taller, your child is better off (+competitive advantages, -other disadvantages of being taller) and your neighbor’s child is worse off (-competitive advantages).
If your neighbor makes his child taller, his child is better off and yours is worse off.
If you both make your children taller, the competitive advantages cancel out and you each have only the disadvantages.
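To make the structure of this scenario concrete, here is a minimal sketch with invented payoff numbers (the +2 for a competitive edge and −1 for the downsides of extra height are assumptions for illustration only, not anything claimed in the thread):

```python
# Toy payoff matrix for the "make your kid taller" scenario above.
# Purely illustrative numbers: +2 for a competitive advantage, -1 for the
# intrinsic downsides of extra height. Each entry is (your payoff, neighbor's payoff).
payoffs = {
    ("keep", "keep"): (0, 0),
    ("tall", "keep"): (2 - 1, -2),   # you gain the edge, pay the height cost
    ("keep", "tall"): (-2, 2 - 1),
    ("tall", "tall"): (-1, -1),      # edges cancel, both keep the downsides
}

def best_response(opponent_choice):
    """Return the choice that maximizes your own payoff given the neighbor's choice."""
    return max(["keep", "tall"], key=lambda mine: payoffs[(mine, opponent_choice)][0])

for other in ["keep", "tall"]:
    print(f"If neighbor plays {other!r}, your best response is {best_response(other)!r}")
# Both lines print 'tall': defection dominates, yet (tall, tall) is worse for both
# than (keep, keep) -- the prisoner's dilemma structure.
```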
Being tall is not a disadvantage even if you take away “competitive advantages” (normally tall, not freakishly tall). An arms race is a different situation than a prisoner’s dilemma.
The original claim was that the neighbor might “steal your stuff” which isn’t a prisoner’s dilemma either.
And most importantly, I do have neighbors. I don’t feel I am in a prisoner’s dilemma situation with them and I suspect they don’t feel it either.
Because the government altered the payoff matrix making cooperation individually preferable to defection.
Imagine you were a hunter-gatherer: within your tribe, a system of reputation and customs, with associated punishments for defectors, tended to enforce cooperation, but tribes occupying neighboring areas typically recognized no social obligations towards each other, and as a result all encounters were tense and very often violent; warfare and marauding were endemic.
With a modern government you can interact with most strangers from your country or most other countries with a reasonable expectation that the interaction will be peaceful and productive.
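A minimal sketch of the point about altering the payoff matrix, using the textbook two-player payoffs and an invented fine; the specific numbers are assumptions, not anything claimed in the thread:

```python
# Sketch of "the government altered the payoff matrix": a fixed fine on defection,
# large enough to make cooperation the individually best choice. Illustrative numbers only.
base = {  # (my move, other's move) -> my payoff, classic PD ordering (5 > 3 > 1 > 0)
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def with_fine(payoff_table, fine):
    """Subtract a fixed punishment from every outcome in which I defect."""
    return {(me, you): p - (fine if me == "D" else 0)
            for (me, you), p in payoff_table.items()}

def dominant_move(payoff_table):
    """Return 'D' or 'C' if that move is at least as good against every reply, else None."""
    d_better = all(payoff_table[("D", you)] >= payoff_table[("C", you)] for you in "CD")
    c_better = all(payoff_table[("C", you)] >= payoff_table[("D", you)] for you in "CD")
    return "D" if d_better else "C" if c_better else None

print(dominant_move(base))                # 'D' -- defection dominates in the raw game
print(dominant_move(with_fine(base, 3)))  # 'C' -- with a fine of 3, cooperation dominates
```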
It wasn’t a prisoner’s dilemma to start with. Hunter-gatherers do not live in constant prisoner’s dilemma situations.
I don’t get LW’s obsession with the prisoner’s dilemma. It’s a very specific kind of situation, rare in normal life. If you have a choice between cooperation and non-cooperation that does not automatically mean you’re in a prisoner’s dilemma.
The prisoner’s dilemma is the simplest idealized form of all scenarios where a group of agents prefer that everyone cooperates with everyone else rather than everyone defects against everyone else, but each individual agent, whatever the other agents do, has an incentive to defect.
There are other common types of scenarios, of course. In zero-sum scenarios cooperation is not possible: a hunter and their prey can’t cooperate to split calories between each other in a way that benefits both.
In other scenarios, cooperation is trivially the best choice: if Alice and Bob want to move a heavy object from point A to point B and neither is strong enough to move it alone, but they can move it with their combined strength, then they have an incentive to cooperate, and neither has an incentive to defect, since if one of them defects then the heavy object doesn’t reach point B.
These scenarios are trivial from a game-theoretical perspective. The simplest and arguably the most practically relevant scenario where coordination is beneficial but can’t be trivially achieved is the prisoner’s dilemma.
Stag hunts (which are not the same as the hunter/prey scenarios discussed elsewhere in this thread) are another theoretically nontrivial category of coordination games with interesting social/behavioral implications—arguably more so than the prisoner’s dilemma, though that probably depends on what kind of life you happen to find yourself in. I don’t know why they don’t get much exposure on LW, but it might have something to do with the fact that they don’t have the PD’s historical links to AI.
I agree that the Stag hunt is theoretically and practically interesting, but I would say that it is not as interesting as the Prisoner’s dilemma.
In order to “solve” a Stag hunt (in the sense of realizing the Pareto-optimal outcome), all you need is a communication channel between the players; even a one-shot, one-way channel suffices.
In a Prisoner’s dilemma, communication is not enough, you need either to iterate the game or to modify the payoff matrix.
There are other games that have significant practical applicability, such as Chicken/Volunteer’s dilemma and Ultimatum.
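A quick best-response check makes the contrast described above concrete: in a Stag hunt, a credible announcement is enough to make cooperating the best reply, while in a Prisoner’s dilemma it is not. The payoff numbers below are invented for illustration:

```python
# Illustrative payoffs only. In the stag hunt, cooperating ("S" = hunt stag) is the best
# reply *if* the other player cooperates, so a credible "I'll hunt stag" message suffices.
# In the PD, defecting is the best reply no matter what the other player announces.
stag_hunt = {("S", "S"): 4, ("S", "H"): 0, ("H", "S"): 3, ("H", "H"): 3}
prisoners = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def best_reply(game, moves, their_move):
    return max(moves, key=lambda mine: game[(mine, their_move)])

print(best_reply(stag_hunt, "SH", "S"))  # 'S': told the other hunts stag, join them
print(best_reply(prisoners, "CD", "C"))  # 'D': even a promise of cooperation invites defection
```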
I’m not aware of these links, do you have a reference?
Not offhand, but the PD (specifically, the iterated version) is a classic exercise to motivate prediction and interaction between software agents. I wrote a few in school, though I was better at market simulations. Believe LW ran a PD tournament at some point, too, though I didn’t participate in that one.
I believe it’s because it is at the same time very simple to explain and very interesting.
I think they ran two variations of program-equilibrium PD. I participated in the last one.
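For readers who haven’t seen the exercise, here is a minimal iterated-PD sketch of the kind described above, with two toy agents (tit-for-tat and always-defect) and invented payoffs; it is only a sketch, not the code used in any LW tournament:

```python
# Minimal iterated prisoner's dilemma with two simple strategies. Illustrative payoffs.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5), ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1]

def always_defect(history):
    return "D"

def play(agent_a, agent_b, rounds=10):
    history_a, history_b = [], []   # each agent sees the *other* agent's past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = agent_a(history_a), agent_b(history_b)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        history_a.append(move_b)
        history_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation every round
print(play(tit_for_tat, always_defect))  # (9, 14): one exploited round, then mutual defection
```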
I understand that the prisoner’s dilemma is interesting and non-trivial from the game-theoretic perspective. That does not contradict my point that it’s rare in normal life and that most choices people actually make are not in this framework.
Unless the object weighs exactly enough that it requires both of their full strength to move, they both have an incentive to defect (not to put their full effort in, and let the other work harder). Mutual defection then results in the object not reaching point B.
Most scenarios involve some variation. Even the hunter-prey scenario: the herd or the hunters could deliberately choose a sacrifice, saving both hunters and prey from running and expending additional calories on all sides, and reducing the overall number of prey animals the hunters would need to eat. (Consider a real-life example of this—human herders and their herds. Human-herd relationships are more complex than that, but they could be modeled that way.)
Hunter A steals Hunter B’s kills/wives/whatever. Defection pays off. Cooperation always pays more overall, defection pays the defector better. “Government” in this case is tribal; we’ll kill or exile defectors. (Exile is probably the genetically preferable option, since it may result in some of your genes being spread to other tribes, assuming you share more genetics with in-tribe than with out-tribe individuals; a prisoner’s dilemma in itself.)
Pretty much every situation in real life involves some variant on the prisoner’s dilemma, almost always with etiquette, ethical, or legal prohibitions against defection.
Nonsense. First, cooperation does not always pay more, and second, the whole point of the prisoner’s dilemma is that cooperation pays each agent better, conditional on them cooperating. “Overall” is a very nebulous concept, anyway, unless you take the hard utilitarian position and start adding up utils.
If cooperation were that beneficial, unconditional cooperation would have been hardwired in our genes.
Nope, I strongly disagree. To take a trivial example, Alice doesn’t steal Bob’s car because she thinks she’ll be caught and sent to prison. Alice is NOT “cooperating” with Bob, she is reacting to incentives (in this case, threat of imprisonment) which have nothing to do with the prisoner’s dilemma.
Nonsense. Hunter A kills hunter B, takes his wives, his meat, and his cave and lives in it happily thereafter.
“Overall” means “Combining the utility-analog of both parties”, not “More utility-analog for a given party”. With only one hunter, there are fewer kills/less meat overall, at the least.
The incentives are the product of breaking the prisoner’s dilemma—the “government altered the payoff matrix” and all that. Etiquette, ethics, and law are all increasing levels of rules, and of punishments for breaking those rules, whose core purpose is to alter the payoffs for defection: from something as subtle as the placement of utensils at a dinner table to prohibit subtle threats to other guests, and less desirable seat placements as punishments for not living up to standards of etiquette, to shooting somebody for escalating a police situation one time too many in an attempt to escape punishment.
I am not a utilitarian. I don’t understand how are you going to combine the utils of both parties.
With one hunter less, there are fewer kills but fewer mouths to feed as well.
If it’s broken, it’s not a prisoner’s dilemma situation any more. If you want to argue that it exists as a counterfactual I’ll agree and point out that a great variety of things (including ravenous pink unicorns with piranha teeth) exist as a counterfactual.
I’m also not a utilitarian, and at this point you’re just quibbling over semantics rather than making any kind of coherent point. Of course you can’t combine the utils, that’s the -point- of the problem. Arguing that cooperation-defection results in the most gain for the defector is just repeating part of the problem statement of the prisoner’s dilemma.
Please, if you would, maintain the context of the conversation taking place. This gets very tedious when I have to repeat everything that was said in every previous comment. http://lesswrong.com/lw/m6b/thoughts_on_minimizing_designer_baby_drama/cdaa ← This is where this chain of conversation began. If this is your response, you’re doing nothing but conceding the point in a hostile and argumentative way.
Then I have no idea what you meant by “Cooperation always pays more overall, defection pays the defector better”—what is the “more overall” bit?
Yes, and I still don’t get LW’s obsession with it. You are just providing supporting examples by claiming that everything is PD and only the government’s hand saves us from an endless cycle of defections.
I will repeat my assertion that in real life, the great majority of choices people make are NOT in the PD context. This might or might not be different in the counterfactual anarchy case where there is no government, but in reality I claim that PD is rare and unusual.
So Lumifer, I appreciate the time you’ve taken to engage on this thread. I think the topic is an important one and it’s great to see more people discussing it. But...
I agree with OrphanWilde that you would be more pleasant to engage with if you tried to meet people halfway during discussions. Have you read Paul Graham on disagreement? The highest form of disagreement is to improve your opponent’s argument, then refute it. If we’re collaborating to figure out the truth, it’s possible for me to skip spelling out a particular point I’m making in full detail and trust that you’re a smart person and you can figure out that part of the argument. (That’s not to say that there isn’t a flaw in that part of the argument. If you understand the thrust of the argument and also notice a flaw, pointing out the flaw is appreciated.) Being forced to spell things out, especially repeatedly, can be very tedious. Assume good faith, principle of charity, construct steel men instead of straw men, etc. I wrote more on this.
You seem like a smart guy, and I appreciate the cynical perspective you have to offer. But I think I could get even more out of talking to you if you helped me make my arguments for me, e.g. the way I tried to do for you here and here. Let’s collaborate and figure out what’s true!
I value speaking plainly and clearly.
In real life (aka meatspace) I usually have to control my speech for nuances, implications, connotations, etc. It is not often that you can actually tell a fucking idiot that he is a fucking idiot.
One of the advantages of LW is that I can call a “digging implement named without any disrespect for oppressed people of color” a “spade” and be done with it. I value this advantage and use it. Clarity of speech leads to clarity of thought.
If I may make a recommendation about speaking to me, it would be useful to assume I am not stupid (most of the time, that is :-/). If I’m forcing you to “spell things out” that’s because there is a point to it which you should be able to discover after a bit of thought and just shortcut to the end. If I’m arguing with you this means I already disagree with some issue and the reason for the arguments is to figure out whether it’s a real (usually value-based) disagreement, a definition problem, or just a misunderstanding. A lot of my probing is aimed at firming up and sharpening your argument so that we can see where in that amorphous mass the kernel of contention is. I do steelman the opponents’ position, but if the steelman succeeds, I usually just agree and move to the parts where there is still disagreement or explicitly list the conditions under which the steelman works.
In arguments I mostly aim to define, isolate, and maximally sharpen the point of disagreement—because only then can you really figure out what the disagreement is about and whether it’s real or imaginary. I make no apologies for that—I think it’s good practice.
Cool, it sounds like we’re mostly on the same page about how disagreements should proceed, in theory at least. I’m a bit surprised when you say that your disagreements are usually values-based. It seems like in a lot of cases when I disagree with people it’s because we have different information, and over the course of our conversation, we share information and often converge on the same conclusion.
So maybe this is what frustrated me about our previous discussion… I think I would have appreciated a stronger pointer from you as to where our actual point of disagreement might lie. I’d rather you explain your perceived weakness in my argument rather than forcing me to discover it for myself. (Having arguments is frustrating enough without adding on a puzzle-solving aspect.) For example, if you had said something like “communism was a movement founded by people with genes for altruism, and look where that went” earlier in our discussion, I think I would have appreciated that.
If you want, try predicting how I feel about communism, then rot13 the rest of this paragraph. V guvax pbzzhavfz vf n snyfvsvrq ulcbgurfvf ng orfg. Fbpvrgl qrfvta vf n gevpxl ceboyrz, fb rzcvevpvfz vf xrl. Rzcvevpnyyl, pbzzhavfg fbpvrgvrf (bapr gurl fpnyr cnfg ivyyntr-fvmrq) qba’g frrz irel shapgvbany, juvpu vf fgebat rivqrapr gung pbzzhavfz vf n onq zbqry. V qba’g guvax jr unir n inyhrf qvfnterrzrag urer—jr frrz gb or va nterrzrag gung pbzzhavfz naq eryngrq snvyher zbqrf ner onq bhgpbzrf. Engure, V guvax jr unq na vasb qvfpercnapl, jvgu lbh univat gur vafvtug gung nygehvfz trarf zvtug yrnq gb pbzzhavfz naq zr ynpxvat vg. Gur vyyhfvba bs genafcnerapl zvtug unir orra va bcrengvba urer.
I don’t know if they are “usually” value-based, but those are the serious, unresolvable ones. If the disagreement is due to miscommunication (e.g. a definitions issue), it’s easy to figure out once you get precise. If the disagreement is about empirical reality, well, you should stop arguing and go get a look at the empirical reality. But if it’s value-based, there is not much you can do.
Besides, a lot of value-based disagreements masquerade as arguments about definitions or data.
Mea culpa. I do have a tendency to argue by questions—which I’m generally fine with—but sometimes it gets… excessive :-) I know it can be a problem.
Well, it’s 2015 and you’re an American, I think, so it’s highly unlikely you have (or are willing to admit) a liking for communism :-)
But the issue here is this: some people argue that communism failed, yes, but it was a noble and righteous dream which was doomed by imperfect, selfish, nasty people. If only the people were better (higher level of consciousness and all that), communism would work and be just about perfect.
Now, if you can genetically engineer people to be suitable for communism...
Judging by the reactions of some people in this thread, for a lot of LWers, their knowledge of game theory starts and ends with PD.
The total payoff—the combined benefits both players receive—is better. This -matters-, because it’s possible to -bribe- cooperation. So one hunter pays the other hunter meat -not- to kill him and take his wife, or whatever. Extortionate behavior is itself another level of PD that I don’t care to get into.
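A minimal sketch of the side-payment point, with invented numbers: because mutual cooperation has the larger combined payoff, part of that surplus can be transferred so that both parties end up better off than in the defection outcome.

```python
# Because the cooperative outcome has the larger combined payoff (7 vs. 5 here),
# the would-be victim can pay the would-be defector enough that both prefer cooperation.
# All numbers are invented for illustration.
defect_outcome = {"A": 5, "B": 0}   # A defects (kills/steals), B loses out; total 5
coop_outcome   = {"A": 3, "B": 4}   # both cooperate; total 7

bribe = 2.5                          # B hands A part of the cooperation surplus
bribed = {"A": coop_outcome["A"] + bribe, "B": coop_outcome["B"] - bribe}
print(bribed)                        # {'A': 5.5, 'B': 1.5}: both beat their defect_outcome payoffs
```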
Okay. This conversation? This is a PD. You’re defecting while I’m cooperating. You’re changing the goalposts and changing the conversational topic in an attempt to be right about something, violating the implicit rules of a conversation, while I’ve been polite and not calling you out on it; since this is an iterated Prisoner’s Dilemma, I can react to your defection by defecting myself. The karma system? It’s the government. It changes the payoffs. So what’s the relevance? It helps us construct better rules and plan for behaviors.
Do you also show up to parties uninvited? Yell at managers until they give in to your demands? Make shit up about people so you have something to add to conversations? Refuse to tip waitstaff, or leave subpar tips? These are all defections in variations on the Prisoner’s Dilemma, usually asymmetric variations.
And I will repeat my assertion that in this conversation, we aren’t having that discussion. It -might- matter in a counterfactual case where we were talking about whether or not PD payoff matrices are a good model for a society with a government, but your actual claim was that PD didn’t apply in the first place, not that it doesn’t apply now.
Sigh. So you’re looking at combined benefits, aka “utility-analog of both parties”, aka utils, about which you just said “of course you can’t combine the utils”.
Bullshit.
Instead of handwaving at each other, let’s define PD and then see what qualifies. I can start.
I’ll generalize PD—since we’re talking about social issues—to multiple agents (and call it GPD).
So, a prisoner’s dilemma is a particular situation that is characterized by the following:
Multiple agents (2 or more) have to make a particular choice after which they receive the payoffs.
All agents know they are in the GPD. There are no marks, patsies, or innocent bystanders.
All agents have to make a choice between the two alternatives, conventionally called cooperate (C) or defect (D). They have to make a choice—not making a choice is not an option, and neither is picking E. In some situations it doesn’t matter (when D is defined as not-C), in some it does.
All agents make their choice without knowing what other agents chose and before anyone receives the payoff.
For each agent the payoff from choosing D is known and fixed: decisions of other agents do not change it. In other words, if any agent chooses D, he is guaranteed to receive the D payoff known to him.
For each agent the payoff from choosing C varies depending on the decisions of other agents. If many other agents also chose C, the C payoff is high, more than D. If only a few other agents chose C, the C payoff is low, less than D (this is the generalization to multiple agents).
Given this definition (which is, more or less, the basis on which I am arguing in this subthread), this conversation (or any single comment) is nowhere near a PD. Nor are the great majority of real-life situations calling for a choice.
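One way to encode the conditions above is a payoff rule in which defectors get a fixed amount and a cooperator’s payoff scales with the fraction of other cooperators; the rule and the numbers below are illustrative assumptions, not part of the original definition:

```python
# Sketch of the generalized PD (GPD) conditions above, with an invented payoff rule:
# defectors get a fixed payoff, while a cooperator's payoff rises with the fraction
# of *other* agents who also cooperated.
D_PAYOFF = 1.0

def payoff(my_choice, others):
    """others: list of the other agents' choices ('C' or 'D')."""
    if my_choice == "D":
        return D_PAYOFF                  # fixed and known, regardless of what others do
    coop_fraction = others.count("C") / len(others)
    return 2.0 * coop_fraction           # above D_PAYOFF if most others cooperate, below if few do

# A few strategy profiles for 5 agents, looking at one agent's payoff:
print(payoff("C", ["C", "C", "C", "C"]))  # 2.0 -- everyone else cooperates, C beats D
print(payoff("C", ["D", "D", "D", "D"]))  # 0.0 -- nobody else cooperates, C loses to D
print(payoff("D", ["C", "C", "C", "C"]))  # 1.0 -- the D payoff never moves
```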
Chicken comes up fairly often and there mutual defection is by far the worst outcome for either party (i.e. if you knew the other guy wanted to defect, you’d cooperate).
In an even simpler case, if you are a business, trying to cooperate instead of “defecting” will get you charged with anti-trust violations.
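A minimal sketch of why Chicken differs from the PD, with invented payoffs: against a known defector the best reply is to cooperate (swerve), whereas in a PD it would still be to defect.

```python
# Illustrative Chicken payoffs: mutual defection (neither swerves) is by far the worst outcome.
chicken = {("C", "C"): 3, ("C", "D"): 1, ("D", "C"): 5, ("D", "D"): -10}

def best_reply(their_move):
    return max("CD", key=lambda mine: chicken[(mine, their_move)])

print(best_reply("D"))  # 'C': against a committed defector, swerving beats crashing
print(best_reply("C"))  # 'D': against a swerver, holding course pays best
```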
True. But challenging somebody to a Chicken-like game in the first place can be modeled as a Defection in a prisoner’s dilemma; you win if they Cooperate and refuse, and both of you are worse off if they also Defect and agree to the game.
No, it can not—in a PD you make your decision not knowing the other party’s decision. Here if you challenge, the other party already knows your choice before having to make its own.
So get a reputation for being revengeBot?
You’ve Defected, and they’ve Cooperated, the moment you issued your challenge, and they didn’t. They’re now in a disadvantageous position, and you’re in an advantageous position; their subsequent Defection is in a different game with altered payoffs, but it also qualifies as a PD. (You could, after all, Cooperate in the subsequent game, and retract your challenge.)
The Prisoner’s Dilemma is generally iterated in real life.
Actually some of the disadvantages of being tall would disappear (in the longish run) if everybody was tall. For example, if the average person was 1.90 m, cars would be designed accordingly and wouldn’t be as uncomfortable for people 1.90 m tall.
Top-down policies enforced by the usual state enforcement mechanisms are the typical way people implement coordination.
Err… No.
Top-down policies happen when voluntary coordination fails. They’re generally a sign of disagreement and mistrust: building an edifice of bureaucracy so that everyone knows exactly what they’re expected to do and giving others recourse when they fail to do it.
But voluntary coordination is hard, especially when it involves large groups of people, which is why we invented governments.
I get the idea that FAI takes more intelligence than AGI, as AGI might be able to be brute-forced by reverse-engineering the brain or by evolutionary approaches, whereas de novo AI is far harder, let alone de novo FAI. This would mean that increasing intelligence would make the world safer. I don’t see why enhanced psychopaths are more likely than enhanced empaths.
No, I’m certainly not; however, I am realistic and I do prioritise. I don’t think the risk from genetic enhancement is all that great, and indeed it may be a net positive.
Anyway, so I think that mandatory enhancement is not going to be popular. However, other ideas do seem more plausible:
One way to prepare might be differential technological development. In particular, maybe it’s possible to decrease the cost of gene editing/selection technologies while retarding advances in our knowledge of which genes contribute to intelligence.
So, this is a reasonable idea. Governments could prioritise research into stopping diseases above increasing intelligence, and indeed this is likely to be the case anyway, as this is less controversial. Increasing compassion or even docility could also be prioritised above increasing intelligence.
extend the benefits of designer babies to everyone for free regardless of their social class.
This is also a good idea. It seems inevitable that some of the rich will be early adopters before the technology is cheap enough to be made free to all. However, the cost of sequencing has been going down 5x per year, meaning that it is likely to quickly become widely available.
Overall, I would say the best strategy seems to be to take a more libertarian than authoritarian approach, but try to funnel money into researching the genetics of various antisocial personality disorders, try to make the technology free, and either don’t patent the genes or ensure that the patents don’t last that long.
I think sequencing is what lets you measure genes, not modify them.
Indeed, but I think it depends whether you used germline selection or germline modification. IIRC, in germline selection you create many embryos, sequence the genes, and select the embryo with the best genes.
Also, if the cost of sequencing goes down very fast, I would have thought this provides some evidence that the cost of modification would drop at a similar rate. Of course, there is already genetic modification of crops—do you know how that has changed in cost over time?
Good point. I don’t know about crops.
Apologies if I’ve been sounding demoralising, that’s not my intention. I think your comments on this subject are interesting, and I’ve upvoted them, but since I find I have more to say about points I disagree with than points I agree with, in general I might tend to sound more critical than I actually am.
I’ll reply to the rest of your comment later, and find something positive to say.