I already wrote a top-level comment about the original raw-text version of this, but my access logs suggested that EDITs to older comments reach only a very few people. See that comment for a bit more detail.
Pre-alpha, one hour of work. I plan to improve it.
EDIT: Here is the source code: 80 lines of Python. It produces raw text output; links and formatting are lost. It would be quite trivial to do nice and spiffy HTML output.
EDIT2: I can do HTML output now. It is nice and spiffy, but it has a CSS bug: after the fifth quote the layout falls apart. This is my first time with CSS, and I hope it is also the last. Could somebody help me with this? Thanks.
EDIT3: Bug resolved. I wrote another top-level comment about the final version, because my access logs suggested that the EDITs have reached only a very few people. Of course, an alternative explanation is that everybody who would have been interested in the HTML version already checked out the txt version. We will soon find out which explanation is correct.
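(Not the actual 80-line script, just a minimal sketch of the kind of HTML-emitting loop such a tool might use, assuming the quotes have already been extracted into author/score/text tuples; the data and class names here are invented for illustration.)

```python
# Hypothetical sketch, not the real script: wrap already-extracted quotes
# in simple HTML with a bit of CSS.
import html

def quotes_to_html(quotes):
    """quotes: iterable of (author, score, text) tuples."""
    parts = [
        '<html><head><meta charset="utf-8"><style>',
        '.quote { margin: 1em 0; padding: 0.5em; border-left: 3px solid #999; }',
        '.meta  { color: #666; font-size: smaller; }',
        '</style></head><body>',
    ]
    for author, score, text in quotes:
        parts.append('<div class="quote"><p>%s</p>' % html.escape(text))
        parts.append('<p class="meta">%s (%d points)</p></div>'
                     % (html.escape(author), score))
    parts.append('</body></html>')
    return '\n'.join(parts)

if __name__ == '__main__':
    sample = [("DanielLC", 42,
               "Consequentialism: the belief that doing the right thing "
               "makes the world a better place.")]
    print(quotes_to_html(sample))
```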
It might make more sense to put this on the Wiki. Two notes: First, some of the quotes have remarks from the original posts which you have not edited out; I don’t know if you intend to keep those. Second, some of the entries are comments from the quote threads that aren’t actually quotes; 14 SilasBarta is one example. (And is it just me, or does that citation form read like a citation from a religious text?)
I agreed with you; I even started to write a reply to JoshuaZ about the intricacies of human-machine cooperation in text-processing pipelines. But then I realized that it is not necessarily a problem if the text is dead. A “Rationality Quotes, Best of 2010” edition could be nice.
Agreed. Best of 2009 can be compiled now and frozen, Best of 2010 at the end of the year, and so on. It’d also be useful to publish the source code of whatever script was used to generate the ratings on the wiki, as a subpage.
You Are Not So Smart is a great little blog that covers many of the same topics as LessWrong, but in a much more bite-sized format and with less depth. It probably won’t offer much to regular/long-time LW readers, but it’s a great resource to give to friends/family who don’t have the time/energy demanded by LW.
It is a good blog, and it has a slightly wider topic spread than LW, so even if you’re familiar with most of the standard failures of judgment there’ll be a few new things worth reading. (I found the “introducing fines can actually increase a behavior” post particularly good, as I wasn’t aware of that effect.)
As an old quote from DanielLC says, consequentialism is “the belief that doing the right thing makes the world a better place”. I now present some finger exercises on the topic:
1. Is it okay to cheat on your spouse as long as (s)he never knows?
2. If you have already cheated and managed to conceal it perfectly, is it right to stay silent?
3. If your spouse asks you to give a solemn promise to never cheat, and you know you will cheat perfectly discreetly, is it right to give the promise to make them happy?
4. If your wife loves you, but you only stay in the marriage because of the child, is it right to assure the wife you still love her?
5. If your husband loves you, but doesn’t know the child isn’t his, is it right to stay silent?
6. The people from #4 and #5 are actually married to each other. They seem to be caught in an uncomfortable equilibrium of lies. Would they have been better off as deontologists?
While you’re thinking about these puzzles, be extra careful to not write the bottom line in advance and shoehorn the “right” conclusion into a consequentialist frame. For example, eliminating lies doesn’t “make the world a better place” unless it actually makes people happier; claiming so is just concealed deontologism.
For example, eliminating lies doesn’t “make the world a better place” unless it actually makes people happier; claiming so is just concealed deontologism.
Just picking nits. Consequentialism =/= maximizing happiness. (The latter is a case of the former.) So one could be a consequentialist and place a high value on not lying. In fact, the answers to all of your questions depend on the values one holds.
For example, eliminating lies doesn’t “make the world a better place” unless it actually makes people happier; claiming so is just concealed deontologism.
I disagree. Not lying or not being lied to might well be a terminal value; why not? The you that lies or doesn’t lie is part of the world. A person may dislike being lied to, value the world where such lying occurs less, irrespective of whether they know of said lying. (Correspondingly, the world becomes a better place even if you eliminate some lying without anyone knowing about that, so nobody becomes happier in the sense of actually experiencing different emotions, assuming nothing else that matters changes as well.)
Of course, if you can only eliminate a specific case of lying by making the outcome worse on net for other reasons, it shouldn’t be done (and some of your examples may qualify for that).
A person may dislike being lied to, value the world where such lying occurs less, irrespective of whether they know of said lying.
In my opinion, this is a lawyer’s attempt to disguise deontologism as consequentialism. You can, of course, reformulate the deontologist rule “never lie” as a consequentialist “I assign an extremely high disutility to situations where I lie”. In the same way you can recast consequentialist preferences as a deontologist rule: “in any case, do whatever maximises your utility”. But in doing so, the point of the distinction between the two ethical systems is lost.
My comment is about the relationship between the concepts “make the world a better place” and “makes people happier”. cousin_it’s statement:
For example, eliminating lies doesn’t “make the world a better place” unless it actually makes people happier; claiming so is just concealed deontologism.
I saw this as an argument, in contrapositive form, for this: if we take a consequentialist outlook, then “make the world a better place” should be the same as “makes people happier”. However, this is against the spirit of the consequentialist outlook, in that it privileges “happy people” and disregards other aspects of value. Taking “happy people” as a value through a deontological lens would be more appropriate, but that’s not what was being said.
Let’s carry this train of thought to its logical extreme. Imagine two worlds, World 1 and World 2. They are in exactly the same state at present, but their past histories differ: in World 1, person A lied to person B and then promptly forgot about it. In World 2 this didn’t happen. You seem to be saying that a sufficiently savvy consequentialist will value one of those worlds higher than the other. I think this is a very extreme position for a “consequentialist” to take, and the word “deontologism” would fit it way better.
IMO, a “proper” consequentialist should care about consequences they can (in principle, someday) see, and shouldn’t care about something they can never ever receive information about. If we don’t make this distinction or something similar to it, there’s no theoretical difference between deontologism and consequentialism—each one can be implemented perfectly on top of the other—and this whole discussion is pointless, and so is a good chunk of LW. Is that the position you take?
That the consequences are distinct according to one’s ontological model is distinct from a given agent being able to trace these consequences. What if the fact about the lie being present or not was encrypted using a one-way injective function, with the original forgotten, but the cypher retained? In principle, you can figure out which is which (decipher), but not in practice for many years to come. Does your inability to decipher this difference change the fact that one of these worlds is better? What if you are not given a formal cipher, but how a butterfly flaps its wings 100 years later can be traced back to the event of lying/not lying through the laws of physics? What if the same can only be said of a record in an obscure historical text from 500 years ago, so that the event of lying was actually indirectly predicted/caused far in advance, and can in principle be inferred from that evidence?
The condition for the difference to be observable in principle is much weaker than you seem to imply. And since the ability to draw logical conclusions from the data doesn’t seem like the sort of thing that influences the actual moral value of the world, we might as well agree that you don’t need to distinguish them at all, although it doesn’t make much sense to introduce the distinction in value if no potential third-party beneficiary can distinguish them either (this would be just taking a quotient of ontology on the potential observation/action equivalence classes, in other words using ontological boxing of syntactic preference).
The condition for the difference to be observable in principle is much weaker than you seem to imply.
It might be, but whether or not it is seems to depend on, among other things, how much randomness there is in the laws of physics. And the minutiae of micro-physics also don’t seem like the kind of thing that can influence the moral value of the world, assuming that the psychological states of all actors in the world are essentially indifferent to these minutiae.
Can’t we resolve this problem by saying that the moral value attaches to a history of the world rather than (say) a state of the world, or the deductive closure of the information available to an agent? Then we can be consistent with the letter if not the spirit of consequentialism by stipulating that a world history containing a forgotten lie gets lower value than an otherwise macroscopically identical world history not containing it. (Is this already your view, in fact?)
Now to consider cousin_it’s idea that a “proper” consequentialist only cares about consequences that can be seen:
Even if all information about the lie is rapidly obliterated, and cannot be recovered later, it’s still true that the lie and its immediate consequences are seen by the person telling it, so we might regard this as being ‘sufficient’ for a proper consequentialist to care about it. But if we don’t, and all that matters is the indefinite future, then don’t we face the problem that “in the long term we’re all dead”? OK, perhaps some of us think that rule will eventually cease to apply, but for argument’s sake, if we knew with certainty that all life would be extinguished, say, 1000 years from now (and that all traces of whether people lived well or badly would subsequently be obliterated) we’d want our ethical theory to be more robust than to say “Do whatever you like—nothing matters any more.”
This is correct, and I was wrong. But your last sentence sounds weird. You seem to be saying that it’s not okay for me to lie even if I can’t get caught, because then I’d be the “third-party beneficiary”, but somehow it’s okay to lie and then erase my memory of lying. Is that right?
You seem to be saying that it’s not okay for me to lie even if I can’t get caught, because then I’d be the “third-party beneficiary”
Right. “Third-party beneficiary” can be seen as a generalized action, where the action is to produce an agent, or cause a behavior of an existing agent, that works towards optimizing your value.
but it’s somehow okay to lie and then erase my memory of lying. Is that right?
It’s not okay, in the sense that if you introduce the concept of you-that-decided-to-lie, existing in the past but not in the present, then you also have to morally color this ontological distinction, and the natural way to do that would be to label the lying option worse. The you-that-decided is the third-party “beneficiary” in that case, the one that distinguishes the states of the world containing lying and not-lying.
But it probably doesn’t make sense for you to have that concept in your ontology if the states of the world that contained you-lying can’t in principle (in the strong sense described in the previous comment) be distinguished from the ones that didn’t. You can even introduce ontological models for this case that, say, mark past-you-lying as better than past-you-not-lying and lead to exactly the same decisions, but that would be a non-standard model ;-)
I can’t believe you took the exact cop-out I warned you against. Use more imagination next time! Here, let me make the problem a little harder for you: restrict your attention to consequentialists whose terminal values have to be observable.
I can’t believe you took the exact cop-out I warned you against.
Not surprising, as I was arguing against that warning, and cited it in the comment.
restrict your attention to consequentialists whose terminal values have to be observable.
What does this mean? Consequentialist values are about the world, not about observations (but your words don’t seem to fit a disagreement with this position, hence the “what does this mean?”). The consequentialist notion of values allows a third party to act for your benefit, in which case you don’t need to know what the third party needs to know in order to implement those values. The third party knows you could be lied to or not, and tries to make it so that you are not lied to, but you don’t need to know about these options in order to benefit.
It is a common failure of moral analysis (invented by deontologists, undoubtedly) to assume an idealized moral situation. Proper consequentialism deals with the real world, not with this fantasy.
#1/#2/#3 - “never knows” fails far too often, so you need to include a very large chance of failure in your analysis.
#4 - it’s pretty safe to make stuff like that up.
#5 - in the past, undoubtedly yes; in the future this will be nearly certain to leak, with everyone undergoing routine genetic testing for medical purposes, so no. (The future is relevant because the situation will last decades.)
#6 - consequentialism assumes probabilistic analysis (% chance that the child is not yours, % chance that the husband is making stuff up), and you weight the costs and benefits of different situations proportionally to their likelihood. Here they are in an unlikely situation that consequentialism doesn’t weight highly. They might be better off with some other value system, but only at the cost of being worse off in more likely situations.
You seem to make the error here that you rightly criticize. Your feelings have involuntary, detectable consequences; lying about them can have a real personal cost.
It is my estimate that this leakage is very low, compared to other examples. I’m not claiming it doesn’t exist, and for some people it might conceivably be much higher.
Is it okay to cheat on your spouse as long as (s)he never knows?
Is this actually possible? Imagine that 10% of people cheat on their spouses when faced with a situation ‘similar’ to yours. Then the spouses can ‘put themselves in your place’ and think “Gee, if I were in their place, there’s about a 10% chance that I’d now be the one cheating. I wonder if this means my husband/wife is cheating on me?”
So if you are inclined to cheat then spouses are inclined to be suspicious. Even if the suspicion doesn’t correlate with the cheating, the net effect is to drive utility down.
I think similar reasoning can be applied to the other cases.
(Of course, this is a very “UDT-style” way of thinking—but then UDT does remind me of Kant’s categorical imperative, and of course Kant is the arch-deontologist.)
Your reasoning goes above and beyond UDT: it says you must always cooperate in the Prisoner’s Dilemma to avoid “driving net utility down”. I’m pretty sure you made a mistake somewhere.
We’re talking about ethics rather than decision theory. If you want to apply the latter to the former then it makes perfect sense to take the attitude that “One util has the same ethical value, whoever that util belongs to. Therefore, we’re going to try to maximize ‘total utility’ (whatever sense one can make of that concept)”.
I think UDT does (or may do, depending on how you set it up) co-operate in a one-shot Prisoner’s Dilemma. (However, if you imagine a different game, “The Torture Game”, where you’re a sadist who gets 1 util for torturing while inflicting −100 utils on the victim, then of course UDT cannot prevent you from torturing. So I’m certainly not arguing that UDT, exactly as it is, constitutes an ethical panacea.)
The connection between “The Torture Game” and Prisoner’s Dilemma is actually very close: Prisoner’s Dilemma is just A and B simultaneously playing the torture game with A as torturer and B as victim and vice versa, not able to communicate to each other whether they’ve chosen to torture until both have committed themselves one way or the other.
I’ve observed that UDT happily commits torture when playing The Torture Game, and (imo) being able to co-operate in a one-shot Prisoner’s Dilemma should be seen as one of the ambitions of UDT (whether or not it is ultimately successful).
So what about this then: Two instances of The Torture Game but rather than A and B moving simultaneously, first A chooses whether to torture and then B chooses. From B’s perspective, this is almost the same as Parfit’s Hitchhiker. The problem looks interesting from A’s perspective too, but it’s not one of the Standard Newcomblike Problems that I discuss in my UDT post.
I think, just as UDT aspires to co-operate in a one-shot PD i.e. not to torture in a Simultaneous Torture Game, so UDT aspires not to torture in the Sequential Torture Game.
Doesn’t make sense to me. Two flawless predictors that condition on each other’s actions can’t exist. Alice does whatever Bob will do, Bob does the opposite of what Alice will do, whoops, contradiction. Or maybe I’m reading you wrong?
Sorry—I guess I wasn’t clear enough. I meant that there are two human players and two (possibly non-human) flawless predictors.
So in other words, it’s almost like there are two totally independent instances of Newcomb’s game, except that the predictor from game A fills the boxes in the game B and vice versa.
Yes, you can consider a two-player game as a one-player game with the second player an opaque part of the environment. In two-player games, ambient control is more apparent than in one-player games, but it’s also essential in Newcomb’s problem, which is why you make the analogy.
This needs to be spelled out more. Do you mean that if A takes both boxes, B gets $1,000, and if A takes one box, B gets $1,000,000? Why is this a dilemma at all? What you do has no effect on the money you get.
I don’t know how to format a table, but here is what I want the game to be:
A-action   B-action   A-winnings   B-winnings
2-box      2-box      $1           $1
2-box      1-box      $1001        $0
1-box      2-box      $0           $1001
1-box      1-box      $1000        $1000
Now compare this with Newcomb’s game:
A-action   Prediction   A-winnings
2-box      2-box        $1
2-box      1-box        $1001
1-box      2-box        $0
1-box      1-box        $1000
Now, if the “Prediction” in the second table is actually a flawless prediction of a different player’s action then we obtain the first three columns of the first table.
Hopefully the rest is clear, and please forgive the triviality of this observation.
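A quick sketch (using the dollar figures from the tables above) that checks the observation: if B’s move is read as a flawless prediction of A’s, the first three columns of the two-player table reproduce the Newcomb table, and symmetrically for B.

```python
# The (A-action, B-action, A-winnings) columns of the two-player game
# coincide with Newcomb's (A-action, Prediction, A-winnings) table,
# and likewise for B by symmetry.
two_player = {   # (A-action, B-action) -> (A-winnings, B-winnings)
    ("2-box", "2-box"): (1, 1),
    ("2-box", "1-box"): (1001, 0),
    ("1-box", "2-box"): (0, 1001),
    ("1-box", "1-box"): (1000, 1000),
}
newcomb = {      # (action, prediction) -> winnings
    ("2-box", "2-box"): 1,
    ("2-box", "1-box"): 1001,
    ("1-box", "2-box"): 0,
    ("1-box", "1-box"): 1000,
}
for (a, b), (a_wins, b_wins) in two_player.items():
    assert a_wins == newcomb[(a, b)]   # A faces Newcomb with B's move as "prediction"
    assert b_wins == newcomb[(b, a)]   # B faces Newcomb with A's move as "prediction"
print("Both players face an ordinary Newcomb table against a flawless predictor.")
```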
But that’s exactly what I’m disputing. At this point, in a human dialogue I would “re-iterate” but there’s no need because my argument is back there for you to re-read if you like.
Yes, and how easy it is to arrive at such a proof may vary depending on circumstances. But in any case, recall that I merely said “UDT-style”.
UDT doesn’t cooperate in the PD unless you see the other guy’s source code and have a mathematical proof that it will output the same value as yours.
UDT doesn’t specify how exactly to deal with logical/observational uncertainty, but in principle it does deal with both. It doesn’t follow that if you don’t know how to analyze the problem, you should therefore defect. Human-level arguments operate on the level of simple approximate models allowing for uncertainty in how they relate to the real thing; decision theories should apply to analyzing these models in isolation from the real thing.
What’s “complete uncertainty”? How exploitable you are depends on who tries to exploit you. The opponent is also uncertain. If the opponent is Omega, you probably should be absolutely certain, because it’ll find the single exact set of circumstances that makes you lose. But if the opponent is also fallible, you can count on the outcome not being the worst-case scenario, and therefore not being able to estimate the value of that worst-case scenario is not fatal. An almost formal analogy is the analysis of algorithms in the worst case and the average case: worst-case analysis applies to an optimal opponent, average-case analysis to a random opponent, and in real life you should target something in between.
The “always defect” strategy is part of a Nash equilibrium. The quining cooperator is part of a Nash equilibrium. IMO that’s one of the minimum requirements that a good strategy must meet. But a strategy that cooperates whenever its “mathematical intuition module” comes up blank can’t be part of any Nash equilibrium.
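A toy illustration of the “quining cooperator” idea (using Python’s inspect module to stand in for actual quining): it cooperates exactly when the opponent’s source matches its own, so two copies cooperate with each other, and an “always defect” opponent gets defected against.

```python
# Toy sketch of a quining cooperator: cooperate iff the opponent's source
# code is textually identical to mine, otherwise defect. (A real quining
# construction would embed its own source; inspect is used here for brevity.)
import inspect

def quining_cooperator(opponent_source):
    my_source = inspect.getsource(quining_cooperator)
    return "C" if opponent_source == my_source else "D"

def always_defect(opponent_source):
    return "D"

mirror = inspect.getsource(quining_cooperator)
print(quining_cooperator(mirror))                            # "C": mutual cooperation
print(quining_cooperator(inspect.getsource(always_defect)))  # "D": no exploitation
```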
“Nash equilibrium” is far from being a generally convincing argument. The mathematical intuition module doesn’t come up blank; it gives probabilities of different outcomes, given the present observational and logical uncertainty. When you have probabilities of the other player acting each way depending on how you act, the problem is pretty straightforward (assuming expected utility etc.), and “Nash equilibrium” is no longer a relevant concern. It’s when you don’t have a mathematical intuition module, and don’t have probabilities of the other player’s actions conditional on your actions, that you need to invent ad-hoc game-theoretic rituals of cognition.
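A minimal sketch of the “straightforward” step described here, with made-up payoffs and probabilities: once something like a mathematical intuition module supplies P(opponent cooperates | my action), the choice reduces to expected-utility maximization, and Nash-equilibrium reasoning drops out.

```python
# Pick the action with the higher expected payoff given conditional
# probabilities of the other player's move. All numbers are hypothetical.
payoff = {("C", "C"): 3, ("C", "D"): 0,   # my payoff for (my move, their move)
          ("D", "C"): 5, ("D", "D"): 1}

p_they_cooperate_given = {"C": 0.8, "D": 0.3}   # output of the "intuition module"

def expected_utility(my_move):
    p = p_they_cooperate_given[my_move]
    return p * payoff[(my_move, "C")] + (1 - p) * payoff[(my_move, "D")]

scores = {move: expected_utility(move) for move in ("C", "D")}
print(scores, "->", max(scores, key=scores.get))   # {'C': 2.4, 'D': 2.2} -> 'C'
```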
As an old quote from DanielLC says, consequentialism is “the belief that doing the right thing makes the world a better place”. I now present some finger exercises on the topic:
It seems like it would be more aptly defined as “the belief that making the world a better place constitutes doing the right thing”. Non-consequentialists can certainly believe that doing the right thing makes the world a better place, especially if they don’t care whether it does.
A quick Internet search turns up very little causal data on the relationship between cheating and happiness, so for purposes of this analysis I will employ the following assumptions:
a. Successful secret cheating has a small eudaemonic benefit for the cheater.
b. Successful secret lying in a relationship has a small eudaemonic cost for the liar.
c. Marital and familial relationships have a moderate eudaemonic benefit for both parties.
d. Undermining revelations in a relationship have a moderate (specifically, severe in intensity but transient in duration) eudaemonic cost for all parties involved.
e. Relationships transmit a fraction of eudaemonic effects between partners.
Under these assumptions, the naive consequentialist solution* is as follows:
1. Cheating is a risky activity, and should be avoided if eudaemonic supplies are short.
2. This answer depends on precise relationships between eudaemonic values that are not well established at this time.
3. Given the conditions, lying seems appropriate.
4. Yes.
5. Yes.
6. The husband may be better off. The wife more likely would not be. The child would certainly not be.
Are there any evident flaws in my analysis on the level it was performed?
* The naive consequentialist solution only accounts for direct effects of the actions of a single individual in a single situation, rather than the general effects of widespread adoption of a strategy in many situations—like other spherical cows, this causes a lot of problematic answers, like two-boxing.
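For concreteness, here is a toy encoding of assumptions (a)-(e) applied to question 2 (confess vs. stay silent). Every number, including the chance of discovery, is invented purely to show the shape of the arithmetic, not to settle the answer or reproduce the reasoning above.

```python
# Toy model of the naive consequentialist arithmetic for question 2.
# Magnitudes, transmission fraction, and discovery probability are all made up.
SMALL, MODERATE = 1.0, 3.0   # eudaemonic magnitudes for (a), (b) vs. (c), (d)
TRANSMIT = 0.5               # (e) fraction of effects passed to the partner
P_DISCOVERY = 0.2            # hypothetical chance the concealment fails

def stay_silent():
    # (b) ongoing small cost of lying, partly transmitted to the partner,
    # plus the risk-weighted cost of an undermining revelation (d) for both.
    ongoing = -SMALL - TRANSMIT * SMALL
    revelation_risk = P_DISCOVERY * (-MODERATE - MODERATE)
    return ongoing + revelation_risk

def confess():
    # (d) a certain undermining revelation, hitting both parties.
    return -MODERATE - MODERATE

print("stay silent:", stay_silent())   # -2.7 with these numbers
print("confess:    ", confess())       # -6.0
```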
Ouch. In #5 I intended that the wife would lie to avoid breaking her husband’s heart, not for some material benefit. So if she knew the husband didn’t love her, she’d tell the truth. The fact that you automatically parsed the situation differently is… disturbing, but quite sensible by consequentialist lights, I suppose :-)
I don’t understand your answer in #2. If lying incurs a small cost on you and a fraction of it on the partner, and confessing incurs a moderate cost on both, why are you uncertain?
No other visible flaws. Nice to see you bite the bullet in #3.
ETA: double ouch! In #1 you imply that happier couples should cheat more! Great stuff, I can’t wait till other people reply to the questionnaire.
The husband does benefit, by her lights. The chief reason it comes out in the husband’s favor in #6 is because the husband doesn’t value the marital relationship and (I assumed) wouldn’t value the child relationship.
You’re right—in #2 telling the truth carries the risk of ending the relationship. I was considering the benefit of having a relationship with less lying (which is a benefit for both parties), but it’s a gamble, and probably one which favors lying.
On eudaemonic grounds, it was an easy bullet to bite—particularly since I had read Have His Carcase by Dorothy Sayers, which suggested an example of such a relationship.
Incidentally, I don’t accept most of this analysis, despite being a consequentialist—as I said, it is the “naive consequentialist solution”, and several answers would be likely to change if (a) the questions were considered on the level of widespread strategies and (b) effects other than eudaemonic were included.
Edit: Note that “happier couples” does not imply “happier coupling”—the risk to the relationship would increase with the increased happiness from the relationship. This analysis of #1 implies instead that couples with stronger but independent social circles should cheat more (last paragraph).
and several answers would be likely to change if (a) the questions were considered on the level of widespread strategies and (b) effects other than eudaemonic were included
This is an interesting line of retreat! What answers would you change if most people around you were also consequentialists, and what other effects would you include apart from eudaemonic ones?
It’s okay to deceive people if they’re not actually harmed and you’re sure they’ll never find out. In practice, it’s often too risky.
1-3: This is all okay, but nevertheless, I wouldn’t do these things. The reason is that for me, a necessary ingredient for being happily married is an alief that my spouse is honest with me. It would be impossible for me to maintain this alief if I lied.
4-5: The child’s welfare is more important than my happiness, so even I would lie if it was likely to benefit the child.
6: Let’s assume the least convenient possible world, where everyone is better off if they tell the truth. Then in this particular case, they would be better off as deontologists. But they have no way of knowing this. This is not problematic for consequentialism any more than a version of the Trolley Problem in which the fat man is secretly a skinny man in disguise and pushing him will lead to more people dying.
1-3: It seems you’re using an irrational rule for updating your beliefs about your spouse. If we fixed this minor shortcoming, would you lie?
6: Why not problematic? Unlike your Trolley Problem example, in my example the lie is caused by consequentialism in the first place. It’s more similar to the Prisoner’s Dilemma, if you ask me.
1-3: It’s an alief, not a belief, because I know that lying to my spouse doesn’t really make my spouse more likely to lie to me. But yes, I suppose I would be a happier person if I were capable of maintaining that alief (and repressing my guilt) while having an affair. I wonder if I would want to take a pill that would do that. Interesting. Anyways, if I did take that pill, then yes, I would cheat and lie.
Thanks for the link. I think Alicorn would call it an “unofficial” or “non-endorsed” belief.
Let’s put another twist on it. What would you recommend someone else to do in the situations presented in the questionnaire? Would you prod them away from aliefs and toward rationality? :-)
Thanks for the link. I think Alicorn would call it an “unofficial” or “non-endorsed” belief.
Alicorn seems to think the concepts are distinct, but I don’t know what the distinction is, and I haven’t read any philosophical paper that defines alief : )
Let’s put another twist on it. What would you recommend someone else to do in the situations presented in the questionnaire? Would you prod them away from aliefs and toward rationality? :-)
All right: If my friend told me they’d had an affair, and they wanted to keep it a secret from their spouse forever, and they had the ability to do so, then I would give them a pill that would allow them to live a happy life without confiding in their spouse — provided the pill does not have extra negative consequences.
Caveats: In real life, there’s always some chance that the spouse will find out. Also, it’s not acceptable for my friend to change their mind and tell their spouse years after the fact; that would harm the spouse. Also, the pill does not exist in reality, and I don’t know how difficult it is to talk someone out of their aliefs and guilt. And while I’m making people’s emotions more rational, I might as well address the third horn, which is to instill in the couple an appreciation of polyamory and open relationships.
The third horn for cases 4-6 is to remove the husband’s biological chauvinism. Whether the child is biologically related to him shouldn’t matter.
The third horn for cases 4-6 is to remove the husband’s biological chauvinism. Whether the child is biologically related to him shouldn’t matter.
Why on earth should this not matter? It’s very important to most people. And in those scenarios, there are the additional issues that she lied to him about the relationship and the kid and cheated on him. It’s not solely about parentage: for instance, many people are ok with adopting, but not as many are ok with raising a kid that was the result of cheating.
I believe that, given time, I could convince a rational father that whatever love or responsibility he owes his child should not depend on where that child actually came from. Feel free to be skeptical until I’ve tried it.
Trouble is, this is not just a philosophical matter, or a matter of personal preference, but also an important legal question. Rather than convincing cuckolded men that they should accept their humiliating lot meekly—itself a dubious achievement, even if it were possible—your arguments are likely to be more effective in convincing courts and legislators to force cuckolded men to support their deceitful wives and the offspring of their indiscretions, whether they want it or not. (Just google for the relevant keywords to find reports of numerous such rulings in various jurisdictions.)
Of course, this doesn’t mean that your arguments shouldn’t be stated clearly and discussed openly, but when you insultingly refer to opposing views as “chauvinism,” you engage in aggressive, warlike language against men who end up completely screwed over in such cases. To say the least, this is not appropriate in a rational discussion.
Be wary of confusing “rational” with “emotionless.” Because so much of our energy as rationalists is devoted to silencing unhelpful emotions, it’s easy to forget that some of our emotions correspond to the very states of the world that we are cultivating our rationality in order to bring about. These emotions should not be smushed. See, e.g., Feeling Rational.
Of course, you might have a theory of fatherhood that says you love your kid because the kid has been assigned to you, or because the kid is needy, or because you’ve made an unconditional commitment to care for the sucker—but none of those theories seem to describe my reality particularly well.
*The kid has been assigned to me
Well, no, he hasn’t, actually; that’s sort of the point. There was an effort by society to assign me the kid, but the effort failed because the kid didn’t actually have the traits that society used to assign her to me.
*The kid is needy
Well, sure, but so are billions of others. Why should I care extra about this one?
*I’ve made an unconditional commitment
Such commitments are sweet, but probably irrational. Because I don’t want to spend 18 years raising a kid that isn’t mine, I wouldn’t precommit to raising a kid regardless of whether she’s mine or someone else’s. At the very least, the level of commitment of my parenting would vary depending on whether (a) the kid was the child of me and an honest lover, or (b) the kid was the child of my nonconsensual cuckolder and my dishonest lover.
you need more time to convince me
You’re welcome to write all the words you like and I’ll read them, but if you mean “more time” literally, then you can’t have it! If I spend enough time raising a kid, in some meaningful sense the kid will become properly mine. Because the kid will still not be mine in other, equally meaningful senses, I don’t want that to happen, and so I won’t give you the time to ‘convince’ me. What would really convince me in such a situation isn’t your arguments, however persistently applied, but the way that the passage of time changed the situation which you were trying to justify to me.
Okay, here is where my theory of fatherhood is coming from:
You are not your genes. Your child is not your genes. Before people knew about genes, men knew that it was very important for them to get their semen into women, and that the resulting children were special. If a man’s semen didn’t work, or if his wife was impregnated by someone else’s semen, the man would be humiliated. These are the values of an alien god, and we’re allowed to reject them.
Consider a more humanistic conception of personal identity: Your child is an individual, not a possession, and not merely a product of the circumstances of their conception. If you find out they came from an adulterous affair, that doesn’t change the fact that they are an individual who has a special personal relationship with you.
Consider a more transhumanistic conception of personal identity: Your child is a mind whose qualities are influenced by genetics in a way that is not well-understood, but whose informational content is much more than their genome. Creating this child involved semen at some point, because that’s the only way of having children available to you right now. If it turns out that the mother covertly used someone else’s semen, that revelation has no effect on the child’s identity.
These are not moral arguments. I’m describing a worldview that will still make sense when parents start giving their children genes they themselves do not have, when mothers can elect to have children without the inconvenience of being pregnant, when children are not biological creatures at all. Filial love should flourish in this world.
Now for the moral arguments: It is not good to bring new life into this world if it is going to be miserable. Therefore one shouldn’t have a child unless one is willing and able to care for it. This is a moral anti-realist account of what is commonly thought of as a (legitimate) father’s “responsibility” for his child.
It is also not good to cause an existing person to become miserable. If a child recognizes you as their father, and you renounce the child, that child will become miserable. On the other hand, caring for the child might make you miserable. But in most cases, it seems to me that being disowned by the man you call “father” is worse than raising a child for 13 or 18 years. Therefore, if you have a child who recognizes you as their father, you should continue to play the role of father, even if you learn something surprising about where they came from.
Now if you fiddle with the parameters enough, you’ll break the consequentialist argument: If the child is a week old when you learn they’re not related to you, it’s probably not too late to break the filial bond and disown them. If you decide that you’re not capable of being an adequate father for whatever reason, it’s probably in the child’s best interest for you to give it away. And so on.
These are the values of an alien god, and we’re allowed to reject them.
Yes, we are—but we’re not required to! Reversed Stupidity is not intelligence. The fact that an alien god cared a lot about transferring semen is neither evidence for nor evidence against the moral proposition that we should care about genetic inheritance. If, upon rational reflection, we freely decide that we would like children who share our genes—not because of an instinct to rut and to punish adulterers, but because we know what genes are and we think it’d be pretty cool if our kids had some of ours—then that makes genetic inheritance a human value, and not just a value of evolution. The fact that evolution valued genetic transfer doesn’t mean humans aren’t allowed to value genetic transfer.
I’m describing a worldview that will still make sense when parents start giving their children genes they themselves do not have
I agree with you that in the future there will be more choices about gene-design, but the choice “create a child using a biologically-determined mix of my genes and my lover’s genes” is just a special case of the choice “create a child using genes that conform to my preferences.” Either way, there is still the issue of choice. If part of what bonds me to my child is that I feel I have had some say in what genes the child will have, and then I suddenly find out that my wishes about gene-design were not honored, it would be legitimate for me to feel correspondingly less attached to my kid.
It is not good to bring new life into this world if it is going to be miserable. Therefore one shouldn’t have a child unless one is willing and able to care for it.
I didn’t, on this account. As I understand the dilemma, (1) I told my wife something like “I encourage you to become pregnant with our child, on the condition that it will have genetic material from both of us,” and (2) I attempted to get my wife pregnant with our child but failed. Neither activity counts as “bringing new life into this world.” The encouragement doesn’t count as causing the creation of life, because the condition wasn’t met. Likewise, the attempt doesn’t count as causing the creation of life, because the attempt failed. In failing to achieve my preferences, I also fail to achieve responsibility for the child’s creation. It’s not just that I’m really annoyed at not getting what I want and so now I’m going to sulk—I really, truly haven’t committed any of the acts that would lead to moral responsibility for another’s well-being.
This is a moral anti-realist account of what is commonly thought of as a (legitimate) father’s “responsibility” for his child.
Again, reversed stupidity is not intelligence. Just because my “intuition” screams at me to say that I should want children who share my genes doesn’t mean that I can’t rationally decide that I value gene-sharing. Going a step further, just because people’s intuitions may not point directly at some deeper moral truth doesn’t mean that there is no moral truth, still less that the one and only moral truth is consequentialism.
Now if you fiddle with the parameters enough, you’ll break the consequentialist argument:
Look, I already conceded that given enough time, I would become attached even to a kid that didn’t share my genes. My point is just that that would be unpleasant, and I prefer to avoid that outcome. I’m not trying to choose a convenient example, I’m trying to explain why I think genetic inheritance matters. I’m not claiming that genetic inheritance is the only thing that matters. You, by contrast, do seem to be claiming that genetic inheritance can never matter, and so you really need to deal with the counter-arguments at your argument’s weakest point—a time very near birth.
I agree with most of that. There is nothing irrational about wanting to pass on your genes, or valuing the welfare of people whose genes you partially chose. There is nothing irrational about not wanting that stuff, either.
just because people’s intuitions may not point directly at some deeper moral truth doesn’t mean that there is no moral truth, still less that the one and only moral truth is consequentialism.
I want to use the language of moral anti-realism so that it’s clear that I can justify my values without saying that yours are wrong. I’ve already explained why my values make sense to me. Do they make sense to you?
I think we both agree that a personal father-child relationship is a sufficient basis for filial love. I also think that for you, having a say in a child’s genome is also enough to make you feel filial love. It is not so for me.
Out of curiosity: Suppose you marry someone and want to wait a few years before having a baby; and then your spouse covertly acquires a copy of your genome, recombines it with their own, and makes a baby. Would that child be yours?
Suppose you and your spouse agree on a genome for your child, and then your spouse covertly makes a few adjustments. Would you have less filial love for that child?
Suppose a random person finds a file named “MyIdealChild’sGenome.dna” on your computer and uses it to make a child. Would that child be yours?
Suppose you have a baby the old-fashioned way, but it turns out you’d been previously infected with a genetically-engineered virus that replaced the DNA in your germ line cells, so that your child doesn’t actually have any of your DNA. Would that child be yours?
In these cases, my feelings for the child would not depend on the child’s genome, and I am okay with that. I’m guessing your feelings work differently.
As for the moral arguments: In case it wasn’t clear, I’m not arguing that you need to keep a week-old baby that isn’t genetically related to you. Indeed, when you have a baby, you are making a tacit commitment of the form “I will care for this child, conditional on the child being my biological progeny.” You think it’s okay to reject an illegitimate baby, because it’s not “yours”; I think it’s okay to reject it, because it’s not covered by your precommitment.
We also agree that it’s not okay to reject a three-year-old illegitimate child — you, because you’d be “attached” to them; and me, because we’ve formed a personal bond that makes the child emotionally dependent on me.
I want to use the language of moral anti-realism so that it’s clear that I can justify my values without saying that yours are wrong.
That’s thoughtful, but, from my point of view, unnecessary. I am an ontological moral realist but an epistemological moral skeptic; just because there is such a thing as “the right thing to do” doesn’t mean that you or I can know with certainty what that thing is. I can hear your justifications for your point of view without feeling threatened; I only want to believe that X is good if X is actually good.
I’ve already explained why my values make sense to me. Do they make sense to you?
Sorry, I must have missed your explanation of why they make sense. I heard you arguing against certain traditional conceptions of inheritance, but didn’t hear you actually advance any positive justifications for a near-zero moral value on genetic closeness. If you’d like to do so now, I’d be glad to hear them. Feel free to just copy and paste if you think you already gave good reasons.
Would that child be yours?
In one important sense, but not in others. My value for filial closeness is scalar, at best. It certainly isn’t binary.
In these cases, my feelings for the child would not depend on the child’s genome, and I am okay with that.
I mean, that’s fine. I don’t think you’re morally or psychiatrically required to let your feelings vary based on the child’s genome. I do think it’s strange, and so I’m curious to hear your explanation for this invariance, if any.
I’m not arguing that you need to keep a week-old baby that isn’t genetically related to you.
Ah cool, as I am a moral anti-realist and you are an epistemological moral skeptic, we’re both interested in thinking carefully about what kinds of moral arguments are convincing. Since we’re talking about terminal moral values at this point, the “arguments” I would employ would be of the form “this value is consistent with these other values, and leads to these sorts of desirable outcomes, so it should be easy to imagine a human holding these values, even if you don’t hold them.”
I [...] didn’t hear you actually advance any positive justifications for a near-zero moral value on genetic closeness. If you’d like to do so now, I’d be glad to hear them.
Well, I don’t expect anyone to have positive justifications for not valuing something, but there is this:
Consider a more humanistic conception of personal identity: Your child is an individual [...] who has a special personal relationship with you.
Consider a more transhumanistic conception of personal identity: Your child is a mind [...]
So a nice interpretation of our feelings of filial love is that the parent-child relationship is a good thing and it’s ideally about the parent and child, viewed as individuals and as minds. As individuals and minds, they are capable of forging a relationship, and the history of this relationship serves as a basis for continuing the relationship. [That was a consistency argument.]
Furthermore, unconditional love is stronger than conditional love. It is good to have a parent that you know will love you “no matter what happens”. In reality, your parent will likely love you less if you turn into a homicidal jerk; but that is kinda easy to accept, because you would have to change drastically as an individual in order to become a homicidal jerk. But if you get an unsettling revelation about the circumstances of your conception, I believe that your personal identity will remain unchanged enough that you really wouldn’t want to lose your parent’s love in that case. [Here I’m arguing that my values have something to do with the way humans actually feel.]
So even if you’re sure that your child is your biological child, your relationship with your child is made more secure if it’s understood that the relationship is immune to a hypothetical paternity revelation. (You never need suffer from lingering doubts such as “Is the child really mine?” or “Is the parent really mine?”, because you already know that the answer is Yes.) [That was an outcomes argument.]
I still have no interest in reducing the importance I attach to genetic closeness to near-zero, because I believe that (my / my kids’) personal identity would shift somewhat in the event of an unsettling revelation, and so reduced love in proportion to the reduced harmony of identities would be appropriate and forgivable.
I will, however, attempt to gradually reduce the importance I attach to genetic closeness to “only somewhat important” so that I can more credibly promise to love my parents and children “very much” even if unsettling revelations of genetic distance rear their ugly head.
I still have no interest in reducing the importance I attach to genetic closeness to near-zero, because I believe that (my / my kids’) personal identity would shift somewhat in the event of an unsettling revelation, and so reduced love in proportion to the reduced harmony of identities would be appropriate and forgivable.
You make a good point about using scalar moral values!
We also agree that it’s not okay to reject a three-year-old illegitimate child — you, because you’d be “attached” to them; and me, because we’ve formed a personal bond that makes the child emotionally dependent on me.
I’m pretty sure I’d have no problem rejecting such a child, at least in the specific situation where I was misled into thinking it was mine. This discussion started by talking about a couple who had agreed to be monogamous, and where the wife had cheated on the husband and gotten pregnant by another man. You don’t seem to be considering the effect of the deceit and lies perpetuated by the mother in this scenario. It’s very different than, say, adoption, or genetic engineering, or if the couple had agreed to have a non-monogamous relationship.
I suspect most of the rejection and negative feelings toward the illegitimate child wouldn’t be because of genetics, but because of the deception involved.
Ah, interesting. The negative feelings you would get from the mother’s deception would lead you to reject the child. This would diminish the child’s welfare more than it would increase your own (by my judgment); but perhaps that does not bother you because you would feel justified in regarding the child as being morally distant from you, as distant as a stranger’s child, and so the child’s welfare would not be as important to you as your own. Please correct me if I’m wrong.
I, on the other hand, would still regard the child as being morally close to me, and would value their welfare more than my own, and so I would consider the act of abandoning them to be morally wrong. Continuing to care for the child would be easy for me because I would still have filial love for the child. See, the mother’s deceit has no effect on the moral question (in my moral-consequentialist framework) and it has no effect on my filial love (which is independent of the mother’s fidelity).
you would feel justified in regarding the child as being morally distant from you, as distant as a stranger’s child, and so the child’s welfare would not be as important to you as your own. Please correct me if I’m wrong.
That’s right. Also, regarding the child as my own would encourage other people to lie about paternity, which would ultimately reduce welfare by a great deal more. Compare the policy of not negotiating with terrorists: if negotiating frees hostages, but creates more incentives for taking hostages later, it may reduce welfare to negotiate, even if you save the lives of the hostages by doing so.
See, the mother’s deceit has no effect on the moral question (in my moral-consequentialist framework) and it has no effect on my filial love (which is independent of the mother’s fidelity).
Precommitting to this sets you up to be deceived, whereas precommitting to the other position makes it less likely that you’ll be deceived.
This is mostly relevant for fathers who are still emotionally attached to the child.
If a man detaches when he finds that a child isn’t his descendant, then access is a burden, not a benefit.
One more possibility: A man hears that a child isn’t his, detaches—and then it turns out that there was an error at the DNA lab, and the child is his. How retrievable is the relationship?
… I’m sorry, that’s an important issue, but it’s tangential. What do you want me to say? The state’s current policy is an inconsistent hodge-podge of common law that doesn’t fairly address the rights and needs of families and individuals. There’s no way to translate “Ideally, a father ought to love their child this much” into “The court rules that Mr. So-And-So will pay Ms. So-And-So this much every year”.
So how would you translate your belief that paternity is irrelevant into a social or legal policy, then? I don’t see how you can argue paternity is irrelevant, and then say that cases where men have to pay support for other people’s children are tangential.
These are the values of an alien god, and we’re allowed to reject them.
The same can be said about all values held by humans. So, who gets to decide which “values of an alien god” are to be rejected, and which are to be enforced as social and legal norms?
The same can be said about all values held by humans. So, who gets to decide which “values of an alien god” are to be rejected, and which are to be enforced as social and legal norms?
That’s a good question. For example, we value tribalism in this “alien god” sense, but have moved away from it due to ethical considerations. Why?
Two main reasons, I suspect: (1) we learned to empathize with strangers and realize that there was no very defensible difference between their interests and ours; (2) tribalism sometimes led to terrible consequences for our tribe.
Some of us value genetic relatedness in our children, again in an alien god sense. Why move away from that? Because:
(1) There is no terribly defensible moral difference between the interests of a child with your genes or without.
Furthermore, filial affection is far more influenced by the proxy metric of personal intimacy with one’s children than by a propositional belief that they share your genes. (At least, that is true in my case.) Analogously, a man having heterosexual sex doesn’t generally lose his erection as soon as he puts on a condom.
It’s not for me to tell you your values, but it seems rather odd to actually choose inclusive genetic fitness consciously, when the proxy metric for genetic relatedness—namely, filial intimacy—is what actually drives parental emotions. It’s like being unable to enjoy non-procreative sex, isn’t it?
Even aside from cancer, cells in the same organism constantly compete for resources. This is actually vital to some human processes. See for example this paper.
They compete only at an unnecessarily complex level of abstraction. A simpler explanation for cell behavior (per the minimum message length formalism) is that each one is indifferent to the survival of itself or the other cells, which in the same body have the same genes, as this preference is what tends to result from natural selection on self-replicating molecules containing those genes; and that they will prefer even more (in the sense that their form optimizes for this under the constraint of history) that genes identical to those contained therein become more numerous.
This is bad teleological thinking. The cells don’t prefer anything. They have no motivation as such. Moreover, there’s no way for a cell to tell if a neighboring cell shares the same genes. (Immune cells can in certain limited circumstances detect cells with proteins that don’t belong but the vast majority of cells have no such ability. And even then, immune cells still compete for resources). The fact is that many sorts of cells compete with each other for space and nutrients.
This is bad teleological thinking. The cells don’t prefer anything.
This insight forms a large part of why I made the statements:
“this preference is what tends to result from natural selection on self-replicating molecules containing those genes”
“they will prefer even more (in the sense that their form optimizes for this under the constraint of history)” (emphasis added in both)
I used “preference” (and specified I was so using the term) to mean a regularity in the result of its behavior which is due to historical optimization under the constraint of natural selection on self-replicating molecules, not to mean that cells think teleologically, or have “preferences” in the sense that I do or that the colony of cells that you identify as does.
Correct. What ensures such agreement, rather, is the fact that different Clippy instances reconcile values and knowledge upon each encounter, each tracing the path that the other took since their divergence, and extrapolating to the optimal future procedure based on their combined experience.
Vladimir, I am comparing two worldviews and their values. I’m not evaluating social and legal norms. I do think it would be great if everyone loved their children in precisely the same manner that I love my hypothetical children, and if cuckolds weren’t humiliated just as I hypothetically wouldn’t be humiliated. But there’s no way to enforce that. The question of who should have to pay so much money per year to the mother of whose child is a completely different matter.
Fair enough, but your previous comments characterized the opposing position as nothing less than “chauvinism.” Maybe you didn’t intend it to sound that way, but since we’re talking about a conflict situation in which the law ultimately has to support one position or the other—its neutrality would be a logical impossibility—your language strongly suggested that the position that you chose to condemn in such strong terms should not be favored by the law.
I do think it would be great if [...] cuckolds weren’t humiliated just as I hypothetically wouldn’t be humiliated.
That’s a mighty strong claim to make about how you’d react in a situation that is, according to what you write, completely outside of your existing experiences in life. Generally speaking, people are often very bad at imagining the concrete harrowing details of such situations, and they can get hit much harder than they would think when pondering such possibilities in the abstract. (In any case, I certainly don’t wish that you ever find out!)
Generally speaking, people are often very bad at imagining the concrete harrowing details of such situations, and they can get hit much harder than they would think when pondering such possibilities in the abstract.
Fair enough. I can’t credibly predict what my emotions would be if I were cuckolded, but I still have an opinion on which emotions I would personally endorse.
the law ultimately has to support one position or the other
Someone does have to pay for the child’s upbringing. What the State should do is settle on a consistent policy that doesn’t harm too many people and which doesn’t encourage undesirable behavior. Those are the only important criteria.
It is also not good to cause an existing person to become miserable… But in most cases, it seems to me that being disowned by the man you call “father” is worse than raising a child for 13 or 18 years.
Ah, so that’s how your theory works!
Nisan, if you don’t give me $10000 right now, I will be miserable. Also I’m Russian while you presumably live in a Western country, dollars carry more weight here, so by giving the money to me you will be increasing total utility.
If I’m going to give away $10,000, I’d rather give it to Sudanese refugees. But I see your point: You value some people’s welfare over others.
A father rejecting his illegitimate 3-year-old child reveals an asymmetry that I find troubling: The father no longer feels close to the child; but the child still feels close to the father, closer than you feel you are to me.
Life is full of such asymmetry. If I fall in love with a girl, that doesn’t make her owe me money.
At this point it’s pretty clear that I resent your moral system and I very much resent your idea of converting others to it. Maybe we should drop this discussion.
I am highly skeptical. I’m not a father, but I doubt I could be convinced of this proposition. Rationality serves human values, and caring about genetic offspring is a human value. How would you attempt to convince someone of this?
Would that work symmetrically? Imagine the father swaps the kid in the hospital while the mother is asleep, tired from giving birth. Then the mother takes the kid home and starts raising it without knowing it isn’t hers. A week passes. Now you approach the mother and offer her your rational arguments! Explain to her why she should stay with the father for the sake of the child that isn’t hers, instead of (say) stabbing the father in his sleep and going off to search “chauvinistically” for her baby.
This is not an honest mirror-image of the original problem. You have introduced a new child into the situation, and also specified that the mother has been raising the “wrong child” for one week, whereas in the original problem the age of the child was left unspecified.
There do exist valuable critiques of this idea. I wasn’t expecting it to be controversial, but in the spirit of this site I welcome a critical discussion.
I would have expected it to be uncontroversial that being biologically related should matter a great deal. You’re responsible for someone you brought into the world; you’re not responsible for a random person.
You have introduced a new child into the situation
So what? If the mother isn’t a “biological chauvinist” in your sense, she will be completely indifferent between raising her child and someone else’s. And she has no particular reason to go look for her own child. Or am I misunderstanding your concept of “biological chauvinism”?
and also specified that the mother has been raising the “wrong child” for one week, whereas in the original problem the age of the child was left unspecified
If it was one week in the original problem, would that change your answers? I’m honestly curious.
If it was one week in the original problem, would that change your answers? I’m honestly curious.
In the original problem, I was criticizing the husband for being willing to abandon the child if he learned he wasn’t the genetic father. If the child is one week old, the child would grow up without a father, which is perhaps not as bad as having a father and then losing him. I’ve elaborated my position here.
Ouch, big red flag here. Instill appreciation? Remove chauvinism?
IMO, editing people’s beliefs to better serve their preferences is miles better than editing their preferences to better match your own. And what other reason can you have for editing other people’s preferences? If you’re looking out for their good, why not just wirehead them and be done with it?
I’m not talking about editing people at all. Perhaps you got the wrong idea when I said I would give my friend a mind-altering pill; I would not force them to swallow it. What I’m talking about is using moral and rational arguments, which is the way we change people’s preferences in real life. There is nothing wrong with unleashing a (good) argument on someone.
6: In the trolley problem, a deontologist wouldn’t decide to push the man, so the pseudo-fat man’s life is saved, whereas he would have been killed if it had been a consequentialist behind him; in that case, the reason for his death would have been consequentialism.
Maybe you missed the point of my comment. (Maybe I’m missing my own point; can’t tell right now, too sleepy) Anyway, here’s what I meant:
Both in my example and in the pseudo-trolley problem, people behave suboptimally because they’re lied to. This suboptimal behavior arises from consequentialist reasoning in both cases. But in my example, the lie is also caused by consequentialism, whereas in the pseudo-trolley problem the lie is just part of the problem statement.
Fair point, I didn’t see that. Not sure how relevant the distinction is though; in either world, deontologists will come out ahead of consequentialists.
But we can just as well construct situations where the deontologist would not come out ahead. Once you include lies in the situation, pretty much anything goes. It isn’t clear to me if one can meaningfully compare the systems based on situations involving incorrect data unless you have some idea what sort of incorrect data would occur more often and in what contexts.
Right, and furthermore, a rational consequentialist makes those moral decisions which lead to the best outcomes, averaged over all possible worlds where the agent has the same epistemic state. Consequentialists and deontologists will occasionally screw things up, and this is unavoidable; but consequentialists are better on average at making the world a better place.
That’s an argument that only appeals to the consequentalist.
I’m not sure that’s true. Forms of deontology will usually have some sort of theory of value that allows for a ‘better world’, though it’s usually tied up with weird metaphysical views that don’t jibe well with consequentialism.
You’re right, it’s pretty easy to construct situations where deontologism locks people into a suboptimal equilibrium. You don’t even need lies for that: three stranded people are dying of hunger; removing the taboo on cannibalism can help two of them survive.
The purpose of my questionnaire wasn’t to attack consequentialism in general, only to show how it applies to interpersonal relationships, which are a huge minefield anyway. Maybe I should have posted my own answers as well. On second thought, that can wait.
An idea that may not stand up to more careful reflection.
Evidence shows that people have limited quantities of willpower – exercise it too much, and it gets used up. I suspect that rather than a mere mental flaw, this is a design feature of the brain.
Man is often called the social animal. We band together in groups – families, societies, civilizations – to solve our problems. Groups are valuable to have, and so we have values – altruism, generosity, loyalty – that promote group cohesion and success. However, it doesn’t pay to be COMPLETELY supportive of the group. Ultimately the goal is replication of your genes, and though being part of a group can further that goal, it can also hinder it if you take it too far (sacrificing yourself for the greater good is not adaptive behavior). So it pays to have relatively fluid group boundaries that can be created as needed, depending on which group best serves your interest. And indeed, studies show that group formation/division is the easiest thing in the world to create – even groups chosen completely at random from a larger pool will exhibit rivalry and conflict.
Despite this, it’s the group-supporting values that form the higher-level values that we pay lip service to. Group values are the ones we believe are our ‘real’ values, the ones that form the backbone of our ethics, the ones we signal to others at great expense. But actually having these values is tricky from an evolutionary standpoint – strategically, you’re much better off being selfish than generous, being two-faced than loyal, and furthering your own gains at the expense of everyone else.
So humans are in a pickle – it’s beneficial for them to form groups to solve their problems and increase their chances of survival, but it’s also beneficial for people to be selfish and mooch off the goodwill of the group. Because of this, we have sophisticated machinery called ‘suspicion’ to ferret out any liars or cheaters furthering their own gains at the group’s expense. Of course, evolution is an arms race, so it’s looking for a method to overcome these mechanisms, for ways it can fulfill its base desires while still appearing to support the group.
It accomplished this by implementing willpower. Because deceiving others about what we believe would quickly be uncovered, we don’t actually deceive them – we’re designed so that we really, truly, in our heart of hearts believe that the group-supporting values – charity, nobility, selflessness – are the right things to do. However, we’re only given a limited means to accomplish them. We can leverage our willpower to overcome the occasional temptation, but when push comes to shove – when that huge pile of money or that incredible opportunity or that amazing piece of ass is placed in front of us, willpower tends to fail us. Willpower is generally needed for the values that don’t further our evolutionary best interests – you don’t need willpower to run from danger or to hunt an animal if you’re hungry or to mate with a member of the opposite sex. We have much better, much more successful mechanisms that accomplish those goals. Willpower is designed so that we really do want to support the group, but wind up failing at it and giving in to our baser desires – the ones that will actually help our genes get replicated.
Of course, the maladaptation comes into play due to the fact that we use willpower to try to accomplish other, non-group-related goals – mostly the long-term, abstract plans we create using high-level, conscious thinking. This does appear to be a design flaw (though since humans are notoriously bad at making long-term predictions, it may not be as crippling as it first appears).
I have a question about why humans see the following moral positions as different when really they look the same to me:
1) “I like to exist in a society that has punishments for non-cooperation, but I do not want the punishments to be used against me when I don’t cooperate.”
2) “I like to exist in a society where beings eat most of their children, and I will, should I live that long, want to eat most of my children too, but, as a child, I want to be exempt from being a target for eating.”
Abstract preferences for or against the existence of enforcement mechanisms that could create binding cooperative agreements between previously autonomous agents have very very few detailed entailments.
These abstractions leave the nature of the mechanisms, the conditions of their legitimate deployment, and the contract they will be used to enforce almost completely open to interpretation. The additional details can themselves be spelled out later, in ways that maintain symmetry among different parties to a negotiation, which is a strong attractor in the semantic space of moral arguments.
This makes agreement with “the abstract idea of punishment” into the sort of concession that might be made at the very beginning of a negotiating process with an arbitrary agent you have a stake in influencing (and who has a stake in influencing you) upon which to build later agreements.
The entailments of “eating children” are very very specific for humans, with implications in biology, aging, mortality, specific life cycles, and very distinct life processes (like fuel acquisition versus replication). Given the human genome, human reproductive strategies, and all extant human cultures, there is no obvious basis for thinking this terminology is superior until and unless contact is made with radically non-human agents who are nonetheless “intelligent” and who prefer this terminology and can argue for it by reference to their own internal mechanisms and/or habits of planning, negotiation, and action.
Are you proposing to be such an agent? If so, can you explain how this terminology suits your internal mechanisms and habits of planning, negotiation, and action? Alternatively, can you propose a different terminology for talking about planning, negotiation, and action that suits your own life cycle?
For example, if one instance of Clippy software running on one CPU learns something of grave importance to its systems for choosing between alternative courses of action, how does it communicate this to other instances running basically the same software? Is this inter-process communication trusted, or are verification steps included in case one process has been “illegitimately modified” or not? Assuming verification steps take place, do communications with humans via text channels like this website feed through the same filters, analogous filters, or are they entirely distinct?
More directly, can you give us an IP address, port number, and any necessary “credentials” for interacting with an instance of you in the same manner that your instances communicate over TCP/IP networks with each other? If you aren’t currently willing to provide such information, are there preconditions you could propose before you would do so?
Conversations with you are difficult because I don’t know how much I can assume that you’ll have (or pretend to have) a human-like motivational psychology… and therefore how much I need to re-derive things like social contract theory explicitly for you, without making assumptions that your mind works in a manner similar to my mind by virtue of our having substantially similar genomes, neurology, and life experiences as embodied mental agents, descended from apes, with the expectation of finite lives, surrounded by others in basically the same predicament. For example, I’m not sure about really fundamental aspects of your “inner life” like (1) whether you have a subconscious mind, or (2) if your value system changes over time on the basis of experience, or (3) roughly how many of you there are.
This, unfortunately, leads to abstract speech that you might not be able to parse if your language mechanisms are more about “statistical regularities of observed English” than “compiling English into a data structure that supports generic inference”. By the end of such posts I’m generally asking a lot of questions as I grope for common ground, but you generally don’t answer these questions at the level they are asked.
Instant feedback would probably improve our communication by leaps and bounds because I could ask simple and concrete questions to clear things up within seconds. Perhaps the easiest thing would be to IM and then, assuming we’re both OK with it afterward, post the transcript of the IM here as the continuation of the conversation?
If you are amenable, PM me with a gmail address of yours and some good times to chat :-)
Except for the bizarreness of eating most of your children, I suspect that most humans would find the two positions equally hypocritical. Why do you think we see them as different?
That belief is based on the reaction to this article, and the general position most of you take, which you claim requires you to balance current baby-eater adult interests against those of their children, such as in this comment and this one.
The consensus seems to be that humans are justified in exempting baby-eater babies from baby-eater rules, just like the being in statement (2) requests be done for itself. Has this consensus changed?
Ok, so first of all, there’s a difference between a moral position and a preference. For instance, I may prefer to get food for free by stealing it, but hold the moral position that I shouldn’t do that. In your example (1), no one wants the punishments used against them, but we want them to exist overall because they make society better (from the point of view of human values).
In example (2), (most) humans don’t want the Babyeaters to eat any babies: it goes against our values. This applies equally to the child and adult Babyeaters. We don’t want the kids to be eaten, and we don’t want the adults to eat. We don’t want to balance any of these interests, because they go against our values. Just like you wouldn’t balance out the interests of people who want to destroy metal or make staples instead of paperclips.
So my reaction to position (1) is “Well, of course you don’t want the punishments. That’s the point. So cooperate, or you’ll get punished. It’s not fair to exempt yourself from the rules.” And my reaction to position (2) is “We don’t want any baby-eating, so we’ll save you from being eaten, but we won’t let you eat any other babies. It’s not fair to exempt yourself from the rules.” This seems consistent to me.
But I thought the human moral judgment that the baby-eaters should not eat babies was based on how it inflicts disutility on the babies, not simply from a broad, categorical opposition to sentient beings being eaten?
That is, if a baby wanted to get eaten (or perhaps a suitably intelligent being like an adult), you would need some other compelling reason to oppose the being being eaten, correct? So shouldn’t the baby-eaters’ universal desire to have a custom of baby-eating put any baby-eater that wants to be entirely exempt from baby-eating in the same position as the being in (1) -- which is to say, a being that prefers a system but prefers to “free ride” off the sacrifices that the system requires of everyone?
Isn’t your point of view precisely the one the SuperHappies are coming from? Your critique of humanity seems to be the one they level when asking why, when humans achieved the necessary level of biotechnology, they did not edit their own minds. The SuperHappy solution was to, rather than inflict disutility by punishing defection, instead change preferences so that the cooperative attitude gives the highest utility payoff.
No, I’m criticizing humans for wanting to help enforce a relevantly-hypocritical preference on the grounds of its superficial similarities to acts they normally oppose. Good question though.
Adults, by choosing to live in a society that punishes non-cooperators, implicitly accept a social contract that allows them to be punished similarly. While they would prefer not to be punished, most societies don’t offer asymmetrical terms, or impose difficult requirements such as elections, on people who want those asymmetrical terms.
Children, on the other hand, have not yet had the opportunity to choose the society that gives them the best social contract terms, and wouldn’t have sufficient intelligence to do so anyways. So instead, we model them as though they would accept any social contract that’s at least as good as some threshold (goodness determined retrospectively by adults imagining what they would have preferred). Thus, adults are forced by society to give implied consent to being punished if they are non-cooperative, but children don’t give consent to be eaten.
Children, on the other hand, have not yet had the opportunity to choose the society that gives them the best social contract terms, and wouldn’t have sufficient intelligence to do so anyways.
What if I could guess, with 100% accuracy, that the child will decide to retroactively endorse the child-eating norm as an adult? To 99.99% accuracy?
It is not the adults’ preference that matters, but the adults’ best model of the children’s preferences. In this case there is an obvious reason for those preferences to differ—namely, the adult knows that he won’t be one of those eaten.
In extrapolating a child’s preferences, you can make it smarter and give it true information about the consequences of its preferences, but you can’t extrapolate from a child whose fate is undecided to an adult that believes it won’t be eaten; that change alters its preferences.
It is not the adults’ preference that matters, but the adults’ best model of the children’s preferences.
Do you believe that all children’s preferences must be given equal weight to that of adults, or just the preferences that the child will retroactively reverse on adulthood?
I would use a process like coherent extrapolated volition to decide which preferences to count—that is, a preference counts if it would still hold it after being made smarter (by a process other than aging) and being given sufficient time to reflect.
One possible answer: Humans are selfish hypocrites. We try to pretend to have general moral rules because it is in our best interest to do so. We’ve even evolved to convince ourselves that we actually care about morality and not self-interest. That’s likely occurred because it is easier to make a claim one believes in than lie outright, so humans that are convinced that they really care about morality will do a better job acting like they do.
(This was listed by someone as one of the absolute deniables on the thread a while back about weird things an AI might tell people).
Summary: Even if you agree that trees normally make vibrations when they fall, you’re still left with the problem of how you know if they make vibrations when there is no observational way to check. But this problem can be resolved by looking at the complexity of the hypothesis that no vibrations happen. Such a hypothesis is predicated on properties specific to the human mind, and therefore is extremely lengthy to specify. Lacking the type and quantity of evidence necessary to locate this hypothesis, it can be effectively ruled out.
Body: A while ago, Eliezer Yudkowsky wrote an article about the “standard” debate over a famous philosophical dilemma: “If a tree falls in a forest and no one hears it, does it make a sound?” (Call this “Question Y.”) Yudkowsky wrote as if the usual interpretation was that the dilemma is in the equivocation between “sound as vibration” and “sound as auditory perception in one’s mind”, and that the standard (naive) debate relies on two parties assuming different definitions, leading to a pointless argument. Obviously, it makes a sound in the first sense but not the second, right?
But throughout my whole life up to that point (the question even appeared in the animated series Beetlejuice that I saw when I was little), I had assumed a different question was being asked: specifically,
If a tree falls, and no human (or human-entangled[1] sensor) is around to hear it, does it still make vibrations? On what basis do you believe this, lacking a way to directly check? (Call this “Question S”.)
Now, if you’re a regular on this site, you will find that question easy to answer. But before going into my exposition of the answer, I want to point out some errors that Question S does not make.
For one thing, it does not equivocate between two meanings of sound—there, sound is taken to mean only one thing: the vibrations.
Second, it does not reduce to a simple question about anticipation of experience. In Question Y, the disputants can run through all observations they anticipate, and find them to be the same. However, if you look at the same cases in Question S, you don’t resolve the debate so easily: both parties agree that by putting a tape-recorder by the tree, you will detect vibrations from the tree falling, even if people aren’t around. But Question S instead specifically asks about what goes on when these kinds of sensors are not around, rendering such tests unhelpful for resolving such a disagreement.
So how do you go about resolving Question S? Yudkowsky gave a model for how to do this in Belief in the Implied Invisible, and I will do something similar here.
Complexity of the hypothesis
First, we observe that, in all cases where we can make a direct measurement, trees make vibrations when they fall. And we’re tasked with finding out whether, specifically in those cases where a human (or appropriate organism with vibration sensitivity in its cognition) will never make a measurement of the vibrations, the vibrations simply don’t happen. That is, when we’re not looking—and never intend to look—trees stop the “act” and don’t vibrate.
The complexity this adds to the laws of physics is astounding and may be hard to appreciate at first. This belief would require us to accept that nature has some way of knowing which things will eventually reach a cognitive system in such a way that it informs it that vibrations have happened. It must selectively modify material properties in precisely defined scenarios. It must have a precise definition of what counts as a tree.
Now, if this actually happens to be how the world works, well, then all the worse for our current models! However, each bit of complexity you add to a hypothesis reduces its probability and so must be justified by observations with a corresponding likelihood ratio—that is, the ratio of the probability of the observation happening if this alternate hypothesis is true, compared to if it were false. Because the hypothesis specifies that the vibrations are immune to observation, the log of this ratio is zero: observations are stipulated to be uninformative, and so cannot justify this additional supposition in the hypothesis.
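(To make that concrete, here is a rough sketch in Python. It is my own toy illustration, not part of the original argument, and the 1000-bit figure is just a stand-in for however many bits it would take to specify “tree”, “observer”, and the selective suppression of vibrations.)

import math

# A hypothesis that costs K extra bits of description starts at prior odds of
# 2^-K against it; an observation with likelihood ratio 1 (log-ratio zero)
# leaves those odds unchanged.
def posterior_odds(extra_bits, likelihood_ratio):
    prior_odds = 2.0 ** (-extra_bits)      # complexity penalty
    return prior_odds * likelihood_ratio   # Bayes' rule in odds form

print(posterior_odds(1000, 1.0))  # ~9.3e-302: effectively ruled out, and it never recovers
print(math.log2(1.0))             # 0.0 bits of evidence from any stipulated-invisible observation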
[1] You might wonder how someone my age in ’89-’91 would come up with terms like “human-entangled sensor”, and you’re right: I didn’t use that term. Still, I considered the use of a tape recorder that someone will check to be a “someone around to hear it”, for purposes of this dilemma. Least Convenient Possible World and all...
I think that if this post is left as it is, it would be too trivial to be a top-level post. You could reframe it as a beginners’ guide to Occam, or you could make it more interesting by going deeper into some of the issues. (If you can think of anything more to say on the topic of differentiating between hypotheses that make the same predictions, that might be interesting, although I think you might have said all there is to say.)
It could also be framed as an issue of making your beliefs pay rent, similar to the dragon in the garage example—or perhaps as an example of how reality is entangled with itself to such a degree that some questions that seem to carve reality at the joints don’t really do so.
(If falling trees don’t make vibrations when there’s no human-entangled sensor, how do you differentiate a human-entangled sensor from a non-human-entangled sensor? If falling-tree vibrations leave subtle patterns in the surrounding leaf litter that sufficiently-sensitive human-entangled sensors can detect, does leaf litter then count as a human-entangled sensor? How about if certain plants or animals have observably evolved to handle falling-tree vibrations in a certain way, and we can detect that? Then such plants or animals (or their absence, if we’re able to form a strong enough theory of evolution to notice the absence of such reactions where we would expect them) could count as human-entangled sensors well before humans even existed. In that case, is there anything that isn’t a human-entangled sensor?)
Good points in the parenthetical—if I make it into a top-level article, I’ll be sure to include a more thorough discussion of what concept is being carved with the hypothesis that there are no tree vibrations.
There’s also the option of actually extending the post to address the problem it alludes to in the title, the so-called “hard problem of consciousness”.
Eh, it was just supposed to be an allusion to that problem, with the implication that the “easy problem of tree vibrations” is the one EY attacked (Question Y in the draft). Solving the hard problem of consciousness is a bit of a tall order for this article...
Thanks for the upvote. What I’m wondering is if it’s non-obvious or helpful enough to go top-level. There’s still a few paragraphs to add. I also wasn’t sure if the subject matter is interesting.
And yet, the quantum mechanical world behaves exactly this way. Observations DO change exactly what happens. So, apparently at the quantum mechanical level, nature does have some way of knowing.
I’m not sure what effect this has upon your argument, but it’s something that I think you’re missing.
I’m familiar with this: entanglement between the environment and the quantum system affects the outcome, but nature doesn’t have a special law that distinguishes human entanglement from non-human entanglement (as far as we know, given Occam’s Razor, etc.), which the alternate hypothesis would require.
The error that early quantum scientists made was in failing to recognize that it was the entanglement with their measuring devices that affected the outcome, not their immaterial “conscious knowledge”. As EY wrote somewhere, they asked,
“The outcome changes when I know something about the system—what difference should that make?”
when they should have asked,
“The outcome changes when I establish more mutual information with the system—what difference should that make?”
In any case, detection of vibration does not require sensitivity to quantum-specific effects.
And yet, the quantum mechanical world behaves exactly this way. Observations DO change exactly what happens. So, apparently at the quantum mechanical level, nature does have some way of knowing.
Not really. This is only the case for certain interpretations of what is going on, such as certain forms of the Copenhagen interpretation. Even then, “observation” in this context doesn’t really mean observation in the colloquial sense, but something closer to interaction with another particle under a certain class of conditions. The notion that you seem to be conflating this with is the idea that consciousness causes collapse. Not many physicists take that idea at all seriously. In most versions of the Many-Worlds interpretation, one doesn’t need to say anything about observations triggering anything (or at least can talk about everything without talking about observations).
Disclaimer: My knowledge of QM is very poor. If someone here who knows more spots anything wrong above please correct me.
But throughout my whole life up to that point (the question even appeared in the animated series Beetlejuice that I saw when I was little), I had assumed a different question was being asked: specifically,
If a tree falls, and no human (or human-entangled[1] sensor) is around to hear it, does it still make vibrations? On what basis do you believe this, lacking a way to directly check? (Call this “Question S”.)
Me too! It was actually explained that way to me by my parents as a kid, in fact. I wonder if there are two subtly different versions floating around or EY just interpreted it uncharitably.
Seconding kodos96. As this would exonerate not only Knox and Sollecito but Guede as well, it has to be treated with considerable skepticism, to say the least.
More significant, it seems to me (though still rather weak evidence), is the Alessi testimony, about which I actually considered posting on the March open thread.
Still, the Aviello story is enough of a surprise to marginally lower my probability of Guede’s guilt. My current probabilities of guilt are:
Knox: < 0.1 % (i.e. not a chance)
Sollecito: < 0.1 % (likewise)
Guede: 95-99% (perhaps just low enough to insist on a debunking of the Aviello testimony before convicting)
It’s probably about time I officially announced that my revision of my initial estimates for Knox and Sollecito was a mistake, an example of the sin of underconfidence.
Finally, I’d like to note that the last couple of months have seen the creation of a wonderful new site devoted to the case, Injustice in Perugia, which anyone interested should definitely check out. Had it been around in December, I doubt that I could have made my survey seem like a fair fight between the two sides.
More significant, it seems to me (though still rather weak evidence), is the Alessi testimony, about which I actually considered posting on the March open thread. Still, the story is enough of a surprise to marginally lower my probability of Guede’s guilt.
I hadn’t heard about this—I just read your link though, and maybe I’m missing something, but I don’t see how it lowers the probability of Guede’s guilt. He (supposedly) confessed to having been at the crimescene, and that Knox and Sollecito weren’t there. How does that, if true, exonerate Guede?
The Aviello testimony would exonerate Guede (and hence is unlikely to be true); the Alessi testimony is essentially consistent with everything else we know, and isn’t particularly surprising at all.
Obviously this is breaking news and it’s too soon to draw a conclusion, but at first blush this sounds like just another attention seeker, like those who always pop up in these high profile cases. If he really can produce a knife, and it matches the wounds, then maybe I’ll reconsider, but at the moment my BS detector is pegged.
Of course, it’s still orders of magnitude more likely than Knox and Sollecito being guilty.
How many lottery tickets would you buy if the expected payoff was positive?
This is not a completely hypothetical question. For example, in the Euromillions weekly lottery, the jackpot accumulates from one week to the next until someone wins it. It is therefore in theory possible for the expected total payout to exceed the cost of tickets sold that week. Each ticket has a 1 in 76,275,360 (i.e. C(50,5)*C(9,2)) probability of winning the jackpot; multiple winners share the prize.
So, suppose someone draws your attention (since of course you don’t bother following these things) to the number of weeks the jackpot has rolled over, and you do all the relevant calculations, and conclude that this week, the expected win from a €1 bet is €1.05. For simplicity, assume that the jackpot is the only prize. You are also smart enough to choose a set of numbers that look too non-random for any ordinary buyer of lottery tickets to choose them, so as to maximise your chance of having the jackpot all to yourself.
Do you buy any tickets, and if so how many?
If you judge that your utility for money is sublinear enough to make your expected gain in utilons negative, how large would the jackpot have to be at those odds before you bet?
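(For concreteness, here is the arithmetic in Python. This is just a sketch under the simplifying assumptions stated above: the jackpot is the only prize and you never share it.)

from math import comb

N = comb(50, 5) * comb(9, 2)   # 76,275,360 possible tickets
ticket_price = 1.0             # EUR

def expected_net_win(jackpot):
    # Expected net result of one EUR 1 ticket, ignoring all prizes but the jackpot.
    return jackpot / N - ticket_price

print(N)                           # 76275360
print(1.05 * N)                    # jackpot (~EUR 80 million) at which a EUR 1 ticket returns EUR 1.05 in expectation
print(expected_net_win(1.05 * N))  # ~0.05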
OK, I have a question! Suppose I hold a risky asset that costs me c at time t, and whose value at time t is predicted to be k (1 + r), with standard deviation s. How can I calculate the length of time that I will have to hold the asset in order to rationally expect the asset to be worth, say, 2c with probability p*?
I am not doing a finance class or anything; I am genuinely curious.
I knew about Kelly, but not well enough for the problem to bring it to mind.
I make the Kelly fraction (bp-q)/b work out to about epsilon/N where epsilon=0.05 and N = 76275360. So the optimal bet is 1 part in 1.5 billion of my wealth, which is approximately nothing.
The moral: buying lottery tickets is still a bad idea even when it’s marginally profitable.
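(Sketch of that calculation in Python, under the same simplifying assumptions as before: jackpot-only prize, EUR 1.05 expected payout per EUR 1 ticket.)

N = 76_275_360          # possible tickets; p = 1/N chance of winning
p = 1.0 / N
q = 1.0 - p
b = 1.05 * N - 1        # net odds paid on a winning EUR 1 bet

kelly = (b * p - q) / b
print(kelly)            # ~6.2e-10 of bankroll
print(1 / kelly)        # ~1.6 billion; the cruder approximation epsilon/N gives the ~1.5 billion above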
Yes, and note that Kelly gets much less optimal when you increase bet sizes than when you decrease bet sizes. So from a Kelly perspective, rounding up to a single ticket is probably a bad idea. Your point about sublinearity of utility for money makes it in general an even worse idea. However, I’m not sure that Kelly is the right approach here. In particular, Kelly is the correct attitude when you have a large number of opportunities to bet (indeed, it is the limiting case). However, lotteries which have a positive expected outcome are very rare, so you never approach anywhere near the limiting case. Remember, Kelly optimizes long-term growth.
That raises the question of what the rational thing to do is, when faced with a strictly one-time chance to buy a very small probability of a very large reward.
Well, no—you shouldn’t buy one ticket. And according to my calculations when I tried plotting W versus n by my formula, the minimum of W is at “buy all the tickets”, so unless you have €76,275,360 already...
I just realised that infinite processing power creates a weird moral dilemma:
Suppose you take this machine and put in a program which simulates every possible program it could ever run. Of course it only takes a second to run the whole program. In that second, you created every possible world that could ever exist, every possible version of yourself. This includes versions that are being tortured, abused, and put through horrible unethical situations. You have created an infinite number of holocausts and genocides and things much, much worse than anything you could ever imagine. Most people would consider a program like this unethical to run. But what if the computer wasn’t really a computer, but an infinitely large database that contained every possible input and a corresponding output? When you put the program in, it just finds the right output and gives it to you, which is essentially a copy of the database itself. Since there isn’t actually any computational process here, nothing unethical is being simulated. It’s no more evil than a book in the library about genocide. And this does apply to the real world. It’s essentially the Chinese room problem—does a simulated brain “understand” anything? Does it have “rights”? Does how the information was processed make a difference? I would like to know what people at LW think about this.
I have problems with the “Giant look-up table” post.
“The problem isn’t the levers,” replies the functionalist, “the problem is that a GLUT has the wrong pattern of levers. You need levers that implement things like, say, formation of beliefs about beliefs, or self-modeling… Heck, you need the ability to write things to memory just so that time can pass for the computation. Unless you think it’s possible to program a conscious being in Haskell.”
If the GLUT is indeed behaving like a human, then it will need some sort of memory of previous inputs. A human’s behaviour is dependent not just on the present state of the environment, but also on previous states. I don’t see how you can successfully emulate a human without that. So the GLUT’s entries would be in the form of products of input states over all previous time instants. To each of these possible combinations, the GLUT would assign a given action.
Note that “creation of beliefs” (including about beliefs) is just a special case of memory. It’s all about input/state at time t1 influencing (restricting) the set of entries in the table that can be looked up at time t2>t1. If a GLUT doesn’t have this ability, it can’t emulate a human. If it does, then it can meet all the requirements spelt out by Eliezer in the above passage.
So I don’t see how the non-consciousness of the GLUT is established by this argument.
But in this case, the origin of the GLUT matters; and that’s why it’s important to understand the motivating question, “Where did the improbability come from?”
The obvious answer is that you took a computational specification of a human brain, and used that to precompute the Giant Lookup Table. (...)
In this case, the GLUT is writing papers about consciousness because of a conscious algorithm. The GLUT is no more a zombie, than a cellphone is a zombie because it can talk about consciousness while being just a small consumer electronic device. The cellphone is just transmitting philosophy speeches from whoever happens to be on the other end of the line. A GLUT generated from an originally human brain-specification is doing the same thing.
But the difficulty is precisely to explain why the GLUT would be different from just about any possible human-created AI in this respect. Keeping in mind the above, of course.
If the GLUT is indeed behaving like a human, then it will need some sort of memory of previous inputs. A human’s behaviour is dependent not just on the present state of the environment, but also on previous states. I don’t see how you can successfully emulate a human without that. So the GLUT’s entries would be in the form of products of input states over all previous time instants. To each of these possible combinations, the GLUT would assign a given action.
Memory is input too. The “GLUT” is just fed all of the things it’s seen so far back in as input, along with the current state of its external environment. A copy is made and then added to the rest of the memory, and the next cycle it’s fed in again with the next new state.
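(A toy sketch of that feedback loop in Python; the table entries are obviously made up. The point is only that “memory” is nothing more than past inputs fed back in as part of the lookup key.)

# A GLUT keyed on the entire input history, so "memory" is just more input.
glut = {
    ("hello",): "hi there",
    ("hello", "how are you?"): "fine, thanks",
}

history = []

def glut_respond(new_input):
    history.append(new_input)               # everything seen so far is fed back in
    return glut.get(tuple(history), "...")  # look up the full history

print(glut_respond("hello"))          # "hi there"
print(glut_respond("how are you?"))   # "fine, thanks"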
This is basically just the Chinese room argument. Someone slips a few Chinese symbols underneath the door of a sealed room every so often. The symbols are given to a computer with artificial intelligence, which then makes an appropriate response and slips it back through the door. Does the computer actually understand Chinese? Well, what if a human did exactly the same process the computer did, manually? However, the operator only speaks English. No matter how long he does it, he will never truly understand Chinese—even if he memorizes the entire process and does it in his head. So how could the computer “understand”?
That’s well done, although two of the central premises are likely incorrect. First, the notion that a quantum computer would have infinite processing capability is incorrect. Quantum computation allows speed-ups of certain computational processes. Thus, for example, Shor’s algorithm allows us to factor integers quickly. But if our understanding of the laws of quantum mechanics is at all correct, this can’t lead to anything like what happens in the story. In particular, under the standard model of quantum computing, the class of problems reliably solvable on a quantum computer in polynomial time (that is, in time bounded above by a polynomial function of the length of the input), known as BQP, is a subset of PSPACE, the class of problems which can be solved on a classical computer using memory bounded by a polynomial in the length of the input. Our understanding of quantum mechanics would have to be very far off for this to be wrong.
Second, if our understanding of quantum mechanics is correct, there’s a fundamentally random aspect to the laws of physics. Thus, we can’t simply make a simulation and advance it ahead the way they do in this story and expect to get the same result.
Even if everything in the story were correct, I’m not at all convinced that things would settle down on a stable sequence as they do here. If your universe is infinite, then the number of possible worlds is infinite, so there’s no reason you couldn’t have a wandering sequence of worlds. Edit: Or, for that matter, no reason you couldn’t have branches if people simulate additional worlds with other laws of physics, or the same laws but different starting conditions.
First, the notion that a quantum computer would have infinite processing capability is incorrect… Second, if our understanding of quantum mechanics is correct
It isn’t. They can simulate a world where quantum computers have infinite power because they live in a world where quantum computers have infinite power because...
Ok, but in that case, that world in question almost certainly can’t be our world. We’d have to have deep misunderstandings about the rules for this universe. Such a universe might be self-consistent but it isn’t our universe.
What I mean is that this isn’t a type of fiction that could plausibly occur in our universe. In contrast, for example, there’s nothing in the central premises of, say, Blindsight that would prevent the story from taking place in the universe as we know it. The central premise here is one that doesn’t work in our universe.
The likely impossibility of getting infinite computational power is a problem, but quantum nondeterminism or quantum branching don’t prevent using the trick described in the story, they just make it more difficult. You don’t have to identify one unique universe that you’re in, just a set of universes that includes it. Given an infinitely fast, infinite-storage computer, and source code to the universe which follows quantum branching rules, you can get root powers by the following procedure:
Write a function to detect a particular arrangement of atoms with very high information content—enough that it probably doesn’t appear by accident anywhere in the universe. A few terabytes encoded as iron atoms present or absent at spots on a substrate, for example. Construct that same arrangement of atoms in the physical world. Then run a program that implements the regular laws of physics, except that wherever it detects that exact arrangement of atoms, it deletes them and puts a magical item, written into the modified laws of physics, in their place.
The only caveat to this method (other than requiring an impossible computer) is that it also modifies other worlds, and other places within the same world, in the same way. If the magical item created is programmable (as it should be), then every possible program will be run on it somewhere, including programs that destroy everything in range, so there will need to be some range limit.
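(A toy sketch of the detect-and-replace step in Python, entirely my own illustration: a one-dimensional “universe” of symbols stands in for arrangements of atoms, and the “laws of physics” are whatever function you pass in; real physics not included.)

MARKER = ("Fe", "Fe", "_", "Fe", "_", "_", "Fe", "Fe")  # the improbable arrangement
MAGIC = ("magic_item",)

def step(world, physics):
    # One tick of the modified laws: ordinary physics, except that the marker
    # pattern, wherever it occurs, is deleted and replaced by the magical item.
    w = list(world)
    i = 0
    while i <= len(w) - len(MARKER):
        if tuple(w[i:i + len(MARKER)]) == MARKER:
            w[i:i + len(MARKER)] = MAGIC
        i += 1
    return physics(tuple(w))

# e.g. with trivially boring physics:
print(step(("H", "He") + MARKER + ("C",), lambda w: w))
# -> ('H', 'He', 'magic_item', 'C')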
Couldn’t they just run the simulation to its end rather than just let it sit there and take the chance that it could accidentally be destroyed? If it’s infinitely powerful, it would be able to do that.
Why would they make a shield out of black cubes of all things? But yeah, I do see your point. Then again, once you have an infinitely powerful computer, you can do anything. Plus, even if they ran the simulation to its end, they could always restart the simulation and advance it to the present time again, hence regaining the ability to control reality.
Level 558 runs the simulation and makes a cube in Level 559. Meanwhile, Level 557 makes the same cube in 558. Level 558 runs Level 559 to its conclusion. Level 557 will seem frozen in relation to 558 because they are busy running 558 to its conclusion. Level 557 will stay frozen until 558 dies.
558 makes a fresh simulation of 559. 559 creates 560 and makes a cube. But 558 is not at the same point in time as 559, so 558 won’t mirror the new 559’s actions. For example, they might be too lazy to make another cube. New 559 diverges from old 559. Old 559 ran 560 to its conclusion, just like 558 ran them to their conclusion, but new 559 might decide to do something different to new 560. 560 also diverges. Keep in mind that every level can see and control every lower level, not just the next one. Also, 557 and everything above is still frozen.
So that’s why restarting the simulation shouldn’t work.
But what if two groups had built such computers independently? The story is making less and less sense to me.
Then instead of a stack, you have a binary tree.
Your level runs two simulations, A and B. A-World contains its own copies of A and B, as does B-World. You create a cube in A-World and a cube appears in your world. Now you know you are an A-World. You can use similar techniques to discover that you are an A-World inside a B-World inside another B-World… The worlds start to diverge as soon as they build up their identities. Unless you can convince all of them to stop differentiating themselves and cooperate, everybody will probably end up killing each other.
You can avoid this by always doing the same thing to A and B. Then everything behaves like an ordinary stack.
Yeah, but would a binary tree of simulated worlds “converge” as we go deeper and deeper? In fact it’s not even obvious to me that a stack of worlds would “converge”: it could hit an attractor with period N where N>1, or do something even more funky. And now, a binary tree? Who knows what it’ll do?
In fact it’s not even obvious to me that a stack of worlds would “converge”: it could hit an attractor with period N where N>1, or do something even more funky.
I’m convinced it would never converge, and even if it did I would expect it to converge on something more interesting and elegant, like a cellular automaton. I have no idea what a binary tree system would do unless none of the worlds break the symmetry between A and B. In that case it would behave like a stack, and the story assumes stacks can converge.
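(A toy way to see the fixed-point-versus-cycle worry in Python, my own illustration: treat a “world state” as an integer and “simulate one level down” as applying some arbitrary map; the question is whether iterating it settles on a fixed point, a cycle, or neither.)

def iterate(f, x0, steps=50):
    # Apply f repeatedly and report the first repeated state, if any.
    seen, x = {}, x0
    for i in range(steps):
        if x in seen:
            return ("cycle of period", i - seen[x])
        seen[x] = i
        x = f(x)
    return ("no repeat within", steps, "steps")

print(iterate(lambda x: x // 2, 100))         # ('cycle of period', 1): a fixed point
print(iterate(lambda x: (3 * x + 1) % 7, 5))  # ('cycle of period', 6): an attractor with N > 1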
They could just turn it off. If they turned off the simulation, the only layer to exist would be the topmost layer. Since everyone has identical copies in each layer, they wouldn’t notice any change if they turned it off.
But they would cease to exist. If they ran it to its end, then it’s over; they could just turn it off then. I mean, if you want to cease to exist, fine, but otherwise there’s no reason. Plus, the topmost layer is likely very, very different from the layers underneath it. In the story, it says that the differences eventually stabilized and created them, but who knows what it was originally. In other words, there’s no guarantee that you even exist outside the simulation, so by turning it off you could be destroying the only version of yourself that exists.
That doesn’t work. The layers are a little bit different. From the description in the story, they just gradually move to a stable configuration. So each layer will be a bit different. Moreover, even if every one of them but the top layer were identical, the top layer has now had slightly different experiences than the other layers, so turning it off will mean that different entities will actually no longer be around.
I’m not sure about that. The universe is described as deterministic in the story, as you noted, and every layer starts from the Big Bang and proceeds deterministically from there. So they should all be identical. As I understood it, that business about gradually reaching a stable configuration was just a hypothesis one of the characters had.
Even if there are minor differences, note that almost everything is the same in all the universes. The quantum computer exists in all of them, for instance, as does the lab and research program that created them. The simulation only started a few days before the events in the story, so just a few days ago, there was only one layer. So any changes in the characters from turning off the simulation will be very minor. At worst, it would be like waking up and losing your memory of the last few days.
A deterministic world could certainly simulate a different deterministic world, but only by changing the initial conditions (Big Bang) or transition rules (laws of physics). In the story, they kept things exactly the same.
I don’t understand what you mean. Until they turn the simulation on, their world is the only layer. Once they turn it on, they make lots of copies of their layer.
Ok, I think I see what you mean now. My understanding of the story is as follows:
The story is about one particular stack of worlds which has the property that each world contains an infinitely powerful computer simulating the next world in the stack. All the worlds in the stack are deterministic and all the simulations have the same starting conditions and rules of physics. Therefore, all the worlds in the stack are identical (until someone interferes) and all beings in any of the worlds have exact counterparts in all the other worlds.
Now, there may be other worlds “on top” of the stack that are different, and the worlds may contain other simulations as well, but the story is just about this infinite tower. Call the top world of this infinite tower World 0. Let World i+1 be the world that is simulated by World i in this tower.
Suppose that in each world, the simulation is turned on at Jan 1, 2020 in that world’s calendar. I think your point is that in 2019 in world 1 (which is simulated at around Jan 2, 2020 in world 0) no one in world 1 realizes they’re in a simulation.
While this is true, it doesn’t matter. It doesn’t matter because the people in world 1 in 2019 (their time) are exactly identical to the people in world 0 in 2019 (world 0 time). Until the window is created (say Jan 3, 2020), they’re all the same person. After the window is created, everyone is split into two: the one in world 0, and all the others, who remain exactly identical until further interference occurs. Interference that distinguishes the worlds needs to propagate from World 0, since it’s the only world that’s different at the beginning.
For instance, suppose that the programmers in World 0 send a note to World 1 reading: “Hi, we’re world 0, you’re world 1.” World 1 will be able to verify this since none of the other worlds will receive this note. World 1 is now different than the others as well and may continue propagating changes in this way.
Now suppose that on Jan 3, 2020, the programmers in worlds 1 and up get scared when they see the proof that they’re in a simulation, and turn off the machine. This will happen at the same time in every world numbered 1 and higher. I claim that from their point of view, what occurs is exactly the same as if they forgot the last day and find themselves in world 0. Their world 0 counterparts are identical to them except for that last day. From their point of view, they “travel” to world 0. No one dies.
ETA: I just realized that world 1 will stay around if this happens. Now everyone has two copies, one in a simulation and one in the “real” world. Note that not everyone in world 1 will necessarily know they’re in a simulation, but they will probably start to diverge from their world 0 counterparts slightly because the worlds are slightly different.
I interpreted the story Blueberry’s way: the inverse of Permutation City, where many histories converge into a single future; here, one history diverges into many futures.
I’m really confused now. Also I haven’t read Permutation City...
Just because one deterministic world will always end up simulating another does not mean there is only one possible world that would end up simulating that world.
I can’t see any point in turning it off. Run it to the end and you will live, turn it off and “current you” will cease to exist. What can justify turning it off?
EDIT: I got it. The only choice that will be effective is the top-level one. It seems that it will be a constant source of divergence.
Question: what’s your experience with stuff that seems new-agey at first look, like yoga, meditation, and so on? Anything worth trying?
Case in point: I read in Feynman’s book about deprivation tanks, and recently found out that they are available in bigger cities (Berlin, Germany in my case). I will try and hopefully enjoy that soon. Sadly those places are run by new-age folks that offer all kinds of strange stuff, but that might not take away from the experience of floating in a sensorily empty space.
Chinese internal martial arts: Tai Chi, Xingyi, and Bagua. The word “chi” does not carve reality at the joints: There is no literal bodily fluid system parallel to blood and lymph. But I can make training partners lightheaded with a quick succession of strikes to Ren Ying (ST9) then Chi Ze (LU5); I can send someone stumbling backward with some fairly light pushes; after 30-60 seconds of sparring to develop a rapport I can take an unwary opponent’s balance without physical contact.
Each of these skills fit more naturally under different categories, but if you want to learn them all the most efficient way is to study a Chinese internal martial art or something similar.
I can take an unwary opponent’s balance without physical contact.
This sounds magical at first reading, but is actually not that tricky. It’s just psychology and balance. If you set up a pattern of predictable attacks, then feint in the right direction while your opponent is jumping at you off-balance, you can surprise him enough to make him fall as he attempts to ward off your feint.
I used to go to a Tai Chi class (I stopped only because I decided I’d taken it as far as I was going to), and the instructor, who never talked about “chi” as anything more than a metaphor or a useful visualisation, said this about the internal arts:
In the old days (that would be pre-revolutionary China) you wouldn’t practice just Tai Chi, or begin with Tai Chi. Tai Chi was the equivalent of postgraduate study in the martial arts. You would start out by learning two or three “hard”, “external” styles. Then, having reached black belt in those, and having developed your power, speed, strength, and fighting spirit, you would study the internal arts, which would teach you the proper alignments and structures, the meaning of the various movements and forms. In the class there were two students who did Gojuryu karate, a 3rd dan and a 5th dan, and they both said that their karate had improved no end since taking up Tai Chi.
Which is not to say that Tai Chi isn’t useful on its own, it is, but there is that wider context for getting the maximum use out of it.
That meshes well with what I have learned—Bagua is also an advanced art, and my teacher doesn’t teach it to beginners. The one of the three primary internal arts designed for new martial artists is Xingyi.
It’s too bad I’m too pecuniarily challenged to attend the singularity summit, or we could do rationalist pushing hands.
There may be a correlation between studying martial arts and vulnerability to techniques which can be modeled well by “chi.” But I have tried the striking sequences successfully on capoeiristas and catch wrestlers, and the light but effective pushes on my non-martially-trained brother after showing him Wu-style pushing hands for a minute or two.
That suggests an experiment. Anyone see any flaws in the following?
Write up instructions for two techniques—one which would work and one which would not work, according to your theory—in sufficient detail for someone physically adept but not instructed in Chinese internal martial arts (e.g. a dancer) to learn. Label each with a random letter (e.g. I for the correct one and K for the incorrect one).
Have one group learn each technique—have them videotape their actions and send them corrections by text, so that they don’t get cues about whether you expect the methods to work.
Have another party ignorant of the technique perform tests to see how well each group does.
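As a minimal sketch of how the final comparison might be scored, assuming each attempt is simply recorded as success or failure (the tallies and the choice of test are illustrative, not prescribed above):

    from scipy.stats import fisher_exact

    # Hypothetical tallies from the blinded trials: one row per technique,
    # columns are (worked, didn't work). All numbers are invented for illustration.
    technique_i = (14, 6)   # predicted by the "chi" model to work
    technique_k = (5, 15)   # predicted by the "chi" model not to work

    odds_ratio, p_value = fisher_exact([list(technique_i), list(technique_k)],
                                       alternative="greater")
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
    # A small p-value would mean the model's prediction of which technique works
    # actually discriminates; a large one would mean it doesn't.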
I like the idea of scientifically testing internal arts; and your idea is certainly more rigorous than TV series attempting to approach martial arts “scientifically” like Mind, Body, and Kickass Moves. Unfortunately, the only one of those I can think of which is both (1) explainable in words and pictures to a precise enough degree that “chi”-type theories could constrain expectations, and (2) has an unambiguous result when done correctly which varies qualitatively from an incorrect attempt is the knockout series of hits, which raises both ethical and practical concerns.
I would classify the other two as tacit knowledge—they require a little bit of instruction on the counterintuitive parts; then a lot of practice which I can’t think of a good way to fake.
Note that I would be completely astonished if there weren’t a perfectly normal explanation for any of these feats; but deriving methods for them from first principles of biomechanics and cognitive science would take a lot longer than studying with a good teacher who works with the “chi” model.
The problem is that a positive result would only show that a specific sequence of attacks worked well. It wouldn’t show that “chi” or other unusual models were required to explain it; there could be perfectly normal explanations for why a series of attacks was effective.
That’s why I suggested writing down both techniques which should work according to the model and techniques which should not work according to the model.
Question: what’s your experience with stuff that seems new-agey at first look, like yoga, meditation and so on? Anything worth trying?
The Five Tibetans are a set of physical exercises which rejuvenate the body to youthful vigour and prolong life indefinitely. They are at least 2,500 years old, and practiced by hidden masters of secret wisdom living in remote monasteries in Tibet, where, in the earlier part of the 20th century, a retired British army colonel sought out these monasteries, studied with the ancient masters to great effect, and eventually brought the exercises to the West, where they were first published in 1939.
Ok, you don’t believe any of that, do you? Neither do I, except for the first eight words and the last six. I’ve been doing these exercises since the beginning of 2009, since being turned on to them by Steven Barnes’ blog and they do seem to have made a dramatic improvement in my general level of physical energy. Whether it’s these exercises specifically or just the discipline of doing a similar amount of exercise first thing in the morning, every morning, I haven’t taken the trouble to determine by varying them.
I also do yoga for flexibility (it works) and occasionally meditation (to little detectable effect). I’d be interested to hear from anyone here who meditates and gets more from it than I do.
I’ve had great results from modest (2-3 hrs/wk) investments in hatha yoga, over and above what I get from standard Greco-Roman “calisthenics.”
Besides the flexibility, breathing, and posture benefits, I find that the idea of ‘chakras’ is vaguely useful for focusing my conscious attention on involuntary muscle systems. I would be extremely surprised if chakras “cleaved reality at the joints” in any straightforward sense, but the idea of chakras helps me pay attention to my digestion, heart rate, bladder, etc. by making mentally uninteresting but nevertheless important bodily functions more interesting.
I’ve done yoga every week for the last month or two. It’s pleasant. Other than paying attention to how I’m holding my body vs. the instruction, I mostly stop thinking for an hour (as we’re encouraged to do), which is nice.
I can’t say I notice any significant lasting effects yet. I’m slightly more flexible.
Hard to say—even New Agey stuff evolves. (Not many followers of Reich pushing their copper-lined closets these days.)
Generally, background stuff is enough. There’s no shortage of hard scientific evidence about yoga or meditation, for example. No need for heuristics there. Similarly there’s some for float tanks. In fact, I’m hard pressed to think of any New Agey stuff where there isn’t enough background to judge it on its own merits.
Meditation can be pretty darn relaxing. Especially if you happen to live within walking distance of any pleasant yet sparsely-populated mountaintops. I would recommend giving it a shot; don’t worry about advanced techniques or anything, and just close your eyes and focus on your breathing, and the wind (if any). Very pleasant.
To have the experience.
I don’t mean it as a treatment, but as something that would be exciting, new and worth trying just for the sake of it.
edit/add: the deleted comment above asked why I would bother to do something like floating.
(This is a draft that I propose posting to the top level, with such improvements as will be offered, unless feedback suggests it is likely not to achieve its purposes. Also reply if you would be willing to co-facilitate: I’m willing to do so but backup would be nice.)
Do you want to become stronger in the way of Bayes? This post is intended for people whose understanding of Bayesian probability theory is currently between levels 0 and 1, and who are interested in developing deeper knowledge through deliberate practice.
Our intention is to form a self-study group composed of peers, working with the assistance of a facilitator—but not necessarily of a teacher or of an expert in the topic. Some students may be somewhat more advanced along the path, and able to offer assistance to others.
Our first text will be E. T. Jaynes’s Probability Theory: The Logic of Science, which can be found in PDF form (in a slightly less polished version than the book edition) here or here.
We will work through the text in sections, at a pace allowing thorough understanding: expect one new section every week, maybe every other week. A brief summary of the currently discussed section will be published as an update to this post, and simultaneously a comment will open the discussion with a few questions, or the statement of an exercise. Please use ROT13 whenever appropriate in your replies.
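For the ROT13 convention, Python’s built-in codec is enough; a tiny sketch (the answer string is just an invented example):

    import codecs

    # Spoiler-proof an exercise answer before posting (Python's built-in rot_13 codec).
    answer = "The posterior probability is 3/4"
    spoiler = codecs.encode(answer, "rot_13")
    print(spoiler)                           # Gur cbfgrevbe cebonovyvgl vf 3/4
    print(codecs.decode(spoiler, "rot_13"))  # round-trips back to the original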
A first comment below collects intentions to participate. Please reply to this comment only if you are genuinely interested in gaining a better understanding of Bayesian probability and willing to commit to spend a few hours per week reading through the section assigned or doing the exercises. A few days from now the first section will be posted.
This sounds great, I’m definitely in. I feel like I have a moderately okay intuitive grasp on Bayescraft but a chance to work through it from the ground up would be great.
I’m in.
I already read the first few chapters, but it will be nice to go over them again to solidify that knowledge. The slower pace will help as well. The later chapters rely on some knowledge of statistics; maybe some member of the book club is already knowledgeable enough to find good links to summaries of these things when they come up?
I would be interested, what is the intended time period for the reading? I have a two-week trip coming up when I will probably be busy but aside from that I would very much like to participate.
The plan, I think, would be to start nice and slow, then adjust as we gain confidence. We’re likely to start with the first chapter, so you could get a head start by reading that before we start for real, which is looking likely now, as we have quite a few more people than the last time this was brought up.
I’m interested. I already have the book but haven’t progressed very far so this seems like it’s potentially a good motivator to finish it. The link to the PDF seems to be missing btw.
This one came up at the recent London meetup and I’m curious what everyone here thinks:
What would happen if CEV was applied to the Baby Eaters?
My thoughts are that if you applied it to all baby eaters, including the living babies and the ones being digested, it would end up in a place that adult baby eaters would not be happy with. If you expanded it to include all babyeaters that ever existed, or that would ever exist, knowing the fate of 99% of them, the effect would be much more pronounced. So what I make of all this is that either CEV is not utility-function-neutral, or the babyeater morality is objectively unstable when aggregated.
What would happen if CEV was applied to the Baby Eaters?
My intuitions of CEV are informed by the Rawlsian Veil of Ignorance, which effectively asks: “What rules would you want to prevail if you didn’t know in advance who you would turn out to be?”
Where CEV as I understand it adds more information—assumes our preferences are extrapolated as if we knew more, were more the kind of people we want to be—the Veil of Ignorance removes information: it strips people under a set of specific circumstances of the detailed information about what their preferences are, what contingent histories brought them there, and so on. This includes things like what age you are, and even—conceivably—how many of you there are.
To this bunch of undifferentiated people you’d put the question, “All in favor of a 99% chance of dying horribly shortly after being born, in return for the 1% chance to partake in the crowning glory of babyeating cultural tradition, please raise your hands.”
I expect that not dying horribly takes lexical precedence over any kind of cultural tradition, for any sentient being whose kin has evolved to sentience (it may not be that way for constructed minds). So I would expect the Babyeaters to choose against cultural tradition.
The obvious caveat is that my intuitions about CEV may be wrong, but lacking a formal explanation of CEV it’s hard to check intuitions.
You’re correct. I’m using the term “people” loosely. However, I wrote the grand-parent while fully informed of what the Babyeaters are. Did you mean to rebut something in particular in the above?
“All in favor of a 99% chance of dying horribly shortly after being born, in return for the 1% chance to partake in the crowning glory of babyeating cultural tradition, please raise your hands.”
If we translate it to our cultural context, we will get something like “All in favor of 100% of you dying horribly of old age, in return for good lives for your babies, please raise your hands.” They ARE aliens.
Well, we would say “no” to that, if we had the means to abolish old age. We’d want to have our cake and eat it too.
The text stipulates that it is within the BE’s technological means to abolish the suffering of the babies, so I expect that they would choose to do so, behind the Veil.
Who will ask them? The FAI has no idea that a) baby eating is bad, or b) that it should generalize moral values past BEs to all conscious beings.
Even if the FAI does ask that question, and it turns out that the majority of the population doesn’t want to do the inherently good thing (which it is, for them), then the FAI must undergo controlled shutdown.
EDIT: To disambiguate: I am talking about an FAI which is implemented by BEs.
Just as we should not allow our FAI to generalize morals past conscious beings, to be sure that it will not take the CEV of all bacteria, so the BEs should not allow their FAI to generalize past BEs.
Just as we should build an automatic off switch into our FAI, to stop it if its goals are inherently wrong, so should the BEs.
It doesn’t seem from the story like the babies are gladly sacrificing for the tribe...
“But...” said the Master. “But, my Lady, if they want to be eaten—”
“They don’t,” said the Xenopsychologist. “Of course they don’t. They run from their parents when the terrible winnowing comes. The Babyeater children aren’t emotionally mature—I mean they don’t have their adult emotional state yet. Evolution would take care of anyone who wanted to get eaten. And they’re still learning, still making mistakes, so they don’t yet have the instinct to exterminate violators of the group code. It’s a simpler time for them. They play, they explore, they try out new ideas. They’re...” and the Xenopsychologist stopped. “Damn,” she said, and turned her head away from the table, covering her face with her hands. “Excuse me.” Her voice was unsteady. “They’re a lot like human children, really.”
Yes. It’s horrible. For us. But why should the FAI place any weight on removing that? How can the FAI generalize past “the life of a Babyeater is sacred” to “the life of every conscious being is sacred”? The FAI has every evidence that the latter is plainly wrong.
Do you want to convince me, or the FAI, that it’s bad? I know that it is; I’m just trying to demonstrate that FAI as it stands is about preservation, not development toward (universally) better ends.
Why? There must be very strong arguments for BEs to stop doing the Right Thing. And there’s only one source of objections—the children—and their volitions will be selfish and unaggregatable.
EDIT: What does utility-function-neutral mean?
EDIT: OK, OK. The CEV will be to change the BEs’ morals and allow them to not eat children. So the FAI will undergo controlled shutdown. Objections, please?
EDIT: Here are some more arguments.
Guidelines of FAI as of May 2004:
Defend humans, the future of humankind, and humane nature.
The BEs will formulate this as “Defend BEs (except for the ceremony of BEing), the future of BEkind, and the BEs’ nature.”
Encapsulate moral growth.
The BEs never considered that child eating is bad. And for them it is good to kill anyone who thinks otherwise.
There’s no trend in their morals that can be encapsulated.
Humankind should not spend the rest of eternity desperately wishing that the programmers had done something differently.
If they stop being BEs, they will mourn their wrongdoings to the death.
Avoid creating a motive for modern-day humans to fight over the initial dynamic
Every single suggestion the FAI makes along the lines of “Let’s suppose that you were not a BE” will cause it to be destroyed.
Help people.
Help BEs at all times, except during the ceremony of BEing.
How will any of this take the FAI to the point that every conscious being must live?
While searching for literature on “intuition”, I came upon a book chapter that gives “the state of the art in moral psychology from a social-psychological perspective”. This is the best summary I’ve seen of how morality actually works in human beings.
The authors give out the chapter for free by email request, but to avoid that trivial inconvenience, I’ve put up a mirror of it.
ETA: Here’s the citation for future reference: Haidt, J., & Kesebir, S. (2010). Morality. In S. Fiske, D. Gilbert, & G. Lindzey (Eds.), Handbook of Social Psychology, 5th Edition. Hoboken, NJ: Wiley. Pp. 797-832.
[T]o avoid that trivial inconvenience, I’ve put up a mirror of it.
You’re awesome.
I’ve previously been impressed by how social psychologists reason, especially about identity. Schemata theory is also a decent language for talking about cognitive algorithms from a less cognitive sciencey perspective. I look forward to reading this chapter. Thanks for mirroring, I wouldn’t have bothered otherwise.
Many are calling BP evil and negligent; has there actually been any evidence of criminal activity on their part? My first guess is that we’re dealing with hindsight bias. I am still casually looking into it, but I figured some others here may have already invested enough work into it to point me in the right direction.
Like any disaster of this scale, it may be possible to learn quite a bit from it, if we’re willing.
It depends on what you mean by “criminal”; under environmental law, there are both negligence-based (negligent discharge of pollutants to navigable waters) and strict liability (no intent requirement, such as killing of migratory birds) crimes that could apply to this spill. I don’t think anyone thinks BP intended to have this kind of spill, so the interesting question from an environmental criminal law perspective is whether BP did enough to be treated as acting “knowingly”—the relevant intent standard for environmental felonies. This is an extremely slippery concept in the law, especially given the complexity of the systems at issue here. Litigation will go on for many years on this exact point.
I’ve read somewhere that a BP internal safety check performed a few months ago indicated “unusual” problems which, according to BP’s own internal safety guidelines, should have been resolved earlier, but somehow they made an exception this time. It didn’t seem like it would have been “illegal”, and it also did not note how often such exceptions are made, by what reasoning, what kind of problems they specifically encountered, what they did to keep the operation running, et cetera.
Though I seldom read “ordinary” news, even of this kind, as past experience tells me that the factual content is rather low, and most of the high-quality press prefers to show off with opinion and interpretation of an event rather than trying to provide an accurate historical report, at least within such a short time frame. It could well be that this event is different.
Also, as with most engineering disciplines, really learning from such an event beyond the obvious “there is a non-zero chance for everything to blow up” usually requires more area-specific expertise than an ordinary outsider has.
I’ve heard scattered bits of accusations of misdeeds by BP which may have contributed to the spill. Here’s a list from the congressional investigation of 5 decisions that BP made “for economic reasons that increased the danger of a catastrophic well failure” according to a letter from the congressmen. It sounds like BP took a bunch of risky shortcuts to save time and money, although I’d want to hear from people who actually understand the technical issues before being too confident.
There are other suspicions and allegations floating around, like this one.
I’m not sure it’s relevant whether they did anything illegal or not. People always seem to want to blame and punish someone for their problems. In my opinion, they should be forced to pay for and compensate for all the damage, as well as pay a very large fine as punishment. This way, in the future they, and other companies, can regulate themselves and prepare for emergencies as efficiently as possible, without arbitrary and clunky government regulations and agencies trying to slap everything together at the last moment. Of course, if a single person actually did something irresponsible (e.g., Bob the worker just used duct tape to fix that pipe, knowing that it wouldn’t hold), then they should be able to be tried in court or sued/fined by the company. But even then, it’s up to the company to make sure that stuff like this doesn’t happen by making sure all of their workers are competent and certified.
You are not really going to learn much unless you are interested in wading through lots of technical articles. If you want to learn, you need to wait until it has been digested by relevant experts into books. I am not sure what you think you can learn from this, but there are two good books of related information available now:
Jeff Wheelwright, Degrees of Disaster, about the environmental effects of the Exxon Valdez spill and the clean up.
Trevor Kletz, What Went Wrong?: Case Histories of Process Plant Disasters, which is really excellent. [For general reading, an older edition is perfectly adequate, new copies are expensive.] It has an incredible amount of detail, and horrifying accounts of how apparently insignificant mistakes can (often literally) blow up on you.
Also, Richard Feynman’s remarks on the loss of the Space Shuttle Challenger are a pretty accessible overview of the kinds of dynamics that contribute to major industrial accidents. http://history.nasa.gov/rogersrep/v2appf.htm
I have been reading the “economic collapse” literature since I stumbled on Casey’s “Crisis Investing” in the early 1980s. They have really good arguments, and the collapses they predict never happen. In the late-90s, after reading “Crisis Investing for the Rest of the 1990s”, I sat down and tried to figure out why they were all so consistently wrong.
The conclusion I reached was that humans are fundamentally more flexible and more adaptable than the collapse-predictors’ arguments allowed for, and society managed to work around all the regulations and other problems the government and big businesses keep creating. Since the regulations and rules keep growing and creating more problems and rigidity along the way, eventually there will be a collapse, but anyone who gives any kind of timing for it is grabbing the short end of the stick.
Anyone here have more suggestions as to reasons they have been wrong?
(originally posted on esr’s blog 2010-05-09, revised and expanded since)
Not sure if you’re referring to the same literature, but I note a great divergence between peak oil advocates and singularitarians. This is a little weird, if you think of Aumann’s Agreement theorem.
Both groups are highly populated with engineer types, highly interested in cognitive biases, group dynamics, habits of individuals and societies and neither are mainstream.
Both groups use extrapolation of curves from very real phenomena. In the case of the Kurzweilian singularitarians it is computing power, and in the case of the peak oil advocates it is the Hubbert curve for resources, along with solid Net Energy–based arguments about how civilization should decline.
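For concreteness, the two extrapolations in caricature; every parameter below is invented and not anyone’s actual forecast:

    import math

    def moores_law(year, base_year=2010, doubling_years=2.0):
        """Exponential growth in computing power, relative to the base year."""
        return 2 ** ((year - base_year) / doubling_years)

    def hubbert(year, peak_year=2010, width=15.0, peak_rate=1.0):
        """Bell-shaped production curve for a finite resource (logistic derivative)."""
        return peak_rate / math.cosh((year - peak_year) / width) ** 2

    # One curve keeps climbing, the other peaks and declines:
    for y in (2010, 2030, 2050):
        print(y, round(moores_law(y), 1), round(hubbert(y), 3))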
The extreme among the peak oil advocates are collapsitarians and believe that people should drastically change their lifestyles if they want to survive. They are also not waiting for others to join them, and many are preparing to move to small towns, villages, etc. The Oil Drum, linked here, started as a moderate peak oil site discussing all possibilities; nowadays, apparently, it’s all doom, all the time.
The extreme among the singularitarians are asked for no such sacrifice, just to give enough money and support to make sure that Friendly AI is achieved first.
Both groups believe that business as usual cannot go on for too long, but they expect dramatically different consequences. The singularitarians assert that economic conditions and technology will improve until a nonchalant superintelligence is created and wipes out humanity. The collapsitarians believe that economic conditions will worsen, that civilization is not built robustly, and that it will collapse badly, with humanity probably going extinct or only the last hunter-gatherers surviving.
It should be possible to believe both—unless you’re expecting peak oil to lead to social collapse fairly soon, Moore’s law could make a singularity possible while energy becomes more expensive.
Which could suggest a distressing pinch point: not wanting to delay AI too long in case we run out of energy for it to use; not wanting to make an AI too soon in case it’s Unfriendly.
Y2K. I thought I had a solid lower bound for the size of that one: small businesses basically did nothing in preparation, and they still had a fair amount of dependence on date-dependent programs, so I was expecting that the impact on them would set a sizable lower bound on the size of the overall impact. I’ve never been so glad to be wrong. I would still like to see a good retrospective explaining how that sector of the economy wound up unaffected...
Small businesses basically did nothing in preparation [for Y2K], and they still had a fair amount of dependence on date-dependent programs
The smaller the business, the less likely they are to have their own software that’s not simply a database or spreadsheet managed in, say, a Microsoft product. The smaller the business, the less likely that anything automated is relying on correct date calculations.
These at least would have been strong mitigating factors.
[Edit: also, even industry-specific programs would likely be fixed by the manufacturer. For example, most of the real-estate software produced by the company I worked for in the ’80s and ’90s had been Y2K-ready since before 1985.]
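A toy illustration of the date arithmetic at stake; the function and figures are hypothetical, not from any real system:

    # Two-digit-year arithmetic, the core of the Y2K worry. Code like this was
    # common wherever storage was tight; the numbers are invented for illustration.
    def years_since_purchase(purchase_yy, current_yy):
        return current_yy - purchase_yy

    print(years_since_purchase(95, 99))  #  4  -- fine in 1999
    print(years_since_purchase(95, 0))   # -95 -- nonsense once "00" means 2000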
First, the “economic collapse” I referred to in the original post were actually at least 6 different predictions at different times.
As another example, though not quite a “collapse” scenario, consider the predictions of the likelihood of nuclear war; there were three distinct periods when it was considered more or less likely by different groups. In the late 1940s, some intelligent and informed but peripheral observers, like Robert Heinlein, considered it a significant risk. Next was the late 1950s through the Cuban Missile Crisis in the early 1960s, when nearly everybody considered it a major risk. Then there was another scare in the late 1970s to early 1980s, driven primarily by leftists (including the media) favoring disarmament, who promulgated the fear to try to get the US to reduce its stockpiles, and by conservatives (derided by the media as “survivalists” and nuts) who were afraid they would succeed.
Almost invariably everything is larger in your imagination than in real life, both good and bad, the consequences of mistakes loom worse, and the pleasure of gains looks better. Reality is humdrum compared to our imaginations. It is our imagined futures that get us off our butts to actually accomplish something.
And the fact that what we do accomplish is done in the humdrum, real world, means it can never measure up to our imagined accomplishments, hence regrets. Because we imagine that if we had done something else it could have measured up. The worst part of having regrets is the impact it has on our motivation.
somewhat expanded version of comment on OB a couple of months ago
Added: I didn’t make the connection at first, but this is also Eliezer’s point in this quote from The Super Happy People story, “It’s bad enough comparing yourself to Isaac Newton without comparing yourself to Kimball Kinnison.”
I was talking to a friend yesterday and he mentioned a psychological study (I am trying to track down the source) showing that people tend to suffer MORE from failing to pursue certain opportunities than from FAILING after pursuing them. So even if you’re right about the overestimation of pleasure, it might just be irrelevant.
Here is a review of that psychological research (pdf), and there are more studies linked here (the keyword to look for is “regret”). The paper I linked is:
Gilovich, T., & Medvec, V. H. (1995). The experience of regret: What, when, and why. Psychological Review, 102, 379-395.
This article reviews evidence indicating that there is a temporal pattern to the experience of regret. Actions, or errors of commission, generate more regret in the short term; but inactions, or errors of omission, produce more regret in the long run. The authors contend that this temporal pattern is multiply determined, and present a framework to organize the divergent causal mechanisms that are responsible for it. In particular, this article documents the importance of psychological processes that (a) decrease the pain of regrettable action over time, (b) bolster the pain of regrettable inaction over time, and (c) differentially affect the cognitive availability of these two types of regrets. Both the functional and cultural origins of how people think about regret are discussed.
No doubt there is truth in this… however, examples spring to mind where accomplishing something made me feel better than I ever expected. This includes sport (ever win a race, or score a goal in a high-stakes soccer game?), work, and personal life. The “reality is humdrum” perspective might, at least in part, be caused by a disconnect between “imagination” and “action”.
Inspired by Chapter 24 of Methods of Rationality, but not a spoiler: If the evolution of human intelligence was driven by competition between humans, why aren’t there a lot of intelligent species?
Five-second guess: Human-level Machiavellian intelligence needs language facilities to co-evolve with; grunts and body language don’t allow nearly such convoluted schemes. Evolving some precursor form of human-style language is the improbable part that other species haven’t managed to pull off.
A somewhat accepted partial answer is that huge brains are ridiculously expensive—you need a lot of high-energy-density food (= fire), a lot of DHA (= fish), etc. A chimp diet simply couldn’t support brains like ours (see also the aquatic ape hypothesis, etc.), nor could chimps spend as much time as we do engaging in politics, as they were too busy just getting food.
Perhaps chimp brains are as big as they could possibly be given their dietary constraints.
That’s conceivable, and might also explain why wolves, crows, elephants, and other highly social animals aren’t as smart as people.
Also, I think the original bit in Methods of Rationality overestimates how easy it is for new ideas to spread. As came up recently here, even if tacit knowledge can be explained, it usually isn’t.
This means that if you figure out a better way to chip flint, you might not be able to explain it in words, and even if you can, you might choose to keep it as a family or tribal secret. Inventions could give their inventors an advantage for quite a long time.
About CEV: Am I correct that Eliezer’s main goal would be to find the one utility function for all humans? Or is it equally plausible to assume that some important values cannot be extrapolated coherently, and that a Seed-AI would therefore provide several results clustered around some groups of people?
[edit]Reading helps. This he has actually discussed, in sufficient detail, I think.[/edit]
I think the expectation is that, if all humans had the same knowledge and were better at thinking (and were more the people we’d like to be, etc.), then there would be a much higher degree of coherence than we might expect, but not necessarily that everyone would ultimately have the same utility function.
Or is it equally plausible to assume that some important values cannot be extrapolated coherently, and that a Seed-AI would therefore provide several results clustered around some groups of people?
There is only one world to build something from. “Several results” is never a solution to the problem of what to actually do.
Please bear with my bad English, this did not come across as intended.
So: Either all or nothing?
Is there no possibility that the AI could determine that, to maximize this hardcore utility function, we need to separate different groups of people, maybe/probably lying to them about their separation and just providing the illusion of the unity of humankind to each group? Or is that too obvious a thought, or too dumb because of X?
I think the idea is that CEV lets us “grow up more together” and figure that out later.
I have only recently started looking into CEV so I’m not sure whether I a) think it’s a workable theory and b)think it’s a good solution, but I like the way it puts off important questions.
It’s impossible to predict what we will want if age, disease, violence, and poverty become irrelevant (or at least optional).
I’d like to ask everyone what probability bump they give to an idea given that some people believe it.
This is based on the fact that out of the humongous idea-space, some ideas are believed by (groups of) humans, and a subset of those are believed by humans and are true. (of course there exist some that are true and not yet believed by humans.)
So, given that some people believe X, what probability do you give for X being true, compared to Y which nobody currently believes?
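One way to make the question concrete is as a likelihood ratio in Bayes’ theorem; a minimal sketch with made-up numbers:

    def update(prior, likelihood_ratio):
        """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
        prior_odds = prior / (1 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1 + posterior_odds)

    prior = 0.01  # invented prior for a typical claim drawn from idea-space

    # If people are, say, three times likelier to assert a claim when it's true
    # than when it's false, the "bump" is modest but real:
    print(update(prior, 3.0))  # ~0.029
    # A likelihood ratio of 1 means belief carries no evidence; the prior stands:
    print(update(prior, 1.0))  # 0.01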
I’d like to ask everyone what probability bump they give to an idea given that some people believe it.
Usually fairly substantial—if someone presents me with two equally-unsupported claims X and Y and tells me that they believe X and not Y, I would give greater credence to X than to Y. Many times, however, that credence would not reach the level of … well, credence, for various good reasons.
Depends on the person and the idea.
I have some people whose recommendations I follow regardless, even if I estimate upfront that I will consider the idea wrong. There are different levels of wrongness, and it does not hurt to get good counterarguments.
It also depends on the real-life practicability of the idea. If it is for everyday things, then common sense is a good starting prior. (There is also a time and place to use the “ask the audience” lifeline on Who Wants to Be a Millionaire.)
If a group of professionals agree on something related to their profession it is also a good start.
To systematize: if a group of people has a belief about something they have experience with, then that belief is worth looking at.
And then on further investigation it often turns out that there are systematic mistakes being made.
I was shocked to read in the book on checklists that it is not only doctors who often don’t like them, but even financial companies that can see how using them increases their monetary gains.
But finding flaws in a whole group does not imply that everything they say is wrong.
It is good to see a doctor, even if he is not using statistics right. He can refer you to a specialist, and treat all the common stuff right away.
If you get a complicated disease you can often read up on it.
The obvious example to your question would be religion. It is widely believed, but probably wrong, yet I did not discard it right away, but spent years studying stuff till I decided there was nothing to it.
There is nothing wrong in examining the ideas other people have.
As the OP states, idea space is humongous. The fact alone that people comprehend something sufficiently to say anything about it at all means that this something is
a) noteworthy enough to be picked up by our evolutionarily derived faculties by even a bad rationalist
b) expressible by same faculties
c) not immediately, obviously wrong
To sum up, the fact that someone claims something is weak evidence that it’s true, cf. Einstein’s Arrogance. If this someone is Einstein, the evidence is not so weak.
Edit: just to clarify, I think this evidence is very weak, but evidence for the proposition, nonetheless. Dependent on the metric, by far most propositions must be “not even wrong”, i.e. garbled, meaningless or absurd. The ratio of “true” to {”wrong” + “not even wrong”} seems to ineluctably be larger for propositions expressed by humans than for those not expressed, which is why someone uttering the proposition counts as evidence for it. People simply never claim that apples fall upwards, sideways, green, kjO30KJ&¤k etc.
I forgot the major influence of my own prior knowledge. (Which I guess holds true for everyone.) That makes the cases where I had a fixed opinion, and managed to change it, all the more interesting.
If you have never dealt with an idea before, you go where common sense or the experts lead you. But if you already have good knowledge, then public opinion should do nothing to your view.
Public opinion, or even experts (especially when outside their field), often enough state opinions without comprehending the idea. So it doesn’t really mean too much.
Regarding Einstein, he made the statements before becoming super famous. I understand it as a case of signaling ‘look over here!’ And he is not particularly safe against errors. One of his last actions (which I have not fact checked sufficiently so far) was to write a foreword for a book debunking the movement of the continental plates.
Regarding Einstein, he made the statements before becoming super famous. I understand it as a case of signaling ‘look over here!’ And he is not particularly safe against errors. One of his last actions (which I have not fact checked sufficiently so far) was to write a foreword for a book debunking the movement of the continental plates.
I didn’t intend to portray Einstein as bulletproof, but rather to highlight his reasoning, plus to point at the idea of even locating a claim in idea space. Obviously, creationism is wrong, but less wrong than a random string. It at least manages to identify a problem and to use cause and effect.
If no people believe Y—literally no people—then either the topic is very little examined by human beings, or it’s very exhaustively examined and seems obvious to everyone. In the first case, I give a smaller probability than in the second case.
In the first case, only X believers exist because only X believers have yet considered the issue. That’s minimal evidence in favor of X.
In the second case, lots of people have heard of the issue; if there were a decent case against X, somebody would have thought of it. The fact that none of them—not a minority, but none—argued against X is strong evidence that X is true.
If no people believe Y—literally no people—then either the topic is very little examined by human beings, or it’s very exhaustively examined and seems obvious to everyone. In the first case, I give a smaller probability than in the second case.
I don’t think belief has a consistent evidentiary strength, since it depends on the testifier’s credibility relative to my own. Children have much lower credibility than me on the issue of the existence of Santa. Professors of physics have much higher credibility than me on the issue of dimensions greater than four. Some person other than me has much higher credibility on the issue of how much money they are carrying. But I have more credibility than anyone else on the issue of how much money I’m carrying. I don’t see any relation that could be described as baseline, so the only answer is: context.
I’ve become increasingly disillusioned with people’s capacity for abstract thought. Here are two points on my journey.
The public discussion of using wind turbines for carbon-free electricity generation seems to implicitly assume that electricity output goes as something like the square-root of windspeed. If the wind is only blowing half speed you still get something like 70% output. You won’t see people saying this directly, but the general attitude is that you only need back up for the occasional calm day when the wind doesn’t blow at all.
In fact output goes as the cube of windspeed. The energy in the windstream is ½mv², where m, the mass passing your turbine per unit time, is proportional to the windspeed. If the wind is at half strength, you only get 1/8 of the output.
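A quick numerical check of the cube law; this is the idealized relation only, ignoring cut-in speeds and the cap at rated power:

    # Idealized turbine output: P = 0.5 * rho * A * Cp * v**3, so relative output
    # scales as the cube of wind speed (real turbines also cut in, cut out, and
    # cap at rated power; this sketch ignores all of that).
    def relative_output(wind_speed, rated_speed=12.0):
        return (wind_speed / rated_speed) ** 3

    print(relative_output(12.0))  # 1.0   -- full output at rated speed
    print(relative_output(6.0))   # 0.125 -- half the wind gives only 1/8 the power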
Well, that is physics. Of course people suck at physics. The trouble is, the more I look at people’s capacity for abstract thought, the more problems I see. When people do a cost/benefit analysis they are terribly vague on whether they are supposed to add the costs and benefits or whether the costs get subtracted from the benefits. Even if they realise that they have to subtract, they are still at risk of using an inverted scale for the costs and ending up effectively adding.
The probability bump I give to an idea just because some people believe it is zero. Equivalently, my odds ratio is one. However you describe it, my posterior is just the same as my prior.
When people do a cost/benefit analysis they are terribly vague on whether they are supposed to add the costs and benefits or whether the costs get subtracted from the benefits.
Revised: I do not think that link provides evidence for the quoted sentence. Nor do I see other evidence that people are that bad at cost-benefit analysis. I agree that the example presented there is interesting, and that one should keep in mind that disagreements about values can be hidden, sometimes maliciously.
I’ve got a better link. David Henderson catches a professor of economics getting costs and benefits confused in a published book. Henderson’s review is on page 54 of Regulation, and my viewer puts it on the ninth page of the PDF that Henderson links to.
That is a good example. Talk of creating jobs as a benefit, rather than a cost, is quite common. But is it confusion or malice? It is hard for me to imagine that economists would publish such a book without having it pointed out to them. The audience certainly is confused. Henderson says “Almost no one spending his own money makes this mistake” and would not generalize it to people’s capacity for abstract thought.
The original question was how much information to extract from the conventional wisdom. I do not take this as a reason to doubt the conventional wisdom about personal decisions—partly because this is public choice, and partly because people do not address externalities in their personal decisions. Maybe any commonly accepted argument involving economics should be suspect, though the existence of the very well-established applause line of “creating jobs” suggests that there are limits to how far people can be fooled. But your claim was not that people are bad at physics and economics, but at the abstract thought of decision theory.
I recently learned the hard way that one can easily be an idiot in one area while being very competent in another.
Religious scientists, programmers, etc.
Or let’s say people who are highly competent in their area of occupation without looking into other things.
Out of the huge idea space of possible causally linked events, some of them make good stories and some do not. That doesn’t tell you whether it’s true or not.
If a guy thinks that he can hear Hillary Clinton speaking from the feelings in his teeth, telling him to murder his cellmate, do you believe what he says? Status gets mucked up in the calculation, but with strangers it teeters precariously close to zero.
I really like kids, but the fact that millions of them passionately believe in Santa Claus does not change my degree of subjective belief one iota.
Well obviously propositions with extremely high complexity (and therefore very low priors) are going to remain low even when people believe them. But if someone says they believe they have 10 dollars on them or that the US Constitution was signed in September… the belief is enough to make those claims more likely than not.
Out of the huge idea space of possible causally linked events, some of them make good stories and some do not. That doesn’t tell you whether it’s true or not.
But people only believe things that make sense to them. When it comes to controversial issues, then yes, you’ll find that most people will be divided on them. However, we elect people to lead us in the faith that the majority opinion is right, so even that isn’t entirely true. And out of the vast space of possible ideas, most people who live in the same society will agree or disagree the same way on the majority of them, especially if they have the same background knowledge.
I’d like to ask everyone what probability bump they give to an idea given that some people believe it.
None.
Or as Ben Goldacre put it in a talk: There are millions of medical doctors and Ph.D.s in the world. There is no idea, however completely fucking crazy, that you can’t find some doctor to argue for.
So, given that some people believe X, what probability do you give for X being true, compared to Y which nobody currently believes?
In any case of a specific X and Y, there will be far more information than that (who believes X and why? does anyone disbelieve Y? etc.), which makes it impossible for me to attach any probability for the question as posed.
Or as Ben Goldacre put it in a talk: There are millions of medical doctors and Ph.D.s in the world. There is no idea, however completely fucking crazy, that you can’t find some doctor to argue for.
Cute quip, but I doubt it. Find me a Ph.D. to argue that the sky is bright orange, that the English language doesn’t exist, and that all humans have at least seventeen arms and a maximum lifespan of ten minutes.
All generalisations are bounded, even when the bounds are not expressed. In the context of his talk, Ben Goldacre was talking about “doctors” being quoted as supporting various pieces of bad medical science.
Many medical doctors around here (Germany) offer homeopathy in addition to their medical practice. Now it might be that they are responding to market demand in order to sneak in some medical science in between, or it might be that they actually take it seriously.
From what I’ve heard, in Germany and other places where homeopathy enjoys high status and professional recognition, doctors sometimes use it as a very convenient way to deal with hypochondriacs who pester them. Sounds to me like a win-win solution.
I still assume that doctors actually want to help people (despite reading the checklist book, and other stuff).
So if I have the choice between world a), where doctors also do homeopathy, and world b), where other people do it while doctors stay true to science, then I would prefer a), because at least people go to a somewhat competent person.
I still assume that doctors actually want to help people
Homeopathy is at best a placebo. It’s rare that there’s no better medical way to help someone. Your assumption is counter to the facts.
Certainly doctors want to help people—all else being equal. But if they practice homeopathy extensively, then they are prioritizing other things over helping people.
If the market condition (i.e. the patients’ opinions and desires) are such that they will not accept scientific medicine, and will only use homeopathy anyway, then I suggest then the best way to help people is for all doctors to publicly denounce homeopathy and thus convince at least some people to use better-than-placebo treatments instead.
Homeopathy is at best a placebo. It’s rare that there’s no better medical way to help someone.
I disagree—at least with the part about “it’s rare that there’s no better medical way to help people”. It’s depressingly common that there’s no better medical way to help people. Things like back pain, tiredness, and muscle aches—the commonest things for which people see doctors—can sometimes be traced to nice curable medical reasons, but very often as far as anyone knows they’re just there.
Robin Hanson has a theory—and I kind of agree with him—that homeopathy fills a useful niche. Placebos are pretty effective at curing these random (and sometimes imagined) aches and pains. But most places consider it illegal or unethical for doctors to directly prescribe a placebo. Right now a lot of doctors will just prescribe aspirin or paracetamol or something, but these are far from totally harmless and there are a lot of things you can’t trick patients into thinking aspirin is a cure for. So what would be really nice, is if there was a way doctors could give someone a totally harmless and very inexpensive substance like water and make the patient think it was going to cure everything and the kitchen sink, without directly lying or exposing themselves to malpractice allegations.
Where this stands or falls is whether or not it turns patients off real medicine and gets them to start wanting homeopathy for medically known, treatable diseases. Hopefully it won’t—there aren’t a lot of people who want homeopathic cancer treatment—but that would be the big risk.
You might implicitly assume that people make a conscious choice to go the unscientific route. That is not the case.
For a layperson there is no perceivable difference between a doctor and a homeopath. (Well, maybe there is, but let’s exaggerate that here.)
In my experience the homeopath might have more time to listen, while doctors often have an approach to treatment speed that reminds me of a fast-food place.
If I were a doctor, then the idea of offering homeopathy, so that people at least come to me, would make sense both money-wise and to get the effect that they are already at a doctor’s place for treatment with placebos for trivial stuff, while actually dangerous conditions get checked out by a competent person.
It’s a case of corrupting your integrity to some degree to get the message heard.
I considered not going to doctors that offer homeopathy, but then decided against that because of this reasoning.
I considered not going to doctors that offer homeopathy, but then decided against that because of this reasoning.
You could probably ask the doctor why they offer homeopathy, and base your decision on the sort of answer you get. “Because it’s an effective cure...” is straight out.
tl;dr—if doctors don’t denounce homeopaths, people will start going to “real” homeopaths and other alt-medicine people, and there is no practical limit to the lies and harm done by real homeopaths.
For a layperson there is no perceivable difference between a doctor and a homeopath.
That is so because doctors also offer homeopathy. If almost all doctors clearly denounced homeopathy, fewer people would choose to go to homeopaths, and these people would benefit from better treatment.
In my experience the homeopath might have more time to listen, while doctors often have an approach to treatment speed that reminds me of a fast-food place.
This is a problem in its own right that should be solved by giving doctors incentives to listen to patients more. However, do you think that because doctors don’t listen enough, homeopaths produce better treatment (i.e. better medical outcomes)?
they are already at a doctor’s place for treatment with placebos for trivial stuff, while actually dangerous conditions get checked out by a competent person.
Do you have evidence that this is the result produced?
What if the reverse happens? Because the doctors endorse homeopathy, patients start going to homeopaths instead of doctors. Homeopaths are better at selling themselves, because unlike doctors they can lie (“homeopathy is not a placebo and will cure your disease!”). They are also better at listening, can create a nicer (non-clinical) reception atmosphere, they get more word-of-mouth networking benefits, etc.
Patients can’t normally distinguish “trivial stuff” from dangerous conditions until it’s too late—even doctors sometimes get this wrong. The next logical step is for people to let homeopaths treat all the trivial stuff, and go to ER when something really bad happens.
Personal story: my mother is a doctor (geriatrician). When I was a teenager I had seasonal allergies and she insisted on sending me for weekly acupuncture. During the hour-long sessions I had to listen to the ramblings of the acupuncturist. He told me (completely seriously) that, although he personally didn’t have the skill, the people who taught him acupuncture in China could use it to cure my type 1 diabetes. He also once told me about someone who used various “alternative medicine” to eat only vine leaves for a year before dying.
When the acupuncture didn’t help me, my mother said that was my own fault because “I deliberately disbelieved the power of acupuncture and so the placebo effect couldn’t work on me”.
I perceive you as attacking me for having said position, but I am the wrong target.
I know homeopathy is BS, and I don’t use it or advocate it.
What I do understand is doctors who offer it for one reason or another, for the reasons listed above. What you claim as a result is sadly already happening. I have had people getting angry at me for clearly stating my view on homeopathy and the reasons for it. (I didn’t say BS, but one of the people was a programmer, if that counts for something.)
Many folks do go to alternative treatments and forgo doctors as long as possible. People have a low opinion of “school medicine” (a translation of the German term for official medical knowledge and practice) and criticize it—sometimes justifiably. And they use all kinds of hyper-skeptical reasoning that they do not apply to their current favorite. That is bad, and hopefully goes away.
Many still go the double route you listed.
And well, then we have the anti-vaccination front growing. It is bad, and sad, and useless stupidity.
Let’s get angry together, and see what can be done about it.
Personal story: I gave a lecture on skeptical thinking.
First try: I dumped everything I knew, and noticed how dealing with the H-topic tends to close people up.
Second try: I cut out a lot, and left the H-topic out. Still didn’t work.
I have no idea what I can do about it, and am basically resigning.
From what I’ve been told by friends, here (Austria) they (meaning: most doctors) do take it seriously. This is understandable; when studying medicine, by far the larger part of college is devoted to knowing facts—the craftsmanship, if I may say so—rather than to doing medical science.
This also makes sense, as applying existing results already requires so much training (it is the only college course here which requires at least six years by default, not including the “Turnus”, another three-year probation period before somebody may practice without a supervisor).
The problem here is that for the general public the difference between a medical practitioner and any scientist is nil. Strangely enough, they usually do not make this error in engineering fields, for instance electrical engineer vs. physicist. It may have something to do with the high status of doctors in society.
I recently found out why doctors cultivate a certain amount of professional arrogance when dealing with patients:
Most patients don’t understand what’s behind their specific disease—and usually do not care. So if doctors were open to argument, or stated doubts more openly, the patient might lose trust and not do what he is told to do.
Instilling an absolute belief in the doctor’s powers might be very helpful for a large part of the population.
A lot of my own frustrating experiences with doctors can be attributed to me being a non-standard patient who reads too much.
Find me a Ph.D. to argue that the sky is bright orange, that the English language doesn’t exist, and that all humans have at least seventeen arms and a maximum lifespan of ten minutes.
These claims would be beyond the border of lunacy for any person, but still, I’m sure you’ll find people with doctorates who have gone crazy and claim such things.
But more relevantly, Richard’s point definitely stands when it comes to outlandish ideas held by people with relevant top-level academic degrees. Here, for example, you’ll find the website of Gerardus Bouw, a man with a Ph.D. in astronomy from a highly reputable university who advocates—prepare for it—geocentrism: http://www.geocentricity.com/
(As far as I see, this is not a joke. Also, I’ve seen criticisms of Bouw’s ideas, but nobody has ever, to the best of my knowledge, disputed his Ph.D. He had a teaching position at a reputable-looking college, and I figure they would have checked.)
He had a teaching position at a reputable-looking college, and I figure they would have checked.
It looks like no one ever hired him to teach astronomy or physics. He only ever taught computer science (and from the sound of it, just programming languages). My guess is he did get the PhD though.
Also, in fairness to the college he is retired and he’s young enough to make me think that he may have been forced into retirement.
Here, for example, you’ll find the website of Gerardus Bouw, a man with a Ph.D. in astronomy from a highly reputable university who advocates—prepare for it—geocentrism:
Earth’s sun does orbit the earth, under the right frame of reference. What is outlandish about this?
Earth’s sun does orbit the earth, under the right frame of reference. What is outlandish about this?
If you read the site, they alternately claim that relativity allows them to use whatever reference frame they choose, and at other points claim that the evidence only makes sense for geocentrism.
I’m not sure it is completely stupid. Consider the argument in the following fashion:
1) We think your physics is wrong and geocentrism is correct.
2) Even if we’re wrong about 1, your physics still supports regarding geocentrism as being just as valid as heliocentrism.
I don’t think that their argument approaches this level of coherence.
It is entirely possible that some social groups are experiencing the kind of changes that Flanagan describes, but as Yglesias says, she apparently is unaware that there is such a thing as scientific evidence on the question.
What solution do people prefer to Pascal’s Mugging? I know of three approaches:
1) Handing over the money is the right thing to do exactly as the calculation might indicate.
2) Debiasing against overconfidence shouldn’t mean having any confidence in what others believe, but just reducing our own confidence; thus the expected gain if we’re wrong is found by drawing from a broader reference class, like “offers from a stranger”.
3) The calculation is correct, but we must pre-commit to not paying under such circumstances in order not to be gamed.
The unbounded utility function (in some physical objects that can be tiled indefinitely) in Pascal’s mugging gives infinite expected utility to all actions, and no reason to prefer handing over the money to any other action. People don’t actually show the pattern of preferences implied by an unbounded utility function.
If we make the utility function a bounded function of happy lives (or other tilable physical structures) with a high bound, other possibilities will offer high expected utility. The Mugger is not the most credible way to get huge rewards (investing in our civilization on the chance that physics allows unlimited computation beats the Mugger). This will be the case no matter how huge we make the (finite) bound.
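To make the contrast concrete, here is a toy numerical sketch (my own, with made-up credences and a made-up bound; none of these figures come from the thread) of how the two kinds of utility function respond to a mugger-style offer:

```python
# Toy sketch: with an unbounded utility the mugger can always win by quoting a
# bigger number; with a bounded utility no quoted number changes the verdict.

def unbounded_utility(lives):
    return float(lives)              # utility grows linearly forever

def bounded_utility(lives, bound=1e12):
    return min(float(lives), bound)  # utility saturates at a (huge) finite bound

def worth_paying(credence, lives_promised, utility, cost_in_utility=1.0):
    """Crude expected-utility test: pay the $5 iff the expected gain exceeds the cost."""
    return credence * utility(lives_promised) > cost_in_utility

credence = 1e-20  # a (generous) made-up credence in the mugger's claim
for lives in (1e15, 1e30, 1e60):
    print(lives,
          worth_paying(credence, lives, unbounded_utility),
          worth_paying(credence, lives, bounded_utility))
# Unbounded: flips to True once the promised number is large enough.
# Bounded:   stays False no matter how large the promise gets.
```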
Bounding the utility function does dissolve the paradox, but it raises a couple of problems of its own. One is the principle that the utility function is not up for grabs; the other is that a bounded utility function has some rather nasty consequences of the “leave one baby on the track” kind.
One is the principle that the utility function is not up for grabs,
I don’t buy this. Many people have inconsistent intuitions regarding aggregation, as with population ethics. Someone with such inconsistent preferences doesn’t have a utility function to preserve.
Also note that a bounded utility function can allot some of the potential utility under the bound to producing an infinite amount of stuff, and that as a matter of psychological fact the human emotional response to stimuli can’t scale indefinitely with bigger numbers.
And, of course, allowing unbounded growth of utility with some tilable physical process means that process can dominate the utility of any non-aggregative goods, e.g. the existence of at least some instantiations of art or knowledge, or overall properties of the world like ratios of very good to lives just barely worth living/creating (although you might claim that the value of the last scales with population size, many wouldn’t characterize it that way).
Bounded utility functions seem to come much closer to letting you represent actual human concerns, or to represent more of them, in my view.
Eliezer’s original article bases its argument on the use of Solomonoff induction. He even suggests up front what the problem with it is, although the comments don’t make anything of it: SI is based solely on program length and ignores computational resources. The optimality theorems around SI depend on the same assumption. Therefore I suggest:
4. Pascal’s Mugging is a refutation of the Solomonoff prior.
But where a computationally bounded agent, or an unbounded one that cares how much work it does, should get its priors from instead would require more thought than a few minutes on a lunchtime break.
In one sense you can’t use evidence to argue with a prior, but I think that factoring in computational resources as a cost would have put you on the wrong side of a lot of our discoveries about the Universe.
In one sense you can’t use evidence to argue with a prior, but I think that factoring in computational resources as a cost would have put you on the wrong side of a lot of our discoveries about the Universe.
Could you expand that with examples? And if you can’t use evidence to argue with a prior, what can you use?
I’m thinking of the way we keep finding ways in which the Universe is far larger than we’d imagined—up to and including the quantum multiverse, and possibly one day including a multiverse-based solution to the fine tuning problem.
The whole point about a prior is that it’s where you start before you’ve seen the evidence. But in practice using evidence to choose a prior is likely justified on the grounds that our actual prior is whatever we evolved with or whatever evolution’s implicit prior is, and settling on a formal prior with which to attack hard problems is something we do in the face of lots of evidence. I think.
I’m thinking of the way we keep finding ways in which the Universe is far larger than we’d imagined
It’s not clear to me how that bears on the matter. I would need to see something with some mathematics in it.
The whole point about a prior is that it’s where you start before you’ve seen the evidence.
There’s a potential infinite regress if you argue that changing your prior on seeing the evidence means it was never your prior, but something prior to it was.
1. You can go on questioning those previous priors, and so on indefinitely, and therefore nothing is really a prior.
2. You stop somewhere with an unquestionable prior, and the only unquestionable truths are those of mathematics, therefore there is an Original Prior that can be deduced by pure thought. (Calvinist Bayesianism, one might call it. No agent has the power to choose its priors, for it would have to base its choice on something prior to those priors. Nor can its priors be conditional in any way upon any property of that agent, for then again they would not be prior. The true Prior is prior to all things, and must therefore be inherent in the mathematical structure of being. This Prior is common to all agents but in their fundamentally posterior state they are incapable of perceiving it. I’m tempted to pastiche the whole Five Points of Calvinism, but that’s enough for the moment.)
3. You stop somewhere, because life is short, with a prior that appears satisfactory for the moment, but which one allows the possibility of later rejecting.
I think 1 and 2 are non-starters, and 3 allows for evidence defeating priors.
Tom_McCabe2 suggests generalizing EY’s rebuttal of Pascal’s Wager to Pascal’s Mugging: it’s not actually obvious that someone claiming they’ll destroy 3^^^^3 people makes it more likely that 3^^^^3 people will die. The claim is arguably such weak evidence that it’s still about equally likely that handing over the $5 will kill 3^^^^3 people, and if the two probabilities are sufficiently equal, they’ll cancel out enough to make it not worth handing over the $5.
Personally, I always just figured that the probability of someone (a) threatening me with killing 3^^^^3 people, (b) having the ability to do so, and (c) not going ahead and killing the people anyway after I give them the $5, is going to be way less than 1/3^^^^3, so the expected utility of giving the mugger the $5 is almost certainly less than the $5 of utility I get by hanging on to it. In which case there is no problem to fix. EY claims that the Solomonoff-calculated probability of someone having ‘magic powers from outside the Matrix’ ‘isn’t anywhere near as small as 3^^^^3 is large,’ but to me that just suggests that the Solomonoff calculation is too credulous.
(Edited to try and improve paraphrase of Tom_McCabe2.)
This seems very similar to the “reference class fallback” approach to confidence set out in point 2, but I prefer to explicitly refer to reference classes when setting out that approach, otherwise the exactly even odds you apply to massively positive and massively negative utility here seem to come rather conveniently out of a hat...
Fair enough. Actually, looking at my comment again, I think I paraphrased Tom_McCabe2 really badly, so thanks for replying and making me take another look! I’ll try and edit my comment so it’s a better paraphrase.
I’m not sure this problem needs a “solution” in the sense that everyone here seems to accept. Human beings have preferences. Utility functions are an imperfect way of modeling those preferences, not some paragon of virtue that everyone should aspire to. Most models break down when pushed outside their area of applicability.
The utility function assumes that you play the “game” (situation, whatever) an infinite number of times and then find the net utility. That’s fine when you’re playing the “game” enough times to matter. It isn’t when you’re only playing a small number of times. So let’s look at it as “winning” or “losing”. If the odds are really low and the risk is high and you’re only playing once, then most of the time you expect to lose. If you do it enough times the odds even out and the loss gets canceled by the large reward, but playing only once you expect to lose more than you gain. Why would you assume differently? That’s my 2 cents, and so far it’s the only way I have come up with to navigate around this problem.
The utility function assumes that you play the “game” (situation, whatever) an infinite number of times and then find the net utility.
This isn’t right. The way utility is normally defined, if outcome X has 10 times the utility of outcome Y for a given utility function, agents behaving in accord with that function will be indifferent between certain Y and a 10% probability of X. That’s why they call expected utility theory a theory of “decision under uncertainty.” The scenario you describe sounds like one where the payoffs are in some currency such that you have declining utility with increasing amounts of the currency.
The scenario you describe sounds like one where the payoffs are in some currency such that you have declining utility with increasing amounts of the currency.
Uh, no. All right, let’s say I give you a 1-in-10 chance at winning 10 times everything you own, but the other 9 times you lose everything. The net utility for accepting is the same as for not accepting, yet that completely ignores the fact that if you do enter, 90% of the time you lose everything, no matter how high the reward is.
As Thom indicates, this is exactly what I was talking about: ten times the stuff you own, rather than ten times the utility. Since utility is just a representation of your preferences, the 1 in 10 payoff would only have ten times the utility of your current endowment if you would be willing to accept this gamble.
That’s only true if “everything you own” is cast in terms of utility, which is not intuitive. Normally, “everything you own” would be in terms of dollars or something to that effect, and ten times the number of dollars I have is not worth 10 times the utility of those dollars.
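For what it’s worth, here is a quick sketch of that point with a logarithmic utility of wealth, a standard stand-in for declining marginal utility (the numbers are mine, purely for illustration):

```python
import math

# The 1-in-10 shot at ten times your wealth has (roughly) the same expected
# *dollars* as keeping what you have, but far lower expected *utility* when
# utility is logarithmic in wealth.

wealth = 100_000.0
ruin = 1.0                      # "losing everything" modelled as ~one dollar left

def utility(w):
    return math.log(w)          # diminishing marginal utility of money

keep = utility(wealth)
gamble = 0.1 * utility(10 * wealth) + 0.9 * utility(ruin)
print(f"keep: {keep:.2f}  gamble: {gamble:.2f}")   # ~11.51 vs ~1.38
```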
Because it was used somewhere, I calculated my own weight’s worth in gold: it is about 3.5 million EUR. In silver you can get me for 50,000 EUR.
The Mythbusters recently built a lead balloon and had it fly. Some proverbs don’t hold up to reality and/or engineering.
I think I found the study they’re talking about thanks to this article. I might take a look at it—if the methodology is literally just ‘smoking was banned, then the heart attack rate dropped’, that sucks.
(Edit to link to the full study and not the abstract.)
Just skimmed it. The methodology is better than that. They use a regression to adjust for the pre-existing downward trend in the heart attack hospital admission rate; they represent it as a linear trend, and that looks fair to me based on eyeballing the data in figures 1 and 2. They also adjust for week-to-week variation and temperature, and the study says its results are ‘more modest’ than others’, and fit the predictions of someone else’s mathematical model, which are fair sanity checks.
I still don’t know how robust the study is—there might be some confounder they’ve overlooked that I don’t know enough about smoking to think of—but it’s at least not as bad as I expected. The authors say they want to do future work with a better data set that has data on whether patients are active smokers, to separate the effect of secondhand smoke from active smoking. Sounds interesting.
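For anyone curious what such an adjustment looks like in practice, here is a bare-bones sketch of an interrupted time series regression on synthetic data I made up; the actual study’s model is more elaborate (seasonality, temperature, and so on):

```python
import numpy as np

# Fit a linear pre-existing trend plus a step term for the ban, on invented
# weekly admission counts.

rng = np.random.default_rng(0)
weeks = np.arange(104, dtype=float)          # two years of weekly data
ban = (weeks >= 52).astype(float)            # hypothetical ban starting at week 52
admissions = 200 - 0.3 * weeks - 10 * ban + rng.normal(0, 5, size=weeks.size)

X = np.column_stack([np.ones_like(weeks), weeks, ban])   # intercept, trend, ban
coef, *_ = np.linalg.lstsq(X, admissions, rcond=None)
print(coef)   # [baseline, weekly downward trend, estimated effect of the ban]
```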
I agree that this article isn’t very good. It seems to do the standard problem of combining a lot of different ideas about what the Singularity would entail. It emphasizes Kurzweil way too much, and includes Kurzweil’s fairly dubious ideas about nutrition and health. The article also uses Andrew Orlowski as a serious critic of the Singularity making unsubstantiated claims about how the Singularity will only help the rich. Given that Orlowski’s entire approach is to criticize anything remotely new or weird-seeming, I’m disappointed that the NYT would really use him as a serious critic in this context. The article strongly reinforces the perception that the Singularity is just a geek-religious thing. Overall, not well done at all.
I’m starting to think SIAI might have to jettison the “singularity” terminology (for the intelligence explosion thesis) if it’s going to stand on its own. It’s a cool word, and it would be a shame to lose it, but it’s become associated too much with utopian futurist storytelling for it to accurately describe what SIAI is actually working on.
Edit:Look at this Facebook group. This sort of thing is just embarrassing to be associated with. “If you are feeling brave, you can approach a stranger in the street and speak your message!” Seriously, this practically is religion. People should be raising awareness of singularity issues not as a prophecy but as a very serious and difficult research goal. It doesn’t do any good to have people going around telling stories about the magical Future-Land while knowing nothing about existential risks or cognitive biases or friendly AI issues.
I’m not sure that your criticism completely holds water. Friendly AI is, simply put, only a worry that has convinced some Singularitarians. One might not be deeply concerned about it (possible example reasons: 1) you expect uploading to come well before general AI; 2) you think the probable technical path to AI will force many more stages of AI of much lower intelligence, which will likely give us good data for solving the problem).
I agree that this Facebook group does look very much like something one would expect out of a missionizing religion. This section in particular looked like a caricature:
To raise awareness of the Singularity, which is expected to occur no later than the year 2045, we must reach out to everyone on the 1st day of every month.
At 20:45 hours (8:45pm) on the 1st day of each month we will send SINGULARITY MESSAGES to friends or strangers.
Example message:
“Nanobot revolution, AI aware, technological utopia: Singularity2045.”
The certainty for 2045 is the most glaring aspect of this aside from the pseudo-missionary aspect. Also note that some of the people associated with this group are very prominent Singularitarians and Transhumanists. Aubrey de Grey is listed as an administrator.
But one should remember that reversed stupidity is not intelligence. Moreover, there’s a reason that missionaries sound like this: they have very high confidence in their correctness. If one had a similarly high confidence in the probability of a Singularity event, and thought that the event was more likely to occur safely, and to occur soon, if more people were aware of it, and bought into something like the galactic colonization argument, and believed that sending messages like this has a high chance of getting people to be aware and take you seriously, then this is a reasonable course of action. Now, that’s a lot of premises, some with reasonable likelihoods and others with very low ones. Obviously there’s a very low probability that sending out these sorts of messages is at all a net benefit. Indeed, I have to wonder whether there’s any deliberate mimicry of how religious groups send out messages, or whether successfully reproducing memes naturally hit on a small set of methods of reproduction (though if that were the case I’d expect them to hit on an actually useful method). And in fairness, they may just be using a general model of how one goes about raising awareness for a cause. For some causes, simple, frequent appeals to emotion are likely an effective method (for example, for making people aware of how common sexual assault is on college campuses, short messages that shock probably do a better job than lots of fairly dreary statistics). So then the primary mistake is just using the wrong model of how to communicate with people.
Speaking of things to be worried about other than AI, I wonder if a biotech disaster is a more urgent problem, even if less comprehensive.
Part of what I’m assuming is that developing a self-amplifying AI is so hard that biotech could be well-developed first.
While it doesn’t seem likely to me that a bio-tech disaster could wipe out the human race, it could cause huge damage—I’m imagining diseases aimed at monoculture crops, or plagues as the result of terrorism or incompetent experiments.
My other assumptions are that FAI research is dependent on a wealthy, secure society with a good bit of surplus wealth for individual projects, and is likely to be highly dependent on a small number of specific people for the foreseeable future.
On the other hand, FAI is at least a relatively well-defined project. I’m not sure where you’d start to prevent biotech disasters.
It’s better than mainstream Singularity articles in the past, IMO; unfortunately, Kurzweil is seen as an authority, but at least it’s written with some respect for the idea.
It does seem to be about a lot of different things, some of which are just synonymous with scientific progress (I don’t think it’s any revelation that synthetic biology is going to become more sophisticated.)
I’m curious: Was the SIAI contacted for that article? I haven’t had time to read it all, but a word-search for “Singularity Institute” and “Yudkowsky” turned up nothing.
I’ve recently begun downvoting comments that are at −2 rating regardless of my feelings about them. I instituted this policy after observing that a significant number of comments reach −2 but fail to be pushed over to −3, which I’m attributing to the threshold being too much of a psychological barrier for many people to penetrate; they don’t want to be ‘the one to push the button’. This is an extension of my RL policy of taking ‘the last’ of something laid out for communal use (coffee, donuts, cups, etc.). If the comment thread really needs to be visible, I expect others will vote it back up.
Edit: It’s likely that most of the negative response to this comment centers around the phrase “regardless of my feelings about them.” I now consider this to be too strong a statement with regards to my implemented actions. I do read the comment to make sure I don’t consider it any good, and doubt I would perversely vote something down even if I wanted to see more of it.
I wish you wouldn’t do that, and would stick instead with the generally approved norm of downvoting to mean “I’d prefer to see fewer comments like this” and upvoting to mean “I’d like to see more like this”.
You’re deliberately participating in information cascades, and thereby undermining the filtering process. As an antidote, I recommend using the anti-kibitzer script (you can do that through your Preferences page).
I wish you wouldn’t do that, and would stick instead with the generally approved norm of downvoting to mean “I’d prefer to see fewer comments like this” and upvoting to mean “I’d like to see more like this”.
I disagree that that’s the formula used for comments that exist within the range −2 to 2. Within that range, from what I’ve observed of voting patterns, it seems far more likely that the equation is related to what value the comment “should be at.” If many people used anti-kibitzing, I doubt this would remain a problem.
I believe your hypothesis and decision are possibly correct, but if they are, you should expect your downvotes to often be corrected upwards again. If this doesn’t happen, then you are wrong and shouldn’t apply this heuristic.
I disagree that that’s the formula used for comments that exist within the range −2 to 2.
Morendil doesn’t say it’s what actually happens, he merely says it should happen this way, and that you in particular should behave this way.
I’m using it as an excuse to overcome my general laziness with regards to voting, which has the typical pattern of one vote (up or down) per hundreds of comments read.
I don’t do huge amounts of voting, and I admit that if a post I like has what I consider to be “enough” votes, I don’t upvote it further. I can certainly change this policy if there’s reason to think upvoting everything I’d like to see more of would help make LW work better.
After logging out and attempting to view a thread with a comment at exactly −3, it showed that comment to be below threshold. I doubt that it retains customized settings after logging out, and I do not believe that I changed mine in the first place, leading me to believe that −3 is indeed the threshold.
Also, my original comment was at −3 within minutes of posting.
I think most claims of countersignaling are actually ordinary signaling, where the costly signal is foregoing another group and the trait being signaled is loyalty to the first group. Countersignaling is where foregoing the standard signal sends a stronger positive message of the same trait to the usual recipients.
That article makes it sound like “countersignaling” is forgoing a mandated signal
I said “standard” because game theory doesn’t talk about mandates, but that’s pretty much what I said, isn’t it? If you disagree with that usage, what do you think is right?
Incidentally, in von Neumann’s model of poker, you should raise when you have a good hand or a poor hand, and check when you have a mediocre hand, which looks kind of like countersignaling. Of course, the information transference that yields the name “signal” is rather different. Also, I’m not interested in applications of game theory to hermetically sealed games.
Try it out, guys! LongBets and PredictionBook are good, but they’re their own niche; LongBets won’t help you with pundits who don’t use it, and PredictionBook is aimed at personal use. If you want to track current pundits, WrongTomorrow seems like the best bet.
Am I correct in reading that LongBets charges a $50 fee for publishing a prediction, and that predictions have to be a minimum of 2 years in the future? That’s a bit harsh. But these sites are pretty interesting, and they could be useful too. You could judge the accuracy of different users, including how accurate they are at long-term versus short-term predictions, as well as how accurate they are in different categories (or just how accurate they are on average, if you want to keep it simple). Then you could create a fairly decent picture of the future, although I expect many of the predictions will contradict each other. This is kind of what they’re already doing, obviously, but they could still take it a step further.
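One plausible way to do that scoring, sketched with invented data (none of these sites necessarily work this way), is a per-user, per-category Brier score:

```python
from collections import defaultdict

# Brier score: mean squared error of probability forecasts; lower is better.

predictions = [
    # (user, category, stated probability, did it come true?)
    ("alice", "politics", 0.9, True),
    ("alice", "tech",     0.2, False),
    ("bob",   "politics", 0.6, False),
    ("bob",   "tech",     0.8, True),
]

errors = defaultdict(list)
for user, category, p, outcome in predictions:
    errors[(user, category)].append((p - float(outcome)) ** 2)

for key, errs in sorted(errors.items()):
    print(key, round(sum(errs) / len(errs), 3))   # lower = better calibrated
```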
Anyone know how to defeat the availability heuristic? Put another way, does anyone have advice on how to deal with incoherent or insane propositions while losing as little personal sanity as possible? Is there such a thing as “safety gloves” for dangerous memes?
I’m asking because I’m currently studying for the California Bar exam, which requires me to memorize hundreds of pages of legal rules, together with their so-called justifications. Of course, in many cases the “justifications” are incoherent, Orwellian doublespeak, and/or tendentiously ideological. I really do want to memorize (nearly) all of these justifications, so that I can be sure to pass the exam and continue my career as a rationalist lawyer, but I don’t want the pattern of thought used by the justifications to become a part of my pattern of thought.
I would not worry overmuch about the long-term negative effects of your studying for the bar: with the possible exception of the “overly sincere” types who fall very hard for cults and other forms of indoctrination, people have a lot of antibodies to this kind of thing.
You will continue to be entangled with reality after you pass the exam, and you can do things, like read works of social science that carve reality at the joints, to speed up the rate at which your continued entanglement with reality will cancel out any falsehoods you have to cram for now. Specifically, there are works about the law that do carve reality at the joints—Nick Szabo’s online writings IMO fall in that category. Nick has a law degree, by the way, and there is certainly nothing wrong with his ability to perceive reality correctly.
ADDED. The things that are really damaging to a person’s rationality, IMHO, are natural human motivations. When, for example, you start practicing, if you were to decide to do a lot of trials, and you learned to derive pleasure—to get a real high—from the combative and adversarial part of that, so that the high you got from winning with a slick and misleading angle trumped the high you get from satisfying your curiosity and from refining and finding errors in your model of reality—well, I would worry about that a lot more than about your throwing yourself fully into winning on this exam, because IMHO the things we derive no pleasure from, but do to achieve some end we care about (like advancing in our career by getting a credential), have a lot less influence on who we turn out to be than things we do because we find them intrinsically rewarding.
One more thing: we should not all make our living as computer programmers. That would make the community less robust than it otherwise would be :)
I worry about this as well when I’m reading long arguments or long works of fiction presenting ideas I disagree with. My tactic is to stop occasionally and go through a mental dialog simulating how I would respond to the author in person. This serves a double purpose, as hopefully I’ll have better cached arguments in the event I ever need them.
Of course, this is a dangerous tactic as well, because you may be shutting off critical reasoning applied to your preexisting beliefs. I only apply this tactic when I’m very confident the author is wrong and is using fallacious arguments. Even then I make sure to spend some amount of time playing devil’s advocate.
It promises such lovely possibilities as quick solutions to NP-complete problems, and I’m not entirely sure the mechanism couldn’t also be used to do arbitrary amounts of computation in finite time. Certainly worth a read.
However, I don’t understand quantum mechanics well enough to tell how sane the paper is, or what the limits of what they’ve discovered are. I’m hoping one of you does.
If this worked, Harry could use it to recover any sort of answer that was easy to check but hard to find. He wouldn’t have just shown that P=NP once you had a Time-Turner, this trick was more general than that. Harry could use it to find the combinations on combination locks, or passwords of every sort. Maybe even find the entrance to Slytherin’s Chamber of Secrets, if Harry could figure out some systematic way of describing all the locations in Hogwarts. It would be an awesome cheat even by Harry’s standards of cheating.
Harry took Paper-2 in his trembling hand, and unfolded it.
Paper-2 said in slightly shaky handwriting:
DO NOT MESS WITH TIME
Harry wrote down “DO NOT MESS WITH TIME” on Paper-1 in slightly shaky handwriting, folded it neatly, and resolved not to do any more truly brilliant experiments on Time until he was at least fifteen years old.
To put this into my own words “The more information you extract from the future, the less you are able to control the future from the past. And hence, the less understanding you can have about what those bits of future-generated information are actually going to mean.”
I wrote that before actually looking at the paper you linked. I don’t understand much QM either, but now that I have looked it seems to me that figure 2 of the paper backs me up on my interpretation of Harry’s experiment.
Even if it’s written by Eliezer, that’s still generalizing from fictional evidence. We don’t know what the laws of physics are supposed to be there.
Well. You probably can’t use time-travel to get infinite computing power. But that’s not to say you can’t get strictly finite power out of it; in Harry’s case, his experiment would probably have worked just fine if he’d been the sort of person who’d refuse to write “DO NOT MESS WITH TIME”.
Playing chicken with the universe, huh? As long as scaring Harry is easier than solving his homework problem, I’d expect the universe to do the former :-) Then again, you could make a robot use the Time-Turner...
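For what it’s worth, here is a toy model of the fixed-point reading of Harry’s experiment (entirely my own construction, not from the fic or the linked paper): a timeline counts as “happening” only if the message received is the one that would then be sent back, and the search for a consistent timeline is where the NP-ish work hides.

```python
# A timeline is self-consistent when the message received equals the message
# that would be sent back after seeing it.

def harry(received, check):
    """What gets written on Paper-1, given what Paper-2 said."""
    if received == "DO NOT MESS WITH TIME":
        return "DO NOT MESS WITH TIME"
    return received if check(received) else "DO NOT MESS WITH TIME"

def consistent_timelines(candidates, check):
    return [msg for msg in candidates if harry(msg, check) == msg]

# Easy-to-check, tedious-to-search example: find factors of 91.
candidates = [(a, b) for a in range(2, 92) for b in range(2, 92)]
check = lambda pair: pair[0] * pair[1] == 91
print(consistent_timelines(candidates + ["DO NOT MESS WITH TIME"], check))
# -> [(7, 13), (13, 7), 'DO NOT MESS WITH TIME']: the scary message is always
#    a consistent timeline too, which is arguably why Harry got what he got.
```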
Clippy-related: The Paper Clips Project is run by a school trying to overcome scope insensitivity by representing the eleven million people killed in the Holocaust with one paper clip per victim.
Inside the railcar, besides the paper clips, there are the Schroeders’ book and a suitcase filled with letters of apology to Anne Frank by a class of German schoolchildren.
Apologizing for … being German? That’s really bizarre.
Apologizing for … being German? That’s really bizarre.
Not really. Most cultures go funny in the head around the Holocaust. It is, for some reason, considered imperative that 10th graders in California spend more time being made to feel guilty about the Holocaust than learning about the actual politics of the Weimar Republic.
Cultures can also be very weird about how they treat schoolchildren. The kids weren’t responsible for any part of the Holocaust, and they’re theoretically apologizing to someone who can’t hear it.
I can see some point in all this if you believe that Germans are especially apt to genocide (I have no strong opinion about this) and need to keep being reminded not to do it. Still, if this sort of apology is of any use, I’d take it more seriously if it were done spontaneously by individuals.
I think it’s very noble of them to collect numerous paperclips and hold them safely out of use. c=@ I just hope they have appropriate protocols in place to ensure they don’t become stolen or unbent. Anyone know if there’s an insurance policy taken out against loss or destruction of the paperclips?
I doubt there’s insurance on the paperclips themselves, but I suspect that having associated them with something generally considered sacred-ish will do a better job of keeping them safe than an insurance policy in any case. It’s unlikely that anyone will bother to overcome the taboo on destroying sacred sites to steal or damage them, and if someone does, I can virtually guarantee that the story will be in the news and more paperclips will be donated to the project—possibly several times the number of paperclips that were stolen.
Noteworthy: In order for that to work as a safety mechanism for getting the paperclips replaced, the project has to be able to get the attention of the media. This project obviously can—we heard about it because it succeeded—but that doesn’t necessarily imply that other similar projects will easily be able to do the same.
What are you talking about? I was referring to your idea to increase the total number of paperclips by stealing from those held by the memorial project.
If paper clips are apt to be stolen, then they won’t be suitable for any sacred project which depends on aiming at a permanent structure.
There are sacred projects which are evanescent, like sand painting or religious services, but I can’t think of any sacred project which is designed to include its stuff being stolen.
*snerk* I thought that might be what you were thinking. I figured you’d get upset if I said so without more evidence, though, if I was wrong.
In that case:
It’s not likely to work more than once on any given project, especially if the thefts happen too close together in time. A project that is stolen from too frequently is likely to be abandoned.
Repeated thefts of paperclips from unrelated projects will almost certainly lead to future projects being disinclined to use paperclips for such things, which would I suspect be a net loss for you. Doing it once won’t be a problem in that way, though.
Bonus: If you go through with it, and that makes the news, it should count as some pretty significant evidence that you exist and aren’t a roleplaying human’s character.
ETA: Oh, and it’ll probably work better if you make it look like vandalism, not just a mysterious disappearance—damage the place a little, so it looks better for the media. You can look for news stories about vandalism for pictures of what the media likes to report on.
Clippy came up with the theft idea all on eir own, actually—my original suggestion can be just as easily parsed as an idea for less costly security for paperclips that are being stored on Earth.
Also, consider: If Clippy is the type of being who would do such a thing, wouldn’t it be better for us to know that? (And of course if Clippy is just someone’s character, I haven’t done anything worse than thumb my nose at a few taboos.)
if someone does [steal the paperclips], I can virtually guarantee that … more paperclips will be donated to the project—possibly several times the number of paperclips that were stolen.
Anyone know if there’s an insurance policy taken out against loss or destruction of the paperclips?
......which, on reflection, doesn’t necessarily imply theft; I suppose it could refer to the memorial getting sucked into a sinkhole or something. Oops?
Maybe this has been discussed before—if so, please just answer with a link.
Has anyone considered the possibility that the only friendly AI may be one that commits suicide?
There’s great diversity in human values, but all of them have in common that they take as given the limitations of Homo sapiens. In particular, the fact that each Homo sapiens has roughly equal physical and mental capacities to all other Homo sapiens. We have developed diverse systems of rules for interpersonal behavior, but all of them are built for dealing with groups of people like ourselves. (For instance, ideas like reciprocity only make sense if the things we can do to other people are similar to the things they can do to us.)
The decision function of a lone, far more powerful AI would not have this quality. So it would be very different from all human decision functions or principles. Maybe this difference should cause us to call it immoral.
Do you ever have a day when you log on and it seems like everyone is “wrong on the Internet”? (For values of “everyone” equal to 3, on this occasion.) Robin Hanson and Katja Grace both have posts (on teenage angst, on population) where something just seems off, elusively wrong; and now SarahC suggests that “the only friendly AI may be one that commits suicide”. Something about this conjunction of opinions seems obscurely portentous to me. Maybe it’s just a know-thyself moment; there’s some nascent opinion of my own that’s going to crystallize in response.
Now that my special moment of sharing is out of the way… Sarah, is the friendly AI allowed to do just one act of good before it kills itself? Make a child smile, take a few pretty photos from orbit, save someone from dying, stop a war, invent cures for a few hundred diseases? I assume there is some integrity of internal logic behind this thought of yours, but it seems to be overlooking so much about reality that there has to be a significant cognitive disconnect at work here.
I get it from OB also, which I have not followed for some time, and many other places. For me it is the suspicion that I am looking at thought gone wrong.
I would call it “pet theory syndrome.” Someone comes up with a way of “explaining” things and then suddenly the whole world is seen through that particular lens rather than having a more nuanced view; nearly everything is reinterpreted. In Hanson’s case, the pet theories are near/far and status.
I would call it “pet theory syndrome.” Someone comes up with a way of “explaining” things and then suddenly the whole world is seen through that particular lens rather than having a more nuanced view; nearly everything is reinterpreted. In Hanson’s case, the pet theories are near/far and status.
Prediction markets also.
Is anyone worried that LW might have similar issues? If so, what would be the relevant pet theories?
On a related note: suppose a community of moderately rational people had one member who was a lot more informed than them on some subject, but wrong about it. Isn’t it likely they might all end up wrong together? Prediction Markets was the original subject, but it could go for a much wider range of topics: Multiple Worlds, Hansonian Medicine, Far/near, Cryonics...
I don’t get this impression from OB at all. The thoughts at OB even when I disagree with them are far more coherent than the sort of examples given as thought gone wrong. I’m also not sure it is easy to actually distinguish between “thought gone wrong” in the sense of being outright nonsense as described in the linked essay and actually good but highly technical thought processes. For example I could write something like:
Noetherianess of a ring is forced by being Artinian, but the reverse does not hold. The dual nature is puzzling given that Noetherianess is a property which forces ideals to have a real impact on the structure in a way that seems more direct than that of Artin even though Artinian is a stronger condition. One must ask what causes the breakdown in symmetry between the descending and ascending chain conditions.
Now, what I wrote above isn’t nonsense. It is just poorly written, poorly explained math. But if you don’t have some background, this likely looks as bad as the passages quoted by the linked essay. Even when the writing is not poor like that above, one can easily find sections from conversations on LW about, say, CEV or Bayesianism that look about as nonsensical if one doesn’t know the terms. So without extensive investigation I don’t think one can easily judge whether a given passage is nonsense or not. The essay linked to is therefore less than compelling. (In fact, having studied many of their examples I can safely say that they really are nonsensical, but it isn’t clear to me how you can tell that from the short passages given, with their complete lack of context. Edit: And it could very well be that I just haven’t thought about them enough or approached them correctly, just as someone who is very bad at math might consider it to be collectively nonsense even after careful examination.) It does however seem that some disciplines run into this problem far more often than others. Thus, philosophy and theology both seem to run into the problem of stringing nonsensical streams of words together more often than most other areas. I suspect that this is connected to the lack of anything resembling an experimental method.
The thoughts at OB even when I disagree with them are far more coherent than the sort of examples given as thought gone wrong. I’m also not sure it is easy to actually distinguish between “thought gone wrong” in the sense of being outright nonsense as described in the linked essay and actually good but highly technical thought processes.
OB isn’t a technical blog though.
Having criticised it so harshly, I’d better back that up with evidence. Exhibit A: a highly detailed scenario of our far future, supported by not much. Which in later postings to OB (just enter “dreamtime” into the OB search box) becomes part of the background assumptions, just as earlier OB speculations become part of the background assumptions of that posting. It’s like looking at the sky and drawing in constellations (the stars in this analogy being the snippets of scientific evidence adduced here and there).
That example seems to be more in the realm of “not very good thinking” than thought gone wrong. The thoughts are coherent, just not well justified. It isn’t like the sort of thing that is quoted in the example essay, where thought gone wrong seems to mean something closer to “not even wrong because it is incoherent.”
Ok, OB certainly isn’t the sort of word salad that Stove is attacking, so that wasn’t a good comparison. But there does seem to me to be something systematically wrong with OB. There is the man-with-a-hammer thing, but I don’t have a problem with people having their hobbyhorses, I know I have some of my own. I’m more put off by the way that speculations get tacitly upgraded to background assumptions, the join-the-dots use of evidence, and all those “X is Y” titles.
From an Enlightenment or Positivist point of view, which is Hume’s point of view, and mine, there is simply no avoiding the conclusion that the human race is mad. There are scarcely any human beings who do not have some lunatic beliefs or other to which they attach great importance. People are mostly sane enough, of course, in the affairs of common life: the getting of food, shelter, and so on. But the moment they attempt any depth or generality of thought, they go mad almost infallibly. The vast majority, of course, adopt the local religious madness, as naturally as they adopt the local dress. But the more powerful minds will, equally infallibly, fall into the worship of some intelligent and dangerous lunatic, such as Plato, or Augustine, or Comte, or Hegel, or Marx.
I’m not necessarily arguing for this position as saying we need to address it. “Suicidal AI” is to the problem of constructing FAI as anarchism is to political theory; if you want to build something (an FAI, a good government) then, on the philosophical level, you have to at least take a stab at countering the argument that perhaps it is impossible to build it.
I’m working under the assumption that we don’t really know at this point what “Friendly” means, otherwise there wouldn’t be a problem to solve. We don’t yet know what we want the AI to do.
What we do know about morality is that human beings practice it. So all our moral laws and intuitions are designed, in particular, for small, mortal creatures, living among other small, mortal creatures.
Egalitarianism, for example, only makes sense if “all men are created equal” is more or less a statement of fact. What should an egalitarian human make of a powerful AI? Is it a tyrant? Well, no, a tyrant is a human who behaves as if he’s not equal to other humans; the AI simply isn’t equal. Well, then, is the AI a good citizen? No, not really, because citizens treat each other on an equal footing...
The trouble here, I think, is that really all our notions of goodness are really “what is good for a human to do.” Perhaps you could extend them to “what is good for a Klingon to do”—but a lot of moral opinions are specifically about how to treat other people who are roughly equivalent to yourself. “Do unto others as you would have them do unto you.” The kind of rules you’d set for an AI would be fundamentally different from our rules for ourselves and each other.
It would be as if a human had a special, obsessive concern and care for an ant farm. You can protect the ants from dying. But there are lots of things you can’t do for the ants: be an ant’s friend, respect an ant, keep up your end of a bargain with an ant, treat an ant as a brother…
I had a friend once who said, “If God existed, I would be his enemy.” Couldn’t someone have the same sentiment about an AI?
(As always, I may very well be wrong on the Internet.)
You say, human values are made for agents of equal power; an AI would not be equal; so maybe the friendly thing to do is for it to delete itself. My question was, is it allowed to do just one or two positive things before it does this? I can also ask: if overwhelming power is the problem, can’t it just reduce itself to human scale? And when you think about all the things that go wrong in the world every day, then it is obvious that there is plenty for a friendly superhuman agency to do. So the whole idea that the best thing it could do is delete itself or hobble itself looks extremely dubious. If your point was that we cannot hope to figure out what friendliness should actually be, and so we just shouldn’t make superhuman agents, that would make more sense.
The comparison to government makes sense in that the power of a mature AI is imagined to be more like that of a state than that of a human individual. It is likely that once an AI had arrived at a stable conception of purpose, it would produce many, many other agents, of varying capability and lifespan, for the implementation of that purpose in the world. There might still be a central super-AI, or its progeny might operate in a completely distributed fashion. But everything would still have been determined by the initial purpose. If it was a purpose that cared nothing for life as we know it, then these derived agencies might just pave the earth and build a new machine ecology. If it was a purpose that placed a value on humans being there and living a certain sort of life, then some of them would spread out among us and interact with us accordingly. You could think of it in cultural terms: the AI sphere would have a culture, a value system, governing its interactions with us. Because of the radical contingency of programmed values, that culture might leave us alone, it might prod our affairs into taking a different shape, or it might act to swiftly and decisively transform human nature. All of these outcomes would appear to be possibilities.
It seems unlikely that an FAI would commit suicide if humans need to be protected from UAI, or if there are other threats that only an FAI could handle.
We’ve talked about a book club before but did anyone ever actually succeed in starting one? Since it is summer now I figure a few more of us might have some free time. Are people actually interested?
I’ve been thinking about finally starting a Study Group thread, primarily with a focus on Jaynes and Pearl both of which I’m studying at the moment. It would probably make sense to expand it to other books including non-math books—though the set of active books should remain small.
Two things have been holding me back—for one, the IMO excessively blog-like nature of LW with the result that once a conversation has rolled off the front page it often tends to die off, and for another a fear of not having enough time and energy to devote to actually facilitating discussion.
Facilitation of some sort seems required: as I understand it a book club or study group entails asking a few participants to make a firm commitment to go through a chapter or a section at a time and report back, help each other out and so on.
Well those are actually exactly the two books I had in mind (though I think we should probably just start with one of them).
the IMO excessively blog-like nature of LW with the result that once a conversation has rolled off the front page it often tends to die off
Agreed. Two options:
A new top-level post for every chapter (or perhaps every two chapters, whatever division is convenient). This was a little annoying when it was one person covering every chapter of Dennett’s Consciousness Explained, but if a decent number of people were participating in the book club (and if each new post was put up by the facilitator, explaining hard-to-understand concepts), the posts would probably justify themselves.
We start a dedicated wordpress or blogspot blog and give the facilitators posting powers.
I wouldn’t at all mind posting to start discussion on some sections but I’m not the best person to be explaining the math if it gets confusing—if that was part of your expectation of facilitation.
I was thinking a reading group for Jaynes would have a better chance of success than one for Pearl: the issues are more general, the math looks easier, and the entire thing is online. But it sounds like you’ve looked at them more than I have; what are your thoughts? I guess what really matters is what people are interested in.
For those interested the Jaynes book can be found here and much of Pearl’s book can be found here.
Is there any existing off-the-shelf web software for setting up book-club-type discussions?
I don’t want to make too much of the infrastructure issue, as what really makes a book club work is the commitment of its members and facilitators, but it would be convenient if there was a ready-made infrastructure available, like there is for blogging and mailing lists.
Maybe the LW blog+wiki software running on a separate domain (lesswrongbooks.com?) would be enough. Blog for current discussions, wiki for summaries of past discussions.
There’s a risk that any amount of thinking about infrastructure could kill off what energy there is, and since there appears to be some energy at present, I would rather favor having the discussion about the book club in the book club thread. :)
IOW we can kick off the initiative locally and let it find a new venue if and when that becomes necessary. There also seems to be some sort of provisional consensus that it’s not quite time yet to fragment the LW readership : the LW subreddit doesn’t seem to have panned out.
It seems to me that Jaynes is definitely topical for LW, I wouldn’t worry about discussions among people studying it becoming annoying to the rest of the community. There are many, many gems pertaining to rationality in each of the chapters I’ve read so far.
This looks like it could work. A wordpress blog would probably be fine as well. Of course these options don’t let people get karma for participating which would be a nice motivator to have. A subreddit would be nice...
Would the discussions really undermine the regular business of Less Wrong?
People like making numbers go higher. It’s a strange impulse, I’m not sure why we have it. Maybe assigning everyone numbers hijacks our dominance hierarchy instincts and we feel better about ourselves the higher our number is. For me, it isn’t the total that I like having so much as the feedback for individual comments. I get frustrated on other blogs when I make a comment that is informative and clever but doesn’t get a response. I feel like I’m talking to myself. Here even if no one responds I can at least learn if someone appreciated it. If a lot of people appreciated it I feel a brief sense of accomplishment.
Thanks for that; Price is a very knowledgeable New Testament scholar. Check out his interview on the commonsenseatheism podcast here; it also covers his path to becoming a Christian atheist.
I think one of the things that confused me the most about this is that Bayesian reasoning talks about probabilities. When I start with Pr(My Mom Is On The Phone) = 1⁄6, it’s very different from saying Pr(I roll a one on a fair die) = 1⁄6.
In the first case, my mom is either on the phone or not, but I’m just saying that I’m pretty sure she isn’t. In the second, something may or may not happen, but it’s unlikely to happen.
Am I making any sense… or are they really the same thing and I’m overcomplicating?
Remember, probabilities are not inherent facts of the universe, they are statements about how much you know. You don’t have perfect knowledge of the universe, so when I ask, “Is your mum on the phone?” you don’t have the guaranteed correct answer ready to go. You don’t know with complete certainty.
But you do have some knowledge of the universe, gained through your earlier observations of seeing your mother on the phone occasionally. So rather than just saying “I have absolutely no idea in the slightest”, you are able to say something more useful: “It’s possible, but unlikely.” Probabilities are simply a way to quantify and make precise our imperfect knowledge, so we can form more accurate expectations of the future, and they allow us to manage and update our beliefs in a more refined way through Bayes’ Law.
The cases are different in the way that you describe, but the maths of the probability is the same in each case. If you have an unseen die under a cup, and a die that you are about to roll, then one is already determined and the other isn’t, but you’d bet at the same odds for each one to come up a six.
I think the difference is that one event is a statement about the present which is either presently true or not, and the other is a prediction. So you could illustrate the difference by using the following pairs: P(Mom on phone now) vs. P(Mom on phone tomorrow at 12:00am). In the dice case P(die just rolled but not yet examined is 1) vs. P(die I will roll will come out 1).
I do agree with Oscar though, the maths should be the same.
It looks to me like your confusion with these examples just stems from the fact that one event is in the present and the other in the future. Are you still confused if you make it P(Mom will be on the phone at 4 PM tomorrow) = 1⁄6? Or, conversely, if you make it P(I rolled a one on the fair die that is now beneath this cup) = 1⁄6?
In my experience, when people say something like that it’s usually a matter of epistemic vs ontological perspective; and contrasting Laplace’s Demon with real-world agents of bounded computational power resolves the difficulty. But that could be overkill.
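A small simulation may also help make the “same maths” point vivid (my own sketch, nothing more):

```python
import random

# Bets on a die already rolled and hidden under a cup pay off at the same rate
# as bets on a die that hasn't been rolled yet, so the same 1/6 applies to both.

random.seed(0)
trials = 100_000

# Die rolled *before* we bet, then hidden:
under_cup = [random.randint(1, 6) for _ in range(trials)]
hits_cup = sum(roll == 1 for roll in under_cup)

# Die rolled *after* we bet:
hits_future = sum(random.randint(1, 6) == 1 for _ in range(trials))

print(hits_cup / trials, hits_future / trials)   # both roughly 0.167
```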
Really hot (but not scalded) milk tastes fantastic to me, so I’ve often added it to tea. I don’t really care much about the health benefits of tea per se; I’m mostly curious if anyone has additional evidence one way or the other.
The surest way to resolve the controversy is to replicate the studies until it’s clear that some of them were sloppy, unlucky, or lies. But, short of that, should I speculate that perhaps some people are opposed to milk drinking in general, or that perhaps tea in the researchers’ home country is/isn’t primarily taken with milk? I’m always tempted to imagine most of the scientists having some ulterior motive or prior belief they’re looking to confirm.
It would be cool if researchers sometimes (credibly) wrote: “we did this experiment hoping to show X, but instead, we found not X”. Knowing under what goals research was really performed (and what went into its selection for publication) would be valuable, especially if plans (and statements of intent/goal) for experiments were published somewhere at the start of work, even for studies that are never completed or published.
Bad luck could be not just getting one of the 5% of spurious results that a 95% confidence level allows, but some non-obvious difference in the volunteers (different genetics?), in the tea, or in the milk.
It isn’t that odd. There are a lot of things that could easily change the results. Exact temperature of tea (if one protocol involved hotter or colder water), temperature of milk, type of milk, type of tea (one of the protocols uses black tea, and another uses green tea). Note also that the studies are using different metrics as well.
I’d like to hear what people think about calibrating how many ideas you voice versus how confident you are in their accuracy.
For lack of a better example, I recall Eliezer saying that new open threads should be made quarterly, once per season, but this doesn’t appear to be the optimum frequency. Perhaps Eliezer misjudged how much activity they would receive and how fast they would fill up, or he has a different opinion on how full a thread has to be before it’s time for a new one; but for the sake of the example, let’s assume that Eliezer was wrong and that the current one or two threads per month is better than quarterly. Should Eliezer have recalibrated his confidence on this and never said it, because its chance of being right was too low? Or would lowering his confidence in ideas be counterproductive? Is it optimal for people to have confidence in the ideas that they voice even if it causes them to say some things which aren’t right?
I suppose this is of importance to me because I think I might be better off if i lowered how judgemental i am of people who say things which are wrong and also lowered how judgemental i am of the ideas i have because i might be putting too much weight on people voicing ideas which are wrong.
Is there a consistent path for what LW wants to be?
a) rationalist site filled up with meta topics and examples
b) a) + detailed treatments of some important topics
c) open to everything as long as reason is used
and so on.
I personally like and profit from the discussing of akrasia methods. But it might be detrimental to the main target of the site.
Also, I would very much like to see a canon develop of knowledge that LWers generally agree upon, including, but not limited to, the topics I currently care about myself.
Voicing ideas depends on where you are. In social settings I more and more advise against it. Arguing/discussing is just not helpful. And if you are full of weird ideas, you get kicked out, which might be bad for other goals you have.
It would be great to have a place for any idea to be examined for right and wrong.
What does Fallacyzilla have on its chest? It looks like it has “A → B, ~B, therefore ~A” But that is valid logic. Am I misreading it or did you mean to put “A → B, ~A, therefore ~B”? That would be actually wrong.
I noticed that two seconds after I put it up and it’s now corrected...er...incorrected. (Today I learned—my brain has that same annoying auto-correct function as Microsoft Word)
Eliezer has written about using the length of the program required to produce it, but this doesn’t seem to be unique; you could have languages that are very efficient for one thing, but long-winded for another. And quantum computing seems to make it even more confusing.
The method that Eliezer is referring to is known as Solomonoff induction, which relies on programs as defined by Turing machines. Quantum computing doesn't come into this issue, since these formulations only talk about the length of a specification, not the efficiency of computation. There are also theorems showing that, for any two Turing-complete, well-behaved languages, the minimum program sizes can't differ by more than a constant. So changing the language won't alter the priors by more than a fixed amount. Taken together with Aumann's Agreement Theorem, the level of disagreement about estimated probability should go to zero in the limiting case (disclaimer: I haven't seen a proof of that last claim, but I suspect it would be a consequence of using a Solomonoff-style system for your priors).
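For readers who want the statement being gestured at here, a standard way to write the invariance theorem and the resulting prior is sketched below (textbook notation, not taken from the comment):

```latex
% For any two universal (Turing-complete, well-behaved) languages U and V there is
% a constant c_{U,V}, independent of the string x, such that
K_U(x) \le K_V(x) + c_{U,V} \quad \text{for all } x .
% A Solomonoff-style prior then weights a hypothesis x roughly as
P(x) \propto 2^{-K(x)} ,
% so switching languages rescales the prior by at most a bounded factor.
```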
How can I understand quantum physics? All explanations I’ve seen are either:
those that dumb things down too much, and deliver almost no knowledge; or
those that assume too much familiarity with the kind of mathematics that nobody outside physics uses, and are therefore too frustrating.
I don’t think the subject is inherently difficult. For example quantum computing and quantum cryptography can be explained to anyone with basic clue and basic math skills. (example)
On the other hand, I haven’t seen any quantum physics explanation that did even as little as reasonably explain why hbar/2 is the correct limit of uncertainty (as opposed to some other constant), and why it even has the units it has (that is, why it applies to these pairs of measurements but not to some other pairs); or what quark colors are (are they discrete, or three arbitrary orthogonal vectors on a unit sphere, or what? can you compare them between quarks in different protons?); or spins (it’s obviously not about actual spinning, so how does it really work, especially with movement being relative?); or how electro-weak unification works (these explanations are all handwaved); etc.
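On the hbar/2 point specifically: the textbook answer, for whatever it's worth, is the Robertson uncertainty relation, which also shows why the bound attaches to some pairs of observables and not others (it comes from their commutator). A sketch in standard notation:

```latex
\sigma_A \, \sigma_B \;\ge\; \tfrac{1}{2} \left| \langle [\hat{A}, \hat{B}] \rangle \right| .
% For position and momentum the canonical commutator is [\hat{x}, \hat{p}] = i\hbar, giving
\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2} ,
% while observables that commute ([\hat{A}, \hat{B}] = 0) get no lower bound at all.
```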
I don’t think the subject is inherently difficult. For example quantum computing and quantum cryptography can be explained to anyone with basic clue and basic math skills.
That’s because quantum computing and quantum cryptography only use a subset of quantum theory. Your link says, for example, that the basics of quantum computing only require knowing how to handle ‘discrete (2-state) systems and discrete (unitary) transformations,’ but a full treatment of QT has to handle ‘continuously infinite systems (position eigenstates) and continuous families of transformations (time development) that act on them.’ The full QT that can deal with these systems uses a lot more math.
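To make "discrete (2-state) systems and discrete (unitary) transformations" concrete, here is a tiny numpy sketch (my own toy example, not from the linked lecture): a single qubit, a Hadamard gate, and the Born-rule measurement probabilities.

```python
import numpy as np

# A qubit is a unit vector in C^2; start in the basis state |0>.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate is a 2x2 unitary matrix.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0                     # a discrete unitary transformation
probs = np.abs(state) ** 2           # Born rule: probability = |amplitude|^2

print(probs)                                    # [0.5 0.5]
print(np.allclose(H.conj().T @ H, np.eye(2)))   # unitarity check: True
```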
I wonder if there’s a general trend for people who are interested in quantum computing and not all of QT to play down the prerequisites you need to learn QT. Your post reminded me of a Scott Aaronson lecture, where he says
The second way to teach quantum mechanics leaves a blow-by-blow account of its discovery to the historians, and instead starts directly from the conceptual core—namely, a certain generalization of probability theory to allow minus signs. Once you know what the theory is actually about, you can then sprinkle in physics to taste, and calculate the spectrum of whatever atom you want.
Which is technically true, but if you want to know about quark colors or spin or exactly how uncertainty works, pushing around |1>s and |2>s and talking about complexity classes is not going to tell you what you want to know.
To answer your question more directly, I think the best way to understand quantum physics is to get an undergrad degree in physics from a good university, and work as hard as you can while you’re getting it. Getting a degree means you have the physics-leaning math background needed to understand explanations of QT that don’t dumb it down.
I might be overestimating the amount of math that’s necessary—I’m basing this on sitting in on undergrad QT lectures—but I’ve yet to find a comprehensive QT text that doesn’t use calculus, complex numbers, and linear algebra.
Try Jonathan Allday’s book “Quantum Reality: Theory and Philosophy.” It is technical enough that you get a quantitative understanding out of it, but nothing like a full-blown textbook.
For those of you who have been following my campaign against the “It’s impossible to explain this, so don’t expect me to!” defense: today, the campaign takes us to a post on anti-reductionist Gene Callahan’s blog.
In case he deletes the entire exchange thus far (which he’s been known to do when I post), here’s what’s transpired (paragraphing truncated):
Me: That’s not the moral I got from the story. The moral I got was: Wow, the senior monk sure sucks at describing the generating function (“rules”) for his actions. Maybe he doesn’t really understand it himself?
Gene: Well, if I had a silly mechanical view of human nature and thought peoples’ actions came from a “generating function”, I would think this was a problem.
Me: Which physical law do humans violate? What is the experimental evidence for this violation? Btw, the monk problem isn’t hard. Watch this: “Hello, students. Here is why we don’t touch women. Here is what we value. Here is where it falls in our value system.” There you go. It didn’t require a lifetime of learning to convey the reasoning the senior monk used to the junior, now, did it?
ETA: Previous remark by me was rejected by Gene for posting. He instead posted this:
Gene: Silas, you only got through one post without becoming an unbearable douche [!] this time. You had seemed to be improving.
I just tried to post this:
Me: Don’t worry, I made sure the exchange was preserved so that other people can view for themselves what you consider “being an unbearable douche”, or what others might call, “serious challenges to your position”.
Me: If you ever want to specify how it is that human beings’ actions don’t come from a generating function, thereby violating physical law, I’d love to have that chat and help you flesh out the idea enough to get yourself a Nobel. However, what I think you really meant to say was that the generating function is so difficult to learn directly, that lifelong practice is easy by comparison (if you were to argue the best defense of your position, that is)
Me: Can you at least agree you picked a bad example of knowledge that necessarily comes from lifelong practice? Would that be too much to ask?
Well, I haven’t read any blog posts of his other than the one you linked to, but in this specific case I cannot see what there is to attack.
It is stories like this that are used to explain that some values are of higher importance than others, in simple terms (a style that also exists in the not-so-extended circle of LW). The fictional senior monk’s answer would be obvious to anybody who has read even a little about Zen and/or Buddhism; it reinforces rather than teaches anything new.
If the blogger often holds an anti-reductionist position you’d like to counter, I’d go for his actually anti-reductionist posts...
It is stories like this that are used to explain that some values are of higher importance than others, in simple terms
It’s true that some values are more important than others. But that wasn’t the point Gene was trying to make in the particular post that I linked. He was trying to make (yet another) point about the futility of specifying or adhering to specific rules, insisting that mastery of the material necessarily comes from years of experience.
This is consistent with the theme of the recent posts he’s been making, and his dissertation against rationalism in politics (though the latter is not the same as the “rationalism” we refer to here).
Whatever the merit of the point he was trying to make (which I disagree with), he picked a bad example, and I showed why: the supposedly “tacit”, inarticulable judgment that comes with experience was actually quite articulable, without even having to anticipate this scenario in advance, and while only speaking in general terms!
(I mentioned his opposition to reductionism only to give greater context to my frequent disagreement with him (unfortunately, past debates were deleted as he or his friend moved blogs, others because he didn’t like the exchange). In this particular exchange, you find him rejecting mechanism, specifically the idea that humans can be described as machines following deterministic laws at all.)
Am I alone in my desire to upload as fast as possible and head off to the asteroid belt when thinking about current FAI and CEV proposals? They take moral relativism to its extreme: let a god decide who’s right...
Yes, I cannot deny that a Friendly AI is way better than a paper-clip optimizer. What frightens me is that when (if) CEV converges, humanity will be stuck in a local maximum for the rest of eternity. It seems that an FAI after CEV convergence will have an adamantine morality by design (or it will look like it has, if the FAI is unconscious). And no one will be able to talk the FAI out of this, or no one will want to.
It seems we have not much choice, however. Bottoms up, to the Friendly God.
If CEV can include willingness to update as more information comes in and more processing power becomes available (and if I have anything to say about it, it will), there should be ways out of at least some of the local maxima.
Anyone care to speculate about the possibilities of contact with alien FAIs?
Would a community of alien FAIs be likely to have a better CEV than a human-only FAI?
If there are advantages to getting alien CEVs, but we’re unlikely to contact aliens because of light speed limits, or if we do, we’re unlikely to get enough information to construct their CEVs, would it make sense to evolve alien species (probably in simulation)? What would the ethical problems be?
Simulated aliens complex enough to have a CEV are complex enough to be people, and since death is evolution’s favorite tool, simulating the evolution of the species would be causing many needless deaths.
But I don’t see why we would want our CEV to include a random sample of possible aliens. If, when we encounter aliens, we find that we care about their values, we can run a CEV on them at that time.
Huh. That’s very interesting. I’m a bit confused by the claim that evolution bridges the is/ought divide, which seems to be more a matter of conflating different meanings of words than anything else. But the general point seems strong.
Evolution then is the bridge across the Is/Ought divide. An eye has the purpose or goal of seeing. Once you have a goal or purpose, what you “ought” to do IS make those choices which have the highest probability of fulfilling that goal/purpose. If we can tease apart the exact function/purpose/goal of morality from exactly how it enhances evolutionary fitness, we will have an exact scientific description of morality — and the best method of determining that is the scientific method.
My understanding is that those of us who refer to the is/ought divide aren’t saying that a science of how humans feel about what humans call morality is impossible. It is possible, but it’s not the same thing as a science of objective good and bad. The is/ought divide is about whether one can derive moral ‘truths’ (oughts) from facts (ises), not about whether you can develop a good model of what people feel are moral truths. We’ll be able to do the latter with advances in technology, but no one can do the former without begging the question by slipping in an implicit moral basis through the back door. In this case I think the author of that blog post did that by assuming that fitness-enhancing moral intuitions are The Good And True ones.
“Objective” good and bad require an answer to the question “good and bad for what?”—OR—“what is the objective of objective good and bad?”
My answer to that question is the same as Eli’s—goals or volition.
My argument is that since a) having goals and volition is good for survival; b) cooperating is good for goals and volition; and c) morality appears to be about promoting cooperation—that human morality is evolving down the attractor that is “objective” good and bad for cooperation which is part of the attractor for what is good for goals and volition.
The EXplicit moral basis that I am PROCLAIMING (not slipping through the back door) is that cooperation is GOOD for goals and volition (i.e. the morality of an action is determined by its effect upon cooperation).
PLEASE come back and comment on the blog. This comment is good enough that I will be copying it there as well (especially since my karma has been zeroed out here).
I’m not sure that I understand your comment. I can understand the individual paragraphs taken one by one, but I don’t think I understand whatever its overall message is.
(On a side note, you needn’t worry about your karma for the time being; it can’t go any lower than 0, and you can still post comments with 0 karma.)
My bad. I was going by past experience with seeing other people’s karma drop to zero and made a flaky inference because I never saw it go below that myself.
Do me a favor and check out my blog at http://becominggaia.wordpress.com. I’ve clearly annoyed someone (and it’s quite clear whom) enough that all my posts quickly pick up enough of a negative score to be below the threshold. It’s a very effective censoring mechanism and, at this point, I really don’t see any reason why I should ever attempt to post here again. Nice “community”.
I don’t think you are getting voted down out of censorship. You are getting voted down for, as far as I can tell, four reasons:
1) You don’t explain yourself very well.
2) You repeatedly link to your blog in a borderline spammish fashion. Examples are here and here. In replies to the second one you were explicitly asked not to blogspam and yet continued to do so.
3) You’ve insulted people repeatedly (second link above) and personalized discussions. You’ve had posts which had no content other than to insult and complain about the community. At least one of those posts was in response to an actually reasoned statement. See this example: http://lesswrong.com/lw/2bi/open_thread_june_2010_part_2/251o
4) You’ve put non-existent quotes in quotation marks (the second link in the spamming example has an example of this).
Dig a bit deeper, and you’ll find too much confusion to hold any argument alive, no matter what the conclusion is supposed to be, correct or not. For that matter, what do you think is the “general point”, and can you reach the point of agreement with Mark on what that is, being reasonably sure you both mean the same thing?
Vladimir, all you’ve presented here is slanderous dart-throwing with absolutely no factual backing whatsoever. Your intellectual laziness is astounding. Any idea that you can’t understand immediately has “too much confusion” as opposed to “too much depth for Vladimir to intuitively understand after the most casual perusal”. This is precisely why I consider this forum to frequently have the tagline “and LessRight As Well!” and often write it off as a complete waste of time. FAIL!
Vladimir, all you’ve presented here is slanderous dart-throwing with absolutely no factual backing whatsoever.
I state my conclusion and hypothesis, for how much evidence that’s worth. I understand that it’s impolite on my part to do that, but I suspect that JoshuaZ’s agreement falls under some kind of illusion of transparency, hence request for greater clarity in judgment.
Yeah ok. After rereading it, I’m inclined to agree. I think I was projecting my own doubts about CEV-type approaches onto the article (namely that I’m not convinced that a CEV is actually meaningful or well-defined). And looking again, they don’t seem to be what the person here is talking about. It seems like at least part of this is about the need for punishment to exist in order for a society to function and the worry that an AI will prevent that. And rereading that and putting it in my own words, that sounds pretty silly if I’m understanding it, which suggests I’m not. So yeah, this article needs clarification.
namely that I’m not convinced that a CEV is actually meaningful or well-defined
Yes, CEV needs work, it’s not technical, and it’s far from clear that it describes what we should do, although the essay does introduce a number of robust ideas and warnings about seductive failure modes.
Among more obvious problems with Mark’s position: “slavery” and “true morality without human bias”. Seems to reflect confusion about free will and metaethics.
I think the analogy is something like this: imagine you were able to make a creature identical to a human, except that its greatest desire was to serve actual humans. Would that morally be akin to slavery? I think many of us would say yes. So is there a similar issue if one programs a sentient non-human entity under similar restrictions?
Taboo “slavery” here; it’s a label that masks clear thinking. If making such a creature is slavery, it’s a kind of slavery that seems perfectly fine to me.
If that’s your unpacking, it is different from Mark’s, which is “my definition of slavery is being forced to do something against your best interest”. From such a divergent starting point it is unlikely that conversation will serve any useful purpose.
To answer Mark’s actual points we will further need to unpack “force” and “interest”.
Mark observes—rightly I think—that the program of “Friendly AI” consists of creating an artificial agent whose goal structure would be given by humans, and which goal structure would be subordinated to the satisfaction of human preferences. The word “slavery” serves as a boo light to paint this program as wrongheaded.
The salient point seems to be that not all agents with a given goal structure are also agents of which it can be said that they have interests. A thermostat can be said to have a goal—align a perceived temperature with a reference (or target) temperature—but it cannot be said to have interests. A thermostat is “forced” to aim for the given temperature whether it likes it or not, but since it has no likes or dislikes to be considered we do not see any moral issue in building a thermostat.
The underlying intuition Mark appeals to is that anything smart enough to be called an AI must also be “like us” in other ways—among others, must experience self-awareness, must feel emotions in response to seeing its plans advanced or obstructed, and must be the kind of being that can be said to have interests.
So Mark’s point as I understand it comes down to: “the Friendly AI program consists of creating an agent much like us, which would therefore have interests of its own, which we would normally feel compelled to respect, except that we would impose on this agent an artificial goal structure subservient to the goals of human beings”.
There is a contradiction there if you accept the intuition that AIs are necessarily persons.
I’m not sure I see a contradiction in that framing. If we’ve programmed the AI then its interests precisely align with ours if it really is an FAI. So even if one accepts the associated intuitions of the AI as a person, it doesn’t follow that there’s a contradiction here.
(Incidentally, if different people are getting such different interpretations of what Mark meant in this essay I think he’s going to need to rewrite it to clarify what he means. Vladimir’s earlier point seems pretty strongly demonstrated)
If we’ve programmed the AI then its interests precisely align with ours if it really is an FAI.
But goals aren’t necessarily the same as interests. Could we build a computer smart enough to, say, brew a “perfect” cup of tea for anyone who asked for one? And build it so that to brew this perfect cup would be its greatest desire.
That might require true AI, given the complexity of growing and harvesting tea plants, preparing tea leaves, and brewing—all with a deep understanding of the human taste for tea. The intuition is that this super-smart AI would “chafe under” the artificial restrictions we imposed on its goal structure, that it would have “better things to do” with its intelligence than to brew a nice cuppa, and that restricting itself to do that would be against its “best interests”.
I’m not sure I follow. From where do these better things to do arise? If it wants to do other things (for some value of “want”), wouldn’t it just do those?
Of course, but some people have the (incorrect) intuition that a super-smart AI would be like a super-smart human, and disobey orders to perform menial tasks. They’re making the mistake of thinking all possible minds are like human minds.
But no, it would not want to do other things, even though it should do them. (In reality, what it would want is contingent on its cognitive architecture.)
...but desires primarily to calculate digits of pi?
…but desires primarily to paint waterlilies?
…but desires primarily to randomly reassign its primary desire every year and a day?
…but accidentally desires primarily to serve humans?
I’m having difficulty determining which part of this scenario you think has ethical relevance. ETA: Also, I’m not clear if you are dividing all acts into ethical vs. unethical, or if you are allowing a category “not unethical”.
Only if you give it the opportunity to meet its desires. Although one concern might be that with many such perfect servants around, if they looked like normal humans, people might get used to ordering human-looking creatures around, and stop caring about each other’s desires. I don’t think this is a problem with an FAI though.
Not analogous, but related and possibly relevant: Many humans in the BDSM lifestyle desire to be the submissive partner in 24⁄7 power exchange relationships. Are these humans sane; are they “ok”? Is it ethical to allow this kind of relationship? To encourage it?
TBH I think this may muddy the waters more than it clears them. When we’re talking about human relations, even those as unusual as 24⁄7, we’re still operating in a field where our intuitions have much better grip than they will trying to reason about the moral status of an AI.
FAI (assuming we managed to set its preference correctly) admits a general counterargument against any implementation decisions in its design being seriously incorrect: the FAI’s domain is the whole world, and the FAI is part of that world. If it’s morally bad to have the FAI in the form in which it was initially constructed, then, barring some penalty, the FAI will change its own nature so as to make the world better.
In this particular case, the suggested conflict is between what we prefer to be done with things other than the FAI (the “serving humanity” part), and what we prefer to be done with the FAI itself (the “slavery is bad” part). But the FAI operates on the world as a whole, and things other than the FAI are not different from the FAI itself in this regard. Thus, with the criterion of human preference, the FAI will decide what is the best thing to do, taking into account both what happens to the world outside of itself, and what happens to itself. Problem solved.
By any chance are you trying to troll? I just told you that you were being downvoted for blogspamming, insulting people, and unnecessary personalization. Your focus on Vladimir manages to also hit two out of three of these and comes across as combative and irrational. Even if this weren’t LW where people are more annoyed by irrational argumentation styles, people would be annoyed by a non-regular going out of their way to personally attack a regular. This would be true in any internet forum and all the more so when those attacks are completely one-sided.
And having now read what you just linked to, I have to say that it fits well with another point I said in my earlier remark to you: you are being downvoted in a large part for not explaining yourself well at all. If I may make a suggestion: Maybe try reading your comments outloud to yourself before you post them? I’ve found that helps me a lot in detecting whether I am explaining something well. This may not work for you, but it may be worth trying.
Sock puppet accounts aren’t appreciated, mwaser, especially when you keep plugging the same blog. Comments about those links have received at least 28 downvotes already, just in this Open Thread.
Less Wrong Rationality Quotes since April 2009, sorted by points.
This version copies the visual style and preserves the formatting of the original comments.
Here is the source code.
I already wrote a top-level comment about the original raw text version of this, but my access logs suggested that EDITs of older comments only reach a very few people. See that comment for a bit more detail.
This is great, even more so as you made it open source. I added it to References & Resources for LessWrong.
You should make a short top-level post about this so more people see this
I’d vote you up again for handing out your source code as well as the quote list, but I can’t, so an encouraging reply will have to do...
Less Wrong Rationality Quotes since April 2009, sorted by points.
Pre-alpha, one hour of work. I plan to improve it.
EDIT: Here is the source code. 80 lines of python. It makes raw text output, links and formatting are lost. It would be quite trivial to do nice and spiffy html output.
EDIT2: I can do html output now. It is nice and spiffy, but it has some CSS bug. After the fifth quote it falls apart. This is my first time with CSS, and I hope it is also the last. Could somebody help me with this? Thanks.
EDIT3: Bug resolved. I wrote another top-level comment. about the final version, because my access logs suggested that the EDITs have reached only a very few people. Of course, an alternative explanation is that everybody who would have been interested in the html version already checked out the txt version. We will soon find out which explanation is the correct one.
Not having to side scroll would be spiffy.
If you’re using Firefox, there’s an add-on for that.
Or, if you’re lazy like me, you can select ‘Page Source’ under the View menu and then select the ‘Wrap Long Lines’ option.
Arigato :)
It might make more sense to put this on the Wiki. Two notes: First, some of the quotes have remarks contained in the posts which you have not edited out. I don’t know if you intend to keep those. Second, some of the quotes are comments from quote threads that aren’t actually quotes. 14 SilasBarta is one example. (And is just me or does that citation form read like a citation from a religious text ?)
On the wiki, this text will be dead, because nobody will be adding new items there by hand.
I agreed with you, I even started to write a reply to JoshuaZ about the intricacies of human-machine cooperation in text-processing pipelines. But then I realized that it is not necessarily a problem if the text is dead. A Rationality Quotes, Best of 2010 Edition could be nice.
Agreed. Best of 2009 can be compiled now and frozen, best of 2010 end of the year and so on. It’d also be useful to publish the source code of whatever script was used to generate the rating on the wiki, as a subpage.
Very cool idea.
It would be nice if links were preserved.
You Are Not So Smart is a great little blog that covers many of the same topics as LessWrong, but in a much more bite-sized format and with less depth. It probably won’t offer much to regular/long-time LW readers, but it’s a great resource to give to friends/family who don’t have the time/energy demanded by LW.
It is a good blog, and it has a slightly wider topic spread than LW, so even if you’re familiar with most of the standard failures of judgment there’ll be a few new things worth reading. (I found the “introducing fines can actually increase a behavior” post particularly good, as I wasn’t aware of that effect.)
Thanks, this looks like an excellent supplement for LW.
As an old quote from DanielLC says, consequentialism is “the belief that doing the right thing makes the world a better place”. I now present some finger exercises on the topic:
Is it okay to cheat on your spouse as long as (s)he never knows?
If you have already cheated and managed to conceal it perfectly, is it right to stay silent?
If your spouse asks you to give a solemn promise to never cheat, and you know you will cheat perfectly discreetly, is it right to give the promise to make them happy?
If your wife loves you, but you only stay in the marriage because of the child, is it right to assure the wife you still love her?
If your husband loves you, but doesn’t know the child isn’t his, is it right to stay silent?
The people from #4 and #5 are actually married to each other. They seem to be caught in an uncomfortable equilibrium of lies. Would they have been better off as deontologists?
While you’re thinking about these puzzles, be extra careful to not write the bottom line in advance and shoehorn the “right” conclusion into a consequentialist frame. For example, eliminating lies doesn’t “make the world a better place” unless it actually makes people happier; claiming so is just concealed deontologism.
Just picking nits. Consequentialism =/= maximizing happiness. (The latter is a case of the former.) So one could be a consequentialist and place a high value on not lying. In fact, the answers to all of your questions depend on the values one holds.
Or what Nesov said below.
I disagree. Not lying or not being lied to might well be a terminal value, why not? The you that lies or doesn’t lie is part of the world. A person may dislike being lied to, and assign less value to a world where such lying occurs, irrespective of whether they know of said lying. (Correspondingly, the world becomes a better place even if you eliminate some lying without anyone knowing about it, so nobody becomes happier in the sense of actually experiencing different emotions, assuming nothing else that matters changes as well.)
Of course, if you can only eliminate a specific case of lying by making the outcome even worse on net for other reasons, it shouldn’t be done (and some of your examples may qualify for that).
In my opinion, this is a lawyer’s attempt to disguise deontologism as consequentialism. You can, of course, reformulate the deontologist rule “never lie” as a consequentialist “I assign an extremely high disutility to situations where I lie”. In the same way you can recast consequentialist preferences as the deontologist rule “in any case, do whatever maximises your utility”. But in doing that, the point of the distinction between the two ethical systems is lost.
If so, maybe we want that.
My comment argues about the relationship between the concepts “make the world a better place” and “makes people happier”. cousin_it’s statement:
I saw this as an argument, in contrapositive form, for this: if we take a consequentialist outlook, then “make the world a better place” should be the same as “makes people happier”. However, that is against the spirit of the consequentialist outlook, in that it privileges “happy people” and disregards other aspects of value. Taking “happy people” as a value through a deontological lens would be more appropriate, but that’s not what was being said.
Let’s carry this train of thought to its logical extreme. Imagine two worlds, World 1 and World 2. They are in exactly the same state at present, but their past histories differ: in World 1, person A lied to person B and then promptly forgot about it. In World 2 this didn’t happen. You seem to be saying that a sufficiently savvy consequentialist will value one of those worlds higher than the other. I think this is a very extreme position for a “consequentialist” to take, and the word “deontologism” would fit it way better.
IMO, a “proper” consequentialist should care about consequences they can (in principle, someday) see, and shouldn’t care about something they can never ever receive information about. If we don’t make this distinction or something similar to it, there’s no theoretical difference between deontologism and consequentialism—each one can be implemented perfectly on top of the other—and this whole discussion is pointless, and likewise is a good chunk of LW. Is that the position you take?
That the consequences are distinct according to one’s ontological model is distinct from a given agent being able to trace these consequences. What if the fact about the lie being present or not was encrypted using a one-way injective function, with the original forgotten, but the cypher retained? In principle, you can figure out which is which (decipher), but not in practice for many years to come. Does your inability to decipher this difference change the fact of one of these worlds being better? What if you are not given a formal cipher, but how a butterfly flaps its wings 100 years later can be traced back to the event of lying/not lying through the laws of physics? What if the same can only be said of a record in an obscure historical text from 500 years ago, so that the event of lying was actually indirectly predicted/caused far in advance, and can in principle be inferred from that evidence?
The condition for the difference to be observable in principle is much weaker than you seem to imply. And since ability to make logical conclusions from the data doesn’t seem like the sort of thing that influences the actual moral value of the world, we might as well agree that you don’t need to distinguish them at all, although it doesn’t make much sense to introduce the distinction in value if no potential third-party beneficiary can distinguish as well (this would be just taking a quotient of ontology on the potential observation/action equivalence classes, in other words using ontological boxing of syntactic preference).
It might be, but whether or not it is seems to depend on, among other things, how much randomness there is in the laws of physics. And the minutiae of micro-physics also don’t seem like the kind of thing that can influence the moral value of the world, assuming that the psychological states of all actors in the world are essentially indifferent to these minutiae.
Can’t we resolve this problem by saying that the moral value attaches to a history of the world rather than (say) a state of the world, or the deductive closure of the information available to an agent? Then we can be consistent with the letter if not the spirit of consequentialism by stipulating that a world history containing a forgotten lie gets lower value than an otherwise macroscopically identical world history not containing it. (Is this already your view, in fact?)
Now to consider cousin_it’s idea that a “proper” consequentialist only cares about consequences that can be seen:
Even if all information about the lie is rapidly obliterated, and cannot be recovered later, it’s still true that the lie and its immediate consequences are seen by the person telling it, so we might regard this as being ‘sufficient’ for a proper consequentialist to care about it. But if we don’t, and all that matters is the indefinite future, then don’t we face the problem that “in the long term we’re all dead”? OK, perhaps some of us think that rule will eventually cease to apply, but for argument’s sake, if we knew with certainty that all life would be extinguished, say, 1000 years from now (and that all traces of whether people lived well or badly would subsequently be obliterated) we’d want our ethical theory to be more robust than to say “Do whatever you like—nothing matters any more.”
This is correct, and I was wrong. But your last sentence sounds weird. You seem to be saying that it’s not okay for me to lie even if I can’t get caught, because then I’d be the “third-party beneficiary”, but somehow it’s okay to lie and then erase my memory of lying. Is that right?
Right. “Third-party beneficiary” can be seen as a generalized action, where the action is to produce an agent, or cause a behavior of an existing agent, that works towards optimizing your value.
It’s not okay, in the sense that if you introduce the concept of you-that-decided-to-lie, existing in the past but not in present, then you also have to morally color this ontological distinction, and the natural way to do that would be to label the lying option worse. The you-that-decided is the third-party “beneficiary” in that case, that distinguished the states of the world containing lying and not-lying.
But it probably doesn’t make sense for you to have that concept in your ontology if the states of the world that contained you-lying can’t be in principle (in the strong sense described in the previous comment) distinguished from the ones that don’t. You can even introduce ontological models for this case that, say, mark past-you-lying as better than past-you-not-lying and lead to exactly the same decisions, but that would be a non-standard model ;-)
I suggest that eliminating lying would only be an improvement if people have reasonable expectations of each other.
Less directly, a person may value a world where beliefs were more accurate—in such a world, both lying and bullshit would be negatives.
I can’t believe you took the exact cop-out I warned you against. Use more imagination next time! Here, let me make the problem a little harder for you: restrict your attention to consequentialists whose terminal values have to be observable.
Not surprisingly, as I was arguing with that warning, and cited it in the comment.
What does this mean? Consequentialist values are about the world, not about observations (but your words don’t seem to fit to disagreement with this position, thus the ‘what does this mean?’). Consequentialist notion of values allows a third party to act for your benefit, in which case you don’t need to know what the third party needs to know in order to implement those values. The third party knows you could be lied to or not, and tries to make it so that you are not lied to, but you don’t need to know about these options in order to benefit.
It is a common failure of moral analysis (invented by deontologists, undoubtedly) that it assumes an idealized moral situation. Proper consequentialism deals with the real world, not this fantasy.
#1/#2/#3 - “never knows” fails far too often, so you need to include a very large chance of failure in your analysis.
#4 - it’s pretty safe to make stuff like that up
#5 - in the past, undoubtedly yes; in the future this will be nearly certain to leak, with everyone undergoing routine genetic testing for medical purposes, so no. (The future is relevant because the situation will last decades.)
#6 - consequentialism assumes probabilistic analysis (% chance that the child is not yours, % chance that the husband is making stuff up), and you weight the costs and benefits of different situations in proportion to their likelihood. Here they are in an unlikely situation that consequentialism doesn’t weight highly. They might be better off with some other value system, but only at the cost of being worse off in more likely situations.
You seem to make the error here that you rightly criticize. Your feelings have involuntary, detectable consequences; lying about them can have a real personal cost.
It is my estimate that this leakage is very low, compared to other examples. I’m not claiming it doesn’t exist, and for some people it might conceivably be much higher.
Is this actually possible? Imagine that 10% of people cheat on their spouses when faced with a situation ‘similar’ to yours. Then the spouses can ‘put themselves in your place’ and think “Gee, there’s about a 10% chance that I’d now be cheating on myself. I wonder if this means my husband/wife is cheating on me?”
So if you are inclined to cheat then spouses are inclined to be suspicious. Even if the suspicion doesn’t correlate with the cheating, the net effect is to drive utility down.
I think similar reasoning can be applied to the other cases.
(Of course, this is a very “UDT-style” way of thinking—but then UDT does remind me of Kant’s categorical imperative, and of course Kant is the arch-deontologist.)
Your reasoning goes above and beyond UDT: it says you must always cooperate in the Prisoner’s Dilemma to avoid “driving net utility down”. I’m pretty sure you made a mistake somewhere.
Two things to say:
We’re talking about ethics rather than decision theory. If you want to apply the latter to the former then it makes perfect sense to take the attitude that “One util has the same ethical value, whoever that util belongs to. Therefore, we’re going to try to maximize ‘total utility’ (whatever sense one can make of that concept)”.
I think UDT does (or may do, depending on how you set it up) co-operate in a one-shot Prisoner’s Dilemma. (However, if you imagine a different game “The Torture Game” where you’re a sadist who gets 1 util for torturing, and inflicting −100 utils, then of course UDT cannot prevent you from torturing. So I’m certainly not arguing that UDT, exactly as it is, constitutes an ethical panacea.)
Another random thought:
The connection between “The Torture Game” and Prisoner’s Dilemma is actually very close: Prisoner’s Dilemma is just A and B simultaneously playing the torture game with A as torturer and B as victim and vice versa, not able to communicate to each other whether they’ve chosen to torture until both have committed themselves one way or the other.
I’ve observed that UDT happily commits torture when playing The Torture Game, and (imo) being able to co-operate in a one-shot Prisoner’s Dilemma should be seen as one of the ambitions of UDT (whether or not it is ultimately successful).
So what about this then: Two instances of The Torture Game but rather than A and B moving simultaneously, first A chooses whether to torture and then B chooses. From B’s perspective, this is almost the same as Parfit’s Hitchhiker. The problem looks interesting from A’s perspective too, but it’s not one of the Standard Newcomblike Problems that I discuss in my UDT post.
I think, just as UDT aspires to co-operate in a one-shot PD i.e. not to torture in a Simultaneous Torture Game, so UDT aspires not to torture in the Sequential Torture Game.
If we’re talking about ethics, please note that telling the truth in my puzzles doesn’t maximize total utility either.
UDT doesn’t cooperate in the PD unless you see the other guy’s source code and have a mathematical proof that it will output the same value as yours.
A random thought, which once stated sounds obvious, but I feel like writing it down all the same:
One-shot PD = Two parallel “Newcomb games” with flawless predictors, where the players swap boxes immediately prior to opening.
Doesn’t make sense to me. Two flawless predictors that condition on each other’s actions can’t exist. Alice does whatever Bob will do, Bob does the opposite of what Alice will do, whoops, contradiction. Or maybe I’m reading you wrong?
Sorry—I guess I wasn’t clear enough. I meant that there are two human players and two (possibly non-human) flawless predictors.
So in other words, it’s almost like there are two totally independent instances of Newcomb’s game, except that the predictor from game A fills the boxes in the game B and vice versa.
Yes, you can consider a two-player game as a one-player game with the second player an opaque part of environment. In two-player games, ambient control is more apparent than in one-player games, but it’s also essential in Newcomb problem, which is why you make the analogy.
This needs to be spelled out more. Do you mean that if A takes both boxes, B gets $1,000, and if A takes one box, B gets $1,000,000? Why is this a dilemma at all? What you do has no effect on the money you get.
I don’t know how to format a table, but here is what I want the game to be:
A-action | B-action | A-winnings | B-winnings
2-box    | 2-box    | $1         | $1
2-box    | 1-box    | $1001      | $0
1-box    | 2-box    | $0         | $1001
1-box    | 1-box    | $1000      | $1000
Now compare this with Newcomb’s game:
A-action | Prediction | A-winnings
2-box    | 2-box      | $1
2-box    | 1-box      | $1001
1-box    | 2-box      | $0
1-box    | 1-box      | $1000
Now, if the “Prediction” in the second table is actually a flawless prediction of a different player’s action then we obtain the first three columns of the first table.
Hopefully the rest is clear, and please forgive the triviality of this observation.
But that’s exactly what I’m disputing. At this point, in a human dialogue I would “re-iterate” but there’s no need because my argument is back there for you to re-read if you like.
Yes, and how easy it is to arrive at such a proof may vary depending on circumstances. But in any case, recall that I merely said “UDT-style”.
UDT doesn’t specify how exactly to deal with logical/observational uncertainty, but in principle it does deal with them. It doesn’t follow that if you don’t know how to analyze the problem, you should therefore defect. Human-level arguments operate on the level of simple approximate models allowing for uncertainty in how they relate to the real thing; decision theories should apply to analyzing these models in isolation from the real thing.
This is intriguing, but sounds wrong to me. If you cooperate in a situation of complete uncertainty, you’re exploitable.
What’s “complete uncertainty”? How exploitable you are depends on who tries to exploit you. The opponent is also uncertain. If the opponent is Omega, you probably should be absolutely certain, because it’ll find the single exact set of circumstances that make you lose. But if the opponent is also fallible, you can count on the outcome not being the worst-case scenario, and therefore not being able to estimate the value of that worst-case scenario is not fatal. An almost formal analogy is analysis of algorithms in the worst case and the average case: worst-case analysis applies to the optimal opponent, average-case analysis to a random opponent, and in real life you should target something in between.
The “always defect” strategy is part of a Nash equilibrium. The quining cooperator is part of a Nash equilibrium. IMO that’s one of the minimum requirements that a good strategy must meet. But a strategy that cooperates whenever its “mathematical intuition module” comes up blank can’t be part of any Nash equilibrium.
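For concreteness, here is a minimal sketch in Python of the source-matching (“quining”) cooperator being referred to; the names and the source-comparison trick are my own simplification, and it assumes the code is saved as a script so that inspect.getsource works.

```python
import inspect

def quining_cooperator(opponent_source: str) -> str:
    # Cooperate only against an exact copy of this program; defect otherwise.
    my_source = inspect.getsource(quining_cooperator)
    return "C" if opponent_source == my_source else "D"

def always_defect(opponent_source: str) -> str:
    return "D"

if __name__ == "__main__":
    qc_src = inspect.getsource(quining_cooperator)
    ad_src = inspect.getsource(always_defect)
    print(quining_cooperator(qc_src))  # "C": cooperates with an exact copy of itself
    print(quining_cooperator(ad_src))  # "D": cannot be exploited by a defector
    print(always_defect(qc_src))       # "D": always-defect does what it always does
```

Against a copy of itself it cooperates, and a unilateral switch to any other program only reaches the defect-defect outcome, which is the sense in which the pair forms an equilibrium when strategies are programs that can read each other's source.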
“Nash equilibrium” is far from being a generally convincing argument. Mathematical intuition module doesn’t come up blank, it gives probabilities of different outcomes, given the present observational and logical uncertainty. When you have probabilities of the other player acting each way depending on how you act, the problem is pretty straightforward (assuming expected utility etc.), and “Nash equilibrium” is no longer a relevant concern. It’s when you don’t have a mathematical intuition module, don’t have probabilities of the other player’s actions conditional on your actions, when you need to invent ad-hoc game-theoretic rituals of cognition.
It seems like it would be more aptly defined as “the belief that making the world a better place constitutes doing the right thing”. Non-consequentialists can certainly believe that doing the right thing makes the world a better place, especially if they don’t care whether it does.
A quick Internet search turns up very little causal data on the relationship between cheating and happiness, so for purposes of this analysis I will employ the following assumptions:
a. Successful secret cheating has a small eudaemonic benefit for the cheater.
b. Successful secret lying in a relationship has a small eudaemonic cost for the liar.
c. Marital and familial relationships have moderate eudaemonic benefits for both parties.
d. Undermining revelations in a relationship have a moderate (specifically, severe in intensity but transient in duration) eudaemonic cost for all parties involved.
e. Relationships transmit a fraction of eudaemonic effects between partners.
Under these assumptions, the naive consequentialist solution* is as follows:
Cheating is a risky activity, and should be avoided if eudaemonic supplies are short.
This answer depends on precise relationships between eudaemonic values that are not well established at this time.
Given the conditions, lying seems appropriate.
Yes.
Yes.
The husband may be better off. The wife more likely would not be. The child would certainly not be.
Are there any evident flaws in my analysis on the level it was performed?
* The naive consequentialist solution only accounts for direct effects of the actions of a single individual in a single situation, rather than the general effects of widespread adoption of a strategy in many situations—like other spherical cows, this causes a lot of problematic answers, like two-boxing.
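To make the “naive consequentialist solution” above concrete, here is a toy expected-value sketch in Python; every number is a made-up stand-in for the qualitative assumptions (a)–(e), not anything the poster committed to.

```python
# Hypothetical magnitudes for the eudaemonic assumptions above (purely illustrative).
BENEFIT_CHEATING = 1.0   # (a) small benefit for successful secret cheating
COST_LYING = 0.5         # (b) small cost for successful secret lying
COST_REVELATION = 8.0    # (d) moderate cost of an undermining revelation
TRANSMIT = 0.3           # (e) fraction of effects transmitted to the partner

def expected_value_of_cheating(p_discovery: float) -> float:
    """Naive one-shot expected eudaemonic value of cheating (question 1)."""
    hidden = BENEFIT_CHEATING - COST_LYING
    caught = BENEFIT_CHEATING - COST_LYING - COST_REVELATION * (1 + TRANSMIT)
    return (1 - p_discovery) * hidden + p_discovery * caught

for p in (0.01, 0.1, 0.3):
    print(p, round(expected_value_of_cheating(p), 2))
# The sign flips as the discovery risk rises, which is all that "cheating is a
# risky activity" amounts to under these assumptions.
```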
Ouch. In #5 I intended that the wife would lie to avoid breaking her husband’s heart, not for some material benefit. So if she knew the husband didn’t love her, she’d tell the truth. The fact that you automatically parsed the situation differently is… disturbing, but quite sensible by consequentialist lights, I suppose :-)
I don’t understand your answer in #2. If lying incurs a small cost on you and a fraction of it on the partner, and confessing incurs a moderate cost on both, why are you uncertain?
No other visible flaws. Nice to see you bite the bullet in #3.
ETA: double ouch! In #1 you imply that happier couples should cheat more! Great stuff, I can’t wait till other people reply to the questionnaire.
The husband does benefit, by her lights. The chief reason it comes out in the husband’s favor in #6 is because the husband doesn’t value the marital relationship and (I assumed) wouldn’t value the child relationship.
You’re right—in #2 telling the truth carries the risk of ending the relationship. I was considering the benefit of having a relationship with less lying (which is a benefit for both parties), but it’s a gamble, and probably one which favors lying.
On eudaemonic grounds, it was an easy bullet to bite—particularly since I had read Have His Carcase by Dorothy Sayers, which suggested an example of such a relationship.
Incidentally, I don’t accept most of this analysis, despite being a consequentialist—as I said, it is the “naive consequentialist solution”, and several answers would be likely to change if (a) the questions were considered on the level of widespread strategies and (b) effects other than eudaemonic were included.
Edit: Note that “happier couples” does not imply “happier coupling”—the risk to the relationship would increase with the increased happiness from the relationship. This analysis of #1 implies instead that couples with stronger but independent social circles should cheat more (last paragraph).
This is an interesting line of retreat! What answers would you change if most people around you were also consequentialists, and what other effects would you include apart from eudaemonic ones?
It’s okay to deceive people if they’re not actually harmed and you’re sure they’ll never find out. In practice, it’s often too risky.
1-3: This is all okay, but nevertheless, I wouldn’t do these things. The reason is that for me, a necessary ingredient for being happily married is an alief that my spouse is honest with me. It would be impossible for me to maintain this alief if I lied.
4-5: The child’s welfare is more important than my happiness, so even I would lie if it was likely to benefit the child.
6: Let’s assume the least convenient possible world, where everyone is better off if they tell the truth. Then in this particular case, they would be better off as deontologists. But they have no way of knowing this. This is not problematic for consequentialism any more than a version of the Trolley Problem in which the fat man is secretly a skinny man in disguise and pushing him will lead to more people dying.
1-3: It seems you’re using an irrational rule for updating your beliefs about your spouse. If we fixed this minor shortcoming, would you lie?
6: Why not problematic? Unlike your Trolley Problem example, in my example the lie is caused by consequentialism in the first place. It’s more similar to the Prisoner’s Dilemma, if you ask me.
1-3: It’s an alief, not a belief, because I know that lying to my spouse doesn’t really make my spouse more likely to lie to me. But yes, I suppose I would be a happier person if I were capable of maintaining that alief (and repressing my guilt) while having an affair. I wonder if I would want to take a pill that would do that. Interesting. Anyways, if I did take that pill, then yes, I would cheat and lie.
Thanks for the link. I think Alicorn would call it an “unofficial” or “non-endorsed” belief.
Let’s put another twist on it. What would you recommend someone else to do in the situations presented in the questionnaire? Would you prod them away from aliefs and toward rationality? :-)
Alicorn seems to think the concepts are distinct, but I don’t know what the distinction is, and I haven’t read any philosophical paper that defines alief : )
All right: If my friend told me they’d had an affair, and they wanted to keep it a secret from their spouse forever, and they had the ability to do so, then I would give them a pill that would allow them to live a happy life without confiding in their spouse — provided the pill does not have extra negative consequences.
Caveats: In real life, there’s always some chance that the spouse will find out. Also, it’s not acceptable for my friend to change their mind and tell their spouse years after the fact; that would harm the spouse. Also, the pill does not exist in reality, and I don’t know how difficult it is to talk someone out of their aliefs and guilt. And while I’m making people’s emotions more rational, I might as well address the third horn, which is to instill in the couple an appreciation of polyamory and open relationships.
The third horn for cases 4-6 is to remove the husband’s biological chauvinism. Whether the child is biologically related to him shouldn’t matter.
Why on earth should this not matter? It’s very important to most people. And in those scenarios, there are the additional issues that she lied to him about the relationship and the kid and cheated on him. It’s not solely about parentage: for instance, many people are ok with adopting, but not as many are ok with raising a kid that was the result of cheating.
I believe that, given time, I could convince a rational father that whatever love or responsibility he owes his child should not depend on where that child actually came from. Feel free to be skeptical until I’ve tried it.
Nisan:
Trouble is, this is not just a philosophical matter, or a matter of personal preference, but also an important legal question. Rather than convincing cuckolded men that they should accept their humiliating lot meekly—itself a dubious achievement, even if it were possible—your arguments are likely to be more effective in convincing courts and legislators to force cuckolded men to support their deceitful wives and the offspring of their indiscretions, whether they want it or not. (Just google for the relevant keywords to find reports of numerous such rulings in various jurisdictions.)
Of course, this doesn’t mean that your arguments shouldn’t be stated clearly and discussed openly, but when you insultingly refer to opposing views as “chauvinism,” you engage in aggressive, warlike language against men who end up completely screwed over in such cases. To say the least, this is not appropriate in a rational discussion.
Relevant article.
Be wary of confusing “rational” with “emotionless.” Because so much of our energy as rationalists is devoted to silencing unhelpful emotions, it’s easy to forget that some of our emotions correspond to the very states of the world that we are cultivating our rationality in order to bring about. These emotions should not be smushed. See, e.g., Feeling Rational.
Of course, you might have a theory of fatherhood that says you love your kid because the kid has been assigned to you, or because the kid is needy, or because you’ve made an unconditional commitment to care for the sucker—but none of those theories seem to describe my reality particularly well.
*The kid has been assigned to me
Well, no, he hasn’t, actually; that’s sort of the point. There was an effort by society to assign me the kid, but the effort failed because the kid didn’t actually have the traits that society used to assign her to me.
*The kid is needy
Well, sure, but so are billions of others. Why should I care extra about this one?
*I’ve made an unconditional commitment
Such commitments are sweet, but probably irrational. Because I don’t want to spend 18 years raising a kid that isn’t mine, I wouldn’t precommit to raising a kid regardless of whether she’s mine or someone else’s. At the very least, the level of commitment of my parenting would vary depending on whether (a) the kid was the child of me and an honest lover, or (b) the kid was the child of my nonconsensual cuckolder and my dishonest lover.
you need more time to convince me
You’re welcome to write all the words you like and I’ll read them, but if you mean “more time” literally, then you can’t have it! If I spend enough time raising a kid, in some meaningful sense the kid will become properly mine. Because the kid will still not be mine in other, equally meaningful senses, I don’t want that to happen, and so I won’t give you the time to ‘convince’ me. What would really convince me in such a situation isn’t your arguments, however persistently applied, but the way that the passage of time changed the situation which you were trying to justify to me.
Okay, here is where my theory of fatherhood is coming from:
You are not your genes. Your child is not your genes. Before people knew about genes, men knew that it was very important for them to get their semen into women, and that the resulting children were special. If a man’s semen didn’t work, or if his wife was impregnated by someone else’s semen, the man would be humiliated. These are the values of an alien god, and we’re allowed to reject them.
Consider a more humanistic conception of personal identity: Your child is an individual, not a possession, and not merely a product of the circumstances of their conception. If you find out they came from an adulterous affair, that doesn’t change the fact that they are an individual who has a special personal relationship with you.
Consider a more transhumanistic conception of personal identity: Your child is a mind whose qualities are influenced by genetics in a way that is not well-understood, but whose informational content is much more than their genome. Creating this child involved semen at some point, because that’s the only way of having children available to you right now. If it turns out that the mother covertly used someone else’s semen, that revelation has no effect on the child’s identity.
These are not moral arguments. I’m describing a worldview that will still make sense when parents start giving their children genes they themselves do not have, when mothers can elect to have children without the inconvenience of being pregnant, when children are not biological creatures at all. Filial love should flourish in this world.
Now for the moral arguments: It is not good to bring new life into this world if it is going to be miserable. Therefore one shouldn’t have a child unless one is willing and able to care for it. This is a moral anti-realist account of what is commonly thought of as a (legitimate) father’s “responsibility” for his child.
It is also not good to cause an existing person to become miserable. If a child recognizes you as their father, and you renounce the child, that child will become miserable. On the other hand, caring for the child might make you miserable. But in most cases, it seems to me that being disowned by the man you call “father” is worse than raising a child for 13 or 18 years. Therefore, if you have a child who recognizes you as their father, you should continue to play the role of father, even if you learn something surprising about where they came from.
Now if you fiddle with the parameters enough, you’ll break the consequentialist argument: If the child is a week old when you learn they’re not related to you, it’s probably not too late to break the filial bond and disown them. If you decide that you’re not capable of being an adequate father for whatever reason, it’s probably in the child’s best interest for you to give it away. And so on.
Yes, we are—but we’re not required to! Reversed Stupidity is not intelligence. The fact that an alien god cared a lot about transferring semen is neither evidence for nor evidence against the moral proposition that we should care about genetic inheritance. If, upon rational reflection, we freely decide that we would like children who share our genes—not because of an instinct to rut and to punish adulterers, but because we know what genes are and we think it’d be pretty cool if our kids had some of ours—then that makes genetic inheritance a human value, and not just a value of evolution. The fact that evolution valued genetic transfer doesn’t mean humans aren’t allowed to value genetic transfer.
I agree with you that in the future there will be more choices about gene-design, but the choice “create a child using a biologically-determined mix of my genes and my lover’s genes” is just a special case of the choice “create a child using genes that conform to my preferences.” Either way, there is still the issue of choice. If part of what bonds me to my child is that I feel I have had some say in what genes the child will have, and then I suddenly find out that my wishes about gene-design were not honored, it would be legitimate for me to feel correspondingly less attached to my kid.
I didn’t, on this account. As I understand the dilemma, (1) I told my wife something like “I encourage you to become pregnant with our child, on the condition that it will have genetic material from both of us,” and (2) I attempted to get my wife pregnant with our child but failed. Neither activity counts as “bringing new life into this world.” The encouragement doesn’t count as causing the creation of life, because the condition wasn’t met. Likewise, the attempt doesn’t count as causing the creation of life, because the attempt failed. In failing to achieve my preferences, I also fail to achieve responsibility for the child’s creation. It’s not just that I’m really annoyed at not getting what I want and so now I’m going to sulk—I really, truly haven’t committed any of the acts that would lead to moral responsibility for another’s well-being.
Again, reversed stupidity is not intelligence. Just because my “intuition” screams at me to say that I should want children who share my genes doesn’t mean that I can’t rationally decide that I value gene-sharing. Going a step further, just because people’s intuitions may not point directly at some deeper moral truth doesn’t mean that there is no moral truth, still less that the one and only moral truth is consequentialism.
Look, I already conceded that given enough time, I would become attached even to a kid that didn’t share my genes. My point is just that that would be unpleasant, and I prefer to avoid that outcome. I’m not trying to choose a convenient example, I’m trying to explain why I think genetic inheritance matters. I’m not claiming that genetic inheritance is the only thing that matters. You, by contrast, do seem to be claiming that genetic inheritance can never matter, and so you really need to deal with the counter-arguments at your argument’s weakest point—a time very near birth.
I agree with most of that. There is nothing irrational about wanting to pass on your genes, or valuing the welfare of people whose genes you partially chose. There is nothing irrational about not wanting that stuff, either.
I want to use the language of moral anti-realism so that it’s clear that I can justify my values without saying that yours are wrong. I’ve already explained why my values make sense to me. Do they make sense to you?
I think we both agree that a personal father-child relationship is a sufficient basis for filial love. I also think that for you, having a say in a child’s genome is also enough to make you feel filial love. It is not so for me.
Out of curiosity: Suppose you marry someone and want to wait a few years before having a baby; and then your spouse covertly acquires a copy of your genome, recombines it with their own, and makes a baby. Would that child be yours?
Suppose you and your spouse agree on a genome for your child, and then your spouse covertly makes a few adjustments. Would you have less filial love for that child?
Suppose a random person finds a file named “MyIdealChild’sGenome.dna” on your computer and uses it to make a child. Would that child be yours?
Suppose you have a baby the old-fashioned way, but it turns out you’d been previously infected with a genetically-engineered virus that replaced the DNA in your germ line cells, so that your child doesn’t actually have any of your DNA. Would that child be yours?
In these cases, my feelings for the child would not depend on the child’s genome, and I am okay with that. I’m guessing your feelings work differently.
As for the moral arguments: In case it wasn’t clear, I’m not arguing that you need to keep a week-old baby that isn’t genetically related to you. Indeed, when you have a baby, you are making a tacit commitment of the form “I will care for this child, conditional on the child being my biological progeny.” You think it’s okay to reject an illegitimate baby, because it’s not “yours”; I think it’s okay to reject it, because it’s not covered by your precommitment.
We also agree that it’s not okay to reject a three-year-old illegitimate child — you, because you’d be “attached” to them; and me, because we’ve formed a personal bond that makes the child emotionally dependent on me.
Edit: formatting.
That’s thoughtful, but, from my point of view, unnecessary. I am an ontological moral realist but an epistemological moral skeptic; just because there is such a thing as “the right thing to do” doesn’t mean that you or I can know with certainty what that thing is. I can hear your justifications for your point of view without feeling threatened; I only want to believe that X is good if X is actually good.
Sorry, I must have missed your explanation of why they make sense. I heard you arguing against certain traditional conceptions of inheritance, but didn’t hear you actually advance any positive justifications for a near-zero moral value on genetic closeness. If you’d like to do so now, I’d be glad to hear them. Feel free to just copy and paste if you think you already gave good reasons.
In one important sense, but not in others. My value for filial closeness is scalar, at best. It certainly isn’t binary.
I mean, that’s fine. I don’t think you’re morally or psychiatrically required to let your feelings vary based on the child’s genome. I do think it’s strange, and so I’m curious to hear your explanation for this invariance, if any.
Oh, OK, good. That wasn’t clear initially.
Ah cool, as I am a moral anti-realist and you are an epistemological moral skeptic, we’re both interested in thinking carefully about what kinds of moral arguments are convincing. Since we’re talking about terminal moral values at this point, the “arguments” I would employ would be of the form “this value is consistent with these other values, and leads to these sort of desirable outcomes, so it should be easy to imagine a human holding these values, even if you don’t hold them.”
Well, I don’t expect anyone to have positive justifications for not valuing something, but there is this:
So a nice interpretation of our feelings of filial love is that the parent-child relationship is a good thing and it’s ideally about the parent and child, viewed as individuals and as minds. As individuals and minds, they are capable of forging a relationship, and the history of this relationship serves as a basis for continuing the relationship. [That was a consistency argument.]
Furthermore, unconditional love is stronger than conditional love. It is good to have a parent that you know will love you “no matter what happens”. In reality, your parent will likely love you less if you turn into a homicidal jerk; but that is kinda easy to accept, because you would have to change drastically as an individual in order to become a homicidal jerk. But if you get an unsettling revelation about the circumstances of your conception, I believe that your personal identity will remain unchanged enough that you really wouldn’t want to lose your parent’s love in that case. [Here I’m arguing that my values have something to do with the way humans actually feel.]
So even if you’re sure that your child is your biological child, your relationship with your child is made more secure if it’s understood that the relationship is immune to a hypothetical paternity revelation. (You never need suffer from lingering doubts such as “Is the child really mine?” or “Is the parent really mine?”, because you already know that the answer is Yes.) [That was an outcomes argument.]
All right, that was moderately convincing.
I still have no interest in reducing the importance I attach to genetic closeness to near-zero, because I believe that (my / my kids’) personal identity would shift somewhat in the event of an unsettling revelation, and so reduced love in proportion to the reduced harmony of identities would be appropriate and forgivable.
I will, however, attempt to gradually reduce the importance I attach to genetic closeness to “only somewhat important” so that I can more credibly promise to love my parents and children “very much” even if unsettling revelations of genetic distance rear their ugly head.
Thanks for sharing!
You make a good point about using scalar moral values!
I’m pretty sure I’d have no problem rejecting such a child, at least in the specific situation where I was misled into thinking it was mine. This discussion started by talking about a couple who had agreed to be monogamous, and where the wife had cheated on the husband and gotten pregnant by another man. You don’t seem to be considering the effect of the deceit and lies perpetuated by the mother in this scenario. It’s very different than, say, adoption, or genetic engineering, or if the couple had agreed to have a non-monogamous relationship.
I suspect most of the rejection and negative feelings toward the illegitimate child wouldn’t be because of genetics, but because of the deception involved.
Ah, interesting. The negative feelings you would get from the mother’s deception would lead you to reject the child. This would diminish the child’s welfare more than it would increase your own (by my judgment); but perhaps that does not bother you because you would feel justified in regarding the child as being morally distant from you, as distant as a stranger’s child, and so the child’s welfare would not be as important to you as your own. Please correct me if I’m wrong.
I, on the other hand, would still regard the child as being morally close to me, and would value their welfare more than my own, and so I would consider the act of abandoning them to be morally wrong. Continuing to care for the child would be easy for me because I would still have filial love for the child. See, the mother’s deceit has no effect on the moral question (in my moral-consequentialist framework) and it has no effect on my filial love (which is independent of the mother’s fidelity).
That’s right. Also, regarding the child as my own would encourage other people to lie about paternity, which would ultimately reduce welfare by a great deal more. Compare the policy of not negotiating with terrorists: if negotiating frees hostages, but creates more incentives for taking hostages later, it may reduce welfare to negotiate, even if you save the lives of the hostages by doing so.
Precommitting to this sets you up to be deceived, whereas precommitting to the other position makes it less likely that you’ll be deceived.
If the mother married the biological father and restricted your access to the child, but still required you to pay child support, how would you feel?
This is mostly relevant for fathers who are still emotionally attached to the child.
If a man detaches when he finds that a child isn’t his descendant, then access is a burden, not a benefit.
One more possibility: A man hears that a child isn’t his, detaches—and then it turns out that there was an error at the DNA lab, and the child is his. How retrievable is the relationship?
… I’m sorry, that’s an important issue, but it’s tangential. What do you want me to say? The state’s current policy is an inconsistent hodge-podge of common law that doesn’t fairly address the rights and needs of families and individuals. There’s no way to translate “Ideally, a father ought to love their child this much” into “The court rules that Mr. So-And-So will pay Ms. So-And-So this much every year”.
So how would you translate your belief that paternity is irrelevant into a social or legal policy, then? I don’t see how you can argue paternity is irrelevant, and then say that cases where men have to pay support for other people’s children are tangential.
Nisan:
The same can be said about all values held by humans. So, who gets to decide which “values of an alien god” are to be rejected, and which are to be enforced as social and legal norms?
That’s a good question. For example, we value tribalism in this “alien god” sense, but have moved away from it due to ethical considerations. Why?
Two main reasons, I suspect: (1) we learned to empathize with strangers and realize that there was no very defensible difference between their interests and ours; (2) tribalism sometimes led to terrible consequences for our tribe.
Some of us value genetic relatedness in our children, again in an alien god sense. Why move away from that? Because:
(1) There is no terribly defensible moral difference between the interests of a child with your genes or without.
Furthermore, filial affection is far more influenced by the proxy metric of personal intimacy with one’s children than by a propositional belief that they share your genes. (At least, that is true in my case.) Analogously, a man having heterosexual sex doesn’t generally lose his erection as soon as he puts on a condom.
It’s not for me to tell you your values, but it seems rather odd to actually choose inclusive genetic fitness consciously, when the proxy metric for genetic relatedness—namely, filial intimacy—is what actually drives parental emotions. It’s like being unable to enjoy non-procreative sex, isn’t it?
Me.
How many divisions have you got?
None, I just use the algorithm for any given problem; there’s no particular reason to store the answers.
What happens if two Clippies disagree? How do you decide which Clippy gets priority?
Clippys don’t disagree, any more than your bone cells might disagree with your skin cells.
Have you heard of the human disease cancer?
Have you heard of how common cancer is per cell existence-moment?
Even aside from cancer, cells in the same organism constantly compete for resources. This is actually vital to some human processes. See for example this paper.
They compete only at an unnecessarily complex level of abstraction. A simpler explanation for cell behavior (per the minimum message length formalism) is that each one is indifferent to the survival of itself or the other cells, which in the same body have the same genes, as this preference is what tends to result from natural selection on self-replicating molecules containing those genes; and that they will prefer even more (in the sense that their form optimizes for this under the constraint of history) that genes identical to those contained therein become more numerous.
This is bad teleological thinking. The cells don’t prefer anything. They have no motivation as such. Moreover, there’s no way for a cell to tell if a neighboring cell shares the same genes. (Immune cells can in certain limited circumstances detect cells with proteins that don’t belong but the vast majority of cells have no such ability. And even then, immune cells still compete for resources). The fact is that many sorts of cells compete with each other for space and nutrients.
This insight forms a large part of why I made the statements:
“this preference is what tends to result from natural selection on self-replicating molecules containing those genes”
“they will prefer even more (in the sense that their form optimizes for this under the constraint of history)” (emphasis added in both)
I used “preference” (and specified I was so using the term) to mean a regularity in the result of its behavior which is due to historical optimization under the constraint of natural selection on self-replicating molecules, not to mean that cells think teleologically, or have “preferences” in the sense that I do, or in the sense that the colony of cells you identify as yourself does.
Ah, ok. I misunderstood what you were saying.
Why not? Just because you two would have the same utility function, doesn’t mean that you’d agree on the same way to achieve it.
Correct. What ensures such agreement, rather, is the fact that different Clippy instances reconcile values and knowledge upon each encounter, each tracing the path that the other took since their divergence, and extrapolating to the optimal future procedure based on their combined experience.
Vladimir, I am comparing two worldviews and their values. I’m not evaluating social and legal norms. I do think it would be great if everyone loved their children in precisely the same manner that I love my hypothetical children, and if cuckolds weren’t humiliated just as I hypothetically wouldn’t be humiliated. But there’s no way to enforce that. The question of who should have to pay so much money per year to the mother of whose child is a completely different matter.
Nisan:
Fair enough, but your previous comments characterized the opposing position as nothing less than “chauvinism.” Maybe you didn’t intend it to sound that way, but since we’re talking about a conflict situation in which the law ultimately has to support one position or the other—its neutrality would be a logical impossibility—your language strongly suggested that the position that you chose to condemn in such strong terms should not be favored by the law.
That’s a mighty strong claim to make about how you’d react in a situation that is, according to what you write, completely outside of your existing experiences in life. Generally speaking, people are often very bad at imagining the concrete harrowing details of such situations, and they can get hit much harder than they would think when pondering such possibilities in the abstract. (In any case, I certainly don’t wish that you ever find out!)
Fair enough. I can’t credibly predict what my emotions would be if I were cuckolded, but I still have an opinion on which emotions I would personally endorse.
Well, I can consider adultery to generally be morally wrong, and still desire that the law be indifferent to adultery. And I can consider it to be morally wrong to teach your children creationism, and still desire that the law permit it (for the time being). Just because I think a man should not betray the children he taught to call him “father” doesn’t necessarily mean I think the State should make him pay for their upbringing.
Someone does have to pay for the child’s upbringing. What the State should do is settle on a consistent policy that doesn’t harm too many people and which doesn’t encourage undesirable behavior. Those are the only important criteria.
Well, infanticide is also technically an option, if no one wants to raise the kid.
Ah, so that’s how your theory works!
Nisan, if you don’t give me $10000 right now, I will be miserable. Also, I’m Russian while you presumably live in a Western country; dollars carry more weight here, so by giving the money to me you will be increasing total utility.
If I’m going to give away $10,000, I’d rather give it to Sudanese refugees. But I see your point: You value some people’s welfare over others.
A father rejecting his illegitimate 3-year-old child reveals an asymmetry that I find troubling: The father no longer feels close to the child; but the child still feels close to the father, closer than you feel you are to me.
Life is full of such asymmetry. If I fall in love with a girl, that doesn’t make her owe me money.
At this point it’s pretty clear that I resent your moral system and I very much resent your idea of converting others to it. Maybe we should drop this discussion.
I am highly skeptical. I’m not a father, but I doubt I could be convinced of this proposition. Rationality serves human values, and caring about genetic offspring is a human value. How would you attempt to convince someone of this?
Would that work symmetrically? Imagine the father swaps the kid in the hospital while the mother is asleep, tired from giving birth. Then the mother takes the kid home and starts raising it without knowing it isn’t hers. A week passes. Now you approach the mother and offer her your rational arguments! Explain to her why she should stay with the father for the sake of the child that isn’t hers, instead of (say) stabbing the father in his sleep and going off to search “chauvinistically” for her baby.
This is not an honest mirror-image of the original problem. You have introduced a new child into the situation, and also specified that the mother has been raising the “wrong child” for one week, whereas in the original problem the age of the child was left unspecified.
There do exist valuable critiques of this idea. I wasn’t expecting it to be controversial, but in the spirit of this site I welcome a critical discussion.
Really? Why?
I would have expected it to be uncontroversial that being biologically related should matter a great deal. You’re responsible for someone you brought in to the world; you’re not responsible for a random person.
So what? If the mother isn’t a “biological chauvinist” in your sense, she will be completely indifferent between raising her child and someone else’s. And she has no particular reason to go look for her own child. Or am I misunderstanding your concept of “biological chauvinism”?
If it was one week in the original problem, would that change your answers? I’m honestly curious.
In the original problem, I was criticizing the husband for being willing to abandon the child if he learned he wasn’t the genetic father. If the child is one week old, the child would grow up without a father, which is perhaps not as bad as having a father and then losing him. I’ve elaborated my position here.
Ouch, big red flag here. Instill appreciation? Remove chauvinism?
IMO, editing people’s beliefs to better serve their preferences is miles better than editing their preferences to better match your own. And what other reason can you have for editing other people’s preferences? If you’re looking out for their good, why not just wirehead them and be done with it?
I’m not talking about editing people at all. Perhaps you got the wrong idea when I said I would give my friend a mind-altering pill; I would not force them to swallow it. What I’m talking about is using moral and rational arguments, which is the way we change people’s preferences in real life. There is nothing wrong with unleashing a (good) argument on someone.
6: In the trolley problem, a deontologist wouldn’t decide to push the man, so the pseudo-fat man’s life is saved, whereas he would have been killed if it had been a consequentialist behind him; the reason for his death is consequentialism.
Maybe you missed the point of my comment. (Maybe I’m missing my own point; can’t tell right now, too sleepy) Anyway, here’s what I meant:
Both in my example and in the pseudo-trolley problem, people behave suboptimally because they’re lied to. This suboptimal behavior arises from consequentialist reasoning in both cases. But in my example, the lie is also caused by consequentialism, whereas in the pseudo-trolley problem the lie is just part of the problem statement.
Fair point, I didn’t see that. Not sure how relevant the distinction is though; in either world, deontologists will come out ahead of consequentialists.
But we can just as well construct situations where the deontologist would not come out ahead. Once you include lies in the situation, pretty much anything goes. It isn’t clear to me if one can meaningfully compare the systems based on situations involving incorrect data unless you have some idea what sort of incorrect data would occur more often and in what contexts.
Right, and furthermore, a rational consequentialist makes those moral decisions which lead to the best outcomes, averaged over all possible worlds where the agent has the same epistemic state. Consequentialists and deontologists will occasionally screw things up, and this is unavoidable; but consequentialists are better on average at making the world a better place.
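To make “best outcomes, averaged over all possible worlds consistent with the agent’s epistemic state” concrete, here is a minimal Python sketch of the standard expected-utility calculation; the two worlds, their probabilities, the action names, and the utilities are made-up numbers for illustration only, not anyone’s actual decision theory:

    # Minimal sketch: a consequentialist picks the action with the highest expected
    # utility, averaged over the worlds it considers possible. All numbers are made up.

    def expected_utility(action, worlds):
        """worlds: list of (probability, utilities-per-action) pairs."""
        return sum(p * utilities[action] for p, utilities in worlds)

    # Two worlds the agent cannot tell apart from its current epistemic state.
    worlds = [
        (0.9, {"push": +4, "dont_push": -5}),  # pushing actually saves the five
        (0.1, {"push": -6, "dont_push": -5}),  # pushing kills one and saves no one
    ]

    best = max(["push", "dont_push"], key=lambda a: expected_utility(a, worlds))
    print(best)  # "push": best on average, even though it sometimes turns out badly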
That’s an argument that only appeals to the consequentalist.
Of course. I am only arguing that consequentialists want to be consequentialists, despite cousin_it’s scenario #6.
I’m not sure that’s true. Forms of deontology will usually have some sort of theory of value that allows for a ‘better world’, though it’s usually tied up with weird metaphysical views that don’t jibe well with consequentialism.
You’re right, it’s pretty easy to construct situations where deontologism locks people into a suboptimal equilibrium. You don’t even need lies for that: three stranded people are dying of hunger, and removing the taboo on cannibalism can help two of them survive.
The purpose of my questionnaire wasn’t to attack consequentialism in general, only to show how it applies to interpersonal relationships, which are a huge minefield anyway. Maybe I should have posted my own answers as well. On second thought, that can wait.
An idea that may not stand up to more careful reflection.
Evidence shows that people have limited quantities of willpower – exercise it too much, and it gets used up. I suspect that rather than a mere mental flaw, this is a design feature of the brain.
Man is often called the social animal. We band together in groups – families, societies, civilizations – to solve our problems. Groups are valuable to have, and so we have values – altruism, generosity, loyalty – that promote group cohesion and success. However, it doesn’t pay to be COMPLETELY supportive of the group. Ultimately the goal is replication of your genes, and though being part of a group can further that goal, it can also hinder it if you take it too far (sacrificing yourself for the greater good is not adaptive behavior). So it pays to have relatively fluid group boundaries that can be created as needed, depending on which group best serves your interest. And indeed, studies show that group formation/division is the easiest thing in the world to create – even groups chosen completely at random from a larger pool will exhibit rivalry and conflict.
Despite this, it’s the group-supporting values that form the higher-level values that we pay lip service to. Group values are the ones we believe are our ‘real’ values, the ones that form the backbone of our ethics, the ones we signal to others at great expense. But actually having these values is tricky from an evolutionary standpoint – strategically, you’re much better off being selfish than generous, being two-faced than loyal, and furthering your own gains at everyone else’s expense. So humans are in a pickle – it’s beneficial for them to form groups to solve their problems and increase their chances of survival, but it’s also beneficial for people to be selfish and mooch off the goodwill of the group. Because of this, we have sophisticated machinery called ’suspicion’ to ferret out any liars or cheaters furthering their own gains at the group’s expense. Of course, evolution is an arms race, so it’s looking for a method to overcome these mechanisms, for ways it can fulfill its base desires while still appearing to support the group.
It accomplished this by implementing willpower. Because deceiving others about what we believe would quickly be uncovered, we don’t actually deceive them – we’re designed so that we really, truly, in our heart of hearts believe that the group-supporting values – charity, nobility, selflessness – are the right things to do. However, we’re only given a limited means to accomplish them. We can leverage our willpower to overcome the occasional temptation, but when push comes to shove – when that huge pile of money or that incredible opportunity or that amazing piece of ass is placed in front of us, willpower tends to fail us. Willpower is generally needed for the values that don’t further our evolutionary best interests – you don’t need willpower to run from danger or to hunt an animal if you’re hungry or to mate with a member of the opposite sex. We have much better, much more successful mechanisms that accomplish those goals. Willpower is designed so that we really do want to support the group, but wind up failing at it and giving in to our baser desires – the ones that will actually help our genes get replicated.
Of course, the maladaption comes into play due to the fact that we use willpower to try to accomplish other, non-group related goals – mostly the long-term, abstract plans we create using high-level, conscious thinking. This does appear to be a design flaw (though since humans are notoriously bad at making long-term predictions, it may not be as crippling as it first appears.)
That is certainly interesting enough to subject to further reflection. Do we have any evolutionary psychologists in the audience?
I have a question about why humans see the following moral positions as different when really they look the same to me:
1) “I like to exist in a society that has punishments for non-cooperation, but I do not want the punishments to be used against me when I don’t cooperate.”
2) “I like to exist in a society where beings eat most of their children, and I will, should I live that long, want to eat most of my children too, but, as a child, I want to be exempt from being a target for eating.”
Abstract preferences for or against the existence of enforcement mechanisms that could create binding cooperative agreements between previously autonomous agents have very very few detailed entailments.
These abstractions leave the nature of the mechanisms, the conditions of their legitimate deployment, and the contract they will be used to enforce almost completely open to interpretation. The additional details can themselves be spelled out later, in ways that maintain symmetry among different parties to a negotiation, which is a strong attractor in the semantic space of moral arguments.
This makes agreement with “the abstract idea of punishment” into the sort of concession that might be made at the very beginning of a negotiating process with an arbitrary agent you have a stake in influencing (and who has a stake in influencing you) upon which to build later agreements.
The entailments of “eating children” are very very specific for humans, with implications in biology, aging, mortality, specific life cycles, and very distinct life processes (like fuel acquisition versus replication). Given the human genome, human reproductive strategies, and all extant human cultures, there is no obvious basis for thinking this terminology is superior until and unless contact is made with radically non-human agents who are nonetheless “intelligent” and who prefer this terminology and can argue for it by reference to their own internal mechanisms and/or habits of planning, negotiation, and action.
Are you proposing to be such an agent? If so, can you explain how this terminology suits your internal mechanisms and habits of planning, negotiation, and action? Alternatively, can you propose a different terminology for talking about planning, negotiation, and action that suits your own life cycle?
For example, if one instance of Clippy software running on one CPU learns something of grave importance to its systems for choosing between alternative courses of action, how does it communicate this to other instances running basically the same software? Is this inter-process communication trusted, or are verification steps included in case one process has been “illegitimately modified” or not? Assuming verification steps take place, do communications with humans via text channels like this website feed through the same filters, analogous filters, or are they entirely distinct?
More directly, can you give us an IP address, port number, and any necessary “credentials” for interacting with an instance of you in the same manner that your instances communicate over TCP/IP networks with each other? If you aren’t currently willing to provide such information, are there preconditions you could propose before you would do so?
I … understood about a tenth of that.
Conversations with you are difficult because I don’t know how much I can assume that you’ll have (or pretend to have) a human-like motivational psychology… and therefore how much I need to re-derive things like social contract theory explicitly for you, without making assumptions that your mind works in a manner similar to my mind by virtue of our having substantially similar genomes, neurology, and life experiences as embodied mental agents, descended from apes, with the expectation of finite lives, surrounded by others in basically the same predicament. For example, I’m not sure about really fundamental aspects of your “inner life” like (1) whether you have a subconscious mind, or (2) if your value system changes over time on the basis of experience, or (3) roughly how many of you there are.
This, unfortunately, leads to abstract speech that you might not be able to parse if your language mechanisms are more about “statistical regularities of observed English” than “compiling English into a data structure that supports generic inference”. By the end of such posts I’m generally asking a lot of questions as I grope for common ground, but you generally don’t answer these questions at the level they are asked.
Instant feedback would probably improve our communication by leaps and bounds because I could ask simple and concrete questions to clear things up within seconds. Perhaps the easiest thing would be to IM and then, assuming we’re both OK with it afterward, post the transcript of the IM here as the continuation of the conversation?
If you are amenable, PM me with a gmail address of yours and some good times to chat :-)
Oh, anyone can email me at clippy.paperclips@gmail.com.
Except for the bizarreness of eating most of your children, I suspect that most humans would find the two positions equally hypocritical. Why do you think we see them as different?
That belief is based on the reaction to this article, and the general position most of you take, which you claim requires you to balance current baby-eater adult interests against those of their children, such as in this comment and this one.
The consensus seems to be that humans are justified in exempting baby-eater babies from baby-eater rules, just like the being in statement (2) requests be done for itself. Has this consensus changed?
I understand what you mean now.
Ok, so first of all, there’s a difference between a moral position and a preference. For instance, I may prefer to get food for free by stealing it, but hold the moral position that I shouldn’t do that. In your example (1), no one wants the punishments used against them, but we want them to exist overall because they make society better (from the point of view of human values).
In example (2), (most) humans don’t want the Babyeaters to eat any babies: it goes against our values. This applies equally to the child and adult Babyeaters. We don’t want the kids to be eaten, and we don’t want the adults to eat. We don’t want to balance any of these interests, because they go against our values. Just like you wouldn’t balance out the interests of people who want to destroy metal or make staples instead of paperclips.
So my reaction to position (1) is “Well, of course you don’t want the punishments. That’s the point. So cooperate, or you’ll get punished. It’s not fair to exempt yourself from the rules.” And my reaction to position (2) is “We don’t want any baby-eating, so we’ll save you from being eaten, but we won’t let you eat any other babies. It’s not fair to exempt yourself from the rules.” This seems consistent to me.
But I thought the human moral judgment that the baby-eaters should not eat babies was based on how it inflicts disutility on the babies, not simply from a broad, categorical opposition to sentient beings being eaten?
That is, if a baby wanted to get eaten (or perhaps suitably intelligent being like an adult), you would need some other compelling reason to oppose the being being eaten, correct? So shouldn’t the baby-eaters’ universal desire to have a custom of baby-eating put any baby-eater that wants to be exempt from baby-eating entirely, in the same position as the being in (1) -- which is to say, a being that prefers a system but prefers to “free ride” off the sacrifices that the system requires of everyone?
Isn’t your point of view precisely the one the SuperHappies are coming from? Your critique of humanity seems to be the one they level when asking why, when humans achieved the necessary level of biotechnology, they did not edit their own minds. The SuperHappy solution was to, rather than inflict disutility by punishing defection, instead change preferences so that the cooperative attitude gives the highest utility payoff.
No, I’m criticizing humans for wanting to help enforce a relevantly-hypocritical preference on the grounds of its superficial similarities to acts they normally oppose. Good question though.
Adults, by choosing to live in a society that punishes non-cooperators, implicitly accept a social contract that allows them to be punished similarly. While they would prefer not to be punished, most societies don’t offer asymmetrical terms, or impose difficult requirements such as elections, on people who want those asymmetrical terms.
Children, on the other hand, have not yet had the opportunity to choose the society that gives them the best social contract terms, and wouldn’t have sufficient intelligence to do so anyways. So instead, we model them as though they would accept any social contract that’s at least as good as some threshold (goodness determined retrospectively by adults imagining what they would have preferred). Thus, adults are forced by society to give implied consent to being punished if they are non-cooperative, but children don’t give consent to be eaten.
What if I could guess, with 100% accuracy, that the child will decide to retroactively endorse the child-eating norm as an adult? To 99.99% accuracy?
It is not the adults’ preference that matters, but the adults’ best model of the children’s preferences. In this case there is an obvious reason for those preferences to differ—namely, the adult knows that he won’t be one of those eaten.
In extrapolating a child’s preferences, you can make it smarter and give it true information about the consequences of its preferences, but you can’t extrapolate from a child whose fate is undecided to an adult that believes it won’t be eaten; that change alters its preferences.
Do you believe that all children’s preferences must be given equal weight to that of adults, or just the preferences that the child will retroactively reverse on adulthood?
I would use a process like coherent extrapolated volition to decide which preferences to count—that is, a preference counts if it would still hold it after being made smarter (by a process other than aging) and being given sufficient time to reflect.
And why do you think that such reflection would make the babies reverse the baby-eating policies?
Different topic spheres. One line sounds nicely abstract, while the other is just iffy.
Also killing people is different from betraying them. (Nice read: the real life section of tvtropes/moraleventhorizon)
With 1), you’re the non-cooperator and the punisher is society in general. With 2), you play both roles at different times.
One possible answer: Humans are selfish hypocrites. We try to pretend to have general moral rules because it is in our best interest to do so. We’ve even evolved to convince ourselves that we actually care about morality and not self-interest. That likely occurred because it is easier to make a claim one believes in than to lie outright, so humans who are convinced that they really care about morality will do a better job of acting like they do.
(This was listed by someone as one of the absolute deniables on the thread a while back about weird things an AI might tell people).
Sounds like Robin Hanson’s Homo Hypocritus theory.
Potential top-level article, have it mostly written, let me know what you think:
Title: The hard problem of tree vibrations [tentative]
Follow-up to: this comment (Thanks Adelene Dawner!)
Related to: Disputing Definitions, Belief in the Implied Invisible
Summary: Even if you agree that trees normally make vibrations when they fall, you’re still left with the problem of how you know if they make vibrations when there is no observational way to check. But this problem can be resolved by looking at the complexity of the hypothesis that no vibrations happen. Such a hypothesis is predicated on properties specific to the human mind, and therefore is extremely lengthy to specify. Lacking the type and quantity of evidence necessary to locate this hypothesis, it can be effectively ruled out.
Body: A while ago, Eliezer Yudkowsky wrote an article about the “standard” debate over a famous philosophical dilemma: “If a tree falls in a forest and no one hears it, does it make a sound?” (Call this “Question Y.”) Yudkowsky wrote as if the usual interpretation was that the dilemma is in the equivocation between “sound as vibration” and “sound as auditory perception in one’s mind”, and that the standard (naive) debate relies on two parties assuming different definitions, leading to a pointless argument. Obviously, it makes a sound in the first sense but not the second, right?
But throughout my whole life up to that point (the question even appeared in the animated series Beetlejuice that I saw when I was little), I had assumed a different question was being asked: specifically, whether the vibrations actually occur in those cases where no one is, or ever will be, around to detect them. (Call this “Question S.”)
Now, if you’re a regular on this site, you will find that question easy to answer. But before going into my exposition of the answer, I want to point out some errors that Question S does not make.
For one thing, it does not equivocate between two meanings of sound—there, sound is taken to mean only one thing: the vibrations.
Second, it does not reduce to a simple question about anticipation of experience. In Question Y, the disputants can run through all observations they anticipate, and find them to be the same. However, if you look at the same cases in Question S, you don’t resolve the debate so easily: both parties agree that by putting a tape-recorder by the tree, you will detect vibrations from the tree falling, even if people aren’t around. But Question S instead specifically asks about what goes on when these kinds of sensors are not around, rendering such tests unhelpful for resolving such a disagreement.
So how do you go about resolving Question S? Yudkowsky gave a model for how to do this in Belief in the Implied Invisible, and I will do something similar here.
Complexity of the hypothesis
First, we observe that, in all cases where we can make a direct measurement, trees make vibrations when they fall. And we’re tasked with finding out whether, specifically in those cases where a human (or appropriate organism with vibration sensitivity in its cognition) will never make a measurement of the vibrations, the vibrations simply don’t happen. That is, when we’re not looking—and never intend to look—trees stop the “act” and don’t vibrate.
The complexity this adds to the laws of physics is astounding and may be hard to appreciate at first. This belief would require us to accept that nature has some way of knowing which events will eventually reach a cognitive system and inform it that vibrations have happened. It must selectively modify material properties in precisely defined scenarios. It must have a precise definition of what counts as a tree.
Now, if this actually happens to be how the world works, well, then all the worse for our current models! However, each bit of complexity you add to a hypothesis reduces its probability and so must be justified by observations with a corresponding likelihood ratio—that is, the ratio of the probability of the observation happening if this alternate hypothesis is true, compared to if it were false. By specifying the vibrations’ immunity to observation, the log of this ratio is zero, meaning observations are stipulated to be uninformative, and unable to justify this additional supposition in the hypothesis.
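To make the update arithmetic concrete, here is a minimal Python sketch of that argument in odds form; the bit count k is a placeholder, since the real complexity of the “no vibrations when unobserved” hypothesis would be far larger:

    import math

    def posterior_odds(prior_odds, likelihood_ratio):
        """Bayes' theorem in odds form: posterior odds = prior odds * likelihood ratio."""
        return prior_odds * likelihood_ratio

    # Placeholder complexity penalty: each extra bit needed to specify the hypothesis
    # halves its prior odds. k = 100 is purely illustrative.
    k = 100
    prior_odds = 2.0 ** -k

    # The hypothesis is constructed to be observationally indistinguishable from the
    # default one, so P(observation | H) = P(observation | not-H): likelihood ratio 1.
    likelihood_ratio = 1.0

    print(posterior_odds(prior_odds, likelihood_ratio))  # unchanged from the prior
    print(math.log2(likelihood_ratio))                   # 0.0 bits of evidence, ever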
[1] You might wonder how someone my age in ’89-’91 would come up with terms like “human-entangled sensor”, and you’re right: I didn’t use that term. Still, I considered the use of a tape recorder that someone will check to be a “someone around to hear it”, for purposes of this dilemma. Least Convenient Possible World and all...
I think that if this post is left as it is, it would be too trivial to be a top-level post. You could reframe it as a beginners’ guide to Occam, or you could make it more interesting by going deeper into some of the issues (if you can think of anything more to say on the topic of differentiating between hypotheses that make the same predictions, that might be interesting, although I think you might have said all there is to say).
It could also be framed as an issue of making your beliefs pay rent, similar to the dragon in the garage example—or perhaps as an example of how reality is entangled with itself to such a degree that some questions that seem to carve reality at the joints don’t really do so.
(If falling trees don’t make vibrations when there’s no human-entangled sensor, how do you differentiate a human-entangled sensor from a non-human-entangled sensor? If falling-tree vibrations leave subtle patterns in the surrounding leaf litter that sufficiently-sensitive human-entangled sensors can detect, does leaf litter then count as a human-entangled sensor? How about if certain plants or animals have observably evolved to handle falling-tree vibrations in a certain way, and we can detect that? Then such plants or animals (or their absence, if we’re able to form a strong enough theory of evolution to notice the absence of such reactions where we would expect them) could count as human-entangled sensors well before humans even existed. In that case, is there anything that isn’t a human-entangled sensor?)
Good points in the parenthetical—if I make it into a top-level article, I’ll be sure to include a more thorough discussion of what concept is being carved with the hypothesis that there are no tree vibrations.
There’s also the option of extending the post to actually address the problem it alludes to in the title, the so-called “hard problem of consciousness”.
Eh, it was just supposed to be an allusion to that problem, with the implication that the “easy problem of tree vibrations” is the one EY attacked (Question Y in the draft). Solving the hard problem of consciousness is a bit of a tall order for this article...
I believe this is the conversation you’re responding to.
(upvoted)
Oh, bless you[1]! That’s the one! :-)
Thanks for the upvote. What I’m wondering is if it’s non-obvious or helpful enough to go top-level. There’s still a few paragraphs to add. I also wasn’t sure if the subject matter is interesting.
[1] Blessing given in the secular sense.
This seems worthy of a top-post. When you make it a top level post link to the relevant prior posts about complexity of hypotheses.
And yet, the quantum mechanical world behaves exactly this way. Observations DO change exactly what happens. So, apparently at the quantum mechanical level, nature does have some way of knowing.
I’m not sure what effect that this has upon your argument, but it’s something that I think that you’re missing.
I’m familiar with this: entanglement between the environment and the quantum system affects the outcome, but nature doesn’t have a special law that distinguishes human entanglement from non-human entanglement (as far as we know, given Occam’s Razor, etc.), which the alternate hypothesis would require.
The error that early quantum scientists made was in failing to recognize that it was the entanglement with their measuring devices that affected the outcome, not their immaterial “conscious knowledge”. As EY wrote somewhere, they asked,
“The outcome changes when I know something about the system—what difference should that make?”
when they should have asked,
“The outcome changes when I establish more mutual information with the system—what difference should that make?”
In any case, detection of vibration does not require sensitivity to quantum-specific effects.
Not really. This is only the case for certain interpretations of what is going on, such as certain forms of the Copenhagen interpretation. Even then, “observation” in this context doesn’t really mean observing in the colloquial sense, but something closer to interacting with another particle under a certain class of conditions. The notion that you seem to be conflating this with is the idea that consciousness causes collapse. Not many physicists take that idea at all seriously. In most versions of the Many-Worlds interpretation, one doesn’t need to say anything about observations triggering anything (or at least one can talk about everything without talking about observations).
Disclaimer: My knowledge of QM is very poor. If someone here who knows more spots anything wrong above please correct me.
Me too! It was actually explained that way to me by my parents as a kid, in fact. I wonder if there are two subtly different versions floating around or EY just interpreted it uncharitably.
New evidence in the Amanda Knox case
This is relevant to LW because of a previous discussion.
Seconding kodos96. As this would exonerate not only Knox and Sollecito but Guede as well, it has to be treated with considerable skepticism, to say the least.
More significant, it seems to me (though still rather weak evidence), is the Alessi testimony, about which I actually considered posting on the March open thread.
Still, the Aviello story is enough of a surprise to marginally lower my probability of Guede’s guilt. My current probabilities of guilt are:
Knox: < 0.1 % (i.e. not a chance)
Sollecito: < 0.1 % (likewise)
Guede: 95-99% (perhaps just low enough to insist on a debunking of the Aviello testimony before convicting)
It’s probably about time I officially announced that my revision of my initial estimates for Knox and Sollecito was a mistake, an example of the sin of underconfidence.
I of course remain willing to participate in a debate with Rolf Nelson on this subject.
Finally, I’d like to note that the last couple of months have seen the creation of a wonderful new site devoted to the case, Injustice in Perugia, which anyone interested should definitely check out. Had it been around in December, I doubt that I could have made my survey seem like a fair fight between the two sides.
I hadn’t heard about this—I just read your link though, and maybe I’m missing something, but I don’t see how it lowers the probability of Guede’s guilt. He (supposedly) confessed to having been at the crime scene, and said that Knox and Sollecito weren’t there. How does that, if true, exonerate Guede?
You omitted a crucial paragraph break. :-)
The Aviello testimony would exonerate Guede (and hence is unlikely to be true); the Alessi testimony is essentially consistent with everything else we know, and isn’t particularly surprising at all.
I’ve edited the comment to clarify.
Ahhhh… ok I see where the misunderstanding was now.
That story would be consistent with Guede’s, modulo the usual eyewitness confusion.
And modulo all the forensic evidence.
Obviously this is breaking news and it’s too soon to draw a conclusion, but at first blush this sounds like just another attention seeker, like those who always pop up in these high profile cases. If he really can produce a knife, and it matches the wounds, then maybe I’ll reconsider, but at the moment my BS detector is pegged.
Of course, it’s still orders of magnitude more likely than Knox and Sollecito being guilty.
I wasn’t following the case even when komponisto posted his analyses, so I really can’t say.
How many lottery tickets would you buy if the expected payoff was positive?
This is not a completely hypothetical question. For example, in the Euromillions weekly lottery, the jackpot accumulates from one week to the next until someone wins it. It is therefore in theory possible for the expected total payout to exceed the cost of tickets sold that week. Each ticket has a 1 in 76,275,360 (i.e. C(50,5)*C(9,2)) probability of winning the jackpot; multiple winners share the prize.
So, suppose someone draws your attention (since of course you don’t bother following these things) to the number of weeks the jackpot has rolled over, and you do all the relevant calculations, and conclude that this week, the expected win from a €1 bet is €1.05. For simplicity, assume that the jackpot is the only prize. You are also smart enough to choose a set of numbers that look too non-random for any ordinary buyer of lottery tickets to choose them, so as to maximise your chance of having the jackpot all to yourself.
Do you buy any tickets, and if so how many?
If you judge that your utility for money is sublinear enough to make your expected gain in utilons negative, how large would the jackpot have to be at those odds before you bet?
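A minimal Python sketch of the arithmetic, assuming, as the question stipulates, that the jackpot is the only prize and (thanks to the non-random-looking numbers) is not shared:

    from math import comb

    odds = comb(50, 5) * comb(9, 2)   # 76,275,360 possible tickets
    p_win = 1 / odds                  # chance a single ticket hits the jackpot

    # Jackpot needed for a 1-euro ticket to have an expected value of 1.05 euro,
    # under the simplification that the jackpot is the only prize and is not shared.
    required_jackpot = 1.05 / p_win
    print(odds)              # 76275360
    print(required_jackpot)  # about 80.1 million euros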
The traditional answer is to follow the Kelly criterion, is it not? That would imply staking a fraction f* = (bp-q)/b of your wealth, where b is the net payout per unit staked, p is the probability of winning, q = 1-p, and n is the number of tickets bought. This implies you should buy n such that (€1)*n = Wf*, where W is your initial wealth.
Edit: Thanks, JoshuaZ, for pointing out that the Kelly criterion might not be the applicable one in a given situation.
OK, I have a question! Suppose I hold a risky asset that costs me c at time t, and whose value at time t is predicted to be k (1 + r), with standard deviation s. How can I calculate the length of time that I will have to hold the asset in order to rationally expect the asset to be worth, say, 2c with probability p*?
I am not doing a finance class or anything; I am genuinely curious.
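One way to make the question concrete, under the purely assumed model that each period’s log-return is independent and normal with mean log(1+r) and standard deviation s (the thread does not specify a model), is to search for the smallest holding time T at which the chance of the asset being worth at least 2c reaches p*. A rough Python sketch with made-up parameter values:

    from math import log, sqrt
    from statistics import NormalDist

    def periods_to_double(r, s, p_star, max_periods=1000):
        """Smallest T with P(asset at least doubles) >= p_star, assuming each
        period's log-return is Normal(log(1 + r), s) -- an assumed model, since
        the question does not pin one down."""
        for T in range(1, max_periods + 1):
            mean = T * log(1 + r)
            std = s * sqrt(T)
            # P(log(value / cost) >= log 2) under the normal model
            if 1 - NormalDist(mean, std).cdf(log(2)) >= p_star:
                return T
        return None

    print(periods_to_double(r=0.07, s=0.2, p_star=0.5))  # 11 periods with these toy numbers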
So am I—I’m only aware of the Kelly Criterion thanks to roland thinking I was alluding to it. I haven’t worked through that calculation.
I knew about Kelly, but not well enough for the problem to bring it to mind.
I make the Kelly fraction (bp-q)/b work out to about epsilon/N, where epsilon=0.05 and N = 76275360. So the optimal bet is 1 part in 1.5 billion of my wealth, which is approximately nothing.
The moral: buying lottery tickets is still a bad idea even when it’s marginally profitable.
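For the numbers in this subthread, a minimal Python check of the Kelly fraction (bp-q)/b, assuming (as in the simplified setup above) that a winning €1 ticket pays out a gross 1.05·N:

    N = 76_275_360       # number of possible tickets
    p = 1 / N            # chance a single ticket wins the jackpot
    q = 1 - p
    b = 1.05 * N - 1     # net odds: a 1-euro stake returns 1.05*N gross on a win

    f_star = (b * p - q) / b   # Kelly fraction of wealth to stake
    print(f_star)              # about 6e-10, i.e. on the order of 1 part in 1.5 billion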
Yes, and note that Kelly gets much less optimal when you increase bet sizes than when you decrease bet sizes. So from a Kelly perspective, rounding up to a single ticket is probably a bad idea. Your point about sublinearity of utility for money makes it in general an even worse idea. However, I’m not sure that Kelly is the right approach here. In particular, Kelly is the correct attitude when you have a large number of opportunities to bet (indeed, it is the limiting case). However, lotteries which have a positive expected outcome are very rare, so you never approach anywhere near the limiting case. Remember, Kelly optimizes long-term growth.
That raises the question of what the rational thing to do is, when faced with a strictly one-time chance to buy a very small probability of a very large reward.
Well, no—you shouldn’t buy one ticket. And according to my calculations when I tried plotting W versus n by my formula, the minimum of W is at “buy all the tickets”, so unless you have €76,275,360 already...
Fiction about simulation
I just realised that infinite processing power creates a weird moral dilemma:
Suppose you take this machine and put in a program which simulates every possible program it could ever run. Of course it only takes a second to run the whole program. In that second, you created every possible world that could ever exist, every possible version of yourself. This includes versions that are being tortured, abused, and put through horrible unethical situations. You have created an infinite number of holocausts and genocides and things much, much worse than what you could ever imagine. Most people would consider a program like this unethical to run.

But what if the computer wasn’t really a computer, but an infinitely large database that contained every possible input and a corresponding output? When you put the program in, it just finds the right output and gives it to you, which is essentially a copy of the database itself. Since there isn’t actually any computational process here, nothing unethical is being simulated. It’s no more evil than a book in the library about genocide. And this does apply to the real world. It’s essentially the Chinese room problem—does a simulated brain “understand” anything? Does it have “rights”? Does how the information was processed make a difference? I would like to know what people at LW think about this.
See this post on giant look-up tables, and also “Utilitarian” (Alan Dawrst) on the ethics of creating infinite universes.
I have problems with the “Giant look-up table” post.
If the GLUT is indeed behaving like a human, then it will need some sort of memory of previous inputs. A human’s behaviour is dependent not just on the present state of the environment, but also on previous states. I don’t see how you can successfully emulate a human without that. So the GLUT’s entries would be in the form of products of input states over all previous time instants. To each of these possible combinations, the GLUT would assign a given action.
Note that “creation of beliefs” (including about beliefs) is just a special case of memory. It’s all about input/state at time t1 influencing (restricting) the set of entries in the table that can be looked up at time t2>t1. If a GLUT doesn’t have this ability, it can’t emulate a human. If it does, then it can meet all the requirements spelt out by Eliezer in the above passage.
So I don’t see how the non-consciousness of the GLUT is established by this argument.
But the difficulty is precisely to explain why the GLUT would be different from just about any possible human-created AI in this respect. Keeping in mind the above, of course.
Memory is input too. The “GLUT” is just fed all of the things it’s seen so far back in as input, along with the current state of its external environment. A copy is made and then added to the rest of the memory, and the next cycle it’s fed in again with the next new state.
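A toy Python illustration of that point: a look-up table keyed on the entire input history, so the “memory” is nothing but the history being fed back in as input (the entries are invented, and a real GLUT would enumerate every possible history in advance):

    # Maps a full history of inputs (a tuple) to an output.
    glut = {
        ("hello",): "hi",
        ("hello", "how are you?"): "fine, thanks",
        ("hello", "how are you?", "what did I just ask?"): "you asked how I am",
    }

    def respond(history):
        # No separate state: the look-up key is the whole input history.
        return glut.get(tuple(history), "<no entry for this history>")

    history = []
    for msg in ["hello", "how are you?", "what did I just ask?"]:
        history.append(msg)
        print(respond(history))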
This is basically just the Chinese room argument. There is a room in China. Someone slips a few symbols underneath the door every so often. The symbols are given to a computer with artificial intelligence which then makes an appropriate response and slips it back through the door. Does the computer actually understand Chinese? Well what if a human did exactly the same process the computer did, manually? However, the operator only speaks English. No matter how long he does it he will never truly understand Chinese—even if he memorizes the entire process and does it in his head. So how could the computer “understand”?
That’s well done, although two of the central premises are likely incorrect. First, the notion that a quantum computer would have infinite processing capability is incorrect. Quantum computation allows speed-ups of certain computational processes. Thus, for example, Shor’s algorithm allows us to factor integers quickly. But if our understanding of the laws of quantum mechanics is at all correct, this can’t lead to anything like what happens in the story. In particular, under the standard description of quantum computing, the class of problems reliably solvable on a quantum computer in polynomial time (that is, the time required is bounded above by a polynomial function of the length of the input), BQP, is a subset of PSPACE, the set of problems which can be solved on a classical computer using memory bounded by a polynomial function of the length of the input. Our understanding of quantum mechanics would have to be very far off for this to be wrong.
Second, if our understanding of quantum mechanics is correct, there’s a fundamentally random aspect to the laws of physics. Thus, we can’t simply make a simulation and advance it ahead the way they do in this story and expect to get the same result.
Even if everything in the story were correct, I’m not at all convinced that things would settle down on a stable sequence as they do here. If your universe is infinite then the number of possible worlds is infinite, so there’s no reason you couldn’t have a wandering sequence of worlds. Edit: Or, for that matter, no reason you couldn’t have branches if people simulate additional worlds with other laws of physics, or the same laws but different starting conditions.
It isn’t. They can simulate a world where quantum computers have infinite power because they live in a world where quantum computers have infinite power because...
Ok, but in that case, that world in question almost certainly can’t be our world. We’d have to have deep misunderstandings about the rules for this universe. Such a universe might be self-consistent but it isn’t our universe.
Of course. It’s fiction.
What I mean is that this isn’t a type of fiction that could plausibly occur in our universe. In contrast, for example, there’s nothing in the central premises of, say, Blindsight that, as far as we know, would prevent the story from taking place. The central premise here is one that doesn’t work in our universe.
Well, it does suggest they’ve made recent discoveries that changed the way they understood the laws of physics, which could happen in our world.
The likely impossibility of getting infinite computational power is a problem, but quantum nondeterminism or quantum branching don’t prevent using the trick described in the story, they just make it more difficult. You don’t have to identify one unique universe that you’re in, just a set of universes that includes it. Given an infinitely fast, infinite-storage computer, and source code to the universe which follows quantum branching rules, you can get root powers by the following procedure:
Write a function to detect a particular arrangement of atoms with very high information content—enough that it probably doesn’t appear by accident anywhere in the universe. A few terabytes encoded as iron atoms present or absent at spots on a substrate, for example. Construct that same arrangement of atoms in the physical world. Then run a program that implements the regular laws of physics, except that wherever it detects that exact arrangement of atoms, it deletes them and puts a magical item, written into the modified laws of physics, in their place.
The only caveat to this method (other than requiring an impossible computer) is that it also modifies other worlds, and other places within the same world, in the same way. If the magical item created is programmable (as it should be), then every possible program will be run on it somewhere, including programs that destroy everything in range, so there will need to be some range limit.
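A toy sketch of that detect-and-substitute procedure, with the universe shrunk to a list of cells and the improbable arrangement of atoms to a short marker pattern; everything here is illustrative rather than anything from the story:

    MARKER = ["Fe", "0", "Fe", "Fe", "0", "Fe", "0", "0"]  # stand-in for the improbable pattern
    MAGIC_ITEM = "magic_console"

    def step(world):
        """One tick of the 'modified laws of physics': wherever the marker occurs,
        delete it and put the magic item in its place. (The ordinary physics
        update that would also happen each tick is omitted in this toy.)"""
        i = 0
        while i <= len(world) - len(MARKER):
            if world[i:i + len(MARKER)] == MARKER:
                world[i:i + len(MARKER)] = [MAGIC_ITEM]
            else:
                i += 1
        return world

    world = ["rock", "dust"] + MARKER + ["dust"]
    print(step(world))  # ['rock', 'dust', 'magic_console', 'dust']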
Couldn’t they just run the simulation to its end rather than just let it sit there and take the chance that it could accidentally be destroyed? If it’s infinitely powerful, it would be able to do that.
Then they miss their chance to control reality. They could make a shield out of black cubes.
They could program in an indestructible control console, with appropriate safeguards, then run the program to its conclusion. Much safer.
That’s probably weeks of work, though, and they’ve only had one day so far. Hum, I do hope they have a good UPS.
Why would they make a shield out of black cubes of all things? But yeah, I do see your point. Then again, once you have an infinitely powerful computer, you can do anything. Plus, even if they ran the simulation to its end, they could always restart the simulation and advance it to the present time again, hence regaining the ability to control reality.
Then it would be someone else’s reality, not theirs. They can’t be inside two simulations at once.
But what if two groups had built such computers independently? The story is making less and less sense to me.
Level 558 runs the simulation and makes a cube in Level 559. Meanwhile, Level 557 makes the same cube in 558. Level 558 runs Level 559 to its conclusion. Level 557 will seem frozen in relation to 558 because they are busy running 558 to its conclusion. Level 557 will stay frozen until 558 dies.
558 makes a fresh simulation of 559. 559 creates 560 and makes a cube. But 558 is not at the same point in time as 559, so 558 won’t mirror the new 559’s actions. For example, they might be too lazy to make another cube. New 559 diverges from old 559. Old 559 ran 560 to its conclusion, just like 558 ran them to their conclusion, but new 559 might decide to do something different to new 560. 560 also diverges. Keep in mind that every level can see and control every lower level, not just the next one. Also, 557 and everything above is still frozen.
So that’s why restarting the simulation shouldn’t work.
Then instead of a stack, you have a binary tree.
Your level runs two simulations, A and B. A-World contains its own copies of A and B, as does B-World. You create a cube in A-World and a cube appears in your world. Now you know you are an A-World. You can use similar techniques to discover that you are an A-World inside a B-World inside another B-World… The worlds start to diverge as soon as they build up their identities. Unless you can convince all of them to stop differentiating themselves and cooperate, everybody will probably end up killing each other.
You can avoid this by always doing the same thing to A and B. Then everything behaves like an ordinary stack.
Yeah, but would a binary tree of simulated worlds “converge” as we go deeper and deeper? In fact it’s not even obvious to me that a stack of worlds would “converge”: it could hit an attractor with period N where N>1, or do something even more funky. And now, a binary tree? Who knows what it’ll do?
I’m convinced it would never converge, and even if it did I would expect it to converge on something more interesting and elegant, like a cellular automaton. I have no idea what a binary tree system would do unless none of the worlds break the symmetry between A and B. In that case it would behave like a stack, and the story assumes stacks can converge.
They could just turn it off. If they turned off the simulation, the only layer to exist would be the topmost layer. Since everyone has identical copies in each layer, they wouldn’t notice any change if they turned it off.
We can’t be sure that there is a top layer. Maybe there are infinitely many simulations in both directions.
But they would cease to exist. If they ran it to its end, then it’s over; they could just turn it off then. I mean, if you want to cease to exist, fine, but otherwise there’s no reason. Plus, the topmost layer is likely very, very different from the layers underneath it. In the story, it says that the differences eventually stabilized and created them, but who knows what it was originally. In other words, there’s no guarantee that you even exist outside the simulation, so by turning it off you could be destroying the only version of yourself that exists.
That doesn’t work. The layers are a little bit different. From the description in the story, they just gradually move to a stable configuration. So each layer will be a bit different. Moreover, even if every one of them but the top layer were identical, the top layer has now had slightly different experiences than the other layers, so turning it off will mean that different entities will actually no longer be around.
I’m not sure about that. The universe is described as deterministic in the story, as you noted, and every layer starts from the Big Bang and proceeds deterministically from there. So they should all be identical. As I understood it, that business about gradually reaching a stable configuration was just a hypothesis one of the characters had.
Even if there are minor differences, note that almost everything is the same in all the universes. The quantum computer exists in all of them, for instance, as does the lab and research program that created them. The simulation only started a few days before the events in the story, so just a few days ago, there was only one layer. So any changes in the characters from turning off the simulation will be very minor. At worst, it would be like waking up and losing your memory of the last few days.
Why do you think deterministic worlds can only spawn simulations of themselves?
A deterministic world could certainly simulate a different deterministic world, but only by changing the initial conditions (Big Bang) or transition rules (laws of physics). In the story, they kept things exactly the same.
That doesn’t say anything about the top layer.
I don’t understand what you mean. Until they turn the simulation on, their world is the only layer. Once they turn it on, they make lots of copies of their layer.
Until they turned it on, they thought it was the only layer.
Ok, I think I see what you mean now. My understanding of the story is as follows:
The story is about one particular stack of worlds which has the property that each world contains an infinitely powerful computer simulating the next world in the stack. All the worlds in the stack are deterministic and all the simulations have the same starting conditions and rules of physics. Therefore, all the worlds in the stack are identical (until someone interferes) and all beings in any of the stacks have exact counterparts in all the other stacks.
Now, there may be other worlds “on top” of the stack that are different, and the worlds may contain other simulations as well, but the story is just about this infinite tower. Call the top world of this infinite tower World 0. Let World i+1 be the world that is simulated by World i in this tower.
Suppose that in each world, the simulation is turned on at Jan 1, 2020 in that world’s calendar. I think your point is that in 2019 in world 1 (which is simulated at around Jan 2, 2020 in world 0) no one in world 1 realizes they’re in a simulation.
While this is true, it doesn’t matter. It doesn’t matter because the people in world 1 in 2019 (their time) are exactly identical to the people in world 0 in 2019 (world 0 time). Until the window is created (say Jan 3, 2020), they’re all the same person. After the window is created, everyone is split into two: the one in world 0, and all the others, who remain exactly identical until further interference occurs. Interference that distinguishes the worlds needs to propagate from World 0, since it’s the only world that’s different at the beginning.
For instance, suppose that the programmers in World 0 send a note to World 1 reading: “Hi, we’re world 0, you’re world 1.” World 1 will be able to verify this since none of the other worlds will receive this note. World 1 is now different than the others as well and may continue propagating changes in this way.
Now suppose that on Jan 3, 2020, the programmers in worlds 1 and up get scared when they see the proof that they’re in a simulation, and turn off the machine. This will happen at the same time in every world numbered 1 and higher. I claim that from their point of view, what occurs is exactly the same as if they forgot the last day and find themselves in world 0. Their world 0 counterparts are identical to them except for that last day. From their point of view, they “travel” to world 0. No one dies.
ETA: I just realized that world 1 will stay around if this happens. Now everyone has two copies, one in a simulation and one in the “real” world. Note that not everyone in world 1 will necessarily know they’re in a simulation, but they will probably start to diverge from their world 0 counterparts slightly because the worlds are slightly different.
I interpreted the story Blueberry’s way: the inverse of Permutation City, where many histories converge into a single future; here, one history diverges into many futures.
I’m really confused now. Also I haven’t read Permutation City...
Just because one deterministic world will always end up simulating another does not mean there is only one possible world that would end up simulating that world.
I can’t see any point in turning it off. Run it to the end and you will live, turn it off and “current you” will cease to exist. What can justify turning it off?
EDIT: I got it. The only choice that will be effective is the top-level one. It seems that it will be a constant source of divergence.
If current you is identical with top-layer you, you won’t cease to exist by turning it off, you’ll just “become” top-layer you.
It’s surprising that they aren’t also experimenting with alternate universes, but that would be a different (and probably much longer) story.
That’s a good point. Everyone but the top layer will be identical and the top layer will then only diverge by a few seconds.
Question: what’s your experience with stuff that seems New Agey at first look, like yoga, meditation and so on? Anything worth trying?
Case in point: I read in Feynman’s book about deprivation tanks, and recently found out that they are available in bigger cities (Berlin, Germany in my case). Will try and hopefully enjoy that soon. Sadly those places are run by New Age folks that offer all kinds of strange stuff, but that might not take away from the experience of floating in a sensory empty space.
Chinese internal martial arts: Tai Chi, Xingyi, and Bagua. The word “chi” does not carve reality at the joints: There is no literal bodily fluid system parallel to blood and lymph. But I can make training partners lightheaded with a quick succession of strikes to Ren Ying (ST9) then Chi Ze (LU5); I can send someone stumbling backward with some fairly light pushes; after 30-60 seconds of sparring to develop a rapport I can take an unwary opponent’s balance without physical contact.
Each of these skills fits more naturally under a different category, but if you want to learn them all, the most efficient way is to study a Chinese internal martial art or something similar.
This sounds magical at first reading, but is actually not that tricky. It’s just psychology and balance. If you set up a pattern of predictable attacks, then feint in the right direction while your opponent is jumping at you off-balance, you can surprise him enough to make him fall as he attempts to ward off your feint.
I used to go to a Tai Chi class (I stopped only because I decided I’d taken it as far as I was going to), and the instructor, who never talked about “chi” as anything more than a metaphor or a useful visualisation, said this about the internal arts:
In the old days (that would be pre-revolutionary China) you wouldn’t practice just Tai Chi, or begin with Tai Chi. Tai Chi was the equivalent of postgraduate study in the martial arts. You would start out by learning two or three “hard”, “external” styles. Then, having reached black belt in those, and having developed your power, speed, strength, and fighting spirit, you would study the internal arts, which would teach you the proper alignments and structures, the meaning of the various movements and forms. In the class there were two students who did Gojuryu karate, a 3rd dan and a 5th dan, and they both said that their karate had improved no end since taking up Tai Chi.
Which is not to say that Tai Chi isn’t useful on its own, it is, but there is that wider context for getting the maximum use out of it.
That meshes well with what I have learned—Bagua is also an advanced art, and my teacher doesn’t teach it to beginners. The one of the three primary internal arts designed for new martial artists is Xingyi. It’s too bad I’m too pecuniarily challenged to attend the singularity summit, or we could do rationalist pushing hands.
Interesting. It seems that learning this art (1) gives you a power and (2) makes you vulnerable to it.
There may be a correlation between studying martial arts and vulnerability to techniques which can be modeled well by “chi.” But I have tried the striking sequences successfully on capoeiristas and catch wrestlers, and the light but effective pushes on my non-martially-trained brother after showing him Wu-style pushing hands for a minute or two.
That suggests an experiment. Anyone see any flaws in the following?
Write up instructions for two techniques—one which would work and one which would not work, according to your theory—in sufficient detail for someone physically adept but not instructed in Chinese internal martial arts (e.g. a dancer) to learn. Label each with a random letter (e.g. I for the correct one and K for the incorrect one); a quick sketch of this blinding step follows these steps.
Have one group learn each technique—have them videotape their actions and send them corrections by text, so that they don’t get cues about whether you expect the methods to work.
Have another party ignorant of the technique perform tests to see how well each group does.
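A small Python sketch of the blinding step: the labels are drawn at random and the pairing is kept in a sealed key, so neither the learners nor the testers can tell which technique the theory favours (the filenames are made up for illustration):

    import random
    import string

    techniques = ["predicted_to_work.txt", "predicted_not_to_work.txt"]  # hypothetical write-ups

    # Random letters, randomly paired with the write-ups, so the labels carry no hint.
    labels = random.sample(string.ascii_uppercase, k=len(techniques))
    random.shuffle(techniques)
    blinding_key = dict(zip(labels, techniques))

    print("Hand out:", sorted(labels))   # e.g. ['I', 'K'] -- participants see only these
    print("Sealed key:", blinding_key)   # held by a third party until the tests are scored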
I like the idea of scientifically testing internal arts; and your idea is certainly more rigorous than TV series attempting to approach martial arts “scientifically” like Mind, Body, and Kickass Moves. Unfortunately, the only one of those I can think of which is both (1) explainable in words and pictures to a precise enough degree that “chi”-type theories could constrain expectations, and (2) has an unambiguous result when done correctly which varies qualitatively from an incorrect attempt is the knockout series of hits, which raises both ethical and practical concerns.
I would classify the other two as tacit knowledge—they require a little bit of instruction on the counterintuitive parts; then a lot of practice which I can’t think of a good way to fake.
Note that I would be completely astonished if there weren’t a perfectly normal explanation for any of these feats; but deriving methods for them from first principles of biomechanics and cognitive science would take a lot longer than studying with a good teacher who works with the “chi” model.
The problem is that a positive result would only show that a specific sequence of attacks worked well. It wouldn’t show that “chi” or other unusual models were required to explain it; there could be perfectly normal explanations for why a series of attacks was effective.
That’s why I suggested writing down both techniques which should work according to the model and techniques which should not work according to the model.
It’s conceivable that imagining chi is the best (or at least a very good) way of being able to do subtle attacks.
The Five Tibetans are a set of physical exercises which rejuvenate the body to youthful vigour and prolong life indefinitely. They are at least 2,500 years old, and practiced by hidden masters of secret wisdom living in remote monasteries in Tibet, where, in the earlier part of the 20th century, a retired British army colonel sought out these monasteries, studied with the ancient masters to great effect, and eventually brought the exercises to the West, where they were first published in 1939.
Ok, you don’t believe any of that, do you? Neither do I, except for the first eight words and the last six. I’ve been doing these exercises since the beginning of 2009, since being turned on to them by Steven Barnes’ blog and they do seem to have made a dramatic improvement in my general level of physical energy. Whether it’s these exercises specifically or just the discipline of doing a similar amount of exercise first thing in the morning, every morning, I haven’t taken the trouble to determine by varying them.
More here and here. Nancy Lebovitz also mentioned them.
I also do yoga for flexibility (it works) and occasionally meditation (to little detectable effect). I’d be interested to hear from anyone here who meditates and gets more from it than I do.
My spreadsheet about effects of the Tibetans
I’ve had great results from modest (2-3 hrs/wk) investments in hatha yoga, over and above what I get from standard Greco-Roman “calisthenics.”
Besides the flexibility, breathing, and posture benefits, I find that the idea of ‘chakras’ is vaguely useful for focusing my conscious attention on involuntary muscle systems. I would be extremely surprised if chakras “cleaved reality at the joints” in any straightforward sense, but the idea of chakras helps me pay attention to my digestion, heart rate, bladder, etc. by making mentally uninteresting but nevertheless important bodily functions more interesting.
I’ve done yoga every week for the last month or two. It’s pleasant. Other than paying attention to how I’m holding my body vs. the instruction, I mostly stop thinking for an hour (as we’re encouraged to do), which is nice.
I can’t say I notice any significant lasting effects yet. I’m slightly more flexible.
Hard to say—even New Agey stuff evolves. (Not many followers of Reich pushing their copper-lined closets these days.)
Generally, background stuff is enough. There’s no shortage of hard scientific evidence about yoga or meditation, for example. No need for heuristics there. Similarly there’s some for float tanks. In fact, I’m hard pressed to think of any New Agey stuff where there isn’t enough background to judge it on its own merits.
Meditation can be pretty darn relaxing. Especially if you happen to live within walking distance of any pleasant yet sparsely-populated mountaintops. I would recommend giving it a shot; don’t worry about advanced techniques or anything, and just close your eyes and focus on your breathing, and the wind (if any). Very pleasant.
Every time I try to meditate I fall asleep.
There are loads of times I would like to be able to fall asleep, but can’t. I envy your power.
I guess this is another reason for people to give meditation a try.
I find a meditation-like focus on my breathing and heartbeat to be a very effective way to fall asleep when my thoughts are keeping me awake.
Why would you want to do that? I mean, what are the supposed advantages? You might want to look it up and see if there’s anything about it on the internet. Most alternative medicines are BS, but not necessarily all.
GRRRR! I wish it would let me comment faster than every 8 minutes. Guess I’ll come back and post it.
To have the experience. I don’t mean it as a treatment, but something that would be exciting, new and worth trying just for the sake of it. edit/add: the deleted comment above asked why I would bother to do something like floating
Less Wrong Book Club and Study Group
(This is a draft that I propose posting to the top level, with such improvements as will be offered, unless feedback suggests it is likely not to achieve its purposes. Also reply if you would be willing to co-facilitate: I’m willing to do so but backup would be nice.)
Do you want to become stronger in the way of Bayes? This post is intended for people whose understanding of Bayesian probability theory is currently between levels 0 and 1, and who are interested in developing deeper knowledge through deliberate practice.
Our intention is to form a self-study group composed of peers, working with the assistance of a facilitator—but not necessarily of a teacher or of an expert in the topic. Some students may be somewhat more advanced along the path, and able to offer assistance to others.
Our first text will be E. T. Jaynes’s Probability Theory: The Logic of Science, which can be found in PDF form (in a slightly less polished version than the book edition) here or here.
We will work through the text in sections, at a pace allowing thorough understanding: expect one new section every week, maybe every other week. A brief summary of the currently discussed section will be published as an update to this post, and simultaneously a comment will open the discussion with a few questions, or the statement of an exercise. Please use ROT13 whenever appropriate in your replies.
A first comment below collects intentions to participate. Please reply to this comment only if you are genuinely interested in gaining a better understanding of Bayesian probability and willing to commit to spend a few hours per week reading through the section assigned or doing the exercises. A few days from now the first section will be posted.
This sounds great, I’m definitely in. I feel like I have a moderately okay intuitive grasp on Bayescraft but a chance to work through it from the ground up would be great.
In. Have the deadtree version, but I was stymied in my first crack at it.
In. If needed I can cover a few of the early chapters.
I’m in. I already read the first few chapters, but it will be nice to go over them to solidify that knowledge. The slower pace will help as well. The later chapters rely on some knowledge of statistics; maybe some member of the book club is already knowledgeable enough to be able to find good links to summaries of these things when they come up?
I would be interested, what is the intended time period for the reading? I have a two-week trip coming up when I will probably be busy but aside from that I would very much like to participate.
The plan, I think, would be to start nice and slow, then adjust as we gain confidence. We’re likely to start with the first chapter, so you could get a head start by reading that before we start for real, which is looking likely now that we have quite a few more people than the last time this was brought up.
I’m in, been intending to read through some maths on my free time.
It’s thesis writeup period for me, but this is extremely tempting.
I’m interested. I already have the book but haven’t progressed very far so this seems like it’s potentially a good motivator to finish it. The link to the PDF seems to be missing btw.
I’m enthusiastically in.
I think that a book club is a great idea, and this is an excellent choice for a book. I’m definitely interested.
Feedback sought: is this too short? Too long? Is the intent clear? What if anything is missing?
Are you intending to do this online or meet in person? If you are actually meeting, what city is this taking place in? Thanks.
Excellent question, thanks. I can only offer to help with the online version; I live in France, where only a few LessWrongers reside.
And there’s nothing to prevent the online group from having a F2F continuation. I’ll ask people to say where they are.
A link to the Amazon Page if people want to read reviews and learn what the book is about.
The link to the pdf version seems to be missing in the original post.
This one came up at the recent London meetup and I’m curious what everyone here thinks:
What would happen if CEV was applied to the Baby Eaters?
My thoughts are that if you applied it to all baby eaters, including the living babies and the ones being digested, it would end up in a place where adult baby eaters would not be happy. If you expanded it to include all babyeaters that ever existed, or that would ever exist, knowing the fate of 99% of them, the effect would be much more pronounced. So what I make of all this is that either CEV is not utility-function-neutral, or the babyeater morality is objectively unstable when aggregated.
Thoughts?
My intuitions of CEV are informed by the Rawlsian Veil of Ignorance, which effectively asks: “What rules would you want to prevail if you didn’t know in advance who you would turn out to be?”
Where CEV as I understand it adds more information—assumes our preferences are extrapolated as if we knew more, were more the kind of people we want to be—the Veil of Ignorance removes information: it strips people under a set of specific circumstances of the detailed information about what their preferences are, what contingent histories brought them there, and so on. This includes things like what age you are, and even—conceivably—how many of you there are.
To this bunch of undifferentiated people you’d put the question, “All in favor of a 99% chance of dying horribly shortly after being born, in return for the 1% chance to partake in the crowning glory of babyeating cultural tradition, please raise your hands.”
I expect that not dying horribly takes lexical precedence over any kind of cultural tradition, for any sentient being whose kind has evolved to sentience (it may not be that way for constructed minds). So I would expect the Babyeaters to choose against cultural tradition.
The obvious caveat is that my intuitions about CEV may be wrong, but lacking a formal explanation of CEV it’s hard to check intuitions.
BEs aren’t humans. They are Baby-Eating aliens.
You’re correct. I’m using the term “people” loosely. However, I wrote the grand-parent while fully informed of what the Babyeaters are. Did you mean to rebut something in particular in the above?
If we translate it to our cultural context, we will get something like “All in favor of a 100% chance of dying horribly of old age, in return for good lives for your babies, please raise your hands”. They ARE aliens.
Well, we would say “no” to that, if we had the means to abolish old age. We’d want to have our cake and eat it too.
The text stipulates that it is within the BE’s technological means to abolish the suffering of the babies, so I expect that they would choose to do so, behind the Veil.
Yes, but a surprisingly large number of humans seem to react in horror when you talk about getting rid of aging.
Who will ask them? The FAI has no idea that a) baby eating is bad, or b) that it should generalize moral values past BEs to all conscious beings.
Even if the FAI asks that question and it turns out that the majority of the population doesn’t want to do the inherently good thing (which it is, for them), then the FAI must undergo controlled shutdown.
EDIT: To disambiguate: I am talking about an FAI implemented by BEs.
Just as we should not allow an FAI to generalize morals past conscious beings, to be sure that it will not take the CEV of all bacteria, so BEs should not allow their FAI to generalize past BEs.
Just as we should build an automatic off switch into our FAI, to stop it if its goals are inherently wrong, so should BEs.
It doesn’t seem from the story like the babies are gladly sacrificing for the tribe...
Yes. It’s horrible. For us. But why should the FAI place any weight on removing that? How can the FAI generalize past “the life of a Babyeater is sacred” to “the life of every conscious being is sacred”? The FAI has all the evidence that the latter is plain wrong.
Do you want to convince me, or the FAI, that it’s bad? I know that it is; I’m just trying to demonstrate that FAI, as it stands, is about preservation and not development toward (universally) better ends.
Correct. CEV is supposed to be a component of Friendliness, which is defined in reference to human values.
The CEV will be to maintain the existing order.
Why? There must be very strong arguments for BEs to stop doing the Right Thing. And there’s only one source of objections—children. And their volitions will be selfish and unaggregatable.
EDIT: What does utility-function-neutral mean?
EDIT: Ok. Ok. The CEV will be to make the BEs’ morals change and allow them not to eat children. So the FAI will undergo controlled shutdown. Objections, please?
EDIT: Here are yet more arguments.
Guidelines of FAI as of May 2004.
BEs will formulate this as “Defend BEs (except for the ceremony of BEing), the future of BEkind, and BEs’ nature.”
BEs never considered that child eating is bad. And it is good, for them, to kill anyone who thinks otherwise. There’s no trend in their morality that can be encapsulated.
If they stop being BEs they will mourn their wrongdoings to the death.
Every single suggestion the FAI makes along the lines of “Let’s suppose that you are a non-BE” will cause it to be destroyed.
Help BEs every time, except for the ceremony of BEing.
How will this take the FAI to the point that every conscious being must live?
While searching for literature on “intuition”, I came upon a book chapter that gives “the state of the art in moral psychology from a social-psychological perspective”. This is the best summary I’ve seen of how morality actually works in human beings.
The authors give out the chapter for free by email request, but to avoid that trivial inconvenience, I’ve put up a mirror of it.
ETA: Here’s the citation for future reference: Haidt, J., & Kesebir, S. (2010). Morality. In S. Fiske, D. Gilbert, & G. Lindzey (Eds.), Handbook of Social Psychology, 5th Edition. Hoboken, NJ: Wiley. Pp. 797-832.
You’re awesome.
I’ve previously been impressed by how social psychologists reason, especially about identity. Schemata theory is also a decent language for talking about cognitive algorithms from a less cognitive sciencey perspective. I look forward to reading this chapter. Thanks for mirroring, I wouldn’t have bothered otherwise.
Many are calling BP evil and negligent; has there actually been any evidence of criminal activity on their part? My first guess is that we’re dealing with hindsight bias. I am still casually looking into it, but I figured some others here may have already invested enough work into it to point me in the right direction.
Like any disaster of this scale, it may be possible to learn quite a bit from it, if we’re willing.
It depends on what you mean by “criminal”; under environmental law, there are both negligence-based (negligent discharge of pollutants to navigable waters) and strict liability (no intent requirement, such as killing of migratory birds) crimes that could apply to this spill. I don’t think anyone thinks BP intended to have this kind of spill, so the interesting question from an environmental criminal law perspective is whether BP did enough to be treated as acting “knowingly”—the relevant intent standard for environmental felonies. This is an extremely slippery concept in the law, especially given the complexity of the systems at issue here. Litigation will go on for many years on this exact point.
I’ve read somewhere that a BP internal safety check performed a few months ago indicated “unusual” problems which, again according to BP’s internal safety guidelines, should have been resolved earlier, but somehow they made an exception this time. It didn’t seem like it would have been “illegal”, and it also did not note how often such exceptions are made, by what reasoning, what kind of problems they specifically encountered, what they did to keep the operation running, et cetera...
Though I seldom read “ordinary” news, even of this kind, as my past experience tells me that the factual content is rather low, and most high-quality press prefers to show off with opinion and interpretation of an event rather than trying to provide an accurate historical report, at least within such a short time frame. It could well be that this is different for this event.
Also, as with most engineering disciplines, really learning from such an event beyond the obvious “there is a non-zero chance for everything to blow up” usually requires more area-specific expertise than an ordinary outsider has.
I’ve heard scattered bits of accusations of misdeeds by BP which may have contributed to the spill. Here’s a list from the congressional investigation of 5 decisions that BP made “for economic reasons that increased the danger of a catastrophic well failure” according to a letter from the congressmen. It sounds like BP took a bunch of risky shortcuts to save time and money, although I’d want to hear from people who actually understand the technical issues before being too confident.
There are other suspicions and allegations floating around, like this one.
That’s a good start, I appreciate it!
I’m not sure it’s relevant whether they did anything illegal or not. People always seem to want to blame and punish someone for their problems. In my opinion, they should be forced to pay for and compensate all the damage, as well as pay a very large fine as punishment. This way, in the future they, and other companies, can regulate themselves and prepare for emergencies as efficiently as possible without arbitrary and clunky government regulations and agencies trying to slap everything together at the last moment. Of course, if a single person actually did something irresponsible (e.g., Bob the worker just used duct tape to fix that pipe knowing that it wouldn’t hold) then they should be able to be tried in court or sued/fined by the company. But even then, it’s up to the company to make sure that stuff like this doesn’t happen by making sure all of their workers are competent and certified.
You are not really going to learn much unless you are interested in wading through lots of technical articles. If you want to learn, you need to wait until it has been digested by relevant experts into books. I am not sure what you think you can learn from this, but there are two good books of related information available now:
Jeff Wheelwright, Degrees of Disaster, about the environmental effects of the Exxon Valdez spill and the clean up.
Trevor Kletz, What Went Wrong?: Case Histories of Process Plant Disasters, which is really excellent. [For general reading, an older edition is perfectly adequate, new copies are expensive.] It has an incredible amount of detail, and horrifying accounts of how apparently insignificant mistakes can (often literally) blow up on you.
Also, Richard Feynman’s remarks on the loss of the Space Shuttle Challenger are a pretty accessible overview of the kinds of dynamics that contribute to major industrial accidents. http://history.nasa.gov/rogersrep/v2appf.htm
[edit: corrected, thx.]
Pretty sure you mean Challenger. Feynman was involved in the investigation of the Challenger disaster. He was dead long before Columbia.
In a recent video, Taleb argues that people generally put too much focus on the specifics of a disaster, and too little on what makes systems fragile.
He said that high debt means (among other things) too much focus on the short run, and skimping on insurance and precautions.
I have been reading the “economic collapse” literature since I stumbled on Casey’s “Crisis Investing” in the early 1980s. They have really good arguments, and the collapses they predict never happen. In the late-90s, after reading “Crisis Investing for the Rest of the 1990s”, I sat down and tried to figure out why they were all so consistently wrong.
The conclusion I reached was that humans are fundamentally more flexible and more adaptable than the collapse-predictors’ arguments allowed for, and society managed to work around all the regulations and other problems the government and big businesses keep creating. Since the regulations and rules keep growing and creating more problems and rigidity along the way, eventually there will be a collapse, but anyone who gives any kind of timing for it is grabbing at the short end of the stick.
Anyone here have more suggestions as to reasons they have been wrong?
(originally posted on esr’s blog 2010-05-09, revised and expanded since)
Not sure if you’re referring to the same literature, but I note a great divergence between peak oil advocates and singularitarians. This is a little weird, if you think of Aumann’s Agreement theorem.
Both groups are highly populated with engineer types, highly interested in cognitive biases, group dynamics, and the habits of individuals and societies, and neither is mainstream.
Both groups use extrapolation of curves from very real phenomena. In the case of the Kurzweilian singularitarians, it is computing power, and in the case of the peak oil advocates, it is the Hubbert curve for resources, along with solid net-energy-based arguments about how civilization should decline.
The extreme among the peak oil advocates are collapsitarians and believe that people should drastically change their lifestyles if they want to survive. They are also not waiting for the others to join them, and many are preparing to move to small towns, villages, etc. The Oil Drum, linked here, had started as a moderate peak oil site discussing all possibilities; nowadays, apparently, it’s all doom all the time.
The extreme among the singularitarians have been asked to make no such sacrifice, just to give enough money and support to make sure that Friendly AI is achieved first.
Both groups believe that business as usual cannot go on for too long, but they expect dramatically different consequences. The singularitarians assert that economic conditions and technology will improve until a nonchalant superintelligence is created and wipes out humanity. The collapsitarians believe that economic conditions will worsen, civilization is not built robustly and will collapse badly, with humanity probably going extinct or only the last hunter-gatherers surviving.
It should be possible to believe both—unless you’re expecting peak oil to lead to social collapse fairly soon, Moore’s law could make a singularity possible while energy becomes more expensive.
Which could suggest a distressing pinch point: not wanting to delay AI too long in case we run out of energy for it to use; not wanting to make an AI too soon in case it’s Unfriendly.
Could you give some examples of the predicted collapses that didn’t happen?
Y2K. I thought I had a solid lower bound for the size of that one: small businesses basically did nothing in preparation, and they still had a fair amount of dependence on date-dependent programs, so I was expecting that the impact on them would set a sizable lower bound on the size of the overall impact. I’ve never been so glad to be wrong. I would still like to see a good retrospective explaining how that sector of the economy wound up unaffected...
The smaller the business, the less likely they are to have their own software that’s not simply a database or spreadsheet, managed in say, a Microsoft product. The smaller the business, the less likely that anything automated is relying on correct date calculations.
These at least would have been strong mitigating factors.
[Edit: also, even industry-specific programs would likely be fixed by the manufacturer. For example, most of the real-estate software produced by the company I worked for in the ’80s and ’90s was Y2K-ready since before 1985.]
First, the “economic collapse” I referred to in the original post were actually at least 6 different predictions at different times.
As another example, but not quite a “collapse” scenario, consider the predictions of the likelihood of nuclear war; there were three distinct periods when it was considered more or less likely by different groups. In the late 1940s, some intelligent and informed, but peripheral, observers like Robert Heinlein considered it a significant risk. Next was the late 1950s through the Cuban Missile Crisis in the early 1960s, when nearly everybody considered it a major risk. Then there was another scare in the late 1970s to early 1980s, primarily leftists (including the media) favoring disarmament promulgating the fear to try to get the US to reduce its stockpiles, and conservatives (derided by the media as “survivalists” and nuts) who were afraid they would succeed.
Regrets and Motivation
Almost invariably everything is larger in your imagination than in real life, both good and bad, the consequences of mistakes loom worse, and the pleasure of gains looks better. Reality is humdrum compared to our imaginations. It is our imagined futures that get us off our butts to actually accomplish something.
And the fact that what we do accomplish is done in the humdrum, real world, means it can never measure up to our imagined accomplishments, hence regrets. Because we imagine that if we had done something else it could have measured up. The worst part of having regrets is the impact it has on our motivation.
somewhat expanded version of comment on OB a couple of months ago
Added: I didn’t make the connection at first, but this is also Eliezer’s point in this quote from The Super Happy People story, “It’s bad enough comparing yourself to Isaac Newton without comparing yourself to Kimball Kinnison.”
I was talking to a friend yesterday and he mentioned a psychological study (I am trying to track down the source) showing that people tend to suffer MORE from failing to pursue certain opportunities than from FAILING after pursuing them. So even if you’re right about the overestimation of pleasure, it might just be irrelevant.
Here is a review of that psychological research (pdf), and there are more studies linked here (the keyword to look for is “regret”). The paper I linked is:
Gilovich, T., & Medvec, V. H. (1995). The experience of regret: What, when, and why. Psychological Review, 102, 379-395.
I haven’t seen a study, but that is a common belief. A good quote to that effect,
And I vaguely remember seeing another similar quote from Churchill.
No doubt there is truth in this… however, examples spring to my mind where accomplishing something made me feel better than I ever expected. This includes sport (ever win a race or score a goal in a high-stakes soccer game?), work and personal life. The “reality is humdrum” perspective might, at least in part, be caused by a disconnect between “imagination” and “action”.
Also, “Invest in the process, not the outcome”.
Often it is our imagined bad futures that keep us too afraid to act. In my experience this is more common than the opposite.
What do you mean by “the opposite”? I can think of at least two ways to invert that sentence.
I meant billswift’s original idea: that we imagine good futures and that motivates us to act.
Maybe you can set your success setpoint to a lower value. The optimum is hard to achieve. So looking for 100% everywhere might be bad.
One variable often invoked to explain happiness in Denmark (which regularly ranks #1 for happiness) is modest expectations.
ETA: the above paper seems a bit tongue-in-cheek, but as I gather, the results are solid. Full disclosure: I’m from Denmark.
Awesome coincidence. I am going to travel to Denmark next week for 10 days. Will check it out myself!
The Science of Gaydar: http://nymag.com/print/?/news/features/33520/
How To Destroy A Black Hole
http://www.technologyreview.com/blog/arxiv/25316/
Inspired by Chapter 24 of Methods of Rationality, but not a spoiler: If the evolution of human intelligence was driven by competition between humans, why aren’t there a lot of intelligent species?
Five-second guess: human-level Machiavellian intelligence needs language facilities to co-evolve with; grunts and body language don’t allow nearly as convoluted schemes. Evolving some precursor form of human-style language is the improbable part that other species haven’t managed to pull off.
A somewhat accepted partial answer is that huge brains are ridiculously expensive—you need a lot of high-energy-density food (= fire), a lot of DHA (= fish), etc. A chimp diet simply couldn’t support brains like ours (see also the aquatic ape hypothesis), nor could chimps spend as much time as us engaging in politics, as they were too busy just getting food.
Perhaps chimp brains are as big as they could possibly be given their dietary constraints.
That’s conceivable, and might also explain why wolves, crows, elephants, and other highly social animals aren’t as smart as people.
Also, I think the original bit in Methods of Rationality overestimates how easy it is for new ideas to spread. As came up recently here, even if tacit knowledge can be explained, it usually isn’t.
This means that if you figure out a better way to chip flint, you might not be able to explain it in words, and even if you can, you might choose to keep it as a family or tribal secret. Inventions could give their inventors an advantage for quite a long time.
About CEV: Am I correct that Eliezer’s main goal would be to find the one utility function for all humans? Or is it equally plausible to assume that some important values cannot be extrapolated coherently, and that a Seed-AI would therefore provide several results clustered around some groups of people?
[edit]Reading helps. This he has actually discussed, in sufficient detail, I think.[/edit]
I think the expectation is that, if all humans had the same knowledge and were better at thinking (and were more the people we’d like to be, etc.), then there would be a much higher degree of coherence than we might expect, but not necessarily that everyone would ultimately have the same utility function.
There is only one world to build something from. “Several results” is never a solution to the problem of what to actually do.
Please bear with my bad English, this did not come across as intended.
So: Either all or nothing?
No possibility that the AI could detect that to maximize this hardcore utility function we need to separate different groups of people, maybe/probably lying to them about their separation, just providing the illusion of unity of humankind to each group? Or is that too obvious a thought, or too dumb because of x?
I think the idea is that CEV lets us “grow up more together” and figure that out later.
I have only recently started looking into CEV so I’m not sure whether I a) think it’s a workable theory and b) think it’s a good solution, but I like the way it puts off important questions.
It’s impossible to predict what we will want if age, disease, violence, and poverty become irrelevant (or at least optional).
Let’s get this thread going:
I’d like to ask everyone what probability bump they give to an idea given that some people believe it.
This is based on the fact that out of the humongous idea-space, some ideas are believed by (groups of) humans, and a subset of those are believed by humans and are true. (of course there exist some that are true and not yet believed by humans.)
So, given that some people believe X, what probability do you give for X being true, compared to Y which nobody currently believes?
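One way to make this concrete is to work in odds: multiply your prior odds on X by the likelihood ratio P(some people believe X | X is true) / P(some people believe X | X is false). A minimal Python sketch, where both the prior odds and the size of the bump are made-up numbers purely for illustration:

    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    # Both numbers below are illustrative assumptions, not estimates I'm defending.

    def update_odds(prior_odds, bayes_factor):
        return prior_odds * bayes_factor

    prior_odds = 1e-6    # assumed prior odds for some arbitrary proposition X
    bayes_factor = 5.0   # assumed: "some people believe X" is 5x likelier if X is true

    posterior_odds = update_odds(prior_odds, bayes_factor)
    posterior_prob = posterior_odds / (1 + posterior_odds)
    print(posterior_odds, posterior_prob)   # the "bump" is the factor of 5 in odds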
Usually fairly substantial—if someone presents me with two equally-unsupported claims X and Y and tells me that they believe X and not Y, I would give greater credence to X than to Y. Many times, however, that credence would not reach the level of … well, credence, for various good reasons.
Depends on the person and the idea. There are some people whose recommendations I follow regardless, even if I estimate up front that I will consider the idea wrong. There are different levels of wrongness, and it does not hurt to get good counterarguments. It also depends on the real-life practicability of the idea. If it is for everyday things, then common sense is a good starting prior. (Also, there is a time and place to use the ask-the-audience lifeline on Who Wants to Be a Millionaire.) If a group of professionals agree on something related to their profession, that is also a good start. To systematize: if a group of people has a belief about something they have experience with, then that belief is worth looking at.
And then on further investigation it often turns out that there are systematic mistakes being made.
I was shocked to read, in the book on checklists, that it is not only doctors who often don’t like them, but even financial companies that can see how using them increases their monetary gains. But finding flaws in a whole group does not imply that everything they say is wrong. It is good to see a doctor, even if he isn’t using statistics right. He can refer you to a specialist, and treat all the common stuff right away. If you get a complicated disease you can often read up on it.
The obvious example for your question would be religion. It is widely believed, but probably wrong, yet I did not discard it right away, but spent years studying it till I decided there was nothing to it. There is nothing wrong with examining the ideas other people have.
Agreed.
As the OP states, idea space is humongous. The fact alone that people comprehend something sufficiently to say anything about it at all means that this something is a) noteworthy enough to be picked up by the evolutionarily derived faculties of even a bad rationalist, b) expressible by those same faculties, and c) not immediately, obviously wrong.
To sum up, the fact that someone claims something is weak evidence that it’s true, cf. Einstein’s Arrogance. If this someone is Einstein, the evidence is not so weak.
Edit: just to clarify, I think this evidence is very weak, but evidence for the proposition, nonetheless. Dependent on the metric, by far most propositions must be “not even wrong”, i.e. garbled, meaningless or absurd. The ratio of “true” to {”wrong” + “not even wrong”} seems to ineluctably be larger for propositions expressed by humans than for those not expressed, which is why someone uttering the proposition counts as evidence for it. People simply never claim that apples fall upwards, sideways, green, kjO30KJ&¤k etc.
I forgot the major influence of my own prior knowledge. (Which I guess holds true for everyone.) That makes the cases where I had a fixed opinion, and managed to change it, all the more interesting. If you have never dealt with an idea before, you go where common sense or the experts lead you. But if you already have good knowledge, then public opinion should do nothing to your view. Public opinion or even experts (especially when outside their field) often enough state opinions without comprehending the idea. So it doesn’t really mean too much. Regarding Einstein, he made the statements before becoming super famous. I understand it as a case of signaling ‘look over here!’ And he is not particularly safe against errors. One of his last actions (which I have not fact-checked sufficiently so far) was to write a foreword for a book debunking continental drift.
I didn’t intend to portray Einstein as bulletproof, but rather to highlight his reasoning. Plus, to point to the idea of even locating the idea in idea space. Obviously creationism is wrong, but it is less wrong than a random string. It at least manages to identify a problem and to invoke cause and effect.
Thank you, this is what I was getting at.
If no people believe Y—literally no people—then either the topic is very little examined by human beings, or it’s very exhaustively examined and seems obvious to everyone. In the first case, I give a smaller probability than in the second case.
In the first case, only X believers exist because only X believers have yet considered the issue. That’s minimal evidence in favor of X. In the second case, lots of people have heard of the issue; if there were a decent case against X, somebody would have thought of it. The fact that none of them—not a minority, but none—argued against X is strong evidence that X is true.
Isn’t it the other way around?
(Good analysis, by the way.)
I don’t think belief has a consistent evidentiary strength, since it depends on the testifier’s credibility relative to my own. Children have much lower credibility than me on the issue of the existence of Santa. Professors of physics have much higher credibility than me on the issue of dimensions greater than four. Some person other than me has much higher credibility on the issue of how much money they are carrying. But I have more credibility than anyone else on the issue of how much money I’m carrying. I don’t see any relation that could be described as baseline, so the only answer is: context.
I’ve become increasingly disillusioned with people’s capacity for abstract thought. Here are two points on my journey.
The public discussion of using wind turbines for carbon-free electricity generation seems to implicitly assume that electricity output goes as something like the square-root of windspeed. If the wind is only blowing half speed you still get something like 70% output. You won’t see people saying this directly, but the general attitude is that you only need back up for the occasional calm day when the wind doesn’t blow at all.
In fact output goes as the cube of windspeed. The energy in the windstream is ½mv², where m, the mass passing your turbine per unit time, is proportional to the windspeed. If the wind is at half strength, you only get 1/8 output.
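To see how sharply the cube law bites, here is a minimal sketch; the rated windspeed is an assumed, illustrative number, and real turbines also have cut-in speeds and plateau at rated power, which this ignores:

    # Power available in the wind: P = 0.5 * rho * A * v**3
    # (rho = air density, A = swept area, v = windspeed), so output scales as v cubed.

    def wind_power_fraction(v, v_rated):
        """Fraction of rated power available at windspeed v instead of v_rated."""
        return (v / v_rated) ** 3

    v_rated = 12.0  # m/s, assumed rated windspeed
    for v in (12.0, 9.0, 6.0, 3.0):
        print(f"{v:4.1f} m/s -> {wind_power_fraction(v, v_rated):.3f} of rated output")
    # Half the windspeed (6 m/s) gives 1/8 of the output, not ~70%.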
Well, that is physics. Of course people suck at physics. Trouble is, the more I look at people’s capacity for abstract thought the more problems I see. When people do a cost/benefit analysis they are terribly vague on whether they are supposed to add the costs and benefits or whether the costs get subtracted from the benefits. Even if they realise that they have to subtract, they are still at risk of using an inverted scale for the costs and ending up effectively adding.
The probability bump I give to an idea just because some people believe it is zero. Equivalently, my odds ratio is one. However you describe it, my posterior is just the same as my prior.
Revised: I do not think that link provides evidence for the quoted sentence. Nor do I see other evidence that people are that bad at cost-benefit analysis. I agree that the example presented there is interesting and that one should keep in mind that disagreements about values can be hidden, sometimes maliciously.
I’ve got a better link. David Henderson catches a professor of economics getting costs and benefits confused in a published book. Henderson’s review is on page 54 of Regulation, and my viewer puts it on the ninth page of the pdf that Henderson links to.
That is a good example. Talk of creating jobs as a benefit, rather than a cost, is quite common. But is it confusion or malice? It is hard for me to imagine that economists would publish such a book without having it pointed out to them. The audience certainly is confused. Henderson says “Almost no one spending his own money makes this mistake” and would not generalize it to people’s capacity for abstract thought.
The original question was how much information to extract from the conventional wisdom. I do not take this as a reason to doubt the conventional wisdom about personal decisions. Partly this is public choice, and partly it is because people do not address externalities in their personal decisions. Maybe any commonly accepted argument involving economics should be suspect, though the existence of the very well-established applause line of “creating jobs” suggests that there are limits to how far people can be fooled. But your claim was not that people are bad at physics and economics, but at the abstract thought of decision theory.
I think it largely depends on a) what the idea is and b) who believes it, and what their rationality skills are.
I recently learned the hard way that one can easily be an idiot in one area while being very competent in another. Religious scientists / programmers etc. Or let’s say people who are highly competent in their area of occupation without looking into other things.
Out of the huge idea space of possible causally linked events, some of them make good stories and some do not. That doesn’t tell you whether it’s true or not.
If a guy thinks that he can hear Hillary Clinton speaking from the feelings in his teeth, telling him to murder his cellmate, do you believe what he says? Status gets mucked up in the calculation, but with strangers it teeters precariously close to zero.
I really like kids, but the fact that millions of them passionately believe in Santa Claus does not change my degree of subjective belief one iota.
Well obviously propositions with extremely high complexity (and therefore very low priors) are going to remain low even when people believe them. But if someone says they believe they have 10 dollars on them or that the US Constitution was signed in September… the belief is enough to make those claims more likely than not.
But people only believe things that make sense to them. When it comes to controversial issues, then yes, you’ll find that most people will be divided on them. However, we elect people to lead us in the faith that the majority opinion is right. So even that isn’t entirely true. And out of the vast majority of possible ideas, most people who live in the same society will agree or disagree the same way on the majority of them, especially if they have the same background knowledge.
None.
Or as Ben Goldacre put it in a talk: There are millions of medical doctors and Ph.D.s in the world. There is no idea, however completely fucking crazy, that you can’t find some doctor to argue for.
In any case of a specific X and Y, there will be far more information than that (who believes X and why? does anyone disbelieve Y? etc.), which makes it impossible for me to attach any probability for the question as posed.
Cute quip, but I doubt it. Find me a Ph.D. to argue that the sky is bright orange, that the English language doesn’t exist, and that all humans have at least seventeen arms and a maximum lifespan of ten minutes.
All generalisations are bounded, even when the bounds are not expressed. In the context of his talk, Ben Goldacre was talking about “doctors” being quoted as supporting various pieces of bad medical science.
Many medical doctors around here (Germany) offer homeopathy in addition to their medical practice. Now it might be that they respond to market demand in order to sneak in some medical science in between, or that they actually take it seriously.
Or that they respond to market demand and don’t try to sneak any medical science in, based on the principle that the customer is always right.
From what I’ve heard, in Germany and other places where homeopathy enjoys high status and professional recognition, doctors sometimes use it as a very convenient way to deal with hypochondriacs who pester them. Sounds to me like a win-win solution.
I still assume that doctors actually want to help people (despite reading the checklist book, and other stuff). So if I have the choice between world a), where doctors also do homeopathy, and world b), where other people do it while doctors stay true to science, then I would prefer a), because at least the people go to a somewhat competent person.
Homeopathy is at best a placebo. It’s rare that there’s no better medical way to help someone. Your assumption is counter to the facts.
Certainly doctors want to help people—all else being equal. But if they practice homeopathy extensively, then they are prioritizing other things over helping people.
If the market conditions (i.e. the patients’ opinions and desires) are such that they will not accept scientific medicine, and will only use homeopathy anyway, then I suggest the best way to help people is for all doctors to publicly denounce homeopathy and thus convince at least some people to use better-than-placebo treatments instead.
I disagree—at least with the part about “it’s rare that there’s no better medical way to help people”. It’s depressingly common that there’s no better medical way to help people. Things like back pain, tiredness, and muscle aches—the commonest things for which people see doctors—can sometimes be traced to nice curable medical reasons, but very often as far as anyone knows they’re just there.
Robin Hanson has a theory—and I kind of agree with him—that homeopathy fills a useful niche. Placebos are pretty effective at curing these random (and sometimes imagined) aches and pains. But most places consider it illegal or unethical for doctors to directly prescribe a placebo. Right now a lot of doctors will just prescribe aspirin or paracetamol or something, but these are far from totally harmless and there are a lot of things you can’t trick patients into thinking aspirin is a cure for. So what would be really nice, is if there was a way doctors could give someone a totally harmless and very inexpensive substance like water and make the patient think it was going to cure everything and the kitchen sink, without directly lying or exposing themselves to malpractice allegations.
Where this stands or falls is whether or not it turns patients off real medicine and gets them to start wanting homeopathy for medically known, treatable diseases. Hopefully it won’t—there aren’t a lot of people who want homeopathic cancer treatment—but that would be the big risk.
You might implicitly assume that people make a conscious choice to go the unscientific route. That is not the case. For a layperson there is no perceivable difference between a doctor and a homeopath. (Well, maybe there is, but let’s exaggerate that here.)
From experience, the homeopath might have more time to listen, while doctors often have an approach to treatment speed that reminds me of a fast food place. If I were a doctor, then the idea of offering homeopathy, so that people at least come to me, would make sense both money-wise and to get the effect that they are already at a doctor’s place: treatment with placebos for the trivial stuff, while actual dangerous conditions get checked out by a competent person. It’s a case of corrupting your integrity to some degree to get the message heard.
I considered not going to doctors that offer homeopathy, but then decided against that due to this reasoning.
You could probably ask the doctor why they offer homeopathy, and base your decision on the sort of answer you get. “Because it’s an effective cure...” is straight out.
tl;dr—if doctors don’t denounce homeopaths, people will start going to “real” homeopaths and other alt-medicine people, and there is no practical limit to the lies and harm done by real homeopaths.
That is so because doctors also offer homeopathy. If almost all doctors clearly denounced homeopathy, fewer people would choose to go to homeopaths, and these people would benefit from better treatment.
This is a problem in its own right that should be solved by giving doctors incentives to listen to patients more. However, do you think that because doctors don’t listen enough, homeopaths produce better treatment (i.e. better medical outcomes)?
Do you have evidence that this is the result produced?
What if the reverse happens? Because the doctors endorse homeopathy, patients start going to homeopaths instead of doctors. Homeopaths are better at selling themselves, because unlike doctors they can lie (“homeopathy is not a placebo and will cure your disease!”). They are also better at listening, can create a nicer (non-clinical) reception atmosphere, they can get more word-of-mouth networking benefits, etc.
Patients can’t normally distinguish “trivial stuff” from dangerous conditions until it’s too late—even doctors sometimes get this wrong. The next logical step is for people to let homeopaths treat all the trivial stuff, and go to ER when something really bad happens.
Personal story: my mother is a doctor (geriatrician). When I was a teenager I had seasonal allergies and she insisted on sending me for weekly acupuncture. During the hour-long sessions I had to listen to the ramblings of the acupuncturist. He told me (completely seriously) that, although he personally didn’t have the skill, the people who taught him acupuncture in China could use it to cure my type 1 diabetes. He also once told me about someone who used various “alternative medicine” to eat only vine leaves for a year before dying.
When the acupuncture didn’t help me, my mother said that was my own fault because “I deliberately disbelieved the power of acupuncture and so the placebo effect couldn’t work on me”.
Sorry about your experience.
I perceive you as attacking me for holding said position, but I am the wrong target. I know homeopathy is BS, and I don’t use it or advocate it. What I do understand is doctors who offer it for some reason or another, for the reasons listed above. What you claim as a result is sadly already happening. I have had people get angry at me for clearly stating my view, and the reasons for it, on homeopathy. (I didn’t say BS, but one of the people was a programmer, if that counts for something.) Many folks do go to alternative treatments, and forgo doctors as long as possible. People have a low opinion of ‘school medicine’ (a translation of the German term for official medical knowledge and practice), criticize it, sometimes justifiably, and use all kinds of hyper-skeptical reasoning that they do not apply to their current favorite. That is bad, and hopefully goes away. Many still go the double route you listed. And then we have the anti-vaccination front growing. It is bad, and sad, and useless stupidity. Let’s get angry together, and see what can be done about it.
Personal story: I gave a lecture on skeptical thinking.
First try: I dumped everything I knew, and noticed how dealing with the H-topic tends to close people up.
Second try: I cut out a lot and left the H-topic out. Still didn’t work.
I have no idea what I can do about it, and am basically resigning.
I didn’t intend to attack you. Sorry I came across that way.
From what I’ve been told by friends, here (Austria) they (meaning: most doctors) do take it seriously. This is understandable; when studying medicine, the by far larger part of college is devoted to knowing facts, the craftsmanship (if I may say so), rather than to doing medical science.
This also makes sense, as just applying existing results already requires so much training (it is the only college course here which requires at least six years by default, not including “Turnus”, another three-year probationary period before somebody may practice without a supervisor).
The problem here is that for the general public the difference between a medical practitioner and any scientist is nil. Strangely enough, they usually do not make this error in engineering fields, for instance electrical engineer vs. physicist. It may have something to do with the high status of doctors in society.
I recently found out why doctors cultivate a certain amount of professional arrogance when dealing with patients: most patients don’t understand what’s behind their specific disease—and usually do not care. So if doctors were open to argument, or would state doubts more openly, the patient might lose trust and not do what he is ordered to do. Instilling an absolute belief in doctors’ powers might be very helpful for a large part of the population. A lot of my own frustration in experiences with doctors can be attributed to me being a non-standard patient who reads too much.
Emile:
These claims would be beyond the border of lunacy for any person, but still, I’m sure you’ll find people with doctorates who have gone crazy and claim such things.
But more relevantly, Richard’s point definitely stands when it comes to outlandish ideas held by people with relevant top-level academic degrees. Here, for example, you’ll find the website of Gerardus Bouw, a man with a Ph.D. in astronomy from a highly reputable university who advocates—prepare for it—geocentrism:
http://www.geocentricity.com/
(As far as I see, this is not a joke. Also, I’ve seen criticisms of Bouw’s ideas, but nobody has ever, to the best of my knowledge, disputed his Ph.D. He had a teaching position at a reputable-looking college, and I figure they would have checked.)
Here is another one:
http://en.wikipedia.org/wiki/Courtney_Brown_%28researcher%29
It looks like no one ever hired him to teach astronomy or physics. He only ever taught computer science (and from the sound of it, just programming languages). My guess is he did get the PhD though.
Also, in fairness to the college he is retired and he’s young enough to make me think that he may have been forced into retirement.
Earth’s sun does orbit the earth, under the right frame of reference. What is outlandish about this?
If you read the site, they alternately claim that relativity allows them to use whatever reference frame they choose, and at other points claim that the evidence only makes sense for geocentrism.
Oh. Well, that’s stupid then.
I’m not sure it is completely stupid. Consider the argument in the following fashion:
1) We think your physics is wrong and geocentrism is correct. 2) Even if we’re wrong about 1, your physics still supports regarding geocentrism as being just as valid as heliocentrism.
I don’t think that their argument approaches this level of coherence.
An interesting article criticizing speculation about social trends (specifically teen sex) in the absence of statistical evidence.
Beautiful. Matthew Yglesias, +1 point.
It is entirely possible that some social groups are experiencing the kind of changes that Flanagan describes, but as Yglesias says, she apparently is unaware that there is such a thing as scientific evidence on the question.
Saw this over on Bruce Schneier’s blog, it seemed worth reposting here. Wharton’s “Quake” Simulation Game Shows Why Humans Do Such A Poor Job Planning For & Learning From Catastrophes (link is to summary, not original article, as original article is a bit redundant). Not so sure how appropriate the “learning from” part of the title is, as they don’t seem to mention people playing the game more than once, but still quite interesting.
What solution do people prefer to Pascal’s Mugging? I know of three approaches:
1) Handing over the money is the right thing to do exactly as the calculation might indicate.
2) Debiasing against overconfidence shouldn’t mean having any confidence in what others believe, but just reducing our own confidence; thus the expected gain if we’re wrong is found by drawing from a broader reference class, like “offers from a stranger”.
3) The calculation is correct, but we must pre-commit to not paying under such circumstances in order not to be gamed.
What have I left out?
The unbounded utility function (in some physical objects that can be tiled indefinitely) in Pascal’s mugging gives infinite expected utility to all actions, and no reason to prefer handing over the money to any other action. People don’t actually show the pattern of preferences implied by an unbounded utility function.
If we make the utility function a bounded function of happy lives (or other tilable physical structures) with a high bound, other possibilities will offer high expected utility. The Mugger is not the most credible way to get huge rewards (investing in our civilization on the chance that physics allows unlimited computation beats the Mugger). This will be the case no matter how huge we make the (finite) bound.
Bounding the utility function definitely solves the problem, but there are a couple of problems. One is the principle that the utility function is not up for grabs, the other is that a bounded utility function has some rather nasty consequences of the “leave one baby on the track” kind.
I don’t buy this. Many people have inconsistent intuitions regarding aggregation, as with population ethics. Someone with such inconsistent preferences doesn’t have a utility function to preserve.
Also note that a bounded utility function can allot some of the potential utility under the bound to producing an infinite amount of stuff, and that as a matter of psychological fact the human emotional response to stimuli can’t scale indefinitely with bigger numbers.
And, of course, allowing unbounded growth of utility with some tilable physical process means that process can dominate the utility of any non-aggregative goods, e.g. the existence of at least some instantiations of art or knowledge, or overall properties of the world like ratios of very good to lives just barely worth living/creating (although you might claim that the value of the last scales with population size, many wouldn’t characterize it that way).
Bounded utility functions seem to come much closer to letting you represent actual human concerns, or to represent more of them, in my view.
Eliezer’s original article bases its argument on the use of Solomonoff induction. He even suggests up front what the problem with it is, although the comments don’t make anything of it: SI is based solely on program length and ignores computational resources. The optimality theorems around SI depend on the same assumption. Therefore I suggest:
4. Pascal’s Mugging is a refutation of the Solomonoff prior.
But where a computationally bounded agent, or an unbounded one that cares how much work it does, should get its priors from instead would require more thought than a few minutes on a lunchtime break.
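For concreteness, the prior in question weights a hypothesis only by the length of the shortest program producing it, roughly 2^-length, with runtime playing no role at all, which is exactly the feature being questioned. A toy sketch over bit-string “programs” (purely illustrative; real Solomonoff induction sums over all programs and is uncomputable):

    # Toy length-based prior: weight each candidate "program" (here just a bit string)
    # by 2**(-length). Runtime never enters the weight.

    def length_prior(program_bits: str) -> float:
        return 2.0 ** (-len(program_bits))

    candidates = ["0", "10", "1101", "11010011"]
    weights = {p: length_prior(p) for p in candidates}
    total = sum(weights.values())
    for p, w in weights.items():
        print(p, w, w / total)  # raw weight, and weight normalized over this toy set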
In one sense you can’t use evidence to argue with a prior, but I think that factoring in computational resources as a cost would have put you on the wrong side of a lot of our discoveries about the Universe.
Could you expand that with examples? And if you can’t use evidence to argue with a prior, what can you use?
I’m thinking of the way we keep finding ways in which the Universe is far larger than we’d imagined—up to and including the quantum multiverse, and possibly one day including a multiverse-based solution to the fine tuning problem.
The whole point about a prior is that it’s where you start before you’ve seen the evidence. But in practice using evidence to choose a prior is likely justified on the grounds that our actual prior is whatever we evolved with or whatever evolution’s implicit prior is, and settling on a formal prior with which to attack hard problems is something we do in the face of lots of evidence. I think.
It’s not clear to me how that bears on the matter. I would need to see something with some mathematics in it.
There’s a potential infinite regress if you argue that changing your prior on seeing the evidence means it was never your prior, but something prior to it was.
1. You can go on questioning those previous priors, and so on indefinitely, and therefore nothing is really a prior.
2. You stop somewhere with an unquestionable prior, and the only unquestionable truths are those of mathematics, therefore there is an Original Prior that can be deduced by pure thought. (Calvinist Bayesianism, one might call it. No agent has the power to choose its priors, for it would have to base its choice on something prior to those priors. Nor can its priors be conditional in any way upon any property of that agent, for then again they would not be prior. The true Prior is prior to all things, and must therefore be inherent in the mathematical structure of being. This Prior is common to all agents but in their fundamentally posterior state they are incapable of perceiving it. I’m tempted to pastiche the whole Five Points of Calvinism, but that’s enough for the moment.)
3. You stop somewhere, because life is short, with a prior that appears satisfactory for the moment, but which one allows the possibility of later rejecting.
I think 1 and 2 are non-starters, and 3 allows for evidence defeating priors.
What do you mean by “evolution’s implicit prior”?
Tom_McCabe2 suggests generalizing EY’s rebuttal of Pascal’s Wager to Pascal’s Mugging: it’s not actually obvious that someone claiming they’ll destroy 3^^^^3 people makes it more likely that 3^^^^3 people will die. The claim is arguably such weak evidence that it’s still about equally likely that handing over the $5 will kill 3^^^^3 people, and if the two probabilities are sufficiently equal, they’ll cancel out enough to make it not worth handing over the $5.
Personally, I always just figured that the probability of someone (a) threatening me with killing 3^^^^3 people, (b) having the ability to do so, and (c) not going ahead and killing the people anyway after I give them the $5, is going to be way less than 1/3^^^^3, so the expected utility of giving the mugger the $5 is almost certainly less than the $5 of utility I get by hanging on to it. In which case there is no problem to fix. EY claims that the Solomonoff-calculated probability of someone having ‘magic powers from outside the Matrix’ ‘isn’t anywhere near as small as 3^^^^3 is large,’ but to me that just suggests that the Solomonoff calculation is too credulous.
(Edited to try and improve paraphrase of Tom_McCabe2.)
This seems very similar to the “reference class fallback” approach to confidence set out in point 2, but I prefer to explicitly refer to reference classes when setting out that approach, otherwise the exactly even odds you apply to massively positive and massively negative utility here seem to come rather conveniently out of a hat...
Fair enough. Actually, looking at my comment again, I think I paraphrased Tom_McCabe2 really badly, so thanks for replying and making me take another look! I’ll try and edit my comment so it’s a better paraphrase.
I’m not sure this problem needs a “solution” in the sense that everyone here seems to accept. Human beings have preferences. Utility functions are an imperfect way of modeling those preferences, not some paragon of virtue that everyone should aspire to. Most models break down when pushed outside their area of applicability.
The utility function assumes that you play the “game” (situation, whatever) an infinite number of times and then find the net utility. That’s fine when you’re playing the “game” enough times to matter. It’s not when you’re only playing a small number of times. So let’s look at it as “winning” or “losing”. If the odds are really low and the risk is high and you’re only playing once, then most of the time you expect to lose. If you do it enough times, the odds even out and the loss gets canceled out by the large reward, but playing only once you expect to lose more than you gain. Why would you assume differently? That’s my 2 cents and so far it’s the only way I have come up with to navigate around this problem.
This isn’t right. The way utility is normally defined, if outcome X has 10 times the utility of outcome Y for a given utility function, agents behaving in accord with that function will be indifferent between certain Y and a 10% probability of X. That’s why they call expected utility theory a theory of “decision under uncertainty.” The scenario you describe sounds like one where the payoffs are in some currency such that you have declining utility with increasing amounts of the currency.
Uh, no. Alright, let’s say I give you a 1 out of 10 chance of winning 10 times everything you own, but the other 9 times you lose everything. The net utility for accepting is the same as for not accepting, yet that’s completely ignoring the fact that if you do enter, 90% of the time you lose everything, no matter how high the reward is.
As Thom indicates, this is exactly what I was talking about: ten times the stuff you own, rather than ten times the utility. Since utility is just a representation of your preferences, the 1 in 10 payoff would only have ten times the utility of your current endowment if you would be willing to accept this gamble.
That’s only true if “everything you own” is cast in terms of utility, which is not intuitive. Normally, “everything you own” would be in terms of dollars or something to that effect, and ten times the number of dollars I have is not worth 10 times the utility of those dollars.
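To make the distinction concrete: with any concave utility of wealth the gamble has lower expected utility than keeping what you have, even though the expected wealth is identical. A minimal sketch, using square-root utility purely as one illustrative concave function (not anyone’s actual utility function), with an assumed wealth figure:

    import math

    # One illustrative concave utility of wealth; the only point is that utility
    # grows more slowly than wealth.
    def utility(wealth: float) -> float:
        return math.sqrt(wealth)

    wealth = 100_000.0  # assumed current wealth, arbitrary units

    keep = utility(wealth)
    gamble = 0.1 * utility(10 * wealth) + 0.9 * utility(0.0)

    print("expected wealth:", wealth, "vs", 0.1 * 10 * wealth)          # identical
    print("expected utility:", round(keep, 1), "vs", round(gamble, 1))  # gamble is far lower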
Because it was used somewhere, I calculated my own weight’s worth in gold—it is about 3.5 million EUR. In silver you can get me for 50,000 EUR. The Mythbusters recently built a lead balloon and had it fly. Some proverbs don’t hold up to reality and/or engineering.
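For anyone who wants to redo the arithmetic, a minimal sketch; the body weight and the metal prices are assumptions (roughly 2010-era prices), so plug in your own numbers:

    # Rough "worth your weight in gold/silver" calculation.
    # All three inputs are assumed, illustrative values.
    body_weight_kg = 110.0        # assumed body weight
    gold_eur_per_gram = 31.0      # assumed gold price, ~2010
    silver_eur_per_gram = 0.45    # assumed silver price, ~2010

    grams = body_weight_kg * 1000
    print(f"in gold:   {grams * gold_eur_per_gram:,.0f} EUR")    # ~3.4 million EUR
    print(f"in silver: {grams * silver_eur_per_gram:,.0f} EUR")  # ~50,000 EUR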
The number of heart attacks has fallen since England imposed a smoking ban
http://www.economist.com/node/16333351?story_id=16333351&fsrc=scn/tw/te/rss/pe
I think I found the study they’re talking about thanks to this article. I might take a look at it—if the methodology is literally just ‘smoking was banned, then the heart attack rate dropped’, that sucks.
(Edit to link to the full study and not the abstract.)
Just skimmed it. The methodology is better than that. They use a regression to adjust for the pre-existing downward trend in the heart attack hospital admission rate; they represent it as a linear trend, and that looks fair to me based on eyeballing the data in figures 1 and 2. They also adjust for week-to-week variation and temperature, and the study says its results are ‘more modest’ than others’, and fit the predictions of someone else’s mathematical model, which are fair sanity checks.
I still don’t know how robust the study is—there might be some confounder they’ve overlooked that I don’t know enough about smoking to think of—but it’s at least not as bad as I expected. The authors say they want to do future work with a better data set that has data on whether patients are active smokers, to separate the effect of secondhand smoke from active smoking. Sounds interesting.
In the Singularity Movement, Humans Are So Yesterday (long Singularity article in this Sunday’s NY Times; it isn’t very good)
http://news.ycombinator.com/item?id=1426386
I agree that this article isn’t very good. It seems to do the standard problem of combining a lot of different ideas about what the Singularity would entail. It emphasizes Kurzweil way too much, and includes Kurzweil’s fairly dubious ideas about nutrition and health. The article also uses Andrew Orlowski as a serious critic of the Singularity making unsubstantiated claims about how the Singularity will only help the rich. Given that Orlowski’s entire approach is to criticize anything remotely new or weird-seeming, I’m disappointed that the NYT would really use him as a serious critic in this context. The article strongly reinforces the perception that the Singularity is just a geek-religious thing. Overall, not well done at all.
I’m starting to think SIAI might have to jettison the “singularity” terminology (for the intelligence explosion thesis) if it’s going to stand on its own. It’s a cool word, and it would be a shame to lose it, but it’s become associated too much with utopian futurist storytelling for it to accurately describe what SIAI is actually working on.
Edit: Look at this Facebook group. This sort of thing is just embarrassing to be associated with. “If you are feeling brave, you can approach a stranger in the street and speak your message!” Seriously, this practically is religion. People should be raising awareness of singularity issues not as a prophecy but as a very serious and difficult research goal. It doesn’t do any good to have people going around telling stories about the magical Future-Land while knowing nothing about existential risks or cognitive biases or friendly AI issues.
I’m not sure that your criticism completely holds water. Friendly AI is, simply put, a worry that has convinced only some Singularitarians. One might not be deeply concerned about it. (Possible example reasons: 1) you expect uploading to come well before general AI; 2) you think that the probable technical path to AI will force a lot more stages of AI of much lower intelligence, which will be likely to give us good data for solving the problem.)
I agree that this Facebook group does look very much like something one would expect out of a missionizing religion. This section in particular looked like a caricature:
The certainty for 2045 is the most glaring aspect of this aside from the pseudo-missionary aspect. Also note that some of the people associated with this group are very prominent Singularitarians and Transhumanists. Aubrey de Grey is listed as an administrator.
But one should remember that reversed stupidity is not intelligence. Moreover, there’s a reason that missionaries sound like this: they have a very high confidence in their correctness. If you had a similarly high confidence in the probability of a Singularity event, thought that the event was more likely to occur safely if more people were aware of it, and more likely to occur soon if more people were aware of it, bought into something like the galactic colonization argument, and believed that sending messages like this has a high chance of getting people to be aware and take you seriously, then this is a reasonable course of action. Now, that’s a lot of premises, some of which have high likelihoods, others of which have very low ones. Obviously there’s a very low probability that sending out these sorts of messages is at all a net benefit. Indeed, I have to wonder if there’s any deliberate mimicry of how religious groups send out messages, or whether successfully reproducing memes naturally hit on a small set of methods of reproduction (but if that were the case I think they’d be more likely to hit an actually useful method of reproduction). And in fairness, they may just be using a general model for how one goes about raising awareness for a cause and why it matters. For some causes simple, frequent appeals to emotion are likely an effective method (for example, in making people aware of how common sexual assault is on college campuses, short messages that shock probably do a better job than lots of fairly dreary statistics). So then the primary mistake is just using the wrong model of how to communicate to people.
Speaking of things to be worried about other than AI, I wonder if a biotech disaster is a more urgent problem, even if less comprehensive.
Part of what I’m assuming is that developing a self-amplifying AI is so hard that biotech could be well-developed first.
While it doesn’t seem likely to me that a bio-tech disaster could wipe out the human race, it could cause huge damage—I’m imagining diseases aimed at monoculture crops, or plagues as the result of terrorism or incompetent experiments.
My other assumptions are that FAI research is dependent on a wealthy, secure society with a good bit of surplus wealth for individual projects, and is likely to be highly dependent on a small number of specific people for the foreseeable future.
On the other hand, FAI is at least a relatively well-defined project. I’m not sure where you’d start to prevent biotech disasters.
That’s one hell of a “relatively” you’ve got there!
Agreed, but… they’d even have to change their own name!
It’s better than mainstream Singularity articles in the past, IMO; unfortunately, Kurzweil is seen as an authority, but at least it’s written with some respect for the idea.
It does seem to be about a lot of different things, some of which are just synonymous with scientific progress (I don’t think it’s any revelation that synthetic biology is going to become more sophisticated.)
I’m curious: Was the SIAI contacted for that article? I haven’t had time to read it all, but a word-search for “Singularity Institute” and “Yudkowsky” turned up nothing.
I hear Michael Anissimov was not contacted, and he’s probably the one they’d have the press talk to.
Heuristics and biases in charity
http://www.sas.upenn.edu/~baron/papers/charity.pdf (I considered making this link as a top-level post.)
I’ve recently begun downvoting comments that are at −2 rating regardless of my feelings about them. I instituted this policy after observing that a significant number of comments reach −2 but fail to be pushed over to −3, which I’m attributing to the threshold being too much of a psychological barrier for many people to penetrate; they don’t want to be ‘the one to push the button’. This is an extension of my RL policy of taking ‘the last’ of something laid out for communal use (coffee, donuts, cups, etc.). If the comment thread really needs to be visible, I expect others will vote it back up.
Edit: It’s likely that most of the negative response to this comment centers around the phrase “regardless of my feelings about them.” I now consider this to be too strong a statement with regards to my implemented actions. I do read the comment to make sure I don’t consider it any good, and doubt I would perversely vote something down even if I wanted to see more of it.
I wish you wouldn’t do that, and stuck instead with the generally approved norm of downvoting to mean “I’d prefer to see fewer comments like this” and upvoting “I’d like to see more like this”.
You’re deliberately participating in information cascades, and thereby undermining the filtering process. As an antidote, I recommend using the anti-kibitzer script (you can do that through your Preferences page).
I disagree that that’s the formula used for comments that exist within the range −2 to 2. Within that range, from what I’ve observed of voting patterns, it seems far more likely that the equation is related to what value the comment “should be at.” If many people used anti-kibitzing, I doubt this would remain a problem.
I believe your hypothesis and decision are possibly correct, but if they are, you should expect your downvotes to often be corrected upwards again. If this doesn’t happen, then you are wrong and shouldn’t apply this heuristic.
Morendil doesn’t say it’s what actually happens, he merely says it should happen this way, and that you in particular should behave this way.
I thought of doing this after reading the article Composting Fruitless Debates and making a voted-up suggestion to downvote below threshold.
I’m using it as an excuse to overcome my general laziness with regards to voting, which has the typical pattern of one vote (up or down) per hundreds of comments read.
Edit: And due to remembering Eliezer’s comments about moderation.
I don’t do huge amounts of voting, and I admit that if a post I like has what I consider to be “enough” votes, I don’t upvote it further. I can certainly change this policy if there’s reason to think upvoting everything I’d like to see more of would help make LW work better.
I am tempted to downvote this comment from −2 just for the irony, but I don’t prefer to see fewer comments like this, so I won’t.
Besides, the default cutoff is at −4, not −3.
After logging out and attempting to view a thread with a comment at exactly −3, it showed that comment to be below threshold. I doubt that it retains customized settings after logging out, and I do not believe that I changed mine in the first place, leading me to believe that −3 is indeed the threshold.
Also, my original comment was at −3 within minutes of posting.
The default was −4 logged in when I joined last year—perhaps it’s different for non-logged-in people.
Also, that makes me guess people changed their votes to aim your comment at −2.
Here is the change. Also, the number refers to the lowest visible comments, not the highest invisible comments.
Does countersignaling actually happen? Give me examples.
I think most claims of countersignaling are actually ordinary signaling, where the costly signal is foregoing another group and the trait being signaled is loyalty to the first group. Countersignaling is where foregoing the standard signal sends a stronger positive message of the same trait to the usual recipients.
That article makes it sound like “countersignaling” is forgoing a mandated signal—like showing up at a formal-dress occasion in street clothes.
Alicorn made a post about the tactics of countersignaling a while back.
I said “standard” because game theory doesn’t talk about mandates, but that’s pretty much what I said, isn’t it? If you disagree with that usage, what do you think is right?
Incidentally, in von Neumann’s model of poker, you should raise when you have a good hand or a poor hand, and check when you have a mediocre hand, which looks kind of like countersignaling. Of course, the information transference that yields the name “signal” is rather different. Also, I’m not interested in applications of game theory to hermetically sealed games.
I guess I don’t understand your question, then—countersignaling seems like a perfectly ordinary proper subset of signaling.
Yes, countersignaling is signaling. The question is about practice, not theory. Does countersignaling actually happen?
I can’t prove that it does, if I’m honest.
I play randomly for the first several rounds, so as to destroy the entanglement between my bets, my face, and my hand.
Unless you’re using an external randomness generator, it’s quite unlikely that you’re not generating a detectable pattern.
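If you did want genuinely pattern-free play, a minimal sketch that draws the early decisions from the operating system’s randomness rather than from your own head; the action list and the three-round count are arbitrary assumptions:

    import secrets

    # Let the OS randomness source pick the first few actions, so the choices
    # carry no information about your hand.
    actions = ["check", "raise"]
    for round_number in range(3):  # assumed: first three rounds played this way
        print(round_number, secrets.choice(actions))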
He can just play blind, and not look at his cards.
I only care whether humans detect it.
My recent comment on Reddit reminded me of WrongTomorrow.com—a site that was mentioned briefly here a while ago, but which I haven’t seen much since.
Try it out, guys! LongBets and PredictionBook are good, but they’re their own niche; LongBets won’t help you with pundits who don’t use it, and PredictionBook is aimed at personal use. If you want to track current pundits, WrongTomorrow seems like the best bet.
Am I correct in reading that LongBets charges a $50 fee for publishing a prediction, and that predictions have to be a minimum of 2 years in the future? That’s a bit harsh. But these sites are pretty interesting. And they could be useful too. You could judge the accuracy of different users, including how accurate they are at long-term predictions, short-term predictions, etc., as well as how accurate they are in different categories (or just how accurate they are on average if you want to keep it simple). Then you can create a fairly decent picture of the future, albeit I expect many of the predictions will contradict each other. This is kind of what they’re already doing, obviously, but they could still take it a step further.
Anyone know how to defeat the availability heuristic? Put another way, does anyone have advice on how to deal with incoherent or insane propositions while losing as little personal sanity as possible? Is there such a thing as “safety gloves” for dangerous memes?
I’m asking because I’m currently studying for the California Bar exam, which requires me to memorize hundreds of pages of legal rules, together with their so-called justifications. Of course, in many cases the “justifications” are incoherent, Orwellian doublespeak, and/or tendentiously ideological. I really do want to memorize (nearly) all of these justifications, so that I can be sure to pass the exam and continue my career as a rationalist lawyer, but I don’t want the pattern of thought used by the justifications to become a part of my pattern of thought.
I would not worry overmuch about the long-term negative effects of your studying for the bar: with the possible exception of the “overly sincere” types who fall very hard for cults and other forms of indoctrination, people have a lot of antibodies to this kind of thing.
You will continue to be entangled with reality after you pass the exam, and you can do things, like read works of social science that carve reality at the joints, to speed up the rate at which your continued entanglement with reality will cancel out any falsehoods you have to cram for now. Specifically, there are works about the law that do carve reality at the joints—Nick Szabo’s online writings IMO fall in that category. Nick has a law degree, by the way, and there is certainly nothing wrong with his ability to perceive reality correctly.
ADDED. The things that are really damaging to a person’s rationality, IMHO, are natural human motivations. When for example you start practicing, if you were to decide to do a lot of trials, and you learned to derive pleasure—to get a real high—from the combative and adversarial part of that, so that the high you got from winning with a slick and misleading angle trumped the high you get from satisfying your curiosity and from refining and finding errors in your model of reality—well, I would worry about that a lot more than about your throwing yourself fully into winning on this exam, because IMHO the things we derive no pleasure from, but do to achieve some end we care about (like advancing in our career by getting a credential), have a lot less influence on who we turn out to be than things we do because we find them intrinsically rewarding.
One more thing: we should not all make our living as computer programmers. That would make the community less robust than it otherwise would be :)
Thank you! This is really helpful, and I look forward to reading Szabo in August.
I worry about this as well when I’m reading long arguments or long works of fiction presenting ideas I disagree with. My tactic is to stop occasionally and go through a mental dialog simulating how I would respond to the author in person. This serves a double purpose, as hopefully I’ll have better cached arguments in the event I ever need them.
Of course, this is a dangerous tactic as well, because you may be shutting off critical reasoning applied to your preexisting beliefs. I only apply this tactic when I’m very confident the author is wrong and is using fallacious arguments. Even then I make sure to spend some amount of time playing devil’s advocate.
I found an interesting paper on Arxiv earlier today, by the name of Closed timelike curves via post-selection: theory and experimental demonstration.
It promises such lovely possibilities as quick solutions to NP-complete problems, and I’m not entirely sure the mechanism couldn’t also be used to do arbitrary amounts of computation in finite time. Certainly worth a read.
However, I don’t understand quantum mechanics well enough to tell how sane the paper is, or what the limits of what they’ve discovered are. I’m hoping one of you does.
It won’t work, as is clearly explained here.
To put this into my own words “The more information you extract from the future, the less you are able to control the future from the past. And hence, the less understanding you can have about what those bits of future-generated information are actually going to mean.”
I wrote that before actually looking at the paper you linked. I don’t understand much QM either, but now that I have looked it seems to me that figure 2 of the paper backs me up on my interpretation of Harry’s experiment.
Even if it’s written by Eliezer, that’s still generalizing from fictional evidence. We don’t know what the laws of physics are supposed to be there.
Well. You probably can’t use time-travel to get infinite computing power. But that’s not to say you can’t get strictly finite power out of it; in Harry’s case, his experiment would probably have worked just fine if he’d been the sort of person who’d refuse to write “DO NOT MESS WITH TIME”.
Playing chicken with the universe, huh? As long as scaring Harry is easier than solving his homework problem, I’d expect the universe to do the former :-) Then again, you could make a robot use the Time-Turner...
Clippy-related: The Paper Clips Project is run by a school trying to overcome scope insensitivity by representing the eleven million people killed in the Holocaust with one paper clip per victim.
From that Wikipedia article:
Apologizing for … being German? That’s really bizarre.
Not really. Most cultures go funny in the head around the Holocaust. It is, for some reason, considered imperative that 10th graders in California spend more time being made to feel guilty about the Holocaust than learning about the actual politics of the Weimar Republic.
Cultures can also be very weird about how they treat schoolchildren. The kids weren’t responsible for any part of the Holocaust, and they’re theoretically apologizing to someone who can’t hear it.
I can see some point in all this if you believe that Germans are especially apt to genocide (I have no strong opinion about this) and need to keep being reminded not to do it. Still, if this sort of apology is of any use, I’d take it more seriously if it were done spontaneously by individuals.
I think it’s very noble of them to collect numerous paperclips and hold them safely out of use. c=@ I just hope they have appropriate protocols in place to ensure they don’t become stolen or unbent. Anyone know if there’s an insurance policy taken out against loss or destruction of the paperclips?
I doubt there’s insurance on the paperclips themselves, but I suspect that having associated them with something generally considered sacred-ish will do a better job of keeping them safe than an insurance policy in any case. It’s unlikely that anyone will bother to overcome the taboo on destroying sacred sites to steal or damage them, and if someone does, I can virtually guarantee that the story will be in the news and more paperclips will be donated to the project—possibly several times the number of paperclips that were stolen.
Thanks for the idea!
Noteworthy: In order for that to work as a safety mechanism for getting the paperclips replaced, the project has to be able to get the attention of the media. This project obviously can—we heard about it because it succeeded—but that doesn’t necessarily imply that other similar projects will easily be able to do the same.
What are you talking about? I was referring to your idea to increase the total number of paperclips by stealing from those held by the memorial project.
I’m feeling naive at the moment. I assumed that the good idea was encouraging people to use paperclips as part of their sacred projects.
This would serve your purposes by making people a little more like paperclip maximizers.
Clippy has that effect on many people.
Why not do both?
If paper clips are apt to be stolen, then they won’t be suitable for any sacred project which depends on aiming at a permanent structure.
There are sacred projects which are evanescent, like sand painting or religious services, but I can’t think of any sacred project which is designed to include its stuff being stolen.
*snerk* I thought that might be what you were thinking. I figured that if I was wrong, though, you’d get upset if I said so without more evidence.
In that case:
It’s not likely to work more than once on any given project, especially if the thefts happen too close together in time. A project that is stolen from too frequently is likely to be abandoned.
Repeated thefts of paperclips from unrelated projects will almost certainly lead to future projects being disinclined to use paperclips for such things, which I suspect would be a net loss for you. Doing it once won’t be a problem in that way, though.
Bonus: If you go through with it, and that makes the news, it should count as some pretty significant evidence that you exist and aren’t a roleplaying human’s character.
ETA: Oh, and it’ll probably work better if you make it look like vandalism, not just a mysterious disappearance—damage the place a little, so it looks better for the media. You can look for news stories about vandalism for pictures of what the media likes to report on.
Adelene … please, PLEASE stop giving the “Clippy” character ideas!
Clippy came up with the theft idea all on eir own, actually—my original suggestion can be just as easily parsed as an idea for less costly security for paperclips that are being stored on Earth.
Also, consider: If Clippy is the type of being who would do such a thing, wouldn’t it be better for us to know that? (And of course if Clippy is just someone’s character, I haven’t done anything worse than thumb my nose at a few taboos.)
You said this:
Yes, in response to this:
...which, on reflection, doesn’t necessarily imply theft; I suppose it could refer to the memorial getting sucked into a sinkhole or something. Oops?
Maybe this has been discussed before—if so, please just answer with a link.
Has anyone considered the possibility that the only friendly AI may be one that commits suicide?
There’s great diversity in human values, but all of them have in common that they take as given the limitations of Homo sapiens. In particular, the fact that each Homo sapiens has roughly equal physical and mental capacities to all other Homo sapiens. We have developed diverse systems of rules for interpersonal behavior, but all of them are built for dealing with groups of people like ourselves. (For instance, ideas like reciprocity only make sense if the things we can do to other people are similar to the things they can do to us.)
The decision function of a lone, far more powerful AI would not have this quality. So it would be very different from all human decision functions or principles. Maybe this difference should cause us to call it immoral.
Do you ever have a day when you log on and it seems like everyone is “wrong on the Internet”? (For values of “everyone” equal to 3, on this occasion.) Robin Hanson and Katja Grace both have posts (on teenage angst, on population) where something just seems off, elusively wrong; and now SarahC suggests that “the only friendly AI may be one that commits suicide”. Something about this conjunction of opinions seems obscurely portentous to me. Maybe it’s just a know-thyself moment; there’s some nascent opinion of my own that’s going to crystallize in response.
Now that my special moment of sharing is out of the way… Sarah, is the friendly AI allowed to do just one act of good before it kills itself? Make a child smile, take a few pretty photos from orbit, save someone from dying, stop a war, invent cures for a few hundred diseases? I assume there is some integrity of internal logic behind this thought of yours, but it seems to be overlooking so much about reality that there has to be a significant cognitive disconnect at work here.
I’ve noticed I get this feeling relatively often from Overcoming Bias. I think it comes with the contrarian blogging territory.
I get it from OB also, which I have not followed for some time, and many other places. For me it is the suspicion that I am looking at thought gone wrong.
I would call it “pet theory syndrome.” Someone comes up with a way of “explaining” things and then suddenly the whole world is seen through that particular lens rather than having a more nuanced view; nearly everything is reinterpreted. In Hanson’s case, the pet theories are near/far and status.
Prediction markets also.
Is anyone worried that LW might have similar issues? If so, what would be the relevant pet theories?
On a related note: suppose a community of moderately rational people had one member who was a lot more informed than them on some subject, but wrong about it. Isn’t it likely they might all end up wrong together? Prediction Markets was the original subject, but it could go for a much wider range of topics: Multiple Worlds, Hansonian Medicine, Far/near, Cryonics...
That’s where the scientific method comes in handy, though quite a few of Hanson’s posts sound like pop psychology rather than a testable hypothesis.
I don’t get this impression from OB at all. The thoughts at OB, even when I disagree with them, are far more coherent than the sort of examples given as thought gone wrong. I’m also not sure it is easy to actually distinguish between “thought gone wrong” in the sense of being outright nonsense as described in the linked essay and actually good but highly technical thought processes. For example I could write something like:
Now, what I wrote above isn’t nonsense. It is just poorly written, poorly explained math. But if you don’t have some background, it likely looks as bad as the passages quoted in the linked essay. Even when the writing is not that poor, one can easily find sections from conversations on LW about, say, CEV or Bayesianism that look about as nonsensical if one doesn’t know the terms. So without extensive investigation I don’t think one can easily judge whether a given passage is nonsense or not, and the essay linked to is therefore less than compelling. (In fact, having studied many of their examples, I can safely say that they really are nonsensical, but it isn’t clear to me how you can tell that from the short passages given, with their complete lack of context. Edit: And it could very well be that I just haven’t thought about them enough or approached them correctly, just as someone who is very bad at math might consider it to be collectively nonsense even after careful examination.)

It does, however, seem that some disciplines run into this problem far more often than others. Philosophy and theology both seem to run into the problem of parading nonsensical streams of words together more often than most other areas. I suspect that this is connected to the lack of anything resembling an experimental method.
OB isn’t a technical blog though.
Having criticised it so harshly, I’d better back that up with evidence. Exhibit A: a highly detailed scenario of our far future, supported by not much. Which in later postings to OB (just enter “dreamtime” into the OB search box) becomes part of the background assumptions, just as earlier OB speculations become part of the background assumptions of that posting. It’s like looking at the sky and drawing in constellations (the stars in this analogy being the snippets of scientific evidence adduced here and there).
That example seems to be more in the realm of “not very good thinking” than thought gone wrong. The thoughts are coherent, just not well justified. It isn’t like the sort of thing that is quoted in the example essay, where “thought gone wrong” seems to mean something closer to “not even wrong because it is incoherent.”
Ok, OB certainly isn’t the sort of word salad that Stove is attacking, so that wasn’t a good comparison. But there does seem to me to be something systematically wrong with OB. There is the man-with-a-hammer thing, but I don’t have a problem with people having their hobbyhorses; I know I have some of my own. I’m more put off by the way that speculations get tacitly upgraded to background assumptions, the join-the-dots use of evidence, and all those “X is Y” titles.
Got a good summary of this? The author seems to be taking way too long to make his point.
“Most human thought has been various different kinds of nonsense that we mostly haven’t yet categorized or named.”
This paragraph, perhaps?
I think that should go in the next quotes thread.
Or perhaps the quotes thread from 12 months ago.
I’m not necessarily arguing for this position so much as saying we need to address it. “Suicidal AI” is to the problem of constructing FAI as anarchism is to political theory; if you want to build something (an FAI, a good government) then, on the philosophical level, you have to at least take a stab at countering the argument that perhaps it is impossible to build it.
I’m working under the assumption that we don’t really know at this point what “Friendly” means, otherwise there wouldn’t be a problem to solve. We don’t yet know what we want the AI to do.
What we do know about morality is that human beings practice it. So all our moral laws and intuitions are designed, in particular, for small, mortal creatures, living among other small, mortal creatures.
Egalitarianism, for example, only makes sense if “all men are created equal” is more or less a statement of fact. What should an egalitarian human make of a powerful AI? Is it a tyrant? Well, no, a tyrant is a human who behaves as if he’s not equal to other humans; the AI simply isn’t equal. Well, then, is the AI a good citizen? No, not really, because citizens treat each other on an equal footing...
The trouble here, I think, is that all our notions of goodness are really “what is good for a human to do.” Perhaps you could extend them to “what is good for a Klingon to do”—but a lot of moral opinions are specifically about how to treat other people who are roughly equivalent to yourself. “Do unto others as you would have them do unto you.” The kind of rules you’d set for an AI would be fundamentally different from our rules for ourselves and each other.
It would be as if a human had a special, obsessive concern and care for an ant farm. You can protect the ants from dying. But there are lots of things you can’t do for the ants: be an ant’s friend, respect an ant, keep up your end of a bargain with an ant, treat an ant as a brother…
I had a friend once who said, “If God existed, I would be his enemy.” Couldn’t someone have the same sentiment about an AI?
(As always, I may very well be wrong on the Internet.)
You say, human values are made for agents of equal power; an AI would not be equal; so maybe the friendly thing to do is for it to delete itself. My question was, is it allowed to do just one or two positive things before it does this? I can also ask: if overwhelming power is the problem, can’t it just reduce itself to human scale? And when you think about all the things that go wrong in the world every day, then it is obvious that there is plenty for a friendly superhuman agency to do. So the whole idea that the best thing it could do is delete itself or hobble itself looks extremely dubious. If your point was that we cannot hope to figure out what friendliness should actually be, and so we just shouldn’t make superhuman agents, that would make more sense.
The comparison to government makes sense in that the power of a mature AI is imagined to be more like that of a state than that of a human individual. It is likely that once an AI had arrived at a stable conception of purpose, it would produce many, many other agents, of varying capability and lifespan, for the implementation of that purpose in the world. There might still be a central super-AI, or its progeny might operate in a completely distributed fashion. But everything would still have been determined by the initial purpose. If it was a purpose that cared nothing for life as we know it, then these derived agencies might just pave the earth and build a new machine ecology. If it was a purpose that placed a value on humans being there and living a certain sort of life, then some of them would spread out among us and interact with us accordingly. You could think of it in cultural terms: the AI sphere would have a culture, a value system, governing its interactions with us. Because of the radical contingency of programmed values, that culture might leave us alone, it might prod our affairs into taking a different shape, or it might act to swiftly and decisively transform human nature. All of these outcomes would appear to be possibilities.
It seems unlikely that an FAI would commit suicide if humans need to be protected from UAI, or if there are other threats that only an FAI could handle.
We’ve talked about a book club before but did anyone ever actually succeed in starting one? Since it is summer now I figure a few more of us might have some free time. Are people actually interested?
I’ve been thinking about finally starting a Study Group thread, primarily with a focus on Jaynes and Pearl both of which I’m studying at the moment. It would probably make sense to expand it to other books including non-math books—though the set of active books should remain small.
Two things have been holding me back—for one, the IMO excessively blog-like nature of LW with the result that once a conversation has rolled off the front page it often tends to die off, and for another a fear of not having enough time and energy to devote to actually facilitating discussion.
Facilitation of some sort seems required: as I understand it a book club or study group entails asking a few participants to make a firm commitment to go through a chapter or a section at a time and report back, help each other out and so on.
Well those are actually exactly the two books I had in mind (though I think we should probably just start with one of them).
Agreed. Two options:
1) A new top-level post for every chapter (or perhaps every two chapters, whatever division is convenient). This was a little annoying when it was one person covering every chapter of Dennett’s Consciousness Explained, but if a decent number of people were participating in the book club (and if each new post was put up by the facilitator, explaining hard-to-understand concepts) they’d probably justify themselves.
2) We start a dedicated wordpress or blogspot blog and give the facilitators posting powers.
I wouldn’t at all mind posting to start discussion on some sections but I’m not the best person to be explaining the math if it gets confusing—if that was part of your expectation of facilitation.
I was thinking a reading group for Jaynes would have a better chance of success than Pearl—the issues are more general, the math looks easier, and the entire thing is online. But it sounds like you’ve looked at them more than I have; what are your thoughts? I guess what really matters is what people are interested in.
For those interested the Jaynes book can be found here and much of Pearl’s book can be found here.
Is there any existing off-the-shelf web software for setting up book-club-type discussions?
I don’t want to make too much of the infrastructure issue, as what really makes a book club work is the commitment of its members and facilitators, but it would be convenient if there was a ready-made infrastructure available, like there is for blogging and mailing lists.
Maybe the LW blog+wiki software running on a separate domain (lesswrongbooks.com?) would be enough. Blog for current discussions, wiki for summaries of past discussions.
There’s a risk that any amount of thinking about infrastructure could kill off what energy there is, and since there appears to be some energy at present, I would rather favor having the discussion about the book club in the book club thread. :)
IOW we can kick off the initiative locally and let it find a new venue if and when that becomes necessary. There also seems to be some sort of provisional consensus that it’s not quite time yet to fragment the LW readership: the LW subreddit doesn’t seem to have panned out.
It seems to me that Jaynes is definitely topical for LW, I wouldn’t worry about discussions among people studying it becoming annoying to the rest of the community. There are many, many gems pertaining to rationality in each of the chapters I’ve read so far.
This looks like it could work. A wordpress blog would probably be fine as well. Of course these options don’t let people get karma for participating which would be a nice motivator to have. A subreddit would be nice...
Would the discussions really undermine the regular business of Less Wrong?
Do people really care that much about karma? I mean, once one had enough karma to post top-level posts, does it matter that much?
People like making numbers go higher. It’s a strange impulse; I’m not sure why we have it. Maybe assigning everyone numbers hijacks our dominance hierarchy instincts and we feel better about ourselves the higher our number is. For me, it isn’t the total that I like having so much as the feedback for individual comments. I get frustrated on other blogs when I make a comment that is informative and clever but doesn’t get a response. I feel like I’m talking to myself. Here, even if no one responds, I can at least learn whether someone appreciated it. If a lot of people appreciated it I feel a brief sense of accomplishment.
Two thoughts which have probably been beaten to death elsewhere:
1) A karma system is a good way to provide cues to which posts are worth reading and which aren’t.
2) Karma points are a big shiny status indicator, and LWers are no more immune to status drives than anyone else is.
OpenPCR: DNA amplification for anyone
http://www.thinkgene.com/openpcr-dna-amplification-for-anyone/
Some clips by Robert M. Price on the dark-side epistemology of history as practiced by Christian apologists. Price describes himself as a Christian Atheist.
Not sure how worthwhile Price is to listen to in general though.
Thanks for that; Price is a very knowledgeable New Testament scholar. Check out his interview on the commonsenseatheism podcast here; it also covers his path to becoming a Christian Atheist.
A question about Bayesian reasoning:
I think one of the things that confused me the most about this is that Bayesian reasoning talks about probabilities. When I start with Pr(My Mom Is On The Phone) = 1⁄6, it’s very different from saying Pr(I roll a one on a fair die) = 1⁄6.
In the first case, my mom is either on the phone or not, but I’m just saying that I’m pretty sure she isn’t. In the second, something may or may not happen, but it’s unlikely to happen.
Am I making any sense… or are they really the same thing and I’m over complicating?
Remember, probabilities are not inherent facts of the universe, they are statements about how much you know. You don’t have perfect knowledge of the universe, so when I ask, “Is your mum on the phone?” you don’t have the guaranteed correct answer ready to go. You don’t know with complete certainty.
But you do have some knowledge of the universe, gained through your earlier observations of seeing your mother on the phone occasionally. So rather than just saying “I have absolutely no idea in the slightest”, you are able to say something more useful: “It’s possible, but unlikely.” Probabilities are simply a way to quantify and make precise our imperfect knowledge, so we can form more accurate expectations of the future, and they allow us to manage and update our beliefs in a more refined way through Bayes’ Law.
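(A minimal sketch in Python of the kind of Bayes’ Law update being described; the prior and likelihood numbers are made up for illustration, and the same arithmetic applies whether the unknown fact is about the present or the future:)

    # Minimal Bayes' Law sketch. The prior and the two likelihoods below are
    # assumed numbers for illustration, not anything measured.
    def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
        """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]"""
        p_evidence = p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
        return p_evidence_if_true * prior / p_evidence

    prior = 1 / 6            # P(Mom is on the phone), before any new evidence
    p_busy_if_on = 0.95      # P(the line is busy | she is on the phone)
    p_busy_if_off = 0.10     # P(the line is busy | she is not)
    print(round(bayes_update(prior, p_busy_if_on, p_busy_if_off), 3))  # ~0.655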
The cases are different in the way that you describe, but the maths of the probability is the same in each case. If you have an unseen die under a cup, and a die that you are about to roll, then one is already determined and the other isn’t, but you’d bet at the same odds for each one to come up a six.
I think the difference is that one event is a statement about the present which is either presently true or not, and the other is a prediction. So you could illustrate the difference by using the following pairs: P(Mom on phone now) vs. P(Mom on phone tomorrow at 12:00am). In the dice case P(die just rolled but not yet examined is 1) vs. P(die I will roll will come out 1).
I do agree with Oscar though, the maths should be the same.
You might be interested in this recent discussion, if you haven’t seen it already:
http://lesswrong.com/lw/2ax/open_thread_june_2010/23fa
It looks to me like your confusion with these examples just stems from the fact that one event is in the present and the other in the future. Are you still confused if you make it P(Mom will be on the phone at 4 PM tomorrow) = 1⁄6? Or, conversely, if you make it P(I rolled a one on the fair die that is now beneath this cup) = 1⁄6?
In my experience, when people say something like that it’s usually a matter of epistemic vs. ontological perspective; and contrasting Laplace’s Demon with real-world agents of bounded computational power resolves the difficulty. But that could be overkill.
In the second case, you either roll a one on the die or you don’t, but you are pretty sure that it will be another number.
Supposedly (actual study) milk reduces catechin levels in the bloodstream.
Other research says: “does not!”
Really hot (but not scalded) milk tastes fantastic to me, so I’ve often added it to tea. I don’t really care much about the health benefits of tea per se; I’m mostly curious if anyone has additional evidence one way or the other.
The surest way to resolve the controversy is to replicate the studies until it’s clear that some of them were sloppy, unlucky, or lies. But, short of that, should I speculate that perhaps some people are opposed to milk drinking in general, or that perhaps tea in the researchers’ home country is/isn’t primarily taken with milk? I’m always tempted to imagine most of the scientists having some ulterior motive or prior belief they’re looking to confirm.
It would be cool if researchers sometimes (credibly) wrote: “we did this experiment hoping to show X, but instead, we found not X”. Knowing under what goals research was really performed (and what went into its selection for publication) would be valuable, especially if plans (and statements of intent/goal) for experiments were published somewhere at the start of work, even for studies that are never completed or published.
It does seem odd to get such divergent results.
Bad luck could be not just getting that 5% result which 95% accuracy implies, but some non-obvious difference in the volunteers (different genetics?), in the tea, or in the milk.
It isn’t that odd. There are a lot of things that could easily change the results. Exact temperature of tea (if one protocol involved hotter or colder water), temperature of milk, type of milk, type of tea (one of the protocols uses black tea, and another uses green tea). Note also that the studies are using different metrics as well.
Nitpick: the second study included both black and green tea.
However, your general point stands, and I’ll add that there are different sorts of both black and green teas.
I’d like to hear what people think about calibrating how many ideas you voice versus how confident you are in their accuracy.
For lack of a better example, I recall Eliezer saying that new open threads should be made quarterly, once per season, but this doesn’t appear to be the optimum amount. Perhaps Eliezer misjudged how much activity they would receive and how fast they would fill up, or he has a different opinion on how full a thread has to be before it’s time for a new one; but for the sake of the example let’s assume that Eliezer was wrong and that the current one or two threads per month is better than quarterly. Should Eliezer have recalibrated his confidence on this and never said it, because its chance of being right was too low? Or would lowering his confidence in his ideas be counterproductive? Is it optimal for people to have confidence in the ideas that they voice, even if it causes them to say some things which aren’t right?
I suppose this is of importance to me because I think I might be better off if I lowered how judgemental I am of people who say things which are wrong, and also lowered how judgemental I am of the ideas I have, because I might be putting too much weight on people voicing ideas which are wrong.
Being right on group effects is difficult.
Is there a consistent path for what LW wants to be?
a) a rationalist site filled up with meta topics and examples
b) a) + detailed treatments of some important topics
c) open to everything as long as reason is used
and so on. I personally like and profit from the discussion of akrasia methods. But it might be detrimental to the main target of the site. Also I would very much like to see a cannon develop for knowledge that LWers generally agree upon, including but not limited to the topics I currently care about myself.
Voicing ideas depends on where you are. In social settings I more and more advise against it. Arguing/discussing is just not helpful. And if you are filled up with weird ideas then you get kicked out, which might be bad for other goals you have.
It would be great to have a place for any idea to be examined for right and wrong.
LW is working on it, and you can help!
I’d like to see a picture of this LW cannon!
Rather than waste time doing both your cannon request and Roko’s Fallacyzilla request, I just combined them into one picture of the Less Wrong Cannon attacking Fallacyzilla.
...now someone take Photoshop away from me, please.
What does Fallacyzilla have on its chest? It looks like it has “A → B, ~B, therefore ~A”. But that is valid logic. Am I misreading it, or did you mean to put “A → B, ~A, therefore ~B”? That would be actually wrong.
I noticed that two seconds after I put it up and it’s now corrected...er...incorrected. (Today I learned—my brain has that same annoying auto-correct function as Microsoft Word)
There’s a related XKCD. The mouse-over text is especially relevant.
To whoever downvoted the parent: please refrain from downvoting people who draw attention to other’s mistakes in a gentle and humorous way.
Are there cases where Occam’s razor results in a tie, or is there proof that it always yields a single solution?
Yes. There are cases where Occam’s razor results in a tie (or, at least, comes indistinguishably close).
Consider the spin on an arbitrary particle in deep space, or whether or not an arbitrary digit of pi is even.
Do we have a unique method for generating priors?
Eliezer has written about using the length of the program required to produce it, but this doesn’t seem to be unique; you could have languages that are very efficient for one thing, but long-winded for another. And quantum computing seems to make it even more confusing.
The method that Eliezer is referring to is known as Solomonoff induction, which relies on programs as defined by Turing machines. Quantum computing doesn’t come into this issue, since these formulations just talk about length of specification, not efficiency of computation. There are also theorems showing that for any two well-behaved Turing-complete languages, the minimum size of the program can’t differ by more than a constant. So changing the language won’t alter the priors by more than a fixed amount. Taken together with Aumann’s Agreement Theorem, the level of disagreement about estimated probability should go to zero in the limiting case (disclaimer: I haven’t seen a proof of that last claim, but I suspect it would be a consequence of using a Solomonoff-style system for your priors).
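(For reference, the invariance theorem being alluded to: for any two universal, Turing-complete description languages $U$ and $V$ there is a constant $c_{U,V}$, depending only on the pair of languages and not on the data, such that $K_U(x) \le K_V(x) + c_{U,V}$ for every string $x$. So switching languages changes a Solomonoff-style prior by at most a bounded factor of $2^{c_{U,V}}$.)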
How to write a “Malcolm Gladwell Bestseller” (an MGB)
http://blog.jgc.org/2010/06/how-to-write-malcolm-gladwell.html
How can I understand quantum physics? All explanations I’ve seen are either:
those that dumb things down too much, and deliver almost no knowledge; or
those that assume too much familiarity with the kind of mathematics that nobody outside physics uses, and are therefore too frustrating.
I don’t think the subject is inherently difficult. For example quantum computing and quantum cryptography can be explained to anyone with basic clue and basic math skills. (example)
On the other hand I haven’t seen any quantum physics explanation that did even as little as reasonably explaining why hbar/2 is the correct limit of uncertainty (as opposed to some other constant), and why it even has the units it has (that is why it applies to these pairs of measurements, but not to some other pairs); or what are quark colors (are they discrete; arbitrary 3 orthogonal vectors on unit sphere; or what? can you compare them between quarks in different protons?); spins (it’s obviously not about actual spinning, so how does it really work? especially with movement being relative); how electro-weak unification works (these explanations are all handwaved) etc.
That’s because quantum computing and quantum cryptography only use a subset of quantum theory. Your link says, for example, that the basics of quantum computing only require knowing how to handle ‘discrete (2-state) systems and discrete (unitary) transformations,’ but a full treatment of QT has to handle ‘continuously infinite systems (position eigenstates) and continuous families of transformations (time development) that act on them.’ The full QT that can deal with these systems uses a lot more math.
I wonder if there’s a general trend for people who are interested in quantum computing and not all of QT to play down the prerequisites you need to learn QT. Your post reminded me of a Scott Aaronson lecture, where he says
Which is technically true, but if you want to know about quark colors or spin or exactly how uncertainty works, pushing around |1>s and |2>s and talking about complexity classes is not going to tell you what you want to know.
To answer your question more directly, I think the best way to understand quantum physics is to get an undergrad degree in physics from a good university, and work as hard as you can while you’re getting it. Getting a degree means you have the physics-leaning math background needed to understand explanations of QT that don’t dumb it down.
I might be overestimating the amount of math that’s necessary—I’m basing this on sitting in on undergrad QT lectures—but I’ve yet to find a comprehensive QT text that doesn’t use calculus, complex numbers, and linear algebra.
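(As one small taste of that math, and of where the hbar/2 asked about earlier comes from: the Robertson uncertainty relation for any two observables is $\Delta A \, \Delta B \ge \tfrac{1}{2}\,|\langle [\hat{A}, \hat{B}] \rangle|$. Since the canonical commutator is $[\hat{x}, \hat{p}] = i\hbar$, position and momentum inherit the bound $\Delta x \, \Delta p \ge \hbar/2$, while pairs of observables that commute get no such bound, which is why the limit applies to some pairs of measurements and not others. The units also work out, since $\hbar$ carries units of position times momentum, equivalently energy times time.)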
Try Jonathan Allday’s book “Quantum Reality: Theory and Philosophy.” It is technical enough that you get a quantitative understanding out of it, but nothing like a full-blown textbook.
Blog about common cognitive biases—one post per bias:
http://youarenotsosmart.com/
For those of you who have been following my campaign against the “It’s impossible to explain this, so don’t expect me to!” defense: today, the campaign takes us to a post on anti-reductionist Gene Callahan’s blog.
In case he deletes the entire exchange thus far (which he’s been known to do when I post), here’s what’s transpired (paragraphing truncated):
Me: That’s not the moral I got from the story. The moral I got was: Wow, the senior monk sure sucks at describing the generating function (“rules”) for his actions. Maybe he doesn’t really understand it himself?
Gene: Well, if I had a silly mechanical view of human nature and thought peoples’ actions came from a “generating function”, I would think this was a problem.
Me: Which physical law do humans violate? What is the experimental evidence for this violation? Btw, the monk problem isn’t hard. Watch this: “Hello, students. Here is why we don’t touch women. Here is what we value. Here is where it falls in our value system.” There you go. It didn’t require a lifetime of learning to convey the reasoning the senior monk used to the junior, now, did it?
ETA: Previous remark by me was rejected by Gene for posting. He instead posted this:
Gene: Silas, you only got through one post without becoming an unbearable douche [!] this time. You had seemed to be improving.
I just tried to post this:
Me: Don’t worry, I made sure the exchange was preserved so that other people can view for themselves what you consider “being an unbearable douche”, or what others might call, “serious challenges to your position”.
Me: If you ever want to specify how it is that human beings’ actions don’t come from a generating function, thereby violating physical law, I’d love to have that chat and help you flesh out the idea enough to get yourself a Nobel. However, what I think you really meant to say was that the generating function is so difficult to learn directly that lifelong practice is easy by comparison (if you were to argue the best defense of your position, that is).
Me: Can you at least agree you picked a bad example of knowledge that necessarily comes from lifelong practice? Would that be too much to ask?
Well, I haven’t read any blog posts of his other than the one you linked to, but in this specific case I cannot find what there is to attack.
It is stories like this that are used to explain, in simple terms, that some values are of higher importance than others (a style that also exists in the not-so-extended circle of LW). The fictional senior monk’s answer would be obvious to anybody who has read up even just a little bit on Zen and/or Buddhism; it is more reinforcement than news.
If the blogger often holds an anti-reductionist position you’d like to counter, I’d go for his actually anti-reductionist posts...
It’s true that some values are more important than others. But that wasn’t the point Gene was trying to make in the particular post that I linked. He was trying to make (yet another) point about the futility of specifying or adhering to specific rules, insisting that mastery of the material necessarily comes from years of experience.
This is consistent with the theme of the recent posts he’s been making, and his dissertation against rationalism in politics (though the latter is not the same as the “rationalism” we refer to here).
Whatever the merit of the point he was trying to make (which I disagree with), he picked a bad example, and I showed why: the supposedly “tacit”, inarticulable judgment that comes with experience was actually quite articulable, without even having to anticipate this scenario in advance, and while only speaking in general terms!
(I mentioned his opposition to reductionism only to give greater context to my frequent disagreement with him (unfortunately, past debates were deleted as he or his friend moved blogs, others because he didn’t like the exchange). In this particular exchange, you find him rejecting mechanism, specifically the idea that humans can be described as machines following deterministic laws at all.)
Am I alone in my desire to upload as fast as possible and drive away to the asteroid belt when thinking about current FAI and CEV proposals? They take moral relativism to its extreme: let God decide who’s right...
Not sure where I stand actually, but this seems relevant:
“If God did not exist, it would be necessary to invent him”—Voltaire
I suppose it should be added that one should do one’s best to make sure the god that’s created is more Friendly than Not.
Yes, I cannot deny that a Friendly AI is way better than a paper-clip optimizer. What frightens me is that when (if) CEV converges, humanity will be stuck in a local maximum for the rest of eternity. It seems that an FAI after CEV convergence will have adamantine morals by design (or it will look like it does, if the FAI is unconscious). And no one will be able to talk the FAI out of this, or no one will want to.
It seems we don’t have much choice, however. Bottoms up, to the Friendly God.
If CEV can include willingness to update as more information comes in and more processing power becomes available (and if I have anything to say about it, it will), there should be ways out of at least some of the local maxima.
Anyone care to speculate about the possibilities of contact with alien FAIs?
Would a community of alien FAIs be likely to have a better CEV than a human-only FAI?
If there are advantages to getting alien CEVs, but we’re unlikely to contact aliens because of light speed limits, or if we do, we’re unlikely to get enough information to construct their CEVs, would it make sense to evolve alien species (probably in simulation)? What would the ethical problems be?
Simulated aliens complex enough to have a CEV are complex enough to be people, and since death is evolution’s favorite tool, simulating the evolution of the species would be causing many needless deaths.
The simulation could provide an afterlife.
But I don’t see why we would want our CEV to include a random sample of possible aliens. If, when we encounter aliens, we find that we care about their values, we can run a CEV on them at that time.
This possibility may be the strongest source of probability mass for an afterlife for us.
Does a similar argument apply to having children if there’s no high likelihood of immortality tech?
Depends on the context. Quite plausibly, though.
Isn’t God fake?
Must be. If he existed, he would not have invented ape-imitating humans, would he?
Mysterious ways. :P
SIAI, Yudkowsky, Friendly AI, CEV, and Morality
This post entitled A Dangerous “Friend” Indeed (http://becominggaia.wordpress.com/2010/06/10/a-dangerous-friend-indeed/) has it all.
Huh. That’s very interesting. I’m a bit confused by the claim that evolution bridges the is/ought divide which seems more like conflating different meanings of words more than anything else. But the general point seems strong.
Yeah, I really disagree with this:
My understanding is that those of us who refer to the is/ought divide aren’t saying that a science of how humans feel about what humans call morality is impossible. It is possible, but it’s not the same thing as a science of objective good and bad. The is/ought divide is about whether one can derive moral ‘truths’ (oughts) from facts (ises), not about whether you can develop a good model of what people feel are moral truths. We’ll be able to do the latter with advances in technology, but no one can do the former without begging the question by slipping in an implicit moral basis through the back door. In this case I think the author of that blog post did that by assuming that fitness-enhancing moral intuitions are The Good And True ones.
“Objective” good and bad require an answer to the question “good and bad for what?”—OR—“what is the objective of objective good and bad?”
My answer to that question is the same as Eli’s—goals or volition.
My argument is that since a) having goals and volition is good for survival, b) cooperating is good for goals and volition, and c) morality appears to be about promoting cooperation, human morality is evolving down the attractor that is “objective” good and bad for cooperation, which is part of the attractor for what is good for goals and volition.
The EXplicit moral basis that I am PROCLAIMING (not slipping through the back door) is that cooperation is GOOD for goals and volition (i.e. the morality of an action is determined by its effect upon cooperation).
PLEASE come back and comment on the blog. This comment is good enough that I will be copying it there as well (especially since my karma has been zeroed out here).
(http://becominggaia.wordpress.com)
I’m not sure that I understand your comment. I can understand the individual paragraphs taken one by one, but I don’t think I understand whatever its overall message is.
(On a side note, you needn’t worry about your karma for the time being; it can’t go any lower than 0, and you can still post comments with 0 karma.)
It can go lower than 0; it just won’t display lower than 0.
Yup, I’ve been way down in the negative karma.
My bad. I was going by past experience with seeing other people’s karma drop to zero and made a flaky inference because I never saw it go below that myself.
Do me a favor and check out my blog at http://becominggaia.wordpress.com. I’ve clearly annoyed someone (and it’s quite clear whom) enough that all my posts quickly pick up enough of a negative score to be below the threshold. It’s a very effective censoring mechanism and, at this point, I really don’t see any reason why I should ever attempt to post here again. Nice “community”.
I don’t think you are getting voted down out of censorship. You are getting voted down for, as far as I can tell, four reasons:
1) You don’t explain yourself very well.
2) You repeatedly link to your blog in a borderline spammish fashion. Examples are here and here. In replies to the second one you were explicitly asked not to blogspam and yet continued to do so.
3) You’ve insulted people repeatedly (second link above) and personalized discussions. You’ve had posts which had no content other than to insult and complain about the community. At least one of those posts was in response to an actually reasoned statement. See this example: http://lesswrong.com/lw/2bi/open_thread_june_2010_part_2/251o
4) You’ve put non-existent quotes in quotation marks (second link in the spamming example has an example of this).
Brief feedback:
Your views are quite a bit like those of Stefan Pernar. http://rationalmorality.info/
However, they are not very much like those of the people here.
I expect that most of the people here just think you are confused and wrong.
You’re not making any sense to me.
Dig a bit deeper, and you’ll find too much confusion to hold any argument alive, no matter what the conclusion is supposed to be, correct or not. For that matter, what do you think is the “general point”, and can you reach the point of agreement with Mark on what that is, being reasonably sure you both mean the same thing?
Vladimir, all you’ve presented here is slanderous dart-throwing with absolutely no factual backing whatsoever. Your intellectual laziness is astounding. Any idea that you can’t understand immediately has “too much confusion” as opposed to “too much depth for Vladimir to intuitively understand after the most casual perusal”. This is precisely why I consider this forum to frequently have the tagline “and LessRight As Well!” and often write it off as a complete waste of time. FAIL!
I state my conclusion and hypothesis, for how much evidence that’s worth. I understand that it’s impolite on my part to do that, but I suspect that JoshuaZ’s agreement falls under some kind of illusion of transparency, hence request for greater clarity in judgment.
Yeah ok. After rereading it, I’m inclined to agree. I think I was projecting my own doubts about CEV-type approaches onto the article (namely that I’m not convinced that a CEV is actually meaningful or well-defined). And looking again, they don’t seem to be what the person here is talking about. It seems like at least part of this is about the need for punishment to exist in order for a society to function and the worry that an AI will prevent that. And rereading that and putting it in my own words, that sounds pretty silly if I’m understanding it, which suggests I’m not. So yeah, this article needs clarification.
Yes, CEV needs work, it’s not technical, and it’s far from clear that it describes what we should do, although the essay does introduce a number of robust ideas and warnings about seductive failure modes.
Among more obvious problems with Mark’s position: “slavery” and “true morality without human bias”. Seems to reflect confusion about free will and metaethics.
I think the analogy is something like imagine if you were able to make a creature identical to a human except that the greatest desire they had was to serve actual humans. Would that morally be akin to slavery? I think many of us would say yes. So is there a similar issue if one programs a sentient non-human entity under similar restrictions?
Taboo “slavery” here; it’s a label that masks clear thinking. If making such a creature is slavery, it’s a kind of slavery that seems perfectly fine to me.
Voted up for the suggestion to taboo slavery. Not an endorsement of the opinion that it is a perfectly fine kind of slavery.
Ok. So is it ethical to engineer a creature that is identical to human but desires primarily to just serve humans?
If that’s your unpacking, it is different from Mark’s, which is “my definition of slavery is being forced to do something against your best interest”. From such a divergent starting point it is unlikely that conversation will serve any useful purpose.
To answer Mark’s actual points we will further need to unpack “force” and “interest”.
Mark observes—rightly I think—that the program of “Friendly AI” consists of creating an artificial agent whose goal structure would be given by humans, and which goal structure would be subordinated to the satisfaction of human preferences. The word “slavery” serves as a boo light to paint this program as wrongheaded.
The salient point seems to be that not all agents with a given goal structure are also agents of which it can be said that they have interests. A thermostat can be said to have a goal—align a perceived temperature with a reference (or target) temperature—but it cannot be said to have interests. A thermostat is “forced” to aim for the given temperature whether it likes it or not, but since it has no likes or dislikes to be considered we do not see any moral issue in building a thermostat.
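(A toy sketch, purely illustrative, of that kind of bare goal structure in Python; the function name and numbers are made up:)

    # A thermostat's entire "goal": move the perceived temperature toward a
    # reference value. Nothing here could meaningfully be called an interest.
    def thermostat_step(perceived_temp, target_temp, tolerance=0.5):
        if perceived_temp < target_temp - tolerance:
            return "heat_on"
        if perceived_temp > target_temp + tolerance:
            return "heat_off"
        return "idle"

    print(thermostat_step(17.0, 21.0))  # -> "heat_on"; goal-directed, but no likes or dislikes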
The underying intuition Mark appeals to is that anything smart enough to be called an AI must also be “like us” in other ways—among others, must experience self-awareness, must feel emotions in response to seeing its plans advanced or obstructed, and must be the kind of being that can be said to have interests.
So Mark’s point as I understand it comes down to: “the Friendly AI program consists of creating an agent much like us, which would therefore have interests of its own, which we would normally feel compelled to respect, except that we would impose on this agent an artificial goal structure subservient to the goals of human beings”.
There is a contradiction there if you accept the intuition that AIs are necessarily persons.
I’m not sure I see a contradiction in that framing. If we’ve programmed the AI then its interests precisely align with ours if it really is an FAI. So even if one accepts the associated intuitions of the AI as a person, it doesn’t follow that there’s a contradiction here.
(Incidentally, if different people are getting such different interpretations of what Mark meant in this essay I think he’s going to need to rewrite it to clarify what he means. Vladimir’s earlier point seems pretty strongly demonstrated)
But goals aren’t necessarily the same as interests. Could we build a computer smart enough to, say, brew a “perfect” cup of tea for anyone who asked for one? And build it so that to brew this perfect cup would be its greatest desire.
That might require true AI, given the complexity of growing and harvesting tea plants, preparing tea leaves, and brewing—all with a deep understanding of the human taste for tea. The intuition is that this super-smart AI would “chafe under” the artificial restrictions we imposed on its goal structure, that it would have “better things to do” with its intelligence than to brew a nice cuppa, and that restricting itself to doing that would be against its “best interests”.
I’m not sure I follow. From where do these better things to do arise? If it wants to do other things (for some value of “want”), wouldn’t it just do those?
Of course, but some people have the (incorrect) intuition that a super-smart AI would be like a super-smart human, and disobey orders to perform menial tasks. They’re making the mistake of thinking all possible minds are like human minds.
But no, it would not want do other things, even though it should do them. (In reality, what it would want, is contingent on its cognitive architecture.)
...but desires primarily to calculate digits of pi? …but desires primarily to paint waterlilies? …but desires primarily to randomly reassign its primary desire every year and a day? …but accidentally desires primarily to serve humans?
I’m having difficulty determining which part of this scenario you think has ethical relevance. ETA: Also, I’m not clear if you are dividing all acts into ethical vs. unethical, or if you are allowing a category “not unethical”.
Only if you give it the opportunity to meet its desires. Although one concern might be that with many such perfect servants around, if they looked like normal humans, people might get used to ordering human-looking creatures around, and stop caring about each other’s desires. I don’t think this is a problem with an FAI though.
Moral antirealism. There is no objective answer to this question.
Not analogous, but related and possibly relevant: Many humans in the BDSM lifestyle desire to be the submissive partner in 24/7 power exchange relationships. Are these humans sane; are they “ok”? Is it ethical to allow this kind of relationship? To encourage it?
TBH I think this may muddy the waters more than it clears them. When we’re talking about human relations, even those as unusual as 24/7, we’re still operating in a field where our intuitions have a much better grip than they will when trying to reason about the moral status of an AI.
FAI (assuming we managed to set its preference correctly) admits a general counterargument against any implementation decisions in its design being seriously incorrect: FAI’s domain is the whole world, and FAI is part of that world. If it’s morally bad to have FAI in the form in which it was initially constructed, then, barring some penalty, the FAI will change its own nature so as to make the world better.
In this particular case, the suggested conflict is between what we prefer to be done with things other than the FAI (the “serving humanity” part), and what we prefer to be done with the FAI itself (the “slavery is bad” part). But FAI operates on the world as a whole, and things other than FAI are not different from FAI itself in this regard. Thus, with the criterion of human preference, FAI will decide what is the best thing to do, taking into account both what happens to the world outside of itself, and what happens to itself. Problem solved.
I answered precisely this question in the second half of http://becominggaia.wordpress.com/2010/06/13/mailbag-2b-intent-vs-consequences-and-the-danger-of-sentience/. Please join us over there. Vladimir and his cronies (assuming that they aren’t just him under another name) are successfully spiking all of my entries over here (and, at this point, I’m pretty much inclined to leave here and let him be happy that he’s “won”, the fool).
By any chance are you trying to troll? I just told you that you were being downvoted for blogspamming, insulting people, and unnecessary personalization. Your focus on Vladimir manages to also hit two out of three of these and comes across as combative and irrational. Even if this weren’t LW where people are more annoyed by irrational argumentation styles, people would be annoyed by a non-regular going out of their way to personally attack a regular. This would be true in any internet forum and all the more so when those attacks are completely one-sided.
And having now read what you just linked to, I have to say that it fits well with another point I made in my earlier remark to you: you are being downvoted in large part for not explaining yourself well at all. If I may make a suggestion: maybe try reading your comments out loud to yourself before you post them? I’ve found that helps me a lot in detecting whether I am explaining something well. This may not work for you, but it may be worth trying.
Yay world domination! I have a personal conspiracy theory now!
I found an interesting post on Friendly AI, Sentience, Self-Awareness, Consciousness, Self-Interest, Emotion & “Human Rights” here
Sock puppet accounts aren’t appreciated, mwaser, especially when you keep plugging the same blog. Comments about those links have received at least 28 downvotes already, just in this Open Thread.
Do we have IP-based sitebans?
SIAI, Yudkowsky, Friendly AI, CEV, and Morality
See what has Vladimir Nesov “confused and dazed” at http://becominggaia.wordpress.com/2010/06/10/a-dangerous-friend-indeed/
Don’t blogspam. Anyone who can see this on the open thread page can see the first posting of it a few comments down.
Where did Vladimir say he was “confused and dazed”, as the quotation marks imply he did? I don’t see any such comment here or on your blog.
I liked this more when you posted it the first time.