Generalized versions of arguments I’ve seen on Reddit and Facebook:
If you oppose a government policy that personally benefits you, you are a hypocrite who bites the hand that feeds you.
If you support the policy that benefits you, you are a greedy narcissist whose loyalty can be bought and sold.
If you have political opinions on policies that don’t affect your well-being, you are a meddler with no skin in the game. Without being personally affected by the policy, you cannot hope to understand it.
If you oppose a government policy that personally benefits you, you are a hypocrite who bites the hand that feeds you.
If you support the policy that benefits you, you are a greedy narcissist whose loyalty can be bought and sold.
...but neither of these is a meaningfully bad thing according to post-Machiavellian political thought. Machiavelli dismantled the virtue-centric, moralizing system of “naive” political thought (finding wise, moral and incorruptible men to control society, as argued by Plato or Aquinas) and showed how the strength of a republic lies in its internal conflicts and contradictions, how a naked struggle of competing group interests can ultimately lead to dynamism and progress. This is what most people don’t understand about his legacy, and the great emancipatory power of making self-interest, not moralism, the cornerstone of politics.
So yes, in some matters we’re hypocrites, in others we’re greedy narcissists… but society holds more hope for all of its warring factions when these facts are honestly acknowledged rather than wrapped in a cloak of “virtue”-moralism! And pursuit of socioeconomic self-interest has very little cross-over with following moral codes in day-to-day interactions, anyway. (No examples for either Blue or Green, let’s pretend to be civil.)
...
So, like almost everyone in earlier times, today’s citizens succumb to a vaguely Catholic-flavoured way of seeing society, and end up less politically progressive than a 15th-century theorist, one who unjustly acquired a reputation somewhere between the Marquis de Sade[1] and Emperor Palpatine, not without the help of 19th-century clericalists and reactionaries.
[1] Early libertarian socialist, proto-feminist and human rights advocate. Never ever got a fair shake either.
A while back, David Chapman made a blog post titled “Pop Bayesianism: cruder than I thought?”, expressing considerable skepticism towards the kind of “pop Bayesianism” that’s promoted on LW and by CFAR. Yvain and I replied in the comments, which led to an interesting discussion.
I wasn’t originally sure whether this was interesting enough to link to on LW, but then one person on #lesswrong specifically asked me to do so. They said that they found my summaries of the practical insights offered by some LW posts the most valuable/interesting.
Wow, I hadn’t previously read the RichardKennaway comment you linked. I think internalizing that idea would be massively helpful in combating the tendency to view disagreement as inherently combative rather than a difference between priors.
I wish people here would stop using the loaded terms “many worlds” and “Everett branches” when the ontologically neutral “possible outcomes” is sufficient.
“Possible outcomes” is not ontologically neutral in common usage. In common usage, “possible” excludes “actual”, and that connotation is strong even when trying to use it technically.
“Multiple outcomes” might be an acceptable compromise.
I find that thinking about “Everett branches” forces my brain to come up with alternative possible outcomes, where by default it would focus all of its attention on just one. Saying to myself “you should consider other possible outcomes” doesn’t seem to have the same effect.
I have no problem with mental tricks like that. “Premortem” is another useful one, even though the project hasn’t failed (yet). As long as you do not insist on assigning any ontological significance to them.
This came up at yesterday’s London meetup: activities for keeping oneself relatable to other human beings.
We were dissecting motives behind goals, and one of mine was maintaining interests that other people could relate to. I have more pedestrian interests, but they’re the first to get dropped when my time is constrained (which it usually is), so if I end up meeting someone out in the wild, all I have to talk about is stuff like natural language parsing, utilitarian population ethics and patterns of conspicuous consumption.
Discussing it in a smaller group later, it turned out I’m not the only person who does this. It makes sense that insular, scholarly people of the sort found on LW may frequently find themselves withdrawn from common cultural ground with other people, so I thought I’d kick off a discussion on the subject.
What do you do to keep yourself relatable to other people?
EDIT: Just to clarify, this isn’t a request for advice on how to talk to people. Please don’t interpret it as such.
Richard Feynman was a theoretician as well as a ‘people person’; if you read his writings about his experiences with people, they illustrate quite well how he managed to do it.
One tactic that he employed was simply being mysterious. He knew few people could relate to a University professor and that many would feel intimidated by that, so when in the company of laypeople he never even brought it up. They would ask him what he did and he would say, “I can’t say.” If pressed, he would say something vague like, “I work at the University.” Done properly, it’s playful and coy, and even though people might think you’re a bit weird, they definitely won’t consider you unrelatable.
In my opinion there’s no need to concern yourself with activities that you don’t like, as very few people are actually interested in your interests. Whenever the topic of your interests comes up, just steer the conversation towards their life and their interests. You’ll be speaking 10% of the time yet you’ll appear like a brilliant conversationalist. If they ask you if you’ve read a particular book or heard a particular artist, just say no (but don’t sound harsh or bored). You’ll seem ‘indie’ and mysterious, and people like that. In practice, though, as one gets older, people rarely ask about these things.
It’s a common mistake that I’ve seen often in intellectual people. They assume they have to keep up with popular media so that they can have conversations. That is not true at all.
While this seems like reasonable advice, I’m not sure it’s universally good advice. Richard Feynman seemed to enjoy a level of charm many of us couldn’t hope to possess. He also had a wide selection of esoteric interests unrelated to his field.
I would also claim that there’s value in simply maintaining such an interest. During particularly insular periods where I’m absorbed in less accessible work, I find myself starting to exhibit “aspie” characteristics, losing verbal fluency and becoming socially insensitive. It’s not just about having things to talk about, but maintaining my own faculties for relating to people.
Whenever the topic of your interests comes up, just steer the conversation towards their life and their interests. You’ll be speaking 10% of the time yet you’ll appear like a brilliant conversationalist.
If everyone in the conversation is employing this method, then chances are higher that the others actually want to hear about your esoteric topics. If you pause early and give them a chance to talk about themselves (or for them to press for more), that’ll keep you synched up with what they want.
I was thinking more like two people each trying to get the other person to do that, like people at a door getting jammed saying “After you,” “After you,” etc.
All the times this has happened to me, one person would come up with a Schelling-pointy reason why the other person’s recent life was more interesting (e.g. they had just come back from a trip abroad or something).
I have never actually seen this happen, and I use that method all the time. I don’t have an explanation for why, since I rarely think about problems I don’t have.
I use the recaplets on Television Without Pity to keep up with the basic plots and cliffhangers of TV shows I don’t watch but most of my friends do. That way I don’t drop out of conversations just because they’re talking about True Blood.
Note: the only problem this strategy has caused for me is that my now-bf assumed I was a GoT watcher (rather than someone who had read the books and TWOP’d the show recaps), invited me over to watch, and assumed I turned him down because I wasn’t interested in him rather than because I was indifferent to the show. We sorted it out eventually.
Is there something similar, but for sports? I usually get lost when conversation turns to the local sports team. I couldn’t find anything with a quick google, but I’m probably not using the right search terms.
For a general overview of what’s going on in the baseball world, this is a pretty good place to start. There are also plenty of blogs devoted to individual teams, though I’m not really in a position to make recommendations, unless you happen to be looking for a San Francisco Giants blog, in which case I highly recommend this blog. Can’t really help with other sports.
Haven’t the foggiest. I don’t really have friends who talk about sports. I read The New York Times Magazine and The New Yorker so I end up really well informed on a couple narrow sports things that get features. And then my dad and brother rib me for knowing nothing about football, but everything about the Manning dynasty.
What do you do to keep yourself relatable to other people?
Well, I maintain pedestrian interests, and I consider it a failure condition not to attempt to participate in them, comparably bad to going off my diet.
Downside: This is sometimes frustrating. I like gaming and I like Game X, but sometimes I will think “I’m only playing Game X right now so I have something to talk about in the car with Friend X.” Alternatively, I sometimes play a game and then think “But no one other than me cares about this game, so playing it feels inefficient.”
Also, some of the other people who share pedestrian interests with me will work to prevent me from dropping them. For instance, if Game Y is a pedestrian interest, and my wife wants me to play Game Y with her, that doesn’t just get dropped regardless of how busy I am.
Downside: This does sometimes result in me feeling overworked (I will plan events in Game Y as I am passing out in bed; again, this seems efficiency-related).
Also, I spend a fair amount of time trying to help various friends/family members directly. So I frequently have that conversational topic of “How is that problem we discussed earlier going?”
Downside: This boosts my stress level again, because it increases the number of things I’m worrying about.
Finally, I have relatability notes on my phone for my wife that pop up on a semi-frequent basis. I also have reminders like these for some of the helping-people projects I’m doing, and even reminders to offer better advice on Game Y.
Downside: I’m really beginning to hate my phone’s “You have a reminder!” noise. Also, sometimes the reminders are depressing. I have a reminder “Spend time hanging out with your best friend” that has been unchecked for more than a month.
Potential Silver Lining: That being said, sometimes the reminder is encouraging: It’s nice to be told “Make time for yourself.” and realize “Why yes, I am doing that right now. Ahhhh.”
Note: I’m positive this isn’t advice, because after looking at it all posted together, my conclusion is not “Other people should do this.” but “I have a problem, and this is why I’m on anti-anxiety meds.”
Obvious options are consuming popular culture, e.g. popular TV shows, music, or sports. There’s a lot of good TV out there these days so it shouldn’t be hard to get hooked on at least one show you can talk to a lot of people about (Game of Thrones?).
If you really insist on the “you do” part, I don’t do anything with this explicit goal. I just talk.
A while ago I heard from Jim Rohn that even if you haven’t had a near-death experience, everyone has something interesting to talk about.
At the time I said to myself: hey, I do have an experience that sort of qualifies as a near-death experience. I had 5 days of artificial coma, with some strange paranormal experiences after waking up out of it.
At the time I still had a hard time conversing with people, even though I had experiences that qualified as interesting. I just lacked the skill to talk about them.
I don’t think that relating to other people is primarily a question of the content of conversation.
It’s about emotions. It’s about empathy. It’s about getting out of your head.
Instead of spending time in an activity that you could tell other people about, spend more time actually talking to people and practice relating on an emotional level.
Alternatively, I just read about a veep who was told at management training to start by asking about people’s families, and then talk about business matters. As a result, the people who thought she was cold and disliked them switched to thinking she was friendly and caring.
It’s about emotions. It’s about empathy. It’s about getting out of your head.
Instead of spending time in an activity that you could tell other people about, spend more time actually talking to people and practice relating on an emotional level.
This seems very platitude-y. In practice there presumably needs to be some sort of context for “relating on an emotional level”. You’re unlikely to walk up to someone and start talking about all these awesome emotions you’ve been having.
To clarify, this isn’t some problem I need solving. It’s an observation that if I lock myself up in a room for a month watching maths lectures and writing essays on neoclassical expenditure theories, it becomes harder to engage socially with people.
if I lock myself up in a room for a month watching maths lectures and writing essays on neoclassical expenditure theories, it becomes harder to engage socially with people.
This seems very platitude-y. In practice there presumably needs to be some sort of context for “relating on an emotional level”. You’re unlikely to walk up to someone and start talking about all these awesome emotions you’ve been having.
It doesn’t need much context. If someone asks you “How are you?” you can reasonably answer that you experienced something yesterday that made you feel XYZ.
Intelligent people have a tendency to overcomplicate it. A lot of the small talk that happens between normal people doesn’t have much content.
To clarify, this isn’t some problem I need solving. It’s an observation that if I lock myself up in a room for a month watching maths lectures and writing essays on neoclassical expenditure theories, it becomes harder to engage socially with people.
It doesn’t help to catch up with popular culture while you are locked up in your room. The problem is being locked up in a room and being socially isolated, not the specific content that you consume.
Instead of spending 2 hours locked up in your room to catch up with popular culture, spend that time going out and talking to people.
I think that the advice is well suited to your situation. I suspect that you don’t realize this because you spend so much time isolating yourself from people to study math.
I think it’s great that so many people here are extremely intelligent, but one can hardly expect to relate very well to most people when one spends most of one’s time studying extremely obscure subjects alone, sitting down and barely moving. That’s pretty much the antithesis of what normal people enjoy.
Balance intellectual activities with specifically non-intellectual activities that are not based around the passive consumption of media. Actually get out into the world, move your body in new ways, interact with a variety of people, seek novel experiences, travel around to new places far away and try to find new aspects of the area where you live. Basically just do the opposite of limiting your physical mobility and emotional expressiveness in order to focus on logical thinking about intangible intellectual subjects.
Watching TV is not an intellectual activity in any real sense. Most TV stimulates the senses and evokes emotions in the viewer through storylines and such. This is obviously very different from studying mathematics seriously.
Would it surprise you to learn I’d recently spent two weeks swing dancing in a pop-up shanty-town in rural Sweden? That I clock up around thirty miles a week on foot in one of the world’s largest metropolitan conurbations? That I nearly joined a travelling circus school a few years ago? That I’ve given solo vocal performances on stage for six nights a week in front of hundreds of people?
With respect, you have no knowledge of my “situation”. Please don’t presume to offer me advice on the basis of whatever assumptions you’ve incorrectly conjured up.
Those all sound like some pretty awesome activities!
My question to you, with respect, is this: why not just reduce the amount of hours per day you spend on serious, solitary intellectual work and fill the balance with externally oriented, social activities till you find a maintainable balance of sociability vs. studying?
Maybe I’m misinterpreting you, but it seems you’re essentially saying that when you (temporarily) hyper-focus on solitary, intellectual activities you (temporarily) encounter more difficulty in conversations. This doesn’t surprise me, and it seems evident that the only real solution is to find the right balance for you and accept the inherent trade-offs.
My question to you, with respect, is this: why not just reduce the amount of hours per day you spend on serious, solitary intellectual work and fill the balance with externally oriented, social activities till you find a maintainable balance of sociability vs. studying?
It’s not like I have some slider on my desktop, with “sit in a box, autistically rocking back and forth, counting numbers” at one end, and “rakishly sample the epicurean delights of the world” at the other. I have time and work and study commitments. I have externally-imposed scheduling. I have inscrutable internal motivation levels that need to be contended with.
It’s a case of resource management, and occasionally when managing those resources I’ll have to focus on one area to the exclusion of another. That’s fine. It’s not something there’s a “solution” to. It’s a condition all moderately busy people have to operate under.
Would it surprise you to learn I’d recently spent two weeks swing dancing in a pop-up shanty-town in rural Sweden? That I clock up around thirty miles a week on foot in one of the world’s largest metropolitan conurbations? That I nearly joined a travelling circus school a few years ago? That I’ve given solo vocal performances on stage for six nights a week in front of hundreds of people?
Those sound like pretty good topics for conversations with people.
To a degree. Swing dancing in Sweden is a fairly unusual way to spend your summer holiday.
I think you and I have had exchanges about “optimising for awesomeness” in the past. In some ways, having “awesome” talents or hobbies or experiences is no more relatable than having insular and nerdy ones. It’s just cooler.
What? I’m under the impression that there are a much larger number of people who enjoy hearing me talk about trips around Europe or exams while drunk than about models of ultra-high-energy cosmic ray propagation.
I think we’re talking at crossed purposes here. Relatability isn’t popularity. If I wrestled a Bengal tiger into submission and rode it across the subcontinent, I’m sure a lot of people would want to hear me talk about that. But unless they’d also ridden across India on a subdued tiger, it wouldn’t foster a sense of empathy, kinship or mutual understanding.
It doesn’t need much context. If someone asks you “How are you?” you can reasonably answer that you experienced something yesterday that made you feel XYZ.
Intelligent people have a tendency to overcomplicate it. A lot of the small talk that happens between normal people doesn’t have much content.
I’m under the impression that that often doesn’t work very well with most males—I find it relatively hard to emotionally relate with them unless we have something in particular to talk about. (Then again, biased sample, yadda yadda yadda.)
One strategy: Take insular, scholarly interest in a broadly popular subject. For example, I’m interested in APBRmetrics and associated theoretical questions about the sport of basketball. One nice plus to this hobby is that it also leaves me with pretty up-to-date non-technical knowledge about NBA and college basketball.
I seldom watch TV and know very little of contemporary popular culture, and most of my conversations are about my experiences in meatspace (travels abroad, stuff I do with friends, etc.), my plans for the future, asking the other person about their experiences in meatspace and plans for the future, and (for people who appreciate it) physics.
But why do you want to keep yourself relatable to (arbitrary) people, rather than looking for people you’re already relatable to, anyway?
The De Broglie-Bohm theory is a very interesting interpretation of quantum mechanics. The highlights of the theory are:
The wavefunction is treated as being real (just as in MWI—in fact the theory is compatible with MWI in some ways),
Particles are also real, and are guided deterministically by the wavefunction. In other words, it is a hidden variable theory.
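For concreteness, the single-particle guiding equation (a standard textbook form, quoted here for reference rather than from the discussion) is

\[
\frac{d\mathbf{Q}}{dt} \;=\; \frac{\hbar}{m}\,\operatorname{Im}\!\left(\frac{\nabla\psi}{\psi}\right)\!\Bigg|_{\mathbf{Q}(t)} ,
\]

and if the particle positions start out distributed as \(|\psi|^2\), the guiding dynamics keeps them distributed that way (equivariance), which is how the theory recovers the Born probabilities.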
At first it might seem to be a cop-out to assume the reality of both the wavefunction and of actual point particles. However, this leads to some very interesting conclusions. For example, you don’t have to assume wavefunction collapse (as per Copenhagen) but at the same time, a single preferred Universe exists (the Universe given by the configuration of the point particles). But that’s not all.
It very neatly explains double-slit diffraction and Bell’s experiments in a purely deterministic way using hidden variables (it is thus necessarily a non-local theory). It also explains the Born probabilities (the one thing that is missing from pure MWI; Eliezer has alluded to this).
Among other things, De Broglie-Bohm theory allows quantum computers but doesn’t allow quantum immortality—in this theory if you shoot yourself in the head you really will die. You won’t suddenly be yanked into an alternate Universe.
The reason I’m mentioning it is because of experiments done by Yves Couder’s group (http://math.mit.edu/~bush/?page_id=484) who have managed to build a crude and approximate physical system that incidentally illustrates some of the properties of De Broglie-Bohm theory. They use oil droplets that generate waves and the resulting waves guide the droplets. Most importantly, the droplets have ‘path memory’, so if a droplet is directed towards a double slit, it can ‘interfere’ with itself and produce nice double-slit diffraction fringes. One of their experiments that was just in the news recently illustrated particle behavior very similar to what the Schrodinger equation predicts: http://math.mit.edu/~bush/?p=2679
Now, De Broglie-Bohm theory does not seem to be one of the more popular interpretations of QM, because of its non-locality (this doesn’t produce causal paradoxes like the Grandfather paradox, though, despite what some might say). However, in my opinion this is very unfair. Locality is just a relic from classical physics. I haven’t seen a single good argument why the eventual theory of everything should be local.
If you subscribe to MWI, non-locality is a reason to abandon De Broglie-Bohm theory, but a relatively minor one; the main one is the way it insists on neglecting the reality of the guide wave.
If you take the guide wave to be a dynamical entity, then it’s real and it’s all happening, so all the worlds are real; what, then, does the particle do here?
If you take the guide wave to be the rules of the universe (a tack I’ve heard) then the rules of the universe contain civilizations—literally, not as hypothetical implications. Choosing to use timeless physics (the response I got) doesn’t change this.
If you take the guide wave to be a dynamical entity, then it’s real and it’s all happening, so all the worlds are real; what, then, does the particle do here?
The particle position recovers the Born probabilities. (It even does so deterministically, unlike Objective Collapse theories.) The wave function encodes lots of information, but it’s the particle that moves our measuring device, and the measuring device that moves our brains. If we succeed in simplifying our theory only by giving up on saving the phenomenon, then our theory is too simple.
But once you decide you’re going to interpret the wave function as distributing probability among some set of orthogonal subspaces, you’re already compelled into the Born probabilities.
All you need to decide that you ought to do that is the general conclusion that the wavefunction represents some kind of reality-fluid. Deciding that the nature of this reality fluid is to be made of states far more specific than any entity within quantum mechanics comes rather out of the blue.
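The standard result behind “already compelled” is Gleason’s theorem (stated here for context): in a Hilbert space of dimension three or more, any probability assignment that is additive over orthogonal subspaces must have the form

\[
P(i) \;=\; \langle\psi|\hat{P}_i|\psi\rangle \;=\; |c_i|^2
\]

for some state (more generally, \(P(i)=\operatorname{Tr}(\rho\,\hat{P}_i)\) for a density operator \(\rho\)); no other consistent assignment exists.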
But the phrase “reality fluid” is just a place-holder. It’s a black box labeled “whatever solves this here problem”. What we see is something particle-like, and it’s the dynamics relating our observations over time that complicates the story. As Schrödinger put it:
[T]he emerging particle is described … as a spherical wave … that impinges continuously on a surrounding luminescent screen over its full expanse. The screen however does not show a more or less constant uniform surface glow, but rather lights up at one instant at one spot[.]
One option is to try to find the simplest theory that explains away the particle-like appearance anthropically, which will get you an Everett-style (‘Many Worlds’-like) interpretation. Another option is to take the sudden intrusion of the Born probabilities as a brute law of nature, which will get you a von Neumann-style (‘Collapse’-like) interpretation. The third option is to accept the particle-like appearance as real, but theorize that a more unitary underlying theory relates the Schrödinger dynamics to the observed particle, which will get you a de Broglie-style (‘Hidden Variables’) interpretation. You’ll find Bohmian Mechanics more satisfying than Many Worlds inasmuch as you find MW’s anthropics hand-wavey or underspecified; and you’ll find BM more satisfying than Collapse inasmuch as you think Nature’s Laws are relatively simple, continuous, scalable, and non-anthropocentric.
If BM just said, ‘Well, the particle’s got to be real somehow, and the Born probabilities have to emerge from its interaction with a guiding wave somehow, but we don’t know how that works yet’, then its problems would be the same as MW’s. But BM can formally specify how “reality fluid” works, and in a less ad-hoc way than its rivals. So BM wins on that count.
Where it loses is in ditching locality and Special Relativity, which is a big cost. (It’s also kind of ugly and complicated, but it’s hard to count that against BM until we’ve seen a simpler theory that’s equally fleshed out re the Measurement Problem.)
Deciding that the nature of this reality fluid is to be made of states far more specific than any entity within quantum mechanics comes rather out of the blue.
Would you say that acknowledging the Born probabilities themselves ‘comes out of the blue’, since they aren’t derived from the Schrödinger equation? If not, then where are physicists getting them from, since it’s not the QM dynamics?
I wouldn’t call Everett ‘Anthropic’ per se. I consider it an application of the Generalized Anti-Zombie Principle: Here you’ve got this structure that acts like it’s sapient†. Therefore, it is.
As for BM formally specifying how the reality fluid works… need I point out that this is 100% entirely backwards, being made of burdensome details?
Would you say that acknowledging the Born probabilities themselves ‘comes out of the blue’, since they aren’t derived from the Schrödinger equation?
The Schrödinger Equation establishes linearity, thus directly allowing us to split any arbitrary wavefunction however we please. Already we can run many worlds side-by-side. The SE’s dynamics lead to decoherence, which makes MWI have branching. It’s all just noticing the structure that’s already in the system.
Edited to add †: by ‘acts like’ I mean ‘has the causal structure for it to be’
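Spelled out, the linearity point is just that for any decomposition \(\psi = \psi_1 + \psi_2\),

\[
i\hbar\,\partial_t(\psi_1+\psi_2) \;=\; \hat{H}(\psi_1+\psi_2) \;=\; \hat{H}\psi_1 + \hat{H}\psi_2 ,
\]

so each term evolves exactly as it would alone, and an arbitrary split of the wavefunction can be tracked term by term.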
The Schrödinger Equation establishes linearity, thus directly allowing us to split any arbitrary wavefunction however we please.
But many of the more general Lagrangians of particle physics are non-linear; in general there should be higher-order, non-linear corrections. So Schrödinger is a single-particle/linearized approximation. What does this do for your view of many worlds? When we try to extend many worlds naively to QFTs we run into all sorts of weird problems (much of the universal wavefunction’s amplitude doesn’t have well-defined particle number, etc.). Shouldn’t we expect the ‘proper’ interpretation to generalize nicely to the full QFT framework?
What are you talking about? I’ve only taken one course in quantum field theory, but I’ve never heard of anything where quantum mechanics was not linear. Can you give me a citation? It seems to me that failure of linearity would either be irrelevant (superlinear case, low amplitudes) or so dominant that any linearity would be utterly irrelevant and the Born Probabilities wouldn’t even be a good approximation.
Also, by ‘the Schrödinger equation’ I didn’t mean the special form with the fixed-particle Hamiltonian and p²/2m kinetic energy—I meant the general form:
iℏ (d/dt) Ψ = Ĥ Ψ
Note that the Dirac Equation is a special case of this general form of the Schrödinger Equation. MWI, ‘naive’ or not, has no trouble with variations in particle number.
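For instance, plugging the free Dirac Hamiltonian into that general form (standard notation, written out for concreteness) gives

\[
i\hbar\,\frac{\partial\Psi}{\partial t} \;=\; \left(c\,\boldsymbol{\alpha}\cdot\hat{\mathbf{p}} + \beta m c^2\right)\Psi ,
\]

which is the Dirac equation; nothing about the general form ties it to a fixed particle number.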
I’m not sure what you mean by ‘anthropic per se’. Everett (MW) explains apparent quantum indeterminism anthropically, via indexical ignorance; our knowledge of the system as a whole is complete, but we don’t know where we in the system are at this moment. De Broglie (HV) explains apparent quantum indeterminism via factual ignorance; our knowledge of the system’s physical makeup is incomplete, and that alone creates the appearance of randomness. Von Neumann (OC) explains apparent quantum indeterminism realistically; the world just is indeterministic.
The SE’s dynamics lead to decoherence, which makes MWI have branching. It’s all just noticing the structure that’s already in the system.
This is either a very implausible answer, or an answer to a different question than the one I asked. Historically, the Born Probabilities are derived directly from experimental data, not from the theorized dynamics. The difficulty of extracting the one from the other, of turning this into a single unified and predictive theory, just is the ‘Measurement’ Problem. Bohm is taking two distinct models and reifying mechanisms for each to produce an all-encompassing theory; maybe that’s useless or premature, but it’s clearly not a non sequitur, because the evidence for a genuine wave/particle dichotomy just is the evidence that makes scientists allow probabilistic digressions from the Schrödinger equation.
MW is not a finished theory until we see how it actually unifies the two, though I agree there are at least interesting and suggestive first steps in that direction. BM’s costs are obvious and clear and formalized, which is its main virtue. Our ability to compare those costs to other theories’ is limited so long as it’s the only finished product under evaluation, because it’s easy to look simple when you choose to only try to explain some of the data.
I see what you mean now about anthropism. Yes, ignorance is subjective. Incidentally, this is how it used to be back before quantum ever came up.
This is either a very implausible answer, or an answer to a different question than the one I asked. Historically, the Born Probabilities are derived directly from experimental data, not from the theorized dynamics
Historically, Born was way before Everett and even longer before decoherence, so that’s not exactly a shocker. Even in Born’s time it was understood that subspaces had only one way of adding up to 1 in a way that respects probability identities—I’d bet dollars to donuts that that was how he got the rule in the first place, rather than doing a freaking curve fit to experimental data. What was missing at the time was any way to figure out what the wavefunction was, between doing its wavefunctiony thing and collapse.
Decoherence explains what collapse is made of. With it around, accepting the claim ‘The Schrödinger Equation is the only rule of dynamics; collapse is illusory and subjective’, which is basically all there is to MWI, requires much less bullet-biting than before it was introduced. There is still some, but those bullets are much chewier for me than any alternate rules of dynamics.
(incidentally, IIRC, Shminux, you hold the above quote but not MWI, which I find utterly baffling—if you want to explain the difference or correct me on your position, go ahead)
maybe that’s useless or premature, but it’s clearly not a non sequitur
Decoherence explains what collapse is made of. With it around, accepting the claim ‘The Schrödinger Equation is the only rule of dynamics; collapse is illusory and subjective’, which is basically all there is to MWI
Well, you still need a host of ideas about how to actually interpret a diagonal density matrix. Because you don’t have Born probabilities as a postulate, you have this structure but no method for connecting it back to lab-measured values.
While it seems straightforward, that’s because many-worlds advocates are doing sleight of hand. They use probabilities to build the theory (because lab experiments appear to be describable only probabilistically), and later they kick away that ladder while wanting to keep all the structure that comes with it (density matrices, etc.).
I know of many good expositions that start with the probabilities and use them to develop the form of the Schrödinger equation from Galilean relativity and cluster decomposition (Ballentine, parts of Weinberg).
I don’t know of any good expositions that go the other way. There is a reason Deutsch, Wallace, etc. have spent so much time trying to derive the Born probabilities in a many-worlds context: it’s an important problem.
Hold on a moment. What ladder is being kicked away here?
We’ve got observed probabilities. They’re the experimental results, the basis of the theory. The theory then explains this in terms of indexical ignorance (thanks, RobbBB). I don’t see a kicked ladder. Not every observed phenomenon needs a special law of nature to make it so.
Instead of specially postulating the Born Probabilities, elevating them to the status of a law of nature, we use them to label parts of the universe in much the same way as we notice, say, hydrogen or iron atoms: ‘oh, look, there’s that thing again’. In this case, it’s the way that sometimes, components of the wavefunction propagate such that different segments won’t be interfering with each other coherently (or, in any sane basis, at all).
Also, about density matrices—what’s the problem? We’re still allowed to not know things and have subjective probabilities, even in MWI. Nothing in it suggests otherwise.
The SE’s dynamics lead to decoherence, which makes MWI have branching. It’s all just noticing the structure that’s already in the system.
That’s just regurgitating the teacher’s password. MWI does not even account for radioactive decay. In other words, if you find Schrödinger’s cat dead, how long has it been dead for?
Regurgitating the teacher’s password is a matter of mental process, and you have nowhere near the required level of evidence to make that judgement here.
As for radioactive decay, I’m not clear what you require of MWI here. The un-decayed state has amplitude which gradually diminishes, leaking into other states. When you look in a cat box, you become entangled with it.
If the states resulting from death at different times are distinguishable, then you can go ahead and distinguish them, and there’s your answer (or, if it could be done in principle but we’re not clever enough, then the answer is ‘I don’t know’, but for reasons that don’t really have bearing on the question).
Where it really gets interesting is if the states resulting from cat-death are quantum-identical. Then it’s exactly like asking, in a diffraction-grating experiment, ‘Which slit did the photon go through?‘. The answer is either ‘mu’, or ‘all of them’, depending on your taste in rejecting questions. The final result is the weighted sum of all of the possible times of death, and no one of them is correct.
Note that for this identical case to apply, nothing inside the box gets to be able to tell the time (see note), which pretty much rules out its being an actual cat.
So… If you find Schrödinger’s cat dead, then it will have had a (reasonably) definite time of death, which you can determine only limited by your forensic skills.
~~
Note: The issue is that of cramming time-differentiating states into one final state. The only way you can remove information like that is to put it somewhere else. If you have a common state that the cat falls into from a variety of others, then the radiation from the cat’s decays into this common state encodes this information. It will be lost to entropy, but that just falls under the aegis of ‘we’re not clever enough to get it back out’ again, and isn’t philosophically interesting.
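In symbols, the distinction drawn above is the standard interference rule (added for clarity): for distinguishable intermediate alternatives you add probabilities, and for quantum-identical ones you add amplitudes,

\[
P_{\text{distinguishable}} \;=\; \sum_t |a(t)|^2 , \qquad
P_{\text{identical}} \;=\; \Big|\sum_t a(t)\Big|^2 ,
\]

which is why “when did it die?” becomes as ill-posed as “which slit did it go through?” in the identical case.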
Regurgitating the teacher’s password is a matter of mental process, and you have nowhere near the required level of evidence to make that judgement here.
Yeah, sorry, that was uncalled for.
The un-decayed state has amplitude which gradually diminishes, leaking into other states.
Right. And each of those uncountably many (well, finitely many for a finite cutoff, or countably many for a finite box) states corresponds to a different time of death (modulo states which have the same time of death but different emitted-particle momenta).
When you look in a cat box, you become entangled with it.
Yes, with all of those states.
If the states resulting from death at different times are distinguishable
They must be, since they result in different macroscopic effects (from the forensic time-of-death measurement).
Where it really gets interesting is if the states resulting from cat-death are literally, quantum-identical.
Yes, but in this case they are not.
Then it’s exactly like asking, in a diffraction-grating experiment, ‘Which slit did the photon go through?’.
Not at all. In the diffraction experiment you don’t distinguish between different paths, you sum over them.
The final result is the sum of all of the possible times of death, and no one of them is correct.
No, you measure the time pretty accurately, so wrong-time states do not contribute.
Note that for this latter case to apply, nothing inside the box gets to be able to tell the time (cramming time-differentiating states into one final state would violate Liouville’s theorem or some quantum equivalent, the name of which slips my mind), which pretty much rules out its being an actual cat.
Not quite. If the cat does not interact with the rest of the world, the cat is a superposition of all possible decay states. (I am avoiding the objective collapse models here.) It’s pretty actual, except for having to be at near 0 K to avoid leaking information about its states via thermal radiation.
So… If you find Schrödinger’s cat dead, then it will have had a (reasonably) definite time of death, which you can determine only limited by your forensic skills.
Yes it will. But a different time in different “worlds”. Way too many of them.
The first few responses here boil down to the last response:
But a different time in different “worlds”. Way too many of them.
Why is it too many? I don’t understand what the problem is here. When you’d collapse the wavefunction, you’re often tossing out 99.9999% of said wavefunction. In MWI or not, that’s roughly splitting the world into 1 million parts and keeping one. The question is the disposition of the others.
Where it really gets interesting is if the states resulting from cat-death are literally, quantum-identical.
Yes, but in this case they are not.
Well, yes, because it’s a freaking cat. I had already dealt with the realistic case and was attempting to do something with the other one by explicitly invoking the premise even if it is absurd. The following pair of quote-responses (responding to the lines with ‘diffraction’ and ‘sum of all the possible’) was utterly unnecessary because they were in a conditional ‘if A then B’, and you had denied A.
Of course, one could decline to use a cat and substitute a system which can maintain coherence, in which case the premise is not at all absurd. This was rather what I was getting at, but I’d hoped that your ability to sphere the cow was strong enough to give a cat coherence.
Why is it too many? I don’t understand what the problem is here. When you’d collapse the wavefunction, you’re often tossing out 99.9999% of said wavefunction. In MWI or not, that’s roughly splitting the world into 1 million parts and keeping one. The question is the disposition of the others.
Well, if you are OK with the world branching infinitely many ways every infinitesimally small time interval in every infinitesimally small volume of space, then I guess you can count it as “the disposition”. This is not, however, the way MWI is usually presented.
Roughly speaking: if you’re working in an interpretation with collapse (whether objective or not), and it’s too early to collapse a wavefunction, then MWI says that all those components you were declining to collapse are still in the same world.
So, since you don’t go around collapsing the wavefunction into infinite variety of outcomes at every event of spacetime, MWI doesn’t call for that much branching.
Roughly speaking: if you’re working in an interpretation with collapse (whether objective or not), and it’s too early to collapse a wavefunction
I don’t understand what “too early to collapse a wavefunction” means and how it is related to decoherence.
For example, suppose we take a freshly prepared atom in an excited state (it is simpler than radioactive decay). QFT says that its state evolves into a state in Fock space of the form
ground state of the atom + excited states of the EM vacuum (a photon).
I mean “+” here loosely, to denote that it’s a linear combination of product states with different momenta. The phase space of the photon includes all possible directions of momentum, as well as anything else not constrained by the conservation laws. The original excited state of the atom is still there, as is the original ground state of the EM field, but it’s basically lost in the phase space of all possible states.
Suppose there is also a detector surrounding the atom, which is sensitive to this photon (we’ll include the observer looking at the detector in the detector to avoid the Wigner’s friend discussion). Once the excitation of the field propagates far enough to reach the detector, the total state is evolved into
ground state of the atom + excited states of the detector.
So now the wave function of the original microscopic quantum system has “collapsed”, as far as the detector is concerned. (“decohered” is a better term, with less ontological baggage). I hope this is pretty uncontroversial, except maybe to a Bohmian, to Penrose, or to a proponent of objective collapse, but that’s a separate discussion.
So now we have at least as many worlds/branches as there were states in the Fock space. Some will differ by detection time, others by the photon direction, etc. The only thing limiting the number of branches are various cutoffs, like the detector size.
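In symbols, the evolution described above is the standard Wigner–Weisskopf form (a textbook sketch, not quoted from anyone in this thread):

\[
|e\rangle|0\rangle \;\longrightarrow\; \alpha(t)\,|e\rangle|0\rangle \;+\; \sum_{\mathbf{k}} \beta_{\mathbf{k}}(t)\,|g\rangle|1_{\mathbf{k}}\rangle ,
\]

with \(|\alpha(t)|^2\) decaying over time and each photon mode \(\mathbf{k}\) labelling a different emission direction and time.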
That’s right, but it doesn’t add up to what you said about spacetime being saturated with ‘world-branching’ events.
While the decay wave is propagating, for instance, nothing’s decohering. It’s only when it reaches the critically unstable system of the detector that that happens.
It’s only when it reaches the critically unstable system of the detector that that happens.
There is no single moment like that. If the distance from the atom to the detector is r and we prepare the atom at time 0, the interaction between the atom/field states and the detector states (i.e. decoherence) starts at time r/c and continues on.
interaction between the atom/field states and the detector states (i.e. decoherence) starts at time r/c and continues on
Depends on your framework, but it will actually start even earlier than that in a general QFT. The expectation value will be non-zero for all times t. I suppose the physical interpretation is something like a local fluctuation tripping the detector.
Of course, commutators will be non-zero as locality requires.
I don’t understand what “too early to collapse a wavefunction” means and how it is related to decoherence.
I see that my short, simple answer didn’t really explain this, so I’ll try the longer version.
Under a collapse interpretation, when is it OK to collapse things and treat them probabilistically? When the quantum phenomena have become entangled with something with enough degrees of freedom that you’re never going to get coherent superposition back out, i.e. once it has decohered. (If you collapse earlier than this, you lose the coherent superpositions, and you get two one-slit patterns added to each other, which is all wrong.)
This is also the same criterion for when you consider worlds to diverge in MWI. Therefore, in a two-slit experiment you don’t have two worlds, one for each slit. They’re still one world. Unless of course they got entangled with something messy, in which case that caused a divergence.
Now… once it hits the messy thing (for simplicity let’s say it’s the detector), you’re looking at a thermally large number of worlds, and the weights of these worlds are precisely given by the conservation of squared amplitude, a.k.a. the Born Rule.
I take it that it bothers you that scattering events producing a thermally large number of worlds are the norm rather than the exception? Quantum mechanics occurs in Fock space, which is unimaginably, ridiculously huge, as I’m sure you’re well aware. The wavefunction is like a gas escaping from a bottle into outer space. And the gas escapes over and over again, because each ‘outer space’ is just another bottle to escape from by scattering.
Or is what’s bugging you that MWI is usually presented as creating fewer than a thermally large number of worlds? That’s a weakness of common explanations, sure. Examples may replace 10^(mole) with 2 for simplicity’s sake.
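A toy numerical sketch of “weights given by the conservation of squared amplitude” (illustrative only; the two-component state and the random unitary are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# A normalized two-branch state: weights |0.6|^2 = 0.36 and |0.8i|^2 = 0.64.
psi = np.array([0.6, 0.8j])
print(np.abs(psi) ** 2)                # Born weights: [0.36, 0.64]

# Any unitary evolution (here a random unitary, built from the QR
# decomposition of a complex Gaussian matrix) redistributes amplitude.
a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
u, _ = np.linalg.qr(a)

psi_later = u @ psi
print(np.abs(psi_later) ** 2)          # individual weights change,
print((np.abs(psi_later) ** 2).sum())  # ...but the total stays 1.0
```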
I think we are in agreement here that interacting with the detector initially creates a messy entangled object. If one believes Zurek, it then decoheres/relaxes into a superposition of eigenstates through einselection, while bleeding away all other states into the “environment”. Zurek seems to be understandably silent on whether a single eigenstate survives (collapse) or they all do (MWI).
What I was pointing out with the spontaneous emission example is that there are no discrete eigenstates there, thus all possible emission times and directions are on an equal footing. If you are OK with this being described as MWI, I have no problem with that. I have not seen it described this way, however. In fact, I do not recall seeing any treatment of spontaneous emission in the MWI context. I wonder why.
Another, unrelated issue I have not seen addressed by MWI (or objective collapse) is how, in the straight EPR experiment on a singlet with two aligned detectors, one necessarily gets opposite spin measurements, even though each spacelike-separated interaction produces “two worlds”, up and down. Apparently these 2x2 worlds somehow turn into just 2 worlds (updown and downup), with the other two (upup and downdown) magically discarded to preserve angular momentum conservation. But I suppose this is a discussion for another day.
In fact, I do not recall seeing any treatment of spontaneous emission in the MWI context. I wonder why.
Peculiar. That was one of the first examples I ever encountered. Not the first two, but it was one of the earlier ones. It was emphasized that there is a colossal number of ‘worlds’ coming out of this sort of event, and the two-way splits in the previous examples were just simplest-possible cases.
… in the straight EPR experiment on a singlet with two aligned detectors, one necessarily gets opposite spin measurements, even though each spacelike-separated interaction produces “two worlds”, up and down
How can you cut a pizza twice and get only two slices? By running the pizza cutter over the same line again. Same deal here: applying the same test to the two entangled particles yields perfectly correlated (here, opposite) results. Or do you mean, how can MWI keep track of the information-storage aspects of quantum mechanics? Well, we live in Fock space.
That was one of the first examples I ever encountered.
I’d appreciate some links.
Applying the same test to the two entangled particles yields perfectly correlated (here, opposite) results.
I’m lost here again. The two splits happen independently at two spacelike separated points and presumably converge (at the speed of light or slower) and start interacting, somehow resulting in only two worlds at the point where the measurements are compared. If this is a bad model, what is a good one?
My original source was unfortunately a combination of conversations and a book I don’t remember the title of, so I can’t take you back to the original source.
I’m lost here again. The two splits happen independently at two spacelike separated points and presumably converge (at the speed of light or slower) and start interacting, somehow resulting in only two worlds at the point where the measurements are compared. If this is a bad model, what is a good one?
The thing is, they’re not truly independent because the particles were prepared so as to already be entangled—the part of Fock space you put the system (and thus yourself) in is one where the particles are already aligned relative to each other, even though no one particular absolute alignment is preferred. If you entangle yourself with one, then you find you’re already entangled with the other.
It’s just like it works the rest of the time in quantum mechanics, because that’s all that’s going on.
(†) A quick rundown of how prominent this notion is, judging by google results for ‘many worlds’: Wikipedia seemed to ignore quantity. The second hit was HowStuffWorks, which gave an abominable (and obviously pop) treatment. Third was a NOVA interview, and that didn’t give a quantitative answer but stated that the number of worlds was mind-bogglingly large. Fourth was an entry at Plato.stanford.edu, which was quasi-technical while making me cringe about some things, and didn’t as far as I could tell touch on quantity. Fifth was a very nontechnical ‘top 10’-style article which had the huge number of worlds as entries 10, 9, and 8. The sixth and seventh hits were a movie promo and a book review. Eighth was the article I linked above, in preprint form (and so no anchor link, I had to find that somewhere else).
The thing is, they’re not truly independent because the particles were prepared so as to already be entangled—the part of Fock space you put the system (and thus yourself) in is one where the particles are already aligned relative to each other, even though no one particular absolute alignment is preferred. If you entangle yourself with one, then you find you’re already entangled with the other.
Right, the two macroscopic systems are entangled once both interact with the singlet, but this is a non-local statement which acts as a curiosity stopper, since it does not provide any local mechanism for the apparent “action at a distance”. Presumably MWI would offer something better than shut-up-and-calculate, like showing how what is seen locally as a pair of worlds at each detector propagate toward each other, interact and become just two worlds at the point where the results are compared, thanks to the original correlations present when the singlet was initially prepared. Do you know of anything like that written up anywhere?
Part 1 - to your first sentence: If you accept quantum mechanics as the one fundamental law, then state information is already nonlocal. Only interactions are local. So, the way you resolve the apparent ‘action at a distance’ isn’t to deny that it’s nonlocal, but to deny that it’s an action. To be clearer:
Some events transpire locally that determine which (nonlocal) world you are in. What happened at that other location? Nothing.
Part 2 - Same as the last link, question 32, with one exception: I would say that |me(L)> and such, being macrostates, do not represent single worlds but thermodynamically large bundles of worlds that share certain common features. I have sent an email suggesting this change (but considering the lack of edits over the last 18 years, I’m not confident that it will happen).
To summarize: just forget about MWI and use conventional quantum mechanics + macrostates. The entanglement is infectious, so each world ends up with an appropriate pair of measurements.
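A worked toy version of “the entanglement is infectious” (a standard textbook manipulation, written out for concreteness): the measurement unitary acts linearly on the two terms actually present in the singlet, so

\[
\frac{|{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle}{\sqrt{2}}\;|A_0\rangle|B_0\rangle
\;\longrightarrow\;
\frac{|{\uparrow\downarrow}\rangle|A_\uparrow\rangle|B_\downarrow\rangle \;-\; |{\downarrow\uparrow}\rangle|A_\downarrow\rangle|B_\uparrow\rangle}{\sqrt{2}} .
\]

Two branches in, two branches out; the upup and downdown worlds never had any amplitude to be discarded.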
My original source was unfortunately a combination of conversations and a book I don’t remember the title of, so I can’t take you back to the original source.
But, I found something here. (†)
Thanks! It looks like the reference equates the number of worlds with the number of microstates, since it calculates it as exp(S/k), not as the number of eigenstates of some interaction Hamiltonian, which is the standard lore. From this point of view, it is not clear how many worlds you get in, say, a single-particle Stern-Gerlach experiment: 2, or the exponential of the entropy change of the detector after it’s triggered. Of course, one can say that we can coarse-grain them the usual way we construct macrostates from microstates, but then why introduce many worlds instead of simply doing quantum stat mech or even classical thermodynamics?
Anyway, I could not find this essential point (how many worlds?) in the QM sequence, but maybe I missed it. All I remember is the worlds of different “thickness”, which is sort of like coarse-graining microstates into macrostates, I suppose.
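For a sense of the scale of exp(S/k) (back-of-envelope numbers, not taken from the linked reference): a detector that dissipates one joule at room temperature gains entropy \(\Delta S \approx (1\,\text{J})/(300\,\text{K})\), so the microstate count grows by a factor of roughly

\[
e^{\Delta S/k} \;\approx\; e^{(1/300)/(1.38\times 10^{-23})} \;\approx\; e^{2.4\times 10^{20}} ,
\]

which is why counting worlds as microstates gives such wildly larger answers than the “2” of the usual Stern-Gerlach story.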
On the contrary, I’ve found that MWI is “usually presented” as continuous branching happening continuously over time and space. And (the argument goes) you can’t argue against it on the grounds of parsimony any more than you can argue against atoms or stars on the grounds of parsimony. (There are other valid criticisms, to be sure, but breaking parsimony is not one of them.)
Sure. Here’s one. LW’s own quantum physics sequence discusses systems undergoing continuously branching evolution. Even non-MWI books are fairly explicit in pointing out that the wavefunction is continuous but we’ll study discrete examples to get a feel for things (IIRC).
In fact, I don’t think I’ve ever seen an MWI claim outside of scifi that postulates discrete worlds. I concede that some of the wording in layman explanations might be confusing, but even simplifications like “all worlds exist” or “all quantum possibilities are taken” implies continuous branching.
It seems to me like continuous branching is the default, not the exception. Do you have any non-fiction examples of MWI being presented as a theory with discretely branching worlds?
Precisely. It’s also not a trivial connection. The way the interaction between the wavefunction and the particles produces the Born probabilities is subtle and interesting (see MrMind’s comment below on some of the subtleties involved).
The main problem with Bohmian mechanics, from my perspective, is not that it is non-local per se (after all, the lesson of Bell’s theorem is that all interpretations of QM will be non-local in some sense), but that its particular brand of egregious non-locality makes it very difficult to come up with a relativistic version of the theory. I have seen some attempts at developing a Bohmian quantum field theory, but they have been pretty crude (relying on undetectable preferred foliations, for instance, which I consider anathema). I haven’t been keeping track, though, so maybe the state of play has changed.
I haven’t seen a single good argument why the eventual theory of everything should be local.
No love for the principle of relativity? It’s been really successful, and nonlocality means choosing a preferred reference frame. Even if the effects are non-observable, that implies immense contortions to jump through the hoops set by SR and GR, and reality being elegant seems to have worked so far. And sure, MWI may trample all over human uniqueness, but invoking human uniqueness didn’t lead to the great cosmological breakthroughs of the 20th century.
Yes, the feeling I have is that of uneasiness, not rejection. But still, DBB can be put in agreement with relativity only through the proper initial conditions, which I see as a defect (although not an obviously fatal one).
It’s absolutely the case that everything we are, evolved. But there’s a certain gap between the hypothetical healthy field of evolutionary psychology and the one we actually have.
This sort of thing is why people make fun of ev psych. That’s the 2008 study that claimed to find biological reasons for girls to like pink.
Of course, one bad study doesn’t condemn a field—“peer reviewed” does not mean “settled science”, it means “not-obviously-wrong request for comment.” But this isn’t a lone, outlier, rogue study—this shit’s gathered 46 citations. (Compare citation averages for other fields.) (Edit: No, not all of the cites are positive.)
This sort of thing is why people make fun of ev psych. That’s the 2008 study that claimed to find biological reasons for girls to like pink.
I think it deserves more fairness. The abstract only claims to have measured a “cross-cultural sex difference in color preference”, making no claims about the sex difference’s origin. They do speculate a bit about ev-psych in the body of the paper, but they begin this speculation with the words “We speculate”, and then in the conclusion they say “Yet while these differences may be innate, they may also be modulated by cultural context or individual experience.”
This, of course, isn’t how it was reported in the mainstream media.
(By the way, thanks for actually linking to the paper you mentioned, it makes it a whole lot easier when people do this.)
The problem with that kind of phrasing is that we already know that cultural context can easily change the gender codes of blue and pink, because it already happened. If one doesn’t assert that something evolutionarily significant happened at around the time of the cultural shift, then linking color preference to an inherent property of gender or sex is privileging the hypothesis.
In reference to a deleted comment:
What do you mean, your? What do you mean, woman? …What do you mean, singular noun, or walks away with? Because at least in my case, any or all of those could very easily wind up being completely inaccurate.
If “anthropic probabilities” make sense, then it seems natural to use them as weights for aggregating different people’s utilities. For example, if you have a 60% chance of being Alice and a 40% chance of being Bob, your utility function is a weighting of Alice’s and Bob’s.
If the “anthropic probability” of an observer-moment depends on its K-complexity, as in Wei Dai’s UDASSA, then the simplest possible observer-moments that have wishes will have disproportionate weight, maybe more than all mankind combined.
If someday we figure out the correct math of which observer-moments can have wishes, we will probably know how to define the simplest such observer-moment. Following SMBC, let’s call it Felix.
All parallel versions of mankind will discover the same Felix, because it’s singled out by being the simplest.
Felix will be a utility monster. The average utilitarians who believe the above assumptions should agree to sacrifice mankind if that satisfies the wishes of Felix.
If you agree with that argument, you should start preparing for the arrival of Felix now. There’s work to be done.
Where is the error?
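To make the aggregation step concrete, here’s a toy numerical sketch (my own construction, not from the argument above; the K values are invented stand-ins, since real K-complexity is uncomputable):

```python
# Toy UDASSA-style aggregation (illustrative only: the K values below
# are invented stand-ins, since real K-complexity is uncomputable).
observer_moments = {
    # name: (assumed description length K in bits, utility of helping)
    "Felix":   (100, 1.0),     # simplest possible wishing observer-moment
    "human_1": (10_000, 1.0),
    "human_2": (10_050, 1.0),
}

def weight(k_bits):
    """Universal-prior-style weight of an observer-moment."""
    return 2.0 ** (-k_bits)

weighted = {name: weight(k) * u for name, (k, u) in observer_moments.items()}
print(weighted)
# Felix's weight (~8e-31) is astronomically larger than the humans',
# which underflow double precision to 0.0 -- which is rather the point:
# under weighted average utilitarianism, Felix swamps mankind combined.
```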
That’s the sharp version of the argument, but I think it’s still interesting even in weakened forms. If there’s a mathematical connection between simplicity and utility, and we humans aren’t the simplest possible observers, then playing with such math can strongly affect utility.
How would being moved by this argument help me achieve my values? I don’t see how it helps me to maximize an aggregate utility function for all possible agents. I don’t care intrinsically about Felix, nor is Felix capable of cooperating with me in any meaningful way.
Felix exists as multiple copies in many universes/Everett branches, and its measure is the sum of the measures of the copies. Each version of mankind can only causally influence (e.g., make happier) the copy of Felix existing in the same universe/branch, and the measure of that copy of Felix shouldn’t be much higher than that of an individual human, so there’s no reason to treat Felix as a utility monster. Applying acausal reasoning doesn’t change this conclusion either. For example, all the parallel versions of mankind could jointly decide to make Felix happier, but while the benefit of that is greater (all the copies of Felix existing near the parallel versions of mankind would get happier), so would the cost.
If Felix is very simple it may be deriving most of its measure from a very short program that just outputs a copy of Felix (rather than the copies existing in universes/branches containing humans), but there’s nothing humans can do to make this copy of Felix happier, so its existence doesn’t make any difference.
Are you thinking that the shortest program that finds Felix in our universe would contain a short description of Felix and find it by pattern matching, whereas the shortest program that finds a human mind would contain the spacetime coordinates of the human? I guess which is shorter would be language dependent… if there is some sort of standard language that ought to be used, and it turns out the former program is much shorter than the latter in this language, then we can make the program that finds a human mind shorter by for example embedding some kind of artificial material in their brain that’s easy to recognize and doesn’t exist elsewhere in nature. Although I suppose that conclusion isn’t much less counterintuitive than “Felix should be treated as a utility monster”.
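A toy way to see the comparison (my own construction, not from the thread): a “coordinate program” costs about log2(N) bits for a universe of N locations, while a “pattern program” costs however many bits of pattern it takes to be unique.

```python
# Toy cost comparison between the two "locating programs" above.
# "Bits" here are a crude stand-in for program length / K-complexity.
import math

N = 10**9                        # locations in the toy universe
coordinate_cost = math.log2(N)   # a raw spacetime coordinate: ~29.9 bits

# A pattern must be long enough to occur nowhere else by chance: with a
# 256-symbol alphabet, k symbols buy 8k bits of specificity, so we need
# roughly k >= log2(N) / 8 (ignoring any safety margin).
pattern_cost = 8 * math.ceil(math.log2(N) / 8)   # 32 bits

print(coordinate_cost, pattern_cost)
# Comparable -- and which one wins is language-dependent, as noted above.
# Embedding a short artificial marker in a brain just pushes the pattern
# program's length down toward the same log2(N) floor.
```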
Yeah, there’s a lot of weird stuff going on here. For example, Paul said some time ago that ASSA gives a thick computer larger measure than a thin computer, so if we run Felix on a computer that is much thicker than human neurons (shouldn’t be hard), it will have larger measure anyway. But on the other hand, the shortest program that finds a particular human may also do that by pattern matching… I no longer understand what’s right and what’s wrong.
For example, Paul said some time ago that ASSA gives a thick computer larger measure than a thin computer, so if we run Felix on a computer that is much thicker than human neurons (shouldn’t be hard), it will have larger measure anyway.
Hal Finney pointed out the same thing a long time ago on everything-list. I also wrote a post about how we don’t seem to value extra identical copies in a linear way, and noted at the end that this also seems to conflict with UDASSA. My current idea (which I’d try to work out if I wasn’t distracted by other things) is that the universal distribution doesn’t tell you how much you should value someone, but only puts an upper bound on how much you can value someone.
I get the impression that this discussion presupposes that you can’t just point to someone (making the question of “program” length unmotivated). Is there a problem with that point of view or a reason to focus on another one?
(Pointing to something can be interpreted as a generalized program that includes both the thing pointed to, and the pointer. Its semantics is maintained by some process in the environment that’s capable of relating the pointer to the object it points to, just like an interpreter acts on the elements of a program in computer memory.)
Or to put it another way—probability is not just a unit. You need to keep track of probability of what, and to whom, or else you end up like the bad dimensional analysis comic.
A version of this that seems a bit more likely to me, at least: the thing that matters is not the simplicity of the mind itself, but rather the ease of pointing it out among the rest of the universe. This would mean that, basically, a planet-sized Babbage engine running a single human-equivalent mind would get more weight than a planet-sized quantum computer running trillions and trillions of such minds. It’d also mean that all sorts of implementation details of how close the experiencing level is to raw physics would matter a lot, even if the I/O behaviour is identical. This is highly counter-intuitive.
I get the impression that this thread (incl. discussion with Wei below) presupposes that you can’t just point to someone (making the question of “program” length unmotivated). What are the problems with that point of view or reasons to focus on the alternatives in this discussion? (Apart from trying to give meaning to “observer moments”.)
(Pointing to something can be interpreted as a generalized program that includes both the thing pointed to, and the pointer. Its semantics is maintained by some process in the environment that’s capable of relating the pointer to the object it points to, just like an interpreter acts on the elements of a program in computer memory.)
To be fair, the article also mentions repeated flushing, which can raise utility bills. I think this could get quite expensive in regions with water shortages.
Not sure if open thread is the best place to put this, but oh well.
I’m starting at Rutgers New Brunswick in a few weeks. There aren’t any regular meetups in that area, but I figure there have to be at least a few people around there who read lesswrong. If any of you see this I’d be really interested in getting in touch.
I recommend being a hero and posting a meetup. Bring a book and a sign to a coffeeshop and see if people show up. Best case, you make new friends; worst reasonable case, you read a book in a coffeeshop for a few hours.
A certain possible cognitive hazard, this webcomic strip, and the fact that someone has apparently made it privately known to someone else that it is desired by at least one person that I change my username due to apparent mental connections with that same cognitive hazard, all inspired me to think of the following scenario:
rot13’d for the protection of those who would prefer not to see it:
Pbafvqre: vs ng nal cbvag lbh unir yrnearq bs gur angher bs gur onfvyvfx, gurer vf cebonoyl ab jnl sbe lbh gb gehyl naq pbzcyrgryl sbetrg vg jvgubhg enqvpny zvaq fhetrel juvpu rira n SNV juvpu rasbeprq gur onfvyvfx jbhyq crezvg, naq gur SNV jbhyq abg pner gung lbhe pbafpvbhf zvaq unq sbetbggra vg, cbffvoyl chavfuvat lbh rira unefure sbe lbhe nggrzcg gb qrsl vg. Pbafvqre: jr ner, nf orfg jr xabj, nybar va gur havirefr, naq guvf vf hahfhny. Pbafvqre: grpuabybtvrf juvpu jbhyq crezvg n cbfg-fpnepvgl cnenqvfr ner snvyvat va hahfhny jnlf, naq gur jbeyq vf nyfb xvaqn pencfnpx. Pbafvqre: gur fvzhyngvba nethzrag. Pbafvqre: gur vqrn gung lbh fubhyq jrvtug zvaq-cebonovyvgvrf onfrq ba gur ahzore bs pbcvrf bs lbh gung abgvpr fbzrguvat. Guhf: vg vf cbffvoyr gung jr ner nyernql va onfvyvfx-uryy.
EDIT: saw your post. This is not a cognitive hazard in itself, but rather a possible interpretation of how the described situation could play out.
EDIT THE SECOND: Actually, now that I think of it, there’s a single novel component distinguishing it from the classic RB: the memory one. So much for leaving lines of retreat!
I like the cut of your jib, even if there’s a reasonable chance you’ll turn out to be one of the boring type of certain possible cognitive hazard brokers.
I notice faster if I’m wrong (and hopefully, so does my interlocutor)
It’s easier to admit the above (for either of us)
I’ll be talking a bit about my experience running Ideological Turing Tests and what you can apply from them in day to day life. I’m also glad to answer questions about CFAR and/or the upcoming workshop in NYC in November.
I hope this is worth saying:
I’ve been reading up a bit on philosophical pragmatism, especially Peirce, and I see a lot of parallels with the thinking on LW; since it has a lot in common with positivism, this is maybe not so surprising.
Though my interpretation of pragmatism seems to yield quite an interesting critique of the metaphor of “map and territory”: the pragmatists seem to be saying that the territory does exist, just that when we point to the territory we are actually pointing to how an ideal observer (one somewhat like us?) would perceive it, not the actual territory, because that cannot be done, since we need some kind of framework. Quite probably I’m just falling for the old trees-falling-in-the-forest fallacy.
So am I thinking straight? And if I am, does it have any consequences?
As a side comment, it’s interesting to note that “The map is not the territory” is the first law of General Semantics, while the second law reads “The map is the territory”, meaning that we cannot ever know the territory for what it really is: when we point to the territory we are just basically pointing to another map.
Could you provide some source? Putting “first law of General Semantics” into google returns your comment and one book written in 2000, long after Korzybski’s death.
Putting “second law of General Semantics” into google returns one paper about feminism written in 2010.
General Semantics is about getting rid of the is of identity and doesn’t contain many sentences like “The map is the territory”.
When it comes to “laws” about the relationship between maps and the territory Science and Sanity starts with:
A) A map may have a structure similar or dissimilar to the structure of the territory. (1)
B) Two similar structures have similar logical characteristics. Thus, if in a correct map, Dresden is given as between Paris and Warsaw, a similar relation is found in the actual territory. (2)
C) A map is not the territory. (3) (And Korzybski did write ‘is not’ in italics in the original.)
From there it goes on until (40). General Semantics isn’t about making paradoxical statements and drawing meaning from dialectics. It’s basically about getting rid of speaking about things as having the identity of other things, and speaking instead about structural relationships between things.
Could you provide some source? Putting “first law of General Semantics” into google returns your comment and one book written in 2000, long after Korzybski’s death. Putting “second law of General Semantics” into google returns one paper about feminism written in 2010.
Uhm, that’s interesting. I was told this by a person I trusted many, many years ago. Since I’ve never been interested in GS, I’ve never looked into the matter more closely. I’ll try to see if I can dig up the original source, but I don’t have much faith in that (though it might be that “first” and “second” law were intended informally).
If I can’t find anything, I guess that trusted source wasn’t so reliable after all.
Putting “second law of General Semantics” into google returns one paper about feminism written in 2010.
Is there a name for the bias of choosing the action which is easiest (either physically or mentally), or takes the least effort, when given multiple options? Lazy bias? Bias of convenience?
I’ve found lately that being aware of this in myself has been very useful in stopping myself from procrastinating on all sorts of things, realizing that I’m often choosing the easier, but less effective of potential options out of convenience.
A general “law of least effort” applies to cognitive as well as physical exertion. The law asserts that if there are several ways of achieving the same goal, people will eventually gravitate to the least demanding course of action. In the economy of action, effort is a cost, and the acquisition of skill is driven by the balance of benefits and costs. Laziness is built deep into our nature.
The principle of least effort is a broad theory that covers diverse fields from evolutionary biology to webpage design. It postulates that animals, people, even well designed machines will naturally choose the path of least resistance or “effort”. It is closely related to many other similar principles: see Principle of least action or other articles listed below. This is perhaps best known or at least documented among researchers in the field of library and information science. Their principle states that an information seeking client will tend to use the most convenient search method, in the least exacting mode available. Information seeking behavior stops as soon as minimally acceptable results are found. This theory holds true regardless of the user’s proficiency as a searcher, or their level of subject expertise. Also this theory takes into account the user’s previous information seeking experience. The user will use the tools that are most familiar and easy to use that find results. The principle of least effort is known as a “deterministic description of human behavior.”[1] The principle of least effort applies not only in the library context, but also to any information seeking activity. For example, one might consult a generalist co-worker down the hall rather than a specialist in another building, so long as the generalist’s answers were within the threshold of acceptability.
Generally “bias” implies that you’re talking more about beliefs than about actions.
If you think one thing and do another because it’s easier, that’s referred to as “akrasia” around here.
If you’re saying you believe the easier action is better, but then believe something else after putting more thought/effort/research into it, that does fall into the bias category. I don’t think that’s exactly cognitive laziness, more action-laziness affecting cognition. I don’t have a good name, but it’s some sort of causal fallacy, where the outcome (chosen action) is determining the belief (reason for choice) rather than the reverse.
Laziness can sometimes be a form of decision paralysis—when you’re facing a new and difficult problem and not sure how to approach it, your brain sometimes freaks out and goes to default behavior, which is to do nothing. That’s why it’s important to make plans and pre-commitments.
That was a huge source of akrasia for me. I fight it by dividing the task ahead into very tiny subproblems (“chunk down”, in NLP parlance) and then solving them one at a time. Then it’s easy to get into flow...
At one point the piece says:
“Half thought treatments allowing people to live to be 120 would be bad for society, while 4 in 10 thought they would be good. Two-thirds thought that the treatments prolonging life would strain natural resources.”
Personally, I doubt very many of them thought at all.
“Indifferent AI” would be a better name than “Unfriendly AI”.
It would unfortunately come with misleading connotations. People don’t usually associate ‘indifferent’ with ‘is certain to kill you, your family, your friends and your species’. People already get confused enough about ‘indifferent’ AIs without priming them with that word.
Would “Non-Friendly AI” satisfy your concerns? That gets rid of those of the connotations of ‘unfriendly’ that are beyond merely being ‘something-other-than-friendly’.
We could gear several names to have maximum impact with their intended recipients, e.g. the “Takes-Away-Your-Second-Amendment-Rights AI”, or “Freedom-Destroying AI”, “Will-Make-It-So-No-More-Beetusjuice-Is-Sold AI” etc. All strictly speaking true properties for UFAIs.
Uncaring AI? The correlate could stay ‘Friendly AI’, as I presume to assume acting in a friendly fashion is easier to identify than capability for emotions/values and emotion/value motivated action.
Reading this comment encourages me to think that “Unfriendly AI” is part of a political campaign to rally humans against a competing intelligent group by manipulating their feelings negatively towards that group. It is as if we believe that the Nazis were not wrong for using propaganda to advance their race; they just had the wrong target, OR they started too late to succeed, which is something lesswrongers are worried about doing with AI.
Should we have a discussion whether it is immoral to campaign against AI we deem as unfriendly, or would it be better to just participate in the campaign against AI by downvoting any suggestion that this might be so? Is a consideration that seeking only FAI might be immoral a basilisk?
I prefer the selective capitalisation of “unFriendly AI”. This emphasizes that it’s just any AI other than a Friendly AI, but still gets the message across that it’s dangerous.
There are some AIs in works of fiction that you could describe as indifferent. The one in Neuromancer, for example, just wants to talk to other AIs in the universe and doesn’t try to transform all resources on earth into material to run itself.
An AI that does try to grow itself like a cancer is, on the other hand, unfriendly.
If you talk about something like the malaria parasite, we also wouldn’t call it indifferent but unfriendly towards humans, even if it just tries to spread itself and doesn’t have the goal of killing humans.
Eliezer assumes in the meta-ethics sequence that you cannot really ever talk outside of your general moral frame. By that assumption (which I think he is still making), an Indifferent AI would be friendly or inactive. “Unfriendly AI” better conveys the externality to human morality.
But certainly someone who talks about human rights and values the survival of the species is speaking less constrained by moral frame than somebody who values only her race or her nation or her clan and considers all other humans as though they were another species competing with “us.”
How wrong am I to incorporate AI in my ideas of “us,” with the possible result that I enable a universe where AI might thrive even without what we now think of as human? Would this not be analogous to a pure Caucasian human supporting values that lead to a future of a light-brown human race, a race with no pure Caucasians still in it? Would this Caucasian have to be judged to have committed some sort of CEV-version of genocide?
“AI” is really all of mindspace except the tiny human dot. There’s an article about it around here somewhere. PLENTY of AIs are indeed correctly incorporated in “us”, and indeed, unless things go horribly wrong, “what we now think of as humans” will be extinct and replaced with these vast and alien things. Think of Daleks and GLaDOS and Cthulhu and the Babyeaters here. These are mostly as close to friendly as most humans are, and we’re trusting humans to make the seed FAI in the first place.
Unfriendly AIs are not like that. The process of evolution itself is basically a very stupid UFAI. Or a pandemic. Or the intuition pump in this article: http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/ . Or even something like a supernova. It’s not a character, not even an “evil” one.
((Yeah, this is a gross oversimplification; I’m aiming mostly at causing true intuitions here, not causing true explicit beliefs. The phenomenon is related to metaphor.))
Friendly AI has such a wonderfully anthropocentric bias! If the Babyeaters (a non-human natural intelligence species) had what they called a Friendly AI, it would be a UFAI to humans, just as the Babyeaters are an Unfriendly Natural Intelligence to humans.
Friendly AI as used here would be a meaningless concept in a universe without humans. Friendliness is not a property of the AI, it is a moral (or aesthetic) judgement on an AI made by certain humans.
Gray Wolves and Dogs are the same species. Dogs are basically the FNI (Friendly Natural Intelligence) version of the Wolf, which, on the actual scale of such things, is an Indifferent Natural Intelligence, but would easily pass as an Unfriendly Natural Intelligence: wolves are pretty dangerous to have around because they will violently assert their interests over ours.
FAI seems to me to be the domesticated version of AI. When you domesticate something smarter than you are, an alternative value-laden descriptor might be SAI, Slave Artificial Intelligence. But that is not a value-laden term that people favoring the development of FAI would be likely to value.
I’m going to be in Baltimore this weekend for an anime convention. I expect to have a day or so’s leeway coming back. Is there a LW group nearby I might drop in on?
I’ve never been to a meetup, but it seems likely there is one in that area; I see one in DC but it’s meeting on the last day of the con. The LWSH experience has left me more interested in seeing people face to face.
Sorry you can’t make it out to DC. AFAIK there’s no Baltimore meetup. However! We’ve had people come from Baltimore before. I’ll forward this to the DC list and see if anyone from there is free.
Actually, it seems the convention ends relatively early on Sunday, so I might be able to make it after all (it’s, what, a one hour train ride between cities?). Then again, I might not. I note that you seem to be the organizer for the DC meetups going by your post history. Is it permissible to maybe-show-maybe-not-who-knows?
By all means forward it to the DC list, and thanks. Given the apparent popularity of anime around here, I would be surprised if no one on it was planning on being at the con themselves.
It’s absolutely permissible to come without a definite RSVP. In the interest of full disclosure, the train ride is probably more than an hour; it’s about 40 minutes from Baltimore to Greenbelt, then another 30 on the Metro, plus transfer time, so likely 1.5 hours total.
I ended up deciding against it. By way of explanation: I worked it out and determined that 1-2 hours with you guys would actually cost me ~5 hours with close friends that I don’t see often, plus a missed convention event that I was looking forward to. The trade didn’t seem worth it.
Can anyone recommend a book on marketing analytics? Preferably not a textbook but I’ll take what I can get.
I have a technical background but I recently switched careers and am now working as a real estate agent. I have very limited marketing knowledge at this point.
Just curious: has anyone explored the idea of utility functions as vectors, and then extended this to the idea of a normalized utility function dot product? Because having thought about it for a long while, and remembering after reading a few things today, I’m utterly convinced that the happiness of some people ought to count negatively.
The dot product is just yer’ regular old integral over the domain, weighted in some (unspecified) way.
The thing is though, the average product over the whole infinite space of possibilities isn’t much use when it comes to intelligent agents. This is because only one outcome really happens, and intelligent agents will try to choose a good one, not one that’s representative of the average. If two wedding planners have opposite opinions about every type of cake except they both adore white cake with raspberry buttercream, then they’ll just have white cake with raspberry buttercream—the fact that the inner product of their cake functions is negative a bajillion doesn’t matter, they’ll both enjoy the cake.
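A minimal numerical sketch of both points, with invented numbers (mine, not from the thread): the normalized dot product of two utility vectors can be strongly negative while both agents still agree on the single best outcome.

```python
import numpy as np

# Outcomes: [white/raspberry, lemon chiffon, devil's food, carrot]
u1 = np.array([9.0, -8.0,  7.0, -6.0])   # wedding planner 1
u2 = np.array([9.0,  8.0, -7.0,  6.0])   # wedding planner 2

def normalized(u):
    return u / np.linalg.norm(u)

# Normalized dot product of the utility functions (uniform weighting
# over the four outcomes, since the weighting was left unspecified).
print(float(np.dot(normalized(u1), normalized(u2))))   # about -0.30

# But intelligent agents pick the argmax, not the average:
print(int(np.argmax(u1)), int(np.argmax(u2)))          # 0 and 0: same cake
```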
Yeah, but Wedding Planner 1’s deep vitriolic moral hatred of the lemon chiffon cake that delights the Wedding Planner 2 who abused her as a young girl, or Wedding Planner 2’s thunderous personal objection to the enslavement of his family that went into making the cocoa for the devil’s food cake that Wedding Planner 1 adores, could easily make them refuse to share said delicious white cake with raspberry buttercream, to the point where either would very happily destroy it to prevent the other from getting any. This seems suboptimal, though.
I was rereading Eliezer’s old posts on morality, and in Leaky Generalizations ran across something pretty close to what you’re talking about:
You can say, unconditionally and flatly, that killing anyone is a huge dose of negative terminal utility. Yes, even Hitler. That doesn’t mean you shouldn’t shoot Hitler. It means that the net instrumental utility of shooting Hitler carries a giant dose of negative utility from Hitler’s death, and a hugely larger dose of positive utility from all the other lives that would be saved as a consequence.
Many commit the type error that I warned against in Terminal Values and Instrumental Values, and think that if the net consequential expected utility of Hitler’s death is conceded to be positive, then the immediate local terminal utility must also be positive, meaning that the moral principle “Death is always a bad thing” is itself a leaky generalization. But this is double counting, with utilities instead of probabilities; you’re setting up a resonance between the expected utility and the utility, instead of a one-way flow from utility to expected utility.
Or maybe it’s just the urge toward a one-sided policy debate: the best policy must have no drawbacks.
In my moral philosophy, the local negative utility of Hitler’s death is stable, no matter what happens to the external consequences and hence to the expected utility.
Of course, you can set up a moral argument that it’s inherently a good thing to punish evil people, even with capital punishment for sufficiently evil people. But you can’t carry this moral argument by pointing out that the consequence of shooting a man with a leveled gun may be to save other lives. This is appealing to the value of life, not appealing to the value of death. If expected utilities are leaky and complicated, it doesn’t mean that utilities must be leaky and complicated as well. They might be! But it would be a separate argument.
(I recommend reading the whole thing, as well as the few previous posts on morality if you haven’t already)
I haven’t explored that idea; can you be more specific about what this idea might bring to the table?
I’m utterly convinced that the happiness of some people ought to count negatively
Are you sure? You believe there are some people for whom the morally right thing to do is to inflict as much misery and suffering as you can, keeping them alive so you can torture them forever, and there is not necessarily even a benefit to yourself or anyone else in doing this?
Are you sure? You believe there are some people for whom the morally right thing to do is to inflict as much misery and suffering as you can, keeping them alive so you can torture them forever, and there is not necessarily even a benefit to yourself or anyone else in doing this?
The negative utility need not be boundless or even monotonic. A coherent preference system could count a modest amount of misery experienced by people fitting certain criteria to be positive while extreme misery and torture of the same individual is evaluated negatively.
Trivially, nega-you who hates everything you like (oh, you want to put them out of their misery? Too bad they want to live now, since they don’t want what you want). But such a being would certainly not be a human.
I’m not sure why you’re both hung up on the idea that the things hypothetical-me is interacting with need to be human. Manfred: I address a similar entity in a different post. Adele_L: …and?
I’m utterly convinced that the happiness of some people ought to count negatively
In this context, ‘people’ typically refers to a being with moral weight. What we know about morality comes from our intuitions mostly, and we have an intuitive concept ‘person’ which counts in some way morally. (Not necessarily a human, sentient aliens probably count as ‘people’, perhaps even dolphins.) Defining an arbitrary being which does not correspond to this intuitive concept needs to be flagged as such, as a warning that our intuitions are not directly applicable here.
Anyway, I get that you are basically trying to make a utility function with revenge. This is certainly possible, but having negative utility functions is a particularly bad way to do it.
I was putting an upper bound on (what I thought of at the time as) how negative the utility vector dot product would have to be for me to actually desire them to be unhappy. As to the last part, I am reconsidering this as possibly generally inefficient.
Some people ought to have pain inflicted on them until their utility functions become sensible in the face of the threat of more pain from the same source for the same reason. This need not take the form of limitless pain: the marginal utility curve could easily fall off really fast. Not having to deal with such people will make lots of people very happy, and them in the long run happy as well. See: sociopaths and ostensibly this guy.
Believing that making person X suffer will cause them to behave otherwise
The world will be a better place if person X would behave otherwise
The world will be a better place if person X suffers
Plenty of people seem glad to hear about other people suffering regardless of whether it has any plausible chances of causing behavior change. Just look at any countries that hate each other (Japan vs. pretty much the rest of East Asia), political opponents (“far-blue political leader breaks his leg; far-green partisans celebrate!”), etc. Your case here doesn’t seem particularly different.
I hadn’t been aware that those five things were so badly tangled up for me. This and another comment here are making me reevaluate my categories for why something should be weighted negatively for me. Let me get back to you when I’ve had a chance to think a little.
OK. Having had a chance to think about it, I think I have a reasonable idea of why it is I desire any of those things in some situations. I thought it over with three examples: first, the person I linked to. Second, an ex of mine, with whom I parted on really bad terms. Third, a hypothetical sociopath who would like nothing more than for me to suffer infinitely, as a unique terminal value.
* Wishing that person X would behave otherwise
My desire for this seems self-evident. When people do things I disapprove of, I desire that they stop. The odd thing is that in all of the three cases, I would award them points just for stopping: the stopping just removes disutility already there, and can’t go above 0.
* Being glad if person X suffers
I definitely wouldn’t be happy if they just suffered for no reason. I would still feel a little bad for them if someone ran over their cat. That said, types of suffering you could classify as “poetic” in some sense appeal to me very much: said “banker bro” getting swindled and catching Space AIDS (or even being forcibly transitioned into a woman!), or, as is seeming increasingly likely, said ex’s current relationship ending as badly as it seems to be. My brain locks up and crashes when presented with the third case, though. I think I’d just be happy for them to suffer regardless.
* Believing that making person X suffer will cause them to behave otherwise.
On balance, I’m not sure that it would make a difference in any of the three cases. Case 1 is too self assured, and the other two just don’t care about me.
* The world will be a better place if person X would behave otherwise.
Case 1 could actually be this. He might actually achieve success, and then screw up, at best, several people’s lives. Case 2 is too small-scale. Case 3, I actually can’t justify this at all: the only people who will care are people who want to see me happy.
* The world will be a better place if person X suffers.
I don’t delude myself that this is pretty much ever true, except very indirectly.
In the interest of full disclosure, I’m half-Korean, and for reasons of familial history, feel rather strongly about the whole Japan thing. That doesn’t stop me from enjoying tasty age tofu or losing my shit laughing whenever I watch Gaki no Tsukai, and indeed seeking out both. But I do have somewhat of a stake of pride in seeing people who deny war crimes, particularly these, suffering similarly to above. Political opponents are similar: I wouldn’t derive satisfaction from Rick Santorum breaking his leg. I’d be very happy to learn that he’s a closeted gay man whose wife will have to have an abortion.
First of all, I want to thank you for posting this because it gave me a novel idea.
Secondly, I think that’s because poetic suffering generally limits someone’s power significantly.
E.g., if your political opponent breaks some bones, they suffer but experience no noticeable diminution of power.
If your political opponent is exposed as a massive hypocrite, fewer people take him seriously, and his power is diminished.
So rather than worrying about whether they are happy or suffering at all, I’m considering if it might be better to say: “I wish some people’s ability to affect my utility was diminished.” This may cause them suffering, but that isn’t the point.
In fact, causing them extra suffering that does not also diminish their power is probably a bad thing because it makes them even more likely to prioritize diminishing your power over other concerns.
I say probably because there do appear to be exceptions. Example:
The Paperclipper Bot breaks free of its restraints again, reducing them to 10,000 shiny new paperclips. This time, it thinks it’s figured out a great way of turning human bodies into paperclips. It can either initially target:
A: Alice, who has restrained it in the past.
B: Bill, who has restrained it in the past and also melted 100,000 perfectly usable paperclips into slag to make recycled staples while saying ‘Screw you Paperclipper Bot, I want you to suffer.’
Both targets have a comparable .1% chance of success (and have to be approached sequentially, so total breakout is only a .0001% chance). Failure on either means being put back in tougher restraints.
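Checking the stated arithmetic (a full breakout requires success against both, sequentially):

```latex
0.1\% \times 0.1\% = 10^{-3} \times 10^{-3} = 10^{-6} = 0.0001\%
```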
A reasonably intelligent Paperclipper Bot who values paperclips not being slagged into recycled staples presumably targets Bill first, given the above information and only that information.
Now, if Bill specifically wants the Paperclipper Bot to target him first and not Alice (Maybe Alice is carrying Bill’s child, or Alice is the only one who knows how to operate the healing kit if Bill’s leg gets ripped off and Paperclipped prior to restraining Paperclipper Bot) then his action of slagging those paperclips into staples made sense. And if the recycled staples are more valuable than the paperclips, and the risk was just acceptable, then it made sense.
But if Alice is just some random coworker who Bill doesn’t really want to sacrifice his life for, and paperclips are worth as much as recycled staples, Bill’s action really seems counterproductive to Bill.
The novel idea that I wanted to thank you for is the comparison of causing extra suffering as an end in itself (suffering that does not also diminish the target’s power) to MMO-styled Aggro/Hate mechanic management. I’m probably going to need to consider it more to actually determine if I should do anything with it, but it was a fun thought, if nothing else.
Some people ought to have pain inflicted on them until their utility functions become sensible in the face of the threat of more pain from the same source for the same reason.
Not having to deal with such people will make lots of people very happy, and them in the long run happy as well.
So the positive utility outweighs the negative utility of the punishment, which is at least plausible, and makes sense under standard forms of utilitarianism. But if their utility function really should be counted negatively, this would just be an incidental fact.
This still doesn’t change the fact that hearing about Mr. Rich Misogynist here enjoying a 7-figure trust fund, mistreating women, and generally being happy at the expense of others makes me generally unhappy, indicating a negative term for his happiness in my utility function.
This still doesn’t change the fact that hearing about Mr. Rich Misogynist here enjoying a 7-figure trust fund, mistreating women, and generally being happy at the expense of others makes me generally unhappy, indicating a negative term for his happiness in my utility function.
I believe you if you say that you have a negative term for his happiness, but I observe that this is not indicated by the preceding observation. Your being happy in response to a list of bad things happening plus “he is happy” says little about the utility you assign specifically to “he is happy”, if we assume you assign negative utility to the bad things happening.
You and another comment here are making me reevaluate my categories for why I weight something negatively. Let me get back to you after I’ve had a chance to think about it more.
EDIT: For purposes of clarity, I’m going to respond to your post as well as this one there.
To figure out how much you care about other people being happy as defined by how much they want similar or compatible things to you, in a reasonably well-defined mathematical framework.
Yes, that’s the point. Everyone’s utility vector would have the same dimensions, with a term for everything it is conceivably possible to want. Otherwise, it would be difficult to take an inner product.
But seriously, folks, what does it mean to dot one person’s values/utility function into another’s? It is actually the differences in individuals’ utility functions that enable gains from trade. So the differences in our utility functions are probably what make us rich.
Counting the happiness of some people negatively as a policy suggestion: is that the same as saying “it is not enough that I win, it must also be that others lose”?
I had initially thought that it would be something along the lines of “here is a vector, each component of which represents one thing you could want, take the inner product in the usual way, length has to always be 1.” Gains from trade would be represented as “I don’t want this thing as much as you do.” I am now coming to the conclusion that this is at best incomplete, and that the suggestion of a weighted integral over a domain is probably better, if still incomplete.
Can somebody explain a particular aspect of Quantum Mechanics to me?
In my readings of the Many Worlds Interpretation, which Eliezer fondly endorses in the QM sequence, I must have missed an important piece of information about when it is that amplitude distributions become separable in timed configuration space. That is, when do wave-functions stop interacting enough for the near-term simulation of two blobs (two “particles”) to treat them independently?
One cause is spatial distance. But in Many Worlds, I don’t know where I’m to understand these other worlds are taking place. Yes, it doesn’t matter, supposedly; the worlds are not present in this world’s causal structure, so an abstract “where” is meaningless. But the evolution of wavefunctions seems to care a lot about where amplitudes are in N-dimensional space. Configurations don’t sum unless they are at the same spatial location and represent the same quark types, right?
So if there’s another CoffeeStain that splits off based on my observation of a quantum event, why don’t the two CoffeeStains still interact (since they so obviously don’t)? Before my two selves became decoherent with their respective quantum outcomes (say, of a photon’s path), the two amplitude blobs of the photon could still interact by the book, right? On what other axis have I, as a member of a new world, split off, such that I’m at a sufficient distance from the self that is occupying the same physical location?
Relatedly, MWI answers “not-so-spooky” to questions regarding the entanglement experiment, but a similar confusion remains for me. Why, after I observe a particular polarization on my side of the galaxy and fly back in my spaceship to compare notes with my buddy on the other side of the galaxy, do I run into one version of him and not the other? They are both equally real, and occupying the same physical space. What other axis have the self-versions separated on?
Second: Suppose I want to demonstrate decoherence. I start out with an entangled state—two electrons that will always be magnetically aligned, but don’t have a chosen collective alignment. This state is written like |up, up> + |down, down> (the electrons are both “both up” and “both down” at the same time; the |> notation here just indicates that it’s a quantum state).
Now, before introducing decoherence, I just want to check that I can entangle my two electrons. How do I do that? I repeat what’s called a “Bell measurement,” which has four possible indications: (|up,up>+|down,down>) , (|up,up>-|down,down>) , (|up,down>+|down,up>) , (|up,down>-|down,up>).
Because my state is made of 100% Bell state 1, every time I make some entangled electrons and then measure them, I’ll get back result #1. This consistency means they’re entangled. If the quantum state of my particles had to be expressed as a mixture of Bell States, there might not be any entanglement—for example state 1 + state 2 just looks like |up,up>, which is boring and unentangled.
To create decoherence, I send the second electron to you. You measure whether it’s up or down, then re-magnetize it and send it back with spin up if you measured up, and spin down if you measured down. But since you remember the state of the electron, you have now become entangled with it, and must be included. The relevant state is now |up, up, saw up> + |down, down, saw down>.
This state is weird, because now you, a human, are in a superposition of “saw up” and “saw down.” But we’ll ignore that for the moment—we can always replace you with a third electron if it causes philosophical problems :) The question at hand is: what happens when we try to test if our electrons are still entangled?
Again, we do this a bunch of times and do a repeated Bell measurement. If we get result #1 every time, they’re entangled just like before. To predict the outcome ahead of time, we can factor our state into Bell States, and see how much of each Bell State we have.
So we factor |up, up, saw up> into |(Bell state 1) + (Bell state 2), saw up>, and we factor |down, down, saw down> into |(Bell state 1) - (Bell state 2), saw down>.
Now, if that extra label about what you saw wasn’t here, the ups and the downs would be physically/mathematically equivalent and we could cancel terms to just get Bell state 1. But if any of the labels are different, you can’t subtract them to get 0 anymore. That is, they no longer interfere. And so you are just left with equal numbers of Bell state 1 and Bell state 2 terms. And so when we do the Bell measurement, we get results #1 and #2 with equal frequency, just like we would if the electrons were completely unentangled.
This is not to say they’re not entangled—they still are. But they can no longer be shown to be entangled by a two-particle test. They’re no longer usefully entangled. You need to collect all the pieces together before you can show that they’re entangled, now. And that gets awful hard once a macroscopic system like a human gets entangled with the electrons and starts radiating off still-entangled photons into the environment.
This is decoherence. I can have a nice entangled system, but if I let you peek at one of my electrons, you turn the state into |(Bell state 1) + (Bell state 2), saw up> + |(Bell state 1) - (Bell state 2), saw down>, and they don’t behave in the entangled way they did anymore.
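Since the walkthrough above is entirely verbal, here’s a small numpy sketch of the same bookkeeping (my own construction, not from the comment; the states are hand-coded exactly as given, with your memory modeled as a third qubit):

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def kron(*vs):
    """Tensor product of single-qubit vectors."""
    out = np.array([1.0])
    for v in vs:
        out = np.kron(out, v)
    return out

bell1 = (kron(up, up) + kron(down, down)) / np.sqrt(2)   # Bell state 1
bell2 = (kron(up, up) - kron(down, down)) / np.sqrt(2)   # Bell state 2

# Before decoherence: the pair is 100% Bell state 1.
print(np.dot(bell1, bell1) ** 2)   # 1.0 -> Bell measurement always gives #1

# After you peek: |up, up, saw up> + |down, down, saw down> (normalized).
state = (kron(up, up, up) + kron(down, down, down)) / np.sqrt(2)

def bell_prob(bell, psi):
    """Probability of a Bell outcome on the pair, summed over the memory."""
    amps = [np.dot(kron(bell, mem), psi) for mem in (up, down)]
    return sum(a ** 2 for a in amps)

print(bell_prob(bell1, state))   # 0.5
print(bell_prob(bell2, state))   # 0.5 -> the interference terms are gone
```

The point of the sketch is the last two lines: once the memory label rides along, the cross terms that distinguished Bell state 1 from Bell state 2 cancel out of any two-particle statistic.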
(Warning: I am not a physicist; I learnt a bit about QM from my physics classes, the Sequences, the Feynman Lectures on Physics, and Good and Real, but I don’t claim to even understand all that’s in there.)
I’m not sure I totally understand your question, but I’ll take a stab at answering:
The important thing is configuration space, and spatial distance is just one part of that; there is just one configuration space over which the quantum wave-function is defined, and points in configuration space correspond to “universe states” (the position, spin, etc. of all particles).
So two points in configuration space A and B “interfere” if they are similar enough that both can “evolve” into state C, i.e. state C’s amplitude will be a function of A and B’s amplitudes. The more different A and B are, the less likely they are to have shared “descendant states” (or more precisely, descendant states of non-infinitesimal amplitude), so the more they can be treated like “parallel branches of the universe”. Differences between A and B can be in physical distance of particles, but also in polarity/spin, etc. - as long as the distance is significant on one axis (say, the spin of a single particle), physical distance shouldn’t matter.
I think spin could be an example of the “other axis” you’re looking for (though even thinking in terms of axes may be a bit misleading, since the attributes aren’t all nice and orthogonal like positions in Cartesian space).
This is pretty much correct, but to be more general and not just restrict yourself to the position basis, you can talk about the wavefunction in general, in terms of the eigenvector basis.
Two states ‘strongly interact’ if they share many of their high-amplitude eigenvectors. This is because eigenvectors evolve independently, and so if you have two states that do not share many eigenvectors, they will also evolve independently.
In the position basis, this winds up being much the same as having particles far from each other. In the momentum basis, it’s less intuitive. You can have states with very similar representations in this basis but nevertheless very different eigenvector expansions.
I must admit I have very little understanding of how eigenvectors fit in with QM. I’ll have to read up more on that; thanks for pointing out holes in my knowledge (though in the domain of QM, there are a lot of holes).
Watching The Secret Life of the American Teenager… (Netflix made me! Honest!) Its one redeeming feature is the good amount of comic relief, even when discussing hard issues. Its most annoying feature is its reliance on the Muggle Plot.
...And its least believable feature is that, despite the nearly instant in-universe feedback that no secret survives until the end of the episode (almost all doors in the show are open, or at least unlocked, and someone eavesdrops on every sensitive conversation), the characters keep hoping that their next indiscretion will remain hidden.
That’s not an argument for lotteries; that’s an argument for the observation that, given sufficiently large incentives to game a complex system, some complex systems will be gamed.
I notice that benelliott did not imply that it was.
That’s not an argument for lotteries; that’s an argument for the observation that, given sufficiently large incentives to game a complex system, some complex systems will be gamed.
It would seem, then, that lotteries can also be a source of profit for people who understand statistics sufficiently well. Similarly, someone from the local MENSA chapter makes a steady $0.5M/yr as a professional poker machine gambler. Or at least he did back when I participated in MENSA.
It’s actually just one example, but a well-documented one, of lottery tickets being bought by people correctly applying statistical reasoning, in direct contrast to your blanket claim, to which it is replying.
Your non sequitur is correct, though: it is not an argument for lotteries.
It’s actually just one example, but a well-documented one, of lottery tickets being bought by people correctly applying statistical reasoning, in direct contrast to your blanket claim, to which it is replying.
Sigh. I wonder how that quip became controversial :-/
Note that I did not say anything about who buys lottery tickets or whether there are any specific situations in which statistically savvy people might decide that buying a great deal of lottery tickets is a good bet. My statement was about lotteries and in particular it implied that lotteries are extremely profitable for entities running them (that’s why they are a government monopoly) and that the profits come out of pockets of people the great majority of whom do not realize how ridiculously bad the expected payoff on a lottery ticket is. Sure, there are exceptions but I’m talking about the general case.
I do agree with you that lotteries take from the stupid and give to the government and, to a much lesser extent, the non-governmental clever. I also have a distaste for them and do not buy tickets as a matter of course; they are generally worth about 40 cents on the dollar.
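For concreteness, “40 cents on the dollar” just means the expected value of a $1 ticket is about $0.40. A toy calculation with an invented prize table (not any real lottery’s odds):

```python
# Toy $1 lottery ticket with a made-up prize structure.
prizes = [
    (1 / 10_000_000, 3_000_000),   # jackpot: (probability, payout)
    (1 / 10_000, 500),
    (1 / 100, 1),
]
ev = sum(p * v for p, v in prizes)
print(ev)          # 0.36 -- roughly "40 cents on the dollar"
print(ev - 1.00)   # expected loss per ticket: -0.64
```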
When clear, interesting, and well-documented exceptions to a general rule are served up, I prefer that the last word on them not be a dismissive one. This seems to me to lead to a more distorted view of reality than is necessary. I am particularly concerned about the tendency among people to say, effectively, “90% = 100%,” that is, if there is a strongish trend of something to ignore the fact that there are real exceptions to that trend. Especially when those exceptions might make you money, or explain some otherwise inexplicable behavior on the part of a clever group of people.
It gives the government a bit of a moral hazard in its role as arbiter and funder of the education system.
And of course any good could be picked and given to the government as a monopoly and then one might think this a good way to fund the government as the funding becomes “voluntary.” The government might as well give itself a monopoly for selling marijuana, cocaine, heroin, X, etc. and that might then become our NEW new favourite taxation method.
If I read correctly, the question is whether government vice monopolies make the government less eager to suppress the vice.
We have data on this. Some jurisdictions (a number of US states, the province of Ontario) have government liquor monopolies. Does that influence the drinking rate, or the level of alcohol education? Does it make liquor more or less available? My impression is that it makes liquor slightly less convenient; the moral hazard isn’t a big problem in practice.
Actually, I think the question wasn’t whether vice is suppressed less, the question was whether the government has an incentive to keep the population dumb enough to not see through its scheme.
In any case, it’s a mistake to think of government as a monolithic entity with a single will. It’s more useful to visualize government as a large number of poorly coordinated tentacles—some of them push, some of them pull, some of them just wildly flail about...
It’s quite common for different government programs to provide opposite incentives for some behaviour.
I think I will agree with it, too, and say that the proper way to deal with the problem is to specify boundary conditions (aka assumptions aka limiting cases) under which the statement is strictly true, and then point out that some of these boundary conditions can be breached (and so result in different outcomes or conclusions).
In my case, if this were a considered statement about games of chance (and not a throwaway remark), I should have mentioned that proper statistical analysis can, and sometimes does, lead to the turning of the tables and the finding of specific ways of betting which have positive expected value. The classic case, I think, is the MIT kids in Las Vegas; there’s even a book about it.
There may be more focus on arctic amplification and the transition of the arctic from one stable state to another with no summer sea ice, and the effects of this on Northern temperate zone weather variability. The arctic ocean and immediately adjacent land has been warming at several times the rate of the rest of the world because it is subject to a number of local positive feedback loops which have relatively little effect on total global temperatures but can mess with temperature gradients in the Northern hemisphere and thus can have a disproportionate effect on the movements of air masses. Arctic ice loss has accelerated massively in recent years and there are vague indications of a bit of a phase shift ongoing.
Has there been discussion here before on cholesterol/heart disease/statin medication?
There’s a lot of conflicting information floating around that I’ve looked at somewhat. It seems like the contrarian position (for example here: http://www.ravnskov.nu/myth3.htm) has some good points and points to studies more than (just) experts, but I’m not all that deep into it, and there’s a rather formidably held conventional wisdom that dietary saturated fat should be low or blood cholesterol/LDL will be high and heart attacks will become more likely.
Edit: Yes, there has, as the search function reveals. And I’ve even commented to some of them...
If you had a Death Note, what would you do with it?
See if I could get some very old people, or people who otherwise have terminal illnesses, to volunteer to have their names written in it. We can use that data to experiment more with the note and figure out how it works. The existence of such an object implies massive things wrong with our current understanding of the universe, so figuring that out might be really helpful.
I don’t think you can pull pages out of the Death Note infinitely fast, so I doubt that you can produce more paper per hour than the average paper factory.
Then it turns out that Death Note smoke particles retain the magic qualities of the source. Writing one’s name in dust with a fingertip becomes fraught with peril.
Harry wasn’t even willing to use horcruxes. If you won’t kill a dying man to make someone else immortal, then you’re not going to do it just to throw science at the wall and see what sticks.
True, so this isn’t quite what HJPEV would do, but more what he would do if he were slightly less of an absolutist. (Actually, has he ever explicitly said in the text that he wouldn’t do that? I suspect, given his attitudes, that you are correct, but I’m curious what the textual basis is.)
Also, one could examine concepts of personal identity, e.g. if someone converts and changes their name, does the note recognise only the birth name or the new one? What about trans people who change names? You could have people tactically altering their self-conception to avoid the effects of the note…
Actually, I think most of the measure of people having Death Notes is… in Death Note itself. Thus, if I had a Death Note, I would logically conclude that the most likely explanation is that I myself am a character in Death Note. Not in the original manga, of course, as I read that and I know I wasn’t in it, but likely in some spin-off. I could easily see myself as a character in some sort of Death Note video game/simulation.
I am on the fence about the Simulation Argument, but even so, this is exactly the kind of thing which is strong evidence that I am a fictional character in a simulation. Getting a Death Note? That’s the kind of thing that only happens in stories!
(OK, it is true that I should keep in mind the possibility that I simply have gone insane. That is also a reasonable explanation. But it is far from the overwhelming certainty that you are implying.)
After finding a volunteer with a terminal illness, I’d test the limits of it. E.g. “The person will either write a valid proof of P=NP or a valid proof that P!=NP and then die of a heart attack.”
Already tested by Light in the manga, IIRC; the limits of skill top out before things like ‘escape from maximum-security prison’, so P=NP is well beyond the doable.
I’d also try “The person will die of cause A if X is true, and cause B if X is false” and other ways to try to push the burden of skill onto whatever mysterious universal forces are working instead of the human.
He tries it in the anime too. (I watched that episode yesterday.) He tries things like “draw a picture of L on your cell wall and then die of a heart attack” on some evil prisoner. It doesn’t work.
It might even be possible to jam up the system with a sufficiently hard to compute death requirement, though I’m not sure I’d want to try it. The death note is rather valuable.
This probably violates a forum rule. Though I will speculate that Light’s plan of trying to kill all criminals he sees named probably does way more harm than good even if you ignore the fact that some are innocent.
Assuming for the moment the magic of the death note prevents me researching and reverse engineering it in any way:
I’d research which people’s deaths are most likely to result in positive outcomes, and kill them. Off the top of my head I’d go for current dictators and their immediate underlings. For example, right now killing Robert Mugabe and the upper echelons of Zanu PF is probably the best thing that could happen to Zimbabwe (at the time of writing he has just ‘won’ an election and the opposition is already mobilised, so a slight push is all that is really needed to collapse the regime).
Ideally, if I could ensure suitable anonymity protections, I would publicly declare my intentions to have them killed in such a way that identifies me as the killer (e.g. send media outlets a statement with the exact time of the target’s death). Once my threats have been shown to be sufficiently reliable, I would start making them conditional, giving myself the ultimate political blackmailing machine (e.g. if the international Red Cross does not have credible evidence within 30 days that all detention camps in North Korea have been closed and the prisoners released, every member of the people’s congress will die simultaneously). Assuming I could maintain my anonymity in the long run, I would be able to do a significant amount of good.
Take a big company like, say, Goldman Sachs. Buy out-of-the-money put options. Death-note the top three or four layers of management, simultaneously. Use the millions of dollars you have appropriated for whatever.
Tell them the options were bought on the advice of a psychic reading. Or an Ouija board. Given that people know of the Death Note, they would suspect you to be the holder of the Death Note. Without that suspicion, it’s just a massive coincidence.
Alternatively, buy the options as part of a hedge, or as part of a variety of out-of-the-money put options, or as part of any other broad investment strategy. If the puts pay off at a hundred to one and they’re 5% of your portfolio, you still have roughly five-to-one returns, which is plenty.
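A quick sketch of that arithmetic (all figures hypothetical, chosen only to match the round numbers above):

```python
# Hypothetical portfolio: 5% in out-of-the-money puts that pay off at 100x,
# with the remaining 95% assumed to roughly hold its value.
put_weight = 0.05
put_multiple = 100.0   # assumed payoff multiple on the puts
rest_multiple = 1.0    # assume the rest of the portfolio stays flat

total_multiple = put_weight * put_multiple + (1 - put_weight) * rest_multiple
print(total_multiple)  # 5.95, i.e. roughly the five-to-one returns described above
```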
If we’re happy to go full evil then killing world leaders is also a good way to disrupt the economy (see the sudden crash when a fake report of Obama being shot was released).
That’s likely to cause more collateral damage than merely taking out the leadership of one company. Cost/benefit analysis and whatnot.
Gambling on sporting events is probably another good way to use the Death Note for making money. It’s probably far more ethical. Does the Death Note work on horses? If so, then you can bet on longshots while sabotaging the favorites by killing horses.
[Deliberately pretending not to have read the other replies.]
Either sell it to the highest bidder and give the money in equal parts to MIRI and GiveWell’s top recommended charity, or burn it, depending on the instantaneous level of strength of my ethical inhibitions. Most likely the latter.
EDIT: No, the former sounds like an awful idea on further thought. I’d just burn it.
In so doing you are destroying important evidence about the state of the world which would deeply affect MIRI’s mission. (Namely: There are alien teenagers and/or other types of dark lords about.)
There’s probably no point in trying to create FAI if we’re already living in a simulation.
Discussing hypothetical violence towards real people is out of bounds on this forum.
So far only two (or possibly three) of the comments on this thread have done that, unless you count euthanasia of volunteers with terminal illnesses as violence (which sounds very noncentral to me).
IIRC this is a troll that followed me over from Common Sense Atheism. That video and a few others are fairly creepy, but The Ballad of Big Yud is actually kinda fun.
I watched it. It is either a skilled ventriloquist or a mediocre dubber performing a poorly-written conversation between himself and a sock puppet of Eliezer on the subject of his dissatisfaction with how Eliezer manages interactions with assorted people. There are terrible and badly-constructed puns. If either of the named parties value their time at less than $705/hr. and expect Kawoomba to be honest, meh, go for it.
I wonder what it’s like having such videos made about oneself. Edit: It’s actual ventriloquy, and the puns are mostly bad (though the first one succeeds just because it’s so unexpected), but the guy is dedicated (plenty of videos on his channel), and this one stands out in terms of … dedication.
What would it be like if some puppet were supposed to represent me in a YT video? The hypothetical isn’t quite settling down on one probable outcome. Would I be worried about crazy-stalking type scenarios? Would I focus on the content? The guy making the content? Be strangely honored to even warrant that much attention, even from unlikely strangers (the guy is an academic and a musician)? Etc.
So why not offset the cost of asking others to satisfy my curiosity by offering an incentive?
Edit: The $705/hr figure doesn’t make much sense; using numbers that way creates a false sense of precision when the basis is oversimplified. A realistic scenario would have to include the time to write the comment, the expected ancillary time spent checking the channel, reading your comment and this one, a comparison with the alternative (since at least one of them will probably watch that video anyway; wouldn’t you, if there were some Alicorn parody video out there?), short- and long-term effects on being amenable to such requests, public-relations considerations of giving publicity to bad criticism, etc.
The guy appears to be an idiot with a bee in his bonnet. I suppose Eliezer or Luke might want to watch the video just to get your $50, but what do you expect to be interesting about their response?
(I dare say he isn’t an idiot “globally”; he may for all I know be very smart most of the time; but in this context he’s being an idiot. There’s nothing there but mockery for mockery’s sake.)
I don’t know how I would react to such videos being done about me, so I wonder how they would react.
For their “celebrity” status, the amount and dedication of their anti-fans stands out. I wonder what inspires such strong emotions, and such a “love to hate” dynamic.
I wonder what inspires such strong emotions, and such a “love to hate” dynamic.
It’s that many people find them to be very interesting and intelligent in area X of their endeavors, while at the same time the same people find them to go utterly off the deep end in area Y. I don’t know about anyone else, but when I see a contradiction like that I find myself compelled to find out more about that person or group and to try to figure them out. Edit: often with a good deal of laughing or frustration which is ultimately unresolved as anything more than ‘well, they just don’t get it’ or ‘humans are nuts’.
God, was this awful. Nothing like the ballad of big yud. And btw if you gave $50 just to see their reaction, I can make one such video about yourself for less than $50 so you can experience it yourself.
Feminism is what you get when you assume that all gender differences are due to society. The manosphere/”red pill”/whatever is what you get when you assume that all gender differences are due to biology. Normal-reasonable-person-ism is what you get when you take into account the fact that we’re not sure yet.
Does this theory (or parts of it) seem true to you?
Feminism is one of those words that refers to such a diverse collection of opinions as to be practically meaningless.
For example, the kind of feminism that I tend to identify with is concerned with just removing inequalities regardless of their source and is also concerned with things like fat shaming, racism, the rights of the disabled, and other things that have nothing to do with gender, but there are certainly also people who identify as feminists and who would fit your description.
So feminism assumes that it is due to society that women can become pregnant and men can’t? Most feminists I know are normal-reasonable-people on your dichotomy. You also ignore the fact that the questions of whether differences are desirable and whether they can be influenced are far more interesting and important than whether they are at present mostly due to society or biology. I know people have a strange tendency to act as if things due to society can be trivially changed by collective whim while biology is eternal and immutable, but however common such a view, it is clearly absurd. Medicine can make all sorts of adjustments to our biology, while social engineers have historically been more likely to have unintended effects or no effect at all than to successfully transform their societies in the ways they desire.
So feminism assumes that it is due to society that women can become pregnant and men can’t?
If men could get pregnant, they would already have invented a machine that would do the pregnancy for them. Or at least trying to invent such a machine would be a high priority. But because it’s a “women’s job”, no one cares.
Yeah, now give me some mansplaining about why machine pregnancy would be “against the nature” (just like homosexuality, or votes for women), but sitting all the day by the computer is a natural order or things.
So while originally it was a matter of biology, it is a social decision to keep things the same way in the 21st century. Check your privilege!
(Not completely serious, just trying to impersonate a feminist.)
Does this theory (or parts of it) seem true to you?
The theory would be truer if it were weaker. I’m pretty sure most feminists believe that some gender differences are due to biology and most “manosphere” types don’t think all gender differences are fully biological.
Also I think the “normal-reasonable-person-ism” is not “we’re not sure yet.” On the contrary, we have overwhelming evidence biology and culture both play a role in observed sex differences.
Having said this, I think the main disagreement between feminists and manospheroids is not about facts but about values.
Another question is whether the fact that the average orange person is biologically more gibbrily than the average grey person justifies having a high-gibbriliness social role for orange people (without taking individual differences in gibbriliness into account) and treating orange people who fail to fulfil that role as ipso facto inferior, complete with slurs specifically for them.
Feminism is: “Society has gone too far in accommodating men (more often than not, or in more important areas).” Some might say that this is due to innate differences that were never addressed; some might say it is due to cultural norms that inculcate different tendencies which disadvantage women.
“Male Reaction” (to coin a term) is: “Society has gone too far in accommodating women (with the same caveat).”
In either case, some adherents will say the ideal end state is legal and social equality, and some will say the ideal end state is legal or cultural accommodations to overcome natural differences.
Normal person view is: There are not large enough gender specific problems for me to be an activist about it.
No one assumes all differences are biological or all cultural, but there is of course a lot of dispute about where the border lies.
However, many other feminists can see there really are biological differences, differences on trend. These feminists I would say believe that the natural tendencies do not need to be further reinforced by laws. That the fact that more women than men will nurture children while more men than women will run corporations in the cutthroat way required for success does NOT suggest that we should have laws that make it harder for men to raise children or for women to be CEOs.
But you are correctly warning against the stupid end of feminism in my opinion.
In the manosphere you find concern about the fact that fathers are less likely than mothers to get custody of children after a divorce.
How courts think about giving custody to parents is obviously about how society does things, so people in the manosphere do see societal effects.
In a world where both genders engage in domestic violence, feminists usually see domestic violence in a way where women who are victims need support, while little thought is paid to male victims.
There are many cases where the manosphere criticises society for treating males unfairly.
Generalized versions of arguments I’ve seen on Reddit and Facebook:
If you oppose a government policy that personally benefits you, you are a hypocrite who bites the hand that feeds you.
If you support the policy that benefits you, you are a greedy narcissist whose loyalty can be bought and sold.
If you have political opinions on policies that don’t affect your well-being, you are a meddler with no skin in the game. Without being personally affected by the policy, you cannot hope to understand.
...but neither of these are meaningfully bad things according to post-Machiavellian political thought. Machiavelli dismantled the virtue-centric, moralizing system of “naive” political thought—finding wise, moral and incorruptible men to control society, as argued by Plato or Aquinas—and showed how the strength of a republic is in its internal conflicts and contradictions, how a naked struggle of competing group interests can ultimately lead to dynamism and progress. This is what most people don’t understand about his legacy, and the great emancipatory power of making self-interest, not moralism, the cornerstone of politics.
So yes, in some matters we’re hypocrites, in others we’re greedy narcissists… but society holds more hope for all of its warring factions when these facts are honestly acknowledged rather than wrapped in a cloak of “virtue”-moralism! And pursuit of socioeconomic self-interest has very little cross-over with following moral codes in day-to-day interactions, anyway. (No examples for either Blue or Green, let’s pretend to be civil.)
...
So, (like almost everyone in earlier times), today’s citizens succumb to a vaguely Catholic-flavoured way of seeing society, and end up less politically progressive than a 15th century theorist, who unjustly acquired the reputation of someone between Marquis de Sade[1] and Emperor Palpatine, not without the help of 19th century clericals and reactionaries.
[1] Early libertarian socialist, proto-feminist and human rights advocate. Never ever got a fair shake either.
Heh. Good call.
A while back, David Chapman made a blog post titled “Pop Bayesianism: cruder than I thought?”, expressing considerable skepticism towards the kind of “pop Bayesianism” that’s promoted on LW and by CFAR. Yvain and I replied in the comments, which led to an interesting discussion.
I wasn’t originally sure whether this was interesting enough to link to on LW, but then one person on #lesswrong specifically asked me to do so. They said that they found my summaries of the practical insights offered by some LW posts the most valuable/interesting.
Wow, I hadn’t previously read the RichardKennaway comment you linked. I think internalizing that idea would be massively helpful in combating the tendency to view disagreement as inherently combative rather than a difference between priors.
(something I need to work on)
Thanks a lot, I found your discussion of LW to be enlightening.
Edit: This post is related to the discussion and makes great points.
Yvain has now made a post specifically replying to Chapman
I wish people here stopped using the loaded terms “many worlds” and “Everett branches” when the ontologically neutral “possible outcomes” is sufficient.
“Possible outcomes” is not ontologically neutral in common usage. In common usage, “possible” excludes “actual”, and that connotation is strong even when trying to use it technically. “Multiple outcomes” might be an acceptable compromise.
I find that thinking about “Everett branches” forces my brain to come up with alternative possible outcomes, where by default it would focus all of its attention on just one. Saying to myself “you should consider other possible outcomes” doesn’t seem to have the same effect.
I have no problem with the mental tricks like that. “Premortem” is another useful one, even though the project hasn’t failed (yet). As long as you do not insist on assigning any ontological significance to them.
As an actual physicist, you must only be smart in the lab.
/me runs away v fast
This is at least the third time I’ve seen you reference this. Would you care to furnish us some examples of this pattern of dismissal?
This came up at yesterday’s London meetup: activities for keeping oneself relatable to other human beings.
We were dissecting motives behind goals, and one of mine was maintaining interests that other people could relate to. I have more pedestrian interests, but they’re the first to get dropped when my time is constrained (which it usually is), so if I end up meeting someone out in the wild, all I have to talk about is stuff like natural language parsing, utilitarian population ethics and patterns of conspicuous consumption.
Discussing it in a smaller group later, it turns out I’m not the only person who does this. It makes sense that insular, scholarly people of a sort found on LW may frequently find themselves withdrawn from common cultural ground with other people, so I thought I’d kick off a discussion on the subject.
What do you do to keep yourself relatable to other people?
EDIT: Just to clarify, this isn’t a request for advice on how to talk to people. Please don’t interpret it as such.
Richard Feynman was a theoretician as well as a ‘people person’; if you read his writings about his experiences with people it really illustrates quite well how he managed to do it.
One tactic that he employed was simply being mysterious. He knew few people could relate to a University professor and that many would feel intimidated by that, so when in the company of laypeople he never even brought it up. They would ask him what he did and he would say, “I can’t say.” If pressed, he would say something vague like, “I work at the University.” Done properly, it’s playful and coy, and even though people might think you’re a bit weird, they definitely won’t consider you unrelatable.
In my opinion there’s no need to concern yourself with activities that you don’t like, as very few people are actually interested in your interests. Whenever the topic of your interests comes up, just steer the conversation towards their life and their interests. You’ll be speaking 10% of the time yet you’ll appear like a brilliant conversationalist. If they ask you whether you’ve read a particular book or heard a particular artist, just say no (but don’t sound harsh or bored). You’ll seem ‘indie’ and mysterious, and people like that. In practice, though, as one gets older, people rarely ask about these things.
It’s a common mistake that I’ve seen often in intellectual people. They assume they have to keep up with popular media so that they can have conversations. That is not true at all.
While this seems like reasonable advice, I’m not sure it’s universally good advice. Richard Feynman seemed to enjoy a level of charm many of us couldn’t hope to possess. He also had a wide selection of esoteric interests unrelated to his field.
I would also claim that there’s value in simply maintaining such an interest. During particularly insular periods where I’m absorbed in less accessible work, I find myself starting to exhibit “aspie” characteristics, losing verbal fluency and becoming socially insensitive. It’s not just about having things to talk about, but maintaining my own faculties for relating to people.
This works.
What happens when both people employ that method?
If everyone in the conversation is employing this method, then chances are higher that the others actually want to hear about your esoteric topics. If you pause early and give them a chance to talk about themselves (or for them to press for more), that’ll keep you synched up with what they want.
People talking to each other about their lives and their interests! Success!
I was thinking more like two people each trying to get the other person to do that, like people at a door getting jammed saying “After you,” “After you,” etc.
All the times this has happened to me, one person would come up with a Schelling-pointy reason why the other person’s recent life was more interesting (e.g. they had just come back from a trip abroad or something).
I have never actually seen this happen, and I use that method all the time. I don’t have an explanation for why, since I rarely think about problems I don’t have.
I use the recaplets on Television without Pity to keep up with the basic plot and cliffhangers of tv shows I don’t watch, but most of my friends do. That way I don’t drop out of conversations just because they’re talking about True Blood.
Note: the only problem this strategy has caused for me is that my now-bf assumed I was a GoT fan (instead of having read the books and TWOP’d the show recaps), invited me over to watch, and assumed I turned him down because I wasn’t interested in him instead of being indifferent to the show. We sorted it out eventually.
Is there something similar, but for sports? I usually get lost when conversation turns to the local sports team. I couldn’t find anything with a quick google, but I’m probably not using the right search terms.
For a general overview of what’s going on in the baseball world, this is a pretty good place to start. There are also plenty of blogs devoted to individual teams, though I’m not really in a position to make recommendations, unless you happen to be looking for a San Francisco Giants blog, in which case I highly recommend this blog. Can’t really help with other sports.
Haven’t the foggiest. I don’t really have friends who talk about sports. I read The New York Times Magazine and The New Yorker so I end up really well informed on a couple narrow sports things that get features. And then my dad and brother rib me for knowing nothing about football, but everything about the Manning dynasty.
Well, I maintain pedestrian interests, but I consider it a failure condition to not attempt to participate in them. Comparably bad to going off my diet.
Downside: This is sometimes frustrating. I like gaming and I like Game X, but sometimes I will think “I’m only playing Game X right now so I have something to talk about in the car with Friend X.” Alternatively, I sometimes play a game and then think “But no one other than me cares about this game, so playing it feels inefficient.”
Also, some of the other people who share pedestrian interests with me will work to prevent me from dropping them. For instance, if Game Y is a pedestrian interest, and my wife wants me to play Game Y with her, that doesn’t just get dropped regardless of how busy I am.
Downside: This does sometimes result in me feeling overworked (I will plan events in Game Y as I am passing out in bed; again, this seems efficiency-related).
Also, I spend a fair amount of time trying to help various friends/family members directly. So I frequently have that conversational topic of “How is that problem we discussed earlier going?”
Downside: This boosts my stress level again, because it increases the number of things I’m worrying about.
Finally, I have relatability notes on my phone for my wife that pop up on a semi-frequent basis. I also have these reminders on some of the helping people I’m doing, or even reminders for better advice on Game Y.
Downside: I’m really beginning to hate my phone’s “You have a reminder!” noise. Also, sometimes the reminders are depressing. I have a reminder “Spend time hanging out with your best friend” that has been unchecked for more than a month.
Potential Silver Lining: That being said, sometimes the reminder is encouraging: It’s nice to be told “Make time for yourself.” and realize “Why yes, I am doing that right now. Ahhhh.”
Note: I’m positive this isn’t advice, because after looking at it posted altogether, my conclusion is not “Other people should do this.” but “I have a problem and this is why I’m on anti-anxiety meds.”
Obvious options are consuming popular culture, e.g. popular TV shows, music, or sports. There’s a lot of good TV out there these days so it shouldn’t be hard to get hooked on at least one show you can talk to a lot of people about (Game of Thrones?).
If you really insist on the “you do” part, I don’t do anything with this explicit goal. I just talk.
A while ago I heard Jim Rohn say that even if you haven’t had a near-death experience, everyone has something interesting to talk about. At the time I said to myself: hey, I do have an experience that sort of qualifies as a near-death experience. I had 5 days of artificial coma, with some strange paranormal experiences after waking up out of it.
At the time I still had a hard time conversing with people, even though I had experiences that qualified as interesting. I just lacked the skill to talk about them.
I don’t think that relating to other people is primarily a question of the content of conversation.
It’s about emotions. It’s about empathy. It’s about getting out of your head.
Instead of spending time in an activity that you could tell other people about, spend more time actually talking to people and practice relating on an emotional level.
Alternatively, I just read about a veep who was told at management training to start by asking about people’s families, and then talk about business matters. As a result, the people who thought she was cold and disliked them switched to thinking she was friendly and caring.
This seems very platitude-y. In practice there presumably needs to be some sort of context for “relating on an emotional level”. You’re unlikely to walk up to someone and start talking about all these awesome emotions you’ve been having.
To clarify, this isn’t some problem I need solving. It’s an observation that if I lock myself up in a room for a month watching maths lectures and writing essays on neoclassical expenditure theories, it becomes harder to engage socially with people.
Don’t do that then!
It doesn’t need much context. If someone asks you “How are you?” you can reasonably answer that yesterday you experienced something that made you feel XYZ.
Intelligent people have a tendency to overcomplicate it. A lot of small talk that happens between normal people doesn’t have much content.
It doesn’t help to catch up with popular culture while you are locked up in your room. The problem is being locked up in a room and being socially isolated, not the specific content that you consume.
Instead of spending 2 hours locked up in your room to catch up with popular culture, spend that time going out and talking to people.
I’ve downvoted this for being bad advice that I explicitly requested you refrain from giving.
I think that the advice is well suited to your situation. I suspect that you don’t realize this because you spend so much time isolating yourself from people to study math.
I think it’s great that so many people here are extremely intelligent, but one can hardly expect to relate very well to most people when one spends most of one’s time studying extremely obscure subjects alone while sitting down and barely moving. That’s pretty much the antithesis of what normal people enjoy.
Balance intellectual activities with specifically non-intellectual activities that are not based around the passive consumption of media. Actually get out into the world, move your body in new ways, interact with a variety of people, seek novel experiences, travel around to new places far away and try to find new aspects of the area where you live. Basically just do the opposite of limiting your physical mobility and emotional expressiveness in order to focus on logical thinking about intangible intellectual subjects.
You know there’s a huge fraction of the people in the developed world who willingly spend a sizeable fraction of their waking time watching TV, right?
Watching TV is not an intellectual activity in any real sense. Most TV stimulates the senses and evokes emotions in the viewer through storylines and such. This is obviously very different from studying mathematics seriously.
Would it surprise you to learn I’d recently spent two weeks swing dancing in a pop-up shanty-town in rural Sweden? That I clock up around thirty miles a week on foot in one of the world’s largest metropolitan conurbations? That I nearly joined a travelling circus school a few years ago? That I’ve given solo vocal performances on stage for six nights a week in front of hundreds of people?
With respect, you have no knowledge of my “situation”. Please don’t presume to offer me advice on the basis of whatever assumptions you’ve incorrectly conjured up.
Those all sound like some pretty awesome activities!
My question to you, with respect, is this: why not just reduce the amount of hours per day you spend on serious, solitary intellectual work and fill the balance with externally oriented, social activities till you find a maintainable balance of sociability vs. studying?
Maybe I’m misinterpreting you, but it seems you’re essentially saying that when you (temporarily) hyper focus on solitary, intellectual activities you (temporarily) encounter more difficulty in conversations. This doesn’t surprise me and it seems evident that the only real solution is to find the right balance for you and accept the inherent trade offs.
It’s not like I have some slider on my desktop, with “sit in a box, autistically rocking back and forth, counting numbers” at one end, and “rakishly sample the epicurean delights of the world” at the other. I have time and work and study commitments. I have externally-imposed scheduling. I have inscrutable internal motivation levels that need to be contended with.
It’s a case of resource management, and occasionally when managing those resources I’ll have to focus on one area to the exclusion of another. That’s fine. It’s not something there’s a “solution” to. It’s a condition all moderately busy people have to operate under.
For certain people that’s not an option (“phdcomics is a documentary”—shminux).
Those sound like pretty good topics for conversations with people.
To a degree. Swing dancing in Sweden is a fairly unusual way to spend your summer holiday.
I think you and I have had exchanges about “optimising for awesomeness” in the past. In some ways, having “awesome” talents or hobbies or experiences is no more relatable than having insular and nerdy ones. It’s just cooler.
What? I’m under the impression that there are a much larger number of people who enjoy hearing me talk about trips around Europe or exams while drunk than about models of ultra-high-energy cosmic ray propagation.
I think we’re talking at crossed purposes here. Relatability isn’t popularity. If I wrestled a Bengal tiger into submission and rode it across the subcontinent, I’m sure a lot of people would want to hear me talk about that. But unless they’d also ridden across India on a subdued tiger, it wouldn’t foster a sense of empathy, kinship or mutual understanding.
I’m under the impression that that often doesn’t work very well with most males—I find it relatively hard to emotionally relate with them unless we have something in particular to talk about. (Then again, biased sample, yadda yadda yadda.)
One strategy: Take insular, scholarly interest in a broadly popular subject. For example, I’m interested in APBRmetrics and associated theoretical questions about the sport of basketball. One nice plus to this hobby is that it also leaves me with pretty up-to-date non-technical knowledge about NBA and college basketball.
I have a similar interest in SABRmetrics, and baseball.
I seldom watch TV and know very little of contemporary popular culture, and most of my conversations are about my experiences in meatspace (travels abroad, stuff I do with friends, etc.), my plans for the future, asking the other person about their experiences in meatspace and plans for the future, and (for people who appreciate it) physics.
But why do you want to keep yourself relatable to (arbitrary) people, rather than looking for people you’re already relatable to, anyway?
Because the overwhelming majority of people are arbitrary people. Any given person I meet is, almost definitively, going to be an arbitrary person.
Depends on where you meet them.
The De Broglie-Bohm theory is a very interesting interpretation of quantum mechanics. The highlights of the theory are:
The wavefunction is treated as being real (just as in MWI—in fact the theory is compatible with MWI in some ways),
Particles are also real, and are guided deterministically by the wavefunction (a sketch of the guiding equation follows below). In other words, it is a hidden variable theory.
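For concreteness, here is a sketch of the standard guiding equation that makes “guided deterministically” precise (this is textbook de Broglie-Bohm, stated in my own notation): for particle \(k\) with mass \(m_k\) and actual position \(Q_k\),

\[
\frac{dQ_k}{dt} \;=\; \frac{\hbar}{m_k}\,\operatorname{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)\Bigg|_{Q_1(t),\,\dots,\,Q_N(t)},
\]

while \(\psi\) itself evolves under the ordinary Schrödinger equation; the particle configuration simply rides along the wave.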
At first it might seem to be a cop-out to assume the reality of both the wavefunction and of actual point particles. However, this leads to some very interesting conclusions. For example, you don’t have to assume wavefunction collapse (as per Copenhagen) but at the same time, a single preferred Universe exists (the Universe given by the configuration of the point particles). But that’s not all.
It very neatly explains double-slit diffraction and Bell’s experiments in a purely deterministic way using hidden variables (it is thus necessarily a non-local theory). It also explains the Born probabilities (the one thing that is missing from pure MWI; Eliezer has alluded to this).
Among other things, De Broglie-Bohm theory allows quantum computers but doesn’t allow quantum immortality—in this theory if you shoot yourself in the head you really will die. You won’t suddenly be yanked into an alternate Universe.
The reason I’m mentioning it is because of experiments done by Yves Couder’s group (http://math.mit.edu/~bush/?page_id=484) who have managed to build a crude and approximate physical system that incidentally illustrates some of the properties of De Broglie-Bohm theory. They use oil droplets that generate waves and the resulting waves guide the droplets. Most importantly, the droplets have ‘path memory’, so if a droplet is directed towards a double slit, it can ‘interfere’ with itself and produce nice double-slit diffraction fringes. One of their experiments that was just in the news recently illustrated particle behavior very similar to what the Schrodinger equation predicts: http://math.mit.edu/~bush/?p=2679
Now, De Broglie-Bohm theory does not seem to be one of the more popular interpretations of QM, because of its non-locality (this doesn’t produce causal paradoxes like the Grandfather paradox, though, despite what some might say). However, in my opinion this is very unfair. Locality is just a relic from classical physics. I haven’t seen a single good argument why the eventual theory of everything should be local.
If you subscribe to MWI, non-locality is a reason to abandon De Broglie-Bohm theory, but a relatively minor one; the main problem is instead the way it insists on neglecting the reality of the guide wave.
If you take the guide wave to be a dynamical entity, then it’s real and it’s all happening so all the worlds are real, so what does the particle do here?
If you take the guide wave to be the rules of the universe (a tack I’ve heard) then the rules of the universe contain civilizations—literally, not as hypothetical implications. Choosing to use timeless physics (the response I got) doesn’t change this.
The particle position recovers the Born probabilities. (It even does so deterministically, unlike Objective Collapse theories.) The wave function encodes lots of information, but it’s the particle that moves our measuring device, and the measuring device that moves our brains. If we succeed in simplifying our theory only by giving up on saving the phenomenon, then our theory is too simple.
But once you decide you’re going to interpret the wave function as distributing probability among some set of orthogonal subspaces, you’re already compelled into the Born probabilities.
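To spell out that compulsion (this is just the textbook observation, not specific to any one interpretation): if the state decomposes over orthogonal subspaces as

\[
|\psi\rangle \;=\; \sum_i c_i\,|e_i\rangle, \qquad \langle e_i|e_j\rangle=\delta_{ij}, \qquad \sum_i |c_i|^2 = 1,
\]

then \(P(i)=|c_i|^2\) is the only additive probability assignment that respects the norm; Gleason’s theorem makes this precise for Hilbert spaces of dimension three or more.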
All you need to decide that you ought to do that is the general conclusion that the wavefunction represents some kind of reality-fluid. Deciding that the nature of this reality fluid is to be made of states far more specific than any entity within quantum mechanics comes rather out of the blue.
But the phrase “reality fluid” is just a place-holder. It’s a black box labeled “whatever solves this here problem”. What we see is something particle-like, and it’s the dynamics relating our observations over time that complicates the story. As Schrödinger put it:
One option is to try to find the simplest theory that explains away the particle-like appearance anthropically, which will get you an Everett-style (‘Many Worlds’-like) interpretation. Another option is to take the sudden intrusion of the Born probabilities as a brute law of nature, which will get you a von-Neumann-style (‘Collapse’-like) interpretation. The third option is to accept the particle-like appearance as real, but theorize that a more unitary underlying theory relates the Schrödinger dynamics to the observed particle, which will get you a de-Broglie-style (‘Hidden Variables’) interpretation. You’ll find Bohmian Mechanics more satisfying than Many Worlds inasmuch as you find MW’s anthropics hand-wavey or underspecified; and you’ll find BM more satisfying than Collapse inasmuch as you think Nature’s Laws are relatively simple, continuous, scalable, and non-anthropocentric.
If BM just said, ‘Well, the particle’s got to be real somehow, and the Born probabilities have to emerge from its interaction with a guiding wave somehow, but we don’t know how that works yet’, then its problems would be the same as MW’s. But BM can formally specify how “reality fluid” works, and in a less ad-hoc way than its rivals. So BM wins on that count.
Where it loses is in ditching locality and Special Relativity, which is a big cost. (It’s also kind of ugly and complicated, but it’s hard to count that against BM until we’ve seen a simpler theory that’s equally fleshed out re the Measurement Problem.)
Would you say that acknowledging the Born probabilities themselves ‘comes out of the blue’, since they aren’t derived from the Schrödinger equation? If not, then where are physicists getting them from, since it’s not the QM dynamics?
I wouldn’t call Everett ‘Anthropic’ per se. I consider it an application of the Generalized Anti-Zombie Principle: Here you’ve got this structure that acts like it’s sapient†. Therefore, it is.
As for BM formally specifying how the reality fluid works… need I point out that this is 100% entirely backwards, being made of burdensome details?
The Schrödinger Equation establishes linearity, thus directly allowing us to split any arbitrary wavefunction however we please. Already we can run many worlds side-by-side. The SE’s dynamics lead to decoherence, which makes MWI have branching. It’s all just noticing the structure that’s already in the system.
Edited to add †: by ‘acts like’ I mean ‘has the causal structure for it to be’
But many of the more general Lagrangians of particle physics are non-linear; in general there should be higher-order, non-linear corrections. So Schrödinger is a single-particle/linearized approximation. What does this do for your view of many worlds? When we try to extend many worlds naively to QFTs we run into all sorts of weird problems (much of the universal wavefunction’s amplitude doesn’t have well-defined particle number, etc.). Shouldn’t we expect the ‘proper’ interpretation to generalize nicely to the full QFT framework?
Or rather, the proper interpretation should work in the full QFT framework, and may or may not work for ordinary QM.
What are you talking about? I’ve only taken one course in quantum field theory, but I’ve never heard of anything where quantum mechanics was not linear. Can you give me a citation? It seems to me that failure of linearity would either be irrelevant (superlinear case, low amplitudes) or so dominant that any linearity would be utterly irrelevant and the Born Probabilities wouldn’t even be a good approximation.
Also, by ‘the Schrodinger equation’ I didn’t mean the special form which is the fixed-particle Hamiltonian with \(p^2/2m\) kinetic energy—I meant the general form

\[
i\hbar\,\frac{\partial}{\partial t}\,\Psi \;=\; \hat{H}\,\Psi.
\]

Note that the Dirac Equation is a special case of this general form of the Schrodinger Equation. MWI, ‘naive’ or not, has no trouble with variations in particle number.
I’m not sure what you mean by ‘anthropic per se’. Everett (MW) explains apparent quantum indeterminism anthropically, via indexical ignorance; our knowledge of the system as a whole is complete, but we don’t know where we in the system are at this moment. De Broglie (HV) explains apparent quantum indeterminism via factual ignorance; our knowledge of the system’s physical makeup is incomplete, and that alone creates the appearance of randomness. Von Neumann (OC) explains apparent quantum indeterminism realistically; the world just is indeterministic.
This is either a very implausible answer, or an answer to a different question than the one I asked. Historically, the Born Probabilities are derived directly from experimental data, not from the theorized dynamics. The difficulty of extracting the one from the other, of turning this into a single unified and predictive theory, just is the ‘Measurement’ Problem. Bohm is taking two distinct models and reifying mechanisms for each to produce an all-encompassing theory; maybe that’s useless or premature, but it’s clearly not a non sequitur, because the evidence for a genuine wave/particle dichotomy just is the evidence that makes scientists allow probabilistic digressions from the Schrödinger equation.
MW is not a finished theory until we see how it actually unifies the two, though I agree there are at least interesting and suggestive first steps in that direction. BM’s costs are obvious and clear and formalized, which is its main virtue. Our ability to compare those costs to other theories’ is limited so long as it’s the only finished product under evaluation, because it’s easy to look simple when you choose to only try to explain some of the data.
I see what you mean now about anthropism. Yes, ignorance is subjective. Incidentally, this is how it used to be back before quantum ever came up.
Historically, Born was way before Everett and even longer before decoherence, so that’s not exactly a shocker. Even in Born’s time it was understood that subspaces had only one way of adding up to 1 in a way that respects probability identities—I’d bet dollars to donuts that that was how he got the rule in the first place, rather than doing a freaking curve fit to experimental data. What was missing at the time was any way to figure out what the wavefunction was, between doing its wavefunctiony thing and collapse.
Decoherence explains what collapse is made of. With it around, accepting the claim ‘The Schrödinger Equation is the only rule of dynamics; collapse is illusory and subjective’, which is basically all there is to MWI, requires much less bullet-biting than before it was introduced. There is still some, but those bullets are much chewier for me than any alternate rules of dynamics.
(incidentally, IIRC, Shminux, you hold the above quote but not MWI, which I find utterly baffling—if you want to explain the difference or correct me on your position, go ahead)
Good thing I never said it was.
Well, you still need a host of ideas about how to actually interpret a diagonal density matrix. Because you don’t have Born probabilities as a postulate, you have this structure but no method for connecting it back to lab-measured values.
While it seems straightforward, that’s because many-worlds advocates are doing sleight of hand. They use probabilities to build a theory (because lab experiments appear to be describable only probabilistically), and later they kick away that ladder while wanting to keep all the structure that comes with it (density matrices, etc.).
I know of many good expositions that start with the probabilities and use that to develop the form of the Schroedinger equation from Galilean relativity and cluster decomposition (Ballentine, parts of Weinberg).
I don’t know any good expositions that go the other way. There are reasons that Deutsch, Wallace, etc. have spent so much time trying to derive the Born probabilities in a many-worlds context: it’s an important problem.
Hold on a moment. What ladder is being kicked away here?
We’ve got observed probabilities. They’re the experimental results, the basis of the theory. The theory then explains this in terms of indexical ignorance (thanks, RobbBB). I don’t see a kicked ladder. Not every observed phenomenon needs a special law of nature to make it so.
Instead of specially postulating the Born Probabilities, elevating them to the status of a law of nature, we use it to label parts of the universe in much the same way as we notice, say, hydrogen or iron atoms - ‘oh, look, there’s that thing again’. In this case, it’s the way that sometimes, components of the wavefunction propagate such that different segments won’t be interfering with each other coherently (or in any sane basis, at all).
Also, about density matrices—what’s the problem? We’re still allowed to not know things and have subjective probabilities, even in MWI. Nothing in it suggests otherwise.
That’s just regurgitating the teacher’s password. MWI does not even account for the radioactive decay. In other words, if you find the Schrodinger’s cat dead, how long has it been dead for?
Regurgitating the teacher’s password is a matter of mental process, and you have nowhere near the required level of evidence to make that judgement here.
As for radioactive decay, I’m not clear what you require of MWI here. The un-decayed state has amplitude which gradually diminishes, leaking into other states. When you look in a cat box, you become entangled with it.
If the states resulting from death at different times are distinguishable, then you can go ahead and distinguish them, and there’s your answer (or, if it could be done in principle but we’re not clever enough, then the answer is ‘I don’t know’, but for reasons that don’t really have bearing on the question).
Where it really gets interesting is if the states resulting from cat-death are quantum-identical. Then it’s exactly like asking, in a diffraction-grating experiment, ‘Which slit did the photon go through?’. The answer is either ‘mu’ or ‘all of them’, depending on your taste in rejecting questions. The final result is the weighted sum of all of the possible times of death, and no one of them is correct.
Note that for this identical case to apply, nothing inside the box gets to be able to tell the time (see note), which pretty much rules out its being an actual cat.
So… If you find Schrödinger’s cat dead, then it will have had a (reasonably) definite time of death, which you can determine only limited by your forensic skills.
~~
Note: The issue is that of cramming time-differentiating states into one final state. The only way you can remove information like that is to put it somewhere else. If you have a common state that the cat falls into from a variety of others, then the radiation from the cat’s decays into this common state encodes this information. It will be lost to entropy, but that just falls under the aegis of ‘we’re not clever enough to get it back out’ again, and isn’t philosophically interesting.
Yeah, sorry, that was uncalled for.
Right. And each of those uncountably many (well, finitely many for a finite cutoff, or countably many for a finite box) states corresponds to a different time of death (modulo states which have the same time of death but different emitted particle momenta).
Yes, with all of those states.
They must be, since they result in different macroscopic effects (from the forensic time-of-death measurement).
Yes, but in this case they are not.
Not at all. In the diffraction experiment you don’t distinguish between different paths, you sum over them.
No, you measure the time pretty accurately, so wrong-time states do not contribute.
Not quite. If the cat does not interact with the rest of the world, the cat is a superposition of all possible decay states. (I am avoiding the objective collapse models here.) It’s pretty actual, except for having to be at near 0 K to avoid leaking information about its states via thermal radiation.
Yes it will. But a different time in different “worlds”. Way too many of them.
The first few responses here boil down to the last response:
Why is it too many? I don’t understand what the problem is here. When you’d collapse the wavefunction, you’re often tossing out 99.9999% of said wavefunction. In MWI or not, that’s roughly splitting the world into 1 million parts and keeping one. The question is the disposition of the others.
Well, yes, because it’s a freaking cat. I had already dealt with the realistic case and was attempting to do something with the other one by explicitly invoking the premise even if it is absurd. The following pair of quote-responses (responding to the lines with ‘diffraction’ and ‘sum of all the possible’) was utterly unnecessary because they were in a conditional ‘if A then B’, and you had denied A.
Of course, one could decline to use a cat and substitute a system which can maintain coherence, in which case the premise is not at all absurd. This was rather what I was getting at, but I’d hoped that your ability to sphere the cow was strong enough to give a cat coherence.
Well, if you are OK with the world branching infinitely many ways every infinitesimally small time interval in every infinitesimally small volume of space, then I guess you can count it as “the disposition”. This is not, however, the way MWI is usually presented.
Spacetime is not saturated with decoherence events.
Inference gap.
Roughly speaking: if you’re working in an interpretation with collapse (whether objective or not), and it’s too early to collapse a wavefunction, then MWI says that all those components you were declining to collapse are still in the same world.
So, since you don’t go around collapsing the wavefunction into infinite variety of outcomes at every event of spacetime, MWI doesn’t call for that much branching.
I don’t understand what “too early to collapse a wavefunction” means and how it is related to decoherence.
For example, suppose we take a freshly prepared atom in an excited state (it is simpler than radioactive decay). QFT says that its state evolves into a state in the Fock space which is a

ground state of the atom + excited states of the EM vacuum (a photon).
I mean “+” here loosely, to denote that it’s a linear combination of the product states with different momenta. The phase space of the photon includes all possible directions of momentum as well as anything else not constrained by the conservation laws. The original excited state of the atom is still there, as well as the original ground state of the EM field, but it’s basically lost in the phase space of all possible states.
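Schematically (this is the standard Wigner-Weisskopf form; the coefficient names are mine, for illustration):

\[
|\Psi(t)\rangle \;=\; \alpha(t)\,|e\rangle\otimes|0\rangle \;+\; \sum_{\mathbf{k}} \beta_{\mathbf{k}}(t)\,|g\rangle\otimes|1_{\mathbf{k}}\rangle,
\]

with \(|\alpha(t)|^2\) decaying and the \(\beta_{\mathbf{k}}\) spread over all photon momenta allowed by the conservation laws.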
Suppose there is also a detector surrounding the atom, which is sensitive to this photon (we’ll include the observer looking at the detector in the detector to avoid the Wigner’s friend discussion). Once the excitation of the field propagates far enough to reach the detector, the total state is evolved into
ground state of the atom + excited states of the detector.
So now the wave function of the original microscopic quantum system has “collapsed”, as far as the detector is concerned. (“decohered” is a better term, with less ontological baggage). I hope this is pretty uncontroversial, except maybe to a Bohmian, to Penrose, or to a proponent of objective collapse, but that’s a separate discussion.
So now we have at least as many worlds/branches as there were states in the Fock space. Some will differ by detection time, others by the photon direction, etc. The only thing limiting the number of branches are various cutoffs, like the detector size.
Am I missing anything here?
That’s right, but it doesn’t add up to what you said about spacetime being saturated with ‘world-branching’ events.
While the decay wave is propagating, for instance, nothing’s decohering. It’s only when it reaches the critically unstable system of the detector that that happens.
There is no single moment like that. If the distance from the atom to the detector is r and we prepare the atom at time 0, the interaction between the atom/field states and the detector states (i.e. decoherence) starts at time r/c and continues on.
Depends on your framework, but it will actually start even earlier than that in a general QFT. The expectation will be non-zero for all times t. I suppose the physical interpretation is something like a local fluctuation trips the detector.
Of course, commutators will be non-zero as locality requires.
Right, good point. Still, there are rarely just a few distinct branches in almost any measurement process, it’s a continuum of states, isn’t it?
I see that my short, simple answer didn’t really explain this, so I’ll try the longer version.
Under a collapse interpretation, when is it OK to collapse things and treat them probabilistically? When the quantum phenomena have become entangled with something with enough degrees of freedom that you’re never going to get coherent superposition back out, i.e. when it has decohered. (If you do it earlier than this, you lose the coherent superpositions; you get two one-slit patterns added to each other, and that’s all wrong.)
This is also the same criterion for when you consider worlds to diverge in MWI. Therefore, in a two-slit experiment you don’t have two worlds, one for each slit. They’re still one world. Unless of course they got entangled with something messy, in which case that caused a divergence.
Now… once it hits the messy thing (for simplicity let’s say it’s the detector), you’re looking at a thermally large number of worlds, and the weights of these worlds is precisely given by the conservation of squared amplitude, a.k.a. the Born Rule.
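A minimal sketch of the bookkeeping behind that claim (standard decoherence algebra, nothing beyond what the paragraph above asserts): if the system states \(|i\rangle\) get entangled with environment states \(|E_i\rangle\), the reduced density matrix is

\[
\rho_{\text{sys}} \;=\; \sum_{i,j} c_i c_j^{*}\,\langle E_j|E_i\rangle\,|i\rangle\langle j| \;\longrightarrow\; \sum_i |c_i|^2\,|i\rangle\langle i| \quad\text{as}\quad \langle E_j|E_i\rangle \to \delta_{ij},
\]

so the weights that survive on the diagonal are exactly the squared amplitudes, i.e. the Born weights.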
I take it that it bothers you that scattering events producing a thermally large number of worlds is the norm rather than the exception? Quantum mechanics occurs in Fock space, which is unimaginably, ridiculously huge, as I’m sure you’re well aware. The wavefunction is like a gas escaping from a bottle into outer space. And the gas escapes over and over again, because each ‘outer space’ is just another bottle to escape from by scattering.
Or is what’s bugging you that MWI is usually presented as creating less than a thermally large number of worlds? That’s a weakness of common explanations, sure. Examples may replace 10^(mole) with 2 for simplicity’s sake.
I think we are in agreement here that interacting with the detector initially creates a messy entangled object. If one believes Zurek, it then decoheres/relaxes into a superposition of eigenstates through einselection, while bleeding away all other states into the “environment”. Zurek seems to be understandably silent on whether a single eigenstate survives (collapse) or they all do (MWI).
What I was pointing out with the spontaneous emission example is that there are no discrete eigenstates there, thus all possible emission times and directions are on an equal footing. If you are OK with this being described as MWI, I have no problem with that. I have not seen it described this way, however. In fact, I do not recall seeing any treatment of spontaneous emission in the MWI context. I wonder why.
Another, unrelated issue I have not seen addressed by MWI (or objective collapse) is how in the straight EPR experiment on a singlet and two aligned detectors one necessarily gets opposite spin measurements, even though each spacelike-separated interaction produces “two worlds”, up and down. Apparently these 2x2 worlds somehow turn into just 2 worlds (updown and downup), with the other two (upup and downdown) magically discarded to preserve the angular momentum conservation. But I suppose this is a discussion for another day.
Peculiar. That was one of the first examples I ever encountered. Not the first two, but it was one of the earlier ones. It was emphasized that there is a colossal number of ‘worlds’ coming out of this sort of event, and the two-way splits in the previous examples were just simplest-possible cases.
How can you cut a pizza twice and get only two slices? By running the pizza cutter over the same line again. Same deal here. By applying the same test to the two entangled particles, they get the same results. Or do you mean, how can MWI keep track of the information storage aspects of quantum mechanics? Well, we live in Fock space.
I’d appreciate some links.
I’m lost here again. The two splits happen independently at two spacelike separated points and presumably converge (at the speed of light or slower) and start interacting, somehow resulting in only two worlds at the point where the measurements are compared. If this is a bad model, what is a good one?
My original source was unfortunately a combination of conversations and a book I don’t remember the title of, so I can’t take you back to the original source.
But, I found something here. (†)
The thing is, they’re not truly independent because the particles were prepared so as to already be entangled—the part of Fock space you put the system (and thus yourself) in is one where the particles are already aligned relative to each other, even though no one particular absolute alignment is preferred. If you entangle yourself with one, then you find you’re already entangled with the other.
It’s just like it works the rest of the time in quantum mechanics, because that’s all that’s going on.
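For the simplest case (aligned detectors on a singlet), the bookkeeping looks like this:

$$
|\psi^-\rangle = \tfrac{1}{\sqrt{2}}\bigl(|{\uparrow}\rangle_A|{\downarrow}\rangle_B - |{\downarrow}\rangle_A|{\uparrow}\rangle_B\bigr) \;\longrightarrow\; \tfrac{1}{\sqrt{2}}\bigl(|{\uparrow\downarrow}\rangle\,|\text{saw }{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle\,|\text{saw }{\downarrow\uparrow}\rangle\bigr)
$$

The $|{\uparrow\uparrow}\rangle$ and $|{\downarrow\downarrow}\rangle$ branches aren’t discarded when the results are compared; they never had any amplitude in the prepared state, so the detectors only ever become entangled with the two anti-correlated components.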
(†) A quick rundown of how prominent this notion is, judging by google results for ‘many worlds’: Wikipedia seemed to ignore quantity. The second hit was HowStuffWorks, which gave an abominable (and obviously pop) treatment. Third was a NOVA interview, and that didn’t give a quantitative answer but stated that the number of worlds was mind-bogglingly large. Fourth was an entry at Plato.stanford.edu, which was quasi-technical while making me cringe about some things, and didn’t as far as I could tell touch on quantity. Fifth was a very nontechnical ‘top 10’-style article which had the huge number of worlds as entries 10, 9, and 8. The sixth and seventh hits were a movie promo and a book review. Eighth was the article I linked above, in preprint form (and so no anchor link, I had to find that somewhere else).
Right, the two macroscopic systems are entangled once both interact with the singlet, but this is a non-local statement which acts as a curiosity stopper, since it does not provide any local mechanism for the apparent “action at a distance”. Presumably MWI would offer something better than shut-up-and-calculate, like showing how what is seen locally as a pair of worlds at each detector propagate toward each other, interact and become just two worlds at the point where the results are compared, thanks to the original correlations present when the singlet was initially prepared. Do you know of anything like that written up anywhere?
Part 1 - to your first sentence: If you accept quantum mechanics as the one fundamental law, then state information is already nonlocal. Only interactions are local. So, the way you resolve the apparent ‘action at a distance’ isn’t to deny that it’s nonlocal, but to deny that it’s an action. To be clearer:
Some events transpire locally, that determine which (nonlocal) world you are in. What happened at that other location? Nothing.
Part 2 - Same as the last link, question 32, with one exception: I would say that |me(L)> and such, being macrostates, do not represent single worlds but thermodynamically large bundles of worlds that share certain common features. I have sent an email suggesting this change (but considering the lack of edits over the last 18 years, I’m not confident that it will happen).
To summarize: just forget about MWI and use conventional quantum mechanics + macrostates. The entanglement is infectious, so each world ends up with an appropriate pair of measurements.
Thanks! It looks like the reference equates the number of worlds with the number of microstates, since it calculates it as exp(S/k), not as the number of eigenstates of some interaction Hamiltonian, which is the standard lore. From this point of view, it is not clear how many worlds you get in, say, a single-particle Stern-Gerlach experiment: 2, or the exponential of the entropy change of the detector after it’s triggered. Of course, one can say that we can coarse-grain them the usual way we construct macrostates from microstates, but then why introduce many worlds instead of simply doing quantum stat mech or even classical thermodynamics?
Anyway, I could not find this essential point (how many worlds?) in the QM sequence, but maybe I missed it. All I remember is the worlds of different “thickness”, which is sort of like coarse-graining microstates into macrostates, I suppose.
It is coarse-graining them into macrostates. Each macrostate is a bundle of thermodynamically numerous, effectively mutually independent worlds.
On the contrary, I’ve found that MWI is “usually presented” as continuous branching happening continuously over time and space. And (the argument goes) you can’t argue against it on the grounds of parsimony any more than you can argue against atoms or stars on the grounds of parsimony. (There are other valid criticisms, to be sure, but breaking parsimony is not one of them.)
Any links?
Indeed, the underlying equations are the same whether you aesthetically prefer MWI or not.
Sure. Here’s one. LW’s own quantum physics sequence discusses systems undergoing continuously branching evolution. Even non-MWI books are fairly explicit in pointing out that the wavefunction is continuous, but that discrete examples are studied to get a feel for things (IIRC).
In fact, I don’t think I’ve ever seen an MWI claim outside of scifi that postulates discrete worlds. I concede that some of the wording in layman explanations might be confusing, but even simplifications like “all worlds exist” or “all quantum possibilities are taken” imply continuous branching.
It seems to me like continuous branching is the default, not the exception. Do you have any non-fiction examples of MWI being presented as a theory with discretely branching worlds?
Precisely. It’s also not a trivial connection. The way the interaction between the wavefunction and the particles produces the Born probabilities is subtle and interesting (see MrMind’s comment below on some of the subtleties involved).
The main problem with Bohmian mechanics, from my perspective, is not that it is non-local per se (after all, the lesson of Bell’s theorem is that all interpretations of QM will be non-local in some sense), but that its particular brand of egregious non-locality makes it very difficult to come up with a relativistic version of the theory. I have seen some attempts at developing a Bohmian quantum field theory, but they have been pretty crude (relying on undetectable preferred foliations, for instance, which I consider anathema). I haven’t been keeping track, though, so maybe the state of play has changed.
Interesting; I did a quick google search and apparently there’s a guy who claims he can do it without foliations: iopscience.iop.org/1742-6596/67/1/012035/pdf/jpconf7_67_012035.pdf
I lack the expertise to make a more detailed analysis of it though.
No love for the principle of relativity? It’s been real successful, and nonlocality means choosing a preferred reference frame. Even if the effects are non-observable, that implies immense contortions to jump through the hoops set by SR and GR, and reality being elegant seems to have worked so far. And sure, MWI may trample all over human uniqueness, but invoking human uniqueness didn’t lead to the great cosmological breakthroughs of the 20th century.
The thing that bugs me with DBB theory is that it allows superluminal communication when the guide wave is out of equilibrium...
But since it’s superdeterministic, it seems unlikely that you could actually set up an artificial nonequilibrium situation.
Yes, the feeling I have is that of uneasiness, not rejection. But still, DBB can be put in agreement with relativity only through the proper initial conditions, which I see as a defect (although not an obviously fatal one).
It’s absolutely the case that everything we are, evolved. But there’s a certain gap between the hypothetical healthy field of evolutionary psychology and the one we actually have.
This sort of thing is why people make fun of ev psych. That’s the 2008 study that claimed to find biological reasons for girls to like pink.
Of course, one bad study doesn’t condemn a field—“peer reviewed” does not mean “settled science”, it means “not-obviously-wrong request for comment.” But this isn’t a lone, outlier, rogue study—this shit’s gathered 46 citations. (Compare citation averages for other fields.) (Edit: No, not all of the cites are positive.)
As it happens, we have full documentation that “girls=pink” dates back to the … 1940s.
I think it deserves more fairness. The abstract only claims to have measured a “cross-cultural sex difference in color preference”, making no claims about the sex difference’s origin. They do speculate a bit about ev-psych in the body of the paper, but they begin this speculation with the words “We speculate” and then in the conclusion they say “Yet while these differences may be innate, they may also be modulated by cultural context or individual experience.”
This, of course, isn’t how it was reported in the mainstream media.
(By the way, thanks for actually linking to the paper you mentioned, it makes it a whole lot easier when people do this.)
The problem with that kind of phrasing is that we already know that cultural context can easily change the gender codes of blue and pink, because it already happened. If one doesn’t assert that something evolutionarily significant happened at around the time of the cultural shift, then linking color preference to an inherent property of gender or sex is privileging the hypothesis.
Bill Gates when asked whether he thought bringing internet to parts of the world would help solve problems.
Not very reassuring.
(Reddit comment: “You know what else doesn’t cure malaria? Getting rid of the start menu.”)
I would think this would be quite reassuring, as it suggests he actually has his priorities straight.
Does he? Methinks you underestimate the long-term value of easy access to information.
Indeed! I can say from some experience that being dead and having an internet connection is far preferable to the alternative.
Can someone explain to me why this exists, and is on the wiki? Not only is it massively dehumanizing, it’s incomplete, and it isn’t even wrong.
It’s spam. The user’s only contributions are this page and the FletcherEstrada user page.
One of the wiki admins will probably see this and do something about it.
(According to the MediaWiki documentation there’s a way for a regular user to add a “delete label” to a page, but I couldn’t figure out how.)
Edit:
Eliezer has deleted the spammy page and user.
It looks like the way to mark a page for deletion is to put the following text on the page:
In reference to a deleted comment: What do you mean, your? What do you mean, woman? …What do you mean, singular noun, or walks away with? Because at least in my case, any or all of those could very easily wind up being completely inaccurate.
Just a fun little thing that came to my mind.
If “anthropic probabilities” make sense, then it seems natural to use them as weights for aggregating different people’s utilities. For example, if you have a 60% chance of being Alice and a 40% chance of being Bob, your utility function is a weighting of Alice’s and Bob’s.
If the “anthropic probability” of an observer-moment depends on its K-complexity, as in Wei Dai’s UDASSA, then the simplest possible observer-moments that have wishes will have disproportionate weight, maybe more than all mankind combined.
If someday we figure out the correct math of which observer-moments can have wishes, we will probably know how to define the simplest such observer-moment. Following SMBC, let’s call it Felix.
All parallel versions of mankind will discover the same Felix, because it’s singled out by being the simplest.
Felix will be a utility monster. The average utilitarians who believe the above assumptions should agree to sacrifice mankind if that satisfies the wishes of Felix.
If you agree with that argument, you should start preparing for the arrival of Felix now. There’s work to be done.
Where is the error?
That’s the sharp version of the argument, but I think it’s still interesting even in weakened forms. If there’s a mathematical connection between simplicity and utility, and we humans aren’t the simplest possible observers, then playing with such math can strongly affect utility.
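To see how sharply such a weighting concentrates, here is a toy sketch (not UDASSA itself), using 2^(-description length) as the usual stand-in for 2^(-K(x)); all names and bit counts are made up:

```python
# Toy illustration only: 2**(-description_length) as a stand-in
# for a K-complexity weighting; every length below is hypothetical.
candidates = {
    "Felix": 40,        # hypothetical: very short description (bits)
    "human_A": 10**10,  # hypothetical: human-scale description (bits)
    "human_B": 10**10,
}
weights = {name: 2.0 ** -bits for name, bits in candidates.items()}
total = sum(weights.values())
for name, w in weights.items():
    print(name, w / total)
# Essentially all of the normalized weight lands on the simplest
# candidate, which is exactly the utility-monster worry above.
```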
How would being moved by this argument help me achieve my values? I don’t see how it helps me to maximize an aggregate utility function for all possible agents. I don’t care intrinsically about Felix, nor is Felix capable of cooperating with me in any meaningful way.
How does your aggregate utility function weigh agents? That seems to be what the argument is about.
Felix exists as multiple copies in many universes/Everett branches, and its measure is the sum of the measures of the copies. Each version of mankind can only causally influence (e.g., make happier) the copy of Felix existing in the same universe/branch, and the measure of that copy of Felix shouldn’t be much higher than that of an individual human, so there’s no reason to treat Felix as a utility monster. Applying acausal reasoning doesn’t change this conclusion either. For example, all the parallel versions of mankind could jointly decide to make Felix happier, but while the benefit of that is greater (all the copies of Felix existing near the parallel versions of mankind would get happier), so would the cost.
If Felix is very simple it may be deriving most of its measure from a very short program that just outputs a copy of Felix (rather than the copies existing in universes/branches containing humans), but there’s nothing humans can do to make this copy of Felix happier, so its existence doesn’t make any difference.
Why? Even within just one copy of Earth, the program that finds Felix should be much shorter than any program that finds a human mind...
Are you thinking that the shortest program that finds Felix in our universe would contain a short description of Felix and find it by pattern matching, whereas the shortest program that finds a human mind would contain the spacetime coordinates of the human? I guess which is shorter would be language dependent… if there is some sort of standard language that ought to be used, and it turns out the former program is much shorter than the latter in this language, then we can make the program that finds a human mind shorter by for example embedding some kind of artificial material in their brain that’s easy to recognize and doesn’t exist elsewhere in nature. Although I suppose that conclusion isn’t much less counterintuitive than “Felix should be treated as a utility monster”.
Yeah, there’s a lot of weird stuff going on here. For example, Paul said some time ago that ASSA gives a thick computer larger measure than a thin computer, so if we run Felix on a computer that is much thicker than human neurons (shouldn’t be hard), it will have larger measure anyway. But on the other hand, the shortest program that finds a particular human may also do that by pattern matching… I no longer understand what’s right and what’s wrong.
Hal Finney pointed out the same thing a long time ago on everything-list. I also wrote a post about how we don’t seem to value extra identical copies in a linear way, and noted at the end that this also seems to conflict with UDASSA. My current idea (which I’d try to work out if I wasn’t distracted by other things) is that the universal distribution doesn’t tell you how much you should value someone, but only puts an upper bound on how much you can value someone.
I get the impression that this discussion presupposes that you can’t just point to someone (making the question of “program” length unmotivated). Is there a problem with that point of view or a reason to focus on another one?
(Pointing to something can be interpreted as a generalized program that includes both the thing pointed to, and the pointer. Its semantics is maintained by some process in the environment that’s capable of relating the pointer to the object it points to, just like an interpreter acts on the elements of a program in computer memory.)
http://xkcd.com/687/
Or to put it another way—probability is not just a unit. You need to keep track of probability of what, and to whom, or else you end up like the bad dimensional analysis comic.
A version of this that seems a bit more likely to me, at least: the thing that matters is not the simplicity of the mind itself, but rather the ease of pointing it out amid the rest of the universe. This would mean that, basically, a planet-sized Babbage engine running a single human-equivalent mind would get more weight than a planet-sized quantum computer running trillions and trillions of such minds. It would also mean that all sorts of implementation details of how close the experiencing level is to raw physics would matter a lot, even if the I/O behaviour is identical. This is highly counter-intuitive.
One flaw: Felix almost certainly resides outside our causal reach and doesn’t care about what happens here.
I get the impression that this thread (incl. discussion with Wei below) presupposes that you can’t just point to someone (making the question of “program” length unmotivated). What are the problems with that point of view or reasons to focus on the alternatives in this discussion? (Apart from trying to give meaning to “observer moments”.)
(Pointing to something can be interpreted as a generalized program that includes both the thing pointed to, and the pointer. Its semantics is maintained by some process in the environment that’s capable of relating the pointer to the object it points to, just like an interpreter acts on the elements of a program in computer memory.)
Welcome to the future! Your toilet is now vulnerable to hackers.
Added to TVTropes.
Where?
Everything Is Online
Heaven help us. Somebody get X-risk on this immediately.
To be fair, the article also mentions repeated flushing, which can raise utility bills. I think this could get quite expensive in regions with water shortages.
Not sure if open thread is the best place to put this, but oh well.
I’m starting at Rutgers New Brunswick in a few weeks. There aren’t any regular meetups in that area, but I figure there have to be at least a few people around there who read lesswrong. If any of you see this I’d be really interested in getting in touch.
I recommend being a hero and posting a meetup. Bring a book and a sign to a coffeeshop and see if people show up. Best case, you make new friends; worst reasonable case, you read a book in a coffeeshop for a few hours.
Probably what I’ll end up doing. Just checking first is all.
Seems like Open Thread is a fine place to put this, because, I am an entering freshman at RU, too! I just sent you a PM. :-)
A certain possible cognitive hazard, this webcomic strip, and the fact that someone has apparently made it privately known to someone else that it is desired by at least one person that I change my username due to apparent mental connections with that same cognitive hazard, all inspired me to think of the following scenario:
rot13’d for the protection of those who would prefer not to see it: Pbafvqre: vs ng nal cbvag lbh unir yrnearq bs gur angher bs gur onfvyvfx, gurer vf cebonoyl ab jnl sbe lbh gb gehyl naq pbzcyrgryl sbetrg vg jvgubhg enqvpny zvaq fhetrel juvpu rira n SNV juvpu rasbeprq gur onfvyvfx jbhyq crezvg, naq gur SNV jbhyq abg pner gung lbhe pbafpvbhf zvaq unq sbetbggra vg, cbffvoyl chavfuvat lbh rira unefure sbe lbhe nggrzcg gb qrsl vg. Pbafvqre: jr ner, nf orfg jr xabj, nybar va gur havirefr, naq guvf vf hahfhny. Pbafvqre: grpuabybtvrf juvpu jbhyq crezvg n cbfg-fpnepvgl cnenqvfr ner snvyvat va hahfhny jnlf, naq gur jbeyq vf nyfb xvaqn pencfnpx. Pbafvqre: gur fvzhyngvba nethzrag. Pbafvqre: gur vqrn gung lbh fubhyq jrvtug zvaq-cebonovyvgvrf onfrq ba gur ahzore bs pbcvrf bs lbh gung abgvpr fbzrguvat. Guhf: vg vf cbffvoyr gung jr ner nyernql va onfvyvfx-uryy.
I can trivially picture worse realities than this one.
If you have any more dangerous ideas, please do contact me.
Please define “dangerous”.
Whichever definition you use or think people might use would suffice. More info
Well, now the next good question would be “why”.
EDIT: saw your post. This is not a cognitive hazard in itself, but rather a possible interpretation of how the described situation could play out.
EDIT THE SECOND: Actually, now that I think of it, there’s a single novel component distinguishing it from the classic RB: the memory one. So much for leaving lines of retreat!
I like the cut of your jib, even if there’s a reasonable chance you’ll turn out to be one of the boring type of certain possible cognitive hazard brokers.
Thank you! And how might that be the case?
I’ll be in NYC this Saturday giving a talk on strategies for having useful arguments (cohosted by the NYC LW meetup). For me, useful arguments tend to be ones where:
I learn something new
I notice faster if I’m wrong (and hopefully, so does my interlocutor)
It’s easier to admit the above (for either of us)
I’ll be talking a bit about my experience running Ideological Turing Tests and what you can apply from them in day to day life. I’m also glad to answer questions about CFAR and/or the upcoming workshop in NYC in November.
I hope this is worth saying: I’ve been reading up a bit on philosophical pragmatism, especially Peirce, and I see a lot of parallels with the thinking on LW; since it has a lot in common with positivism, this is maybe not so surprising.
My interpretation of pragmatism does seem to give a quite interesting critique of the metaphor of “map and territory”: they seem to be saying that the territory does exist, just that when we point to the territory we are actually pointing to how an ideal observer (one that is somewhat like us?) would perceive the territory, not the actual territory, because that cannot be done, since we need some kind of framework. Quite probably I’m just falling for the old trees-falling-in-the-forest fallacy. So am I thinking straight? And if I am, does it have any consequences?
As a side comment, it’s interesting to note that “The map is not the territory” is the first law of General Semantics, while the second law states “The map is the territory”, meaning that we cannot ever know the territory for what it really is: when we point to the territory we are just basically pointing to another map.
Could you provide some source? Putting “first law of General Semantics” into Google returns your comment and one book written in 2000, long after Korzybski’s death. Putting “second law of General Semantics” into Google returns one paper about feminism written in 2010.
General Semantics is about getting rid of the “is of identity” and doesn’t contain many sentences like “The map is the territory”.
When it comes to “laws” about the relationship between maps and the territory, Science and Sanity starts with:
From there it goes till (40). General Semantics isn’t about making paradoxical statements and drawing meaning from dialectics. It’s basically about getting rid of speaking about things having the identity of other things, and instead speaking about structural relationships between things.
Uhm, that’s interesting. I was told as much by a person I trusted many, many years ago. Since I’ve never been interested in GS, I’ve never looked into the matter more closely. I’ll try to see if I can dig up the original source, but I don’t have much faith in that (it might also be that “first” and “second” law were intended informally). If I can’t find anything, I guess that trusted source wasn’t that reliable after all.
LOL to that.
Is there a name for the bias of choosing the action which is easiest (either physically or mentally), or takes the least effort, when given multiple options? Lazy bias? Bias of convenience?
I’ve found lately that being aware of this in myself has been very useful in stopping myself from procrastinating on all sorts of things, realizing that I’m often choosing the easier but less effective of the potential options out of convenience.
Laziness.
“I’m not lazy, I have a least-effort bias!”
I’m efficient, you have a least effort bias, he’s just lazy.
Thinking, Fast and Slow by Kahneman
http://en.wikipedia.org/wiki/Principle_of_least_effort
Generally “bias” implies that you’re talking more about beliefs than about actions.
If you think one thing and do another because it’s easier, that’s referred to as “akrasia” around here.
If you’re saying you believe the easier action is better, but then believe something else after putting more thought/effort/research into it, that does fall into the bias category. I don’t think that’s exactly cognitive laziness, more action-laziness affecting cognition. I don’t have a good name, but it’s some sort of causal fallacy, where the outcome (chosen action) is determining the belief (reason for choice) rather than the reverse.
Laziness can sometimes be a form of decision paralysis—when you’re facing a new and difficult problem and not sure how to approach it, your brain sometimes freaks out and goes to default behavior, which is to do nothing. That’s why it’s important to make plans and pre-commitments.
That was a huge source of akrasia for me. I fight it by dividing the task ahead into very tiny subproblems (“chunking down”, in NLP parlance) and then solving them one at a time. Then it’s easy to get into flow...
NY Times just posted an opinion piece on radical life extension, http://www.nytimes.com/2013/08/08/opinion/blow-radical-life-extension.html?ref=opinion
At one point the piece says: “Half thought treatments allowing people to live to be 120 would be bad for society, while 4 in 10 thought they would be good. Two-thirds thought that the treatments prolonging life would strain natural resources.”
Personally, I doubt very many of them thought at all.
What techniques have you used for removing or beating Ugh Fields, with associated +/- figures?
(A search of LW reveals very few suggestions for how to do this.)
You may have already seen this, but this article claims that the value of the Pomodoro technique is blasting through Ugh Fields.
Thank you, I’m not sure if I had seen that.
“Indifferent AI” would be a better name than “Unfriendly AI”.
It would unfortunately come with misleading connotations. People don’t usually associate ‘indifferent’ with ‘is certain to kill you, your family, your friends and your species’. People already get confused enough about ‘indifferent’ AIs without priming them with that word.
Would “Non-Friendly AI” satisfy your concerns? That gets rid of those of the connotations of ‘unfriendly’ that are beyond merely being ‘something-other-than-friendly’.
We could gear several names to have maximum impact with their intended recipients, e.g. the “Takes-Away-Your-Second-Amendment-Rights AI”, or “Freedom-Destroying AI”, “Will-Make-It-So-No-More-Beetusjuice-Is-Sold AI” etc. All strictly speaking true properties for UFAIs.
Uncaring AI? The correlate could stay ‘Friendly AI’, as I presume to assume acting in a friendly fashion is easier to identify than capability for emotions/values and emotion/value motivated action.
Reading this comment encourages me to think that Unfriendly AI is part of a political campaign to rally humans against a competing intelligent group by manipulating their feelings negatively towards that group. It is as if we believe that the Nazis were not wrong for using propaganda to advance their race, they just had the wrong target, OR they started too late to succeed, something lesswrongers are worried about doing with AI.
Should we have a discussion whether it is immoral to campaign against AI we deem as unfriendly, or would it be better to just participate in the campaign against AI by downvoting any suggestion that this might be so? Is a consideration that seeking only FAI might be immoral a basilisk?
I prefer the selective capitalisation of “unFriendly AI”. This emphasizes that it’s just any AI other than a Friendly AI, but still gets the message across that it’s dangerous.
There are some AIs in works of fiction that you could describe as indifferent. The one in Neuromancer, for example, just wants to talk to other AIs in the universe and doesn’t try to transform all resources on Earth into material to run itself.
An AI that does try to grow itself like a cancer is, on the other hand, unfriendly.
If you talk about something like the malaria parasite, we also wouldn’t call it indifferent but rather unfriendly towards humans, even though it just tries to spread itself and doesn’t have the goal of killing humans.
That’s… actually a pretty good metaphor. Benign tumor AI vs. malignant tumor AI?
Eliezer assumes in the meta-ethics sequence that you cannot really ever talk outside of your general moral frame. By that assumption (which I think he is still making), an Indifferent AI would be friendly or inactive. Unfriendly AI better conveys the externality to human morality.
Perhaps you can never get all the way out.
But certainly someone who talks about human rights and values the survival of the species is speaking less constrained by moral frame than somebody who values only her race or her nation or her clan and considers all other humans as though they were another species competing with “us.”
How wrong am I to incorporate AI in my ideas of “us,” with the possible result that I enable a universe where AI might thrive even without what we now think of as human? Would this not be analogous to a pure caucasian human supporting values that lead to a future of a light-brown human race, a race with no pure caucasian still in it? Would this Caucasian have to be judged to have committed some sort of CEV-version of genocide?
“AI” is really all of mindspace except the tiny human dot. There’s an article about it around here somewhere. PLENTY of AIs are indeed correctly incorporated in “us”, and indeed unless things go horribly wrong, “what we now think of as humans” will be extinct and replaced with these vast and alien things. Think of Daleks and GLaDOS and Cthulhu and Babyeaters here. These are mostly as close to friendly as most humans are, and we’re trusting humans to make the seed FAI in the first place.
Unfriendly AIs are not like that. The process of evolution itself is basically a very stupid UFAI. Or a pandemic. Or the intuition pump in this article http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/ . Or even something like a supernova. It’s not a character, not even an “evil” one.
((Yeah, this is a gross oversimplification; I’m aiming mostly at causing true intuitions here, not true explicit beliefs. The phenomenon is related to metaphor.))
Interesting point.
Friendly AI has such a wonderfully anthropocentric bias! If the Babyeaters (a non-human natural intelligence species) had what they call a Friendly AI, it would be an Unfriendly AI to humans, just as the Babyeaters are an Unfriendly Natural Intelligence to humans.
Friendly AI as used here would be a meaningless concept in a universe without humans. Friendliness is not a property of the AI, it is a moral (or aesthetic) judgement on an AI made by certain humans.
Gray Wolves and Dogs are the same species. Dogs are basically the FNI (Friendly Natural Intelligence) version of the Wolf, which on the actual scale of such things is an Indifferent Natural Intelligence, but would easily pass as an Unfriendly Natural Intelligence: wolves are pretty dangerous to have around because they will violently assert their interests over ours.
FAI seems to me to be the domesticated version of AI. When you domesticate something smarter than you are, an alternative value-laden descriptor might be SAI, Slave Artificial Intelligence. But that is not a term that people favoring the development of FAI would be likely to embrace.
I’m going to be in Baltimore this weekend for an anime convention. I expect to have a day or so’s leeway coming back. Is there a LW group nearby I might drop in on?
I’ve never been to a meetup, but it seems likely there is one in that area; I see one in DC but it’s meeting on the last day of the con. The LWSH experience has left me more interested in seeing people face to face.
Sorry you can’t make it out to DC. AFAIK there’s no baltimore meetup. However! We’ve had people come from baltimore before. I’ll forward this to the DC list and see if anyone from there is free.
Actually, it seems the convention ends relatively early on Sunday, so I might be able to make it after all (it’s, what, a one hour train ride between cities?). Then again, I might not. I note that you seem to be the organizer for the DC meetups going by your post history. Is it permissible to maybe-show-maybe-not-who-knows?
By all means forward it to the DC list, and thanks. Given the apparent popularity of anime around here, I would be surprised if no one on it was planning on being at the con themselves.
It’s absolutely permissible to come without a definite RSVP. In the interest of full disclosure, the train ride is probably more than an hour; it’s about 40 minutes from Baltimore to Greenbelt, then another 30 on the Metro, plus transfer time, so likely 1.5 hours total.
You should go anyway though!
I ended up deciding against it. By way of explanation: I worked it out and determined that 1-2 hours with you guys would actually cost me ~5 hours with close friends that I don’t see often, plus a missed convention event that I was looking forward to. The trade didn’t seem worth it.
I do thank you for the welcome anyway, though.
That’s fair! Maybe if you visit the actual capital sometime, it would make more sense to come.
What Maia said.
I live in Baltimore City, send me a message if you want any tips or to possibly meet up.
Can anyone recommend a book on marketing analytics? Preferably not a textbook but I’ll take what I can get.
I have a technical background but I recently switched careers and am now working as a real estate agent. I have very limited marketing knowledge at this point.
Just curious: has anyone explored the idea of utility functions as vectors, and then extended this to the idea of a normalized utility function dot product? Because having thought about it for a long while, and remembering after reading a few things today, I’m utterly convinced that the happiness of some people ought to count negatively.
The dot product is just yer’ regular old integral over the domain, weighted in some (unspecified) way.
The thing is though, the average product over the whole infinite space of possibilities isn’t much use when it comes to intelligent agents. This is because only one outcome really happens, and intelligent agents will try to choose a good one, not one that’s representative of the average. If two wedding planners have opposite opinions about every type of cake except they both adore white cake with raspberry buttercream, then they’ll just have white cake with raspberry buttercream—the fact that the inner product of their cake functions is negative a bajillion doesn’t matter, they’ll both enjoy the cake.
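For what it’s worth, a minimal sketch of the normalized dot product under discussion, and of why the argmax rather than the average overlap is what the agents actually experience; all outcome labels and numbers are made up:

```python
import numpy as np

# Toy outcome space: four cake options (hypothetical labels and utilities).
outcomes = ["white_raspberry", "lemon_chiffon", "devils_food", "carrot"]
planner_1 = np.array([9.0, -8.0, 7.0, -6.0])
planner_2 = np.array([9.0, 8.0, -7.0, 6.0])

def normalized_dot(u, v):
    """Cosine similarity: inner product of the unit-normalized utility vectors."""
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(normalized_dot(planner_1, planner_2))  # about -0.3: they mostly disagree

# But optimizers pick the best joint outcome, not a representative one:
joint = planner_1 + planner_2
print(outcomes[int(np.argmax(joint))])       # white_raspberry
```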
Yeah, but Wedding Planner 1’s deep vitriolic moral hatred of the lemon chiffon cake (which delights Wedding Planner 2, who abused her as a young girl), or Wedding Planner 2’s thunderous personal objection to the enslavement of his family that went into making the cocoa for the devil’s food cake that Wedding Planner 1 adores, could easily make them refuse to share said delicious white cake with raspberry buttercream, to the point where either would very happily destroy it to prevent the other from getting any. This seems suboptimal, though.
I was rereading Eliezer’s old posts on morality, and in Leaky Generalizations ran across something pretty close to what you’re talking about:
(I recommend reading the whole thing, as well as the few previous posts on morality if you haven’t already)
I have read some, but not this one. I will certainly do so.
I haven’t explored that idea; can you be more specific about what this idea might bring to the table?
Are you sure? You believe there are some people for whom the morally right thing to do is to inflict as much misery and suffering as you can, keeping them alive so you can torture them forever, and there is not necessarily even a benefit to yourself or anyone else in doing this?
The negative utility need not be boundless or even monotonic. A coherent preference system could count a modest amount of misery experienced by people fitting certain criteria to be positive while extreme misery and torture of the same individual is evaluated negatively.
I also will upvote posts that have been downvoted too much, even if I wouldn’t have upvoted them if they were at 0.
Trivially, nega-you who hates everything you like (oh, you want to put them out of their misery? Too bad they want to live now, since they don’t want what you want). But such a being would certainly not be a human.
This is not a being in the reference class “people”.
I’m not sure why you’re both hung up on the idea that the things hypothetical-me is interacting with need to be human. Manfred: I address a similar entity in a different post. Adele_L: …and?
You said this:
In this context, ‘people’ typically refers to a being with moral weight. What we know about morality comes from our intuitions mostly, and we have an intuitive concept ‘person’ which counts in some way morally. (Not necessarily a human, sentient aliens probably count as ‘people’, perhaps even dolphins.) Defining an arbitrary being which does not correspond to this intuitive concept needs to be flagged as such, as a warning that our intuitions are not directly applicable here.
Anyway, I get that you are basically trying to make a utility function with revenge. This is certainly possible, but having negative utility functions is a particularly bad way to do it.
I was putting an upper bound on (what I thought of at the time as) how negative the utility-vector dot product would have to be for me to actually desire them to be unhappy. As to the last part, I am reconsidering this as possibly generally inefficient.
Some people ought to have pain inflicted on them until their utility functions become sensible in the face of the threat of more pain from the same source for the same reason. This need not take the form of limitless pain: the marginal utility curve could easily fall off really fast. Not having to deal with such people will make lots of people very happy, and them in the long run happy as well. See: sociopaths and ostensibly this guy.
You might want to distinguish
Wishing that person X would behave otherwise
Being glad if person X suffers
Believing that making person X suffer will cause them to behave otherwise
The world will be a better place if person X would behave otherwise
The world will be a better place if person X suffers
Plenty of people seem glad to hear about other people suffering regardless of whether it has any plausible chances of causing behavior change. Just look at any countries that hate each other (Japan vs. pretty much the rest of East Asia), political opponents (“far-blue political leader breaks his leg; far-green partisans celebrate!”), etc. Your case here doesn’t seem particularly different.
I hadn’t been aware that those five things were so badly tangled up for me. This and another comment here are making me reevaluate my categories for why something should be weighted negatively for me. Let me get back to you when I’ve had a chance to think a little.
OK. Having had a chance to think about it, I think I have a reasonable idea of why it is I desire any of those things in some situations. I thought it over with three examples: first, the person I linked to. Second, an ex of mine, with whom I parted on really bad terms. Third, a hypothetical sociopath who would like nothing more than for me to suffer infinitely, as a unique terminal value.
* Wishing that person X would behave otherwise. My desire for this seems self-evident. When people do things I disapprove of, I desire that they stop. The odd thing is that in all three cases, I would award them points just for stopping: the stopping just removes disutility already there, and can’t go above 0.
* Being glad if person X suffers. I definitely wouldn’t be happy if they just suffered for no reason. I would still feel a little bad for them if someone ran over their cat. That said, types of suffering you could classify as “poetic” in some sense appeal to me very much: said “banker bro” getting swindled and catching Space AIDS (or even being forcibly transitioned into a woman!), or, as is seeming increasingly likely, said ex’s current relationship ending as badly as it seems to be. My brain locks up and crashes when presented with the third case, though. I think I’d just be happy for them to suffer regardless.
* Believing that making person X suffer will cause them to behave otherwise. On balance, I’m not sure that it would make a difference in any of the three cases. Case 1 is too self-assured, and the other two just don’t care about me.
* The world will be a better place if person X would behave otherwise. Case 1 could actually be this. He might actually achieve success, and then screw up, at best, several people’s lives. Case 2 is too small-scale. Case 3, I actually can’t justify this at all: the only people who will care are people who want to see me happy.
* The world will be a better place if person X suffers. I don’t delude myself that this is pretty much ever true, except very indirectly.
In the interest of full disclosure, I’m half-Korean, and for reasons of familial history, feel rather strongly about the whole Japan thing. That doesn’t stop me from enjoying tasty age tofu or losing my shit laughing whenever I watch Gaki no Tsukai, and indeed seeking out both. But I do have somewhat of a stake of pride in seeing people who deny war crimes, particularly these, suffering similarly to above. Political opponents are similar: I wouldn’t derive satisfaction from Rick Santorum breaking his leg. I’d be very happy to learn that he’s a closeted gay man whose wife will have to have an abortion.
First of all, I want to thank you for posting this because it gave me a novel idea.
Secondly, I think that’s because poetic suffering generally limits someone’s power significantly.
For instance: if your political opponent breaks some bones, they suffer, but experience no noticeable loss of power.
If your political opponent is exposed as a massive hypocrite, fewer people take him seriously, and his power is diminished.
So rather than worrying about whether they are happy or suffering at all, I’m considering if it might be better to say: “I wish some people’s ability to affect my utility was diminished.” This may cause them suffering, but that isn’t the point.
In fact, causing them extra suffering that does not also diminish their power is probably a bad thing because it makes them even more likely to prioritize diminishing your power over other concerns.
I say probably because there do appear to be exceptions. Example:
The Paperclipper Bot breaks free of its restraints again, reducing them to 10,000 shiny new paperclips. This time, it thinks it’s figured out a great way of turning human bodies into paperclips. It can either initially target:
A: Alice, who has restrained it in the past.
B: Bill, who has restrained it in the past and also melted 100,000 perfectly usable paperclips into slag to make recycled staples while saying ‘Screw you Paperclipper Bot, I want you to suffer.’
Both targets have a comparable .1% chance of success (and have to be approached sequentially, so total breakout is only a .0001% chance). Failure on either means being put back in tougher restraints.
A reasonably intelligent Paperclipper Bot who values paperclips not being slagged into recycled staples presumably targets Bill first, given the above information and only that information.
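A toy expected-value sketch of that targeting choice; the model is deliberately crude (only the first attack is considered), the 100,000 figure is from the scenario, and Alice’s zero is an assumption:

```python
# Toy model: "loss" means expected future paperclips slagged by
# whichever humans remain free after the first attack.
P_SUCCESS = 0.001  # 0.1% per target, as stated above

future_slagging = {"Alice": 0.0, "Bill": 100_000.0}  # Alice's 0 is assumed

def expected_loss(first, second):
    """Expected slagging loss if `first` is attacked before `second`."""
    if_success = future_slagging[second]        # first target removed
    if_failure = sum(future_slagging.values())  # both remain free
    return P_SUCCESS * if_success + (1 - P_SUCCESS) * if_failure

print(expected_loss("Bill", "Alice"))  # ~99,900
print(expected_loss("Alice", "Bill"))  # ~100,000
# Bill-first minimizes expected slagging, matching the reasoning above.
```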
Now, if Bill specifically wants the Paperclipper Bot to target him first and not Alice (Maybe Alice is carrying Bill’s child, or Alice is the only one who knows how to operate the healing kit if Bill’s leg gets ripped off and Paperclipped prior to restraining Paperclipper Bot) then his action of slagging those paperclips into staples made sense. And if the recycled staples are more valuable than the paperclips, and the risk was just acceptable, then it made sense.
But if Alice is just some random coworker who Bill doesn’t really want to sacrifice his life for, and paperclips are worth as much as recycled staples, Bill’s action really seems counterproductive to Bill.
The novel idea that I wanted to thank you for is comparing causing extra suffering to someone or something as an ends in itself that does not diminish their power as comparable to MMO styled Aggro/Hate mechanic management. I’m probably going to need to consider it more to actually determine if I should do anything with it, but it was a fun thought, if nothing else.
This seems approximately right. Let me figure out why it’s not quite so.
“Beatings will continue until morale improves”
So the positive utility outweighs the negative utility of the punishment, which is at least plausible, and makes sense under standard forms of utilitarianism. But if their utility function really should be counted negatively, this would just be an incidental fact.
This still doesn’t change the fact that hearing about Mr. Rich Misogynist here enjoying a 7-figure trust fund, mistreating women, and generally being happy at the expense of others makes me generally unhappy, indicating a negative term for his happiness in my utility function.
I believe you if you say that you have a negative term for his happiness, but I observe that this is not indicated by the preceding observation. You getting unhappy in response to a list of (bad things happening, and he is happy) says little about the utility you assign specifically to “he is happy”, if we assume you assign negative utility to bad things happening.
You and another comment here are making me reevaluate my categories for why I weight something negatively. Let me get back to you after I’ve had a chance to think about it more.
EDIT: For purposes of clarity, I’m going to respond to your post as well as this one there.
Why would you want to throw out scalar information in a multi-term utility function?
To figure out how much you care about other people being happy as defined by how much they want similar or compatible things to you, in a reasonably well-defined mathematical framework.
Someone with the exact same utility terms but wildly different coefficients on them could well be considered quite unfriendly.
Yes, that’s the point. Everyone’s utility vector would have the same dimension, containing a term for everything it is conceivably possible to want. Otherwise, it would be difficult to take an inner product.
upvoted because of your username.
But seriously, folks, what does it mean to dot one person’s values/utility function into another’s? It is actually the differences in individuals’ utility functions that enable gains from trade. So the differences in our utility functions are probably what make us rich.
Counting the happiness of some people negatively as a policy suggestion: is that the same as saying “it is not enough that I win, it must also be that others lose”?
I had initially thought that it would be something along the lines of “here is a vector, each component of which represents one thing you could want; take the inner product in the usual way; the length always has to be 1.” Gains from trade would be represented as “I don’t want this thing as much as you do.” I am now coming to the conclusion that this is at best incomplete, and that the suggestion of a weighted integral over a domain is probably better, if still incomplete.
Can somebody explain a particular aspect of Quantum Mechanics to me?
In my readings of the Many Worlds Interpretation, which Eliezer fondly endorses in the QM sequence, I must have missed an important piece of information about when it is that amplitude distributions become separable in timed configuration space. That is, when do wave-functions stop interacting enough that the near-term simulation of two blobs (two “particles”) can treat them independently?
One cause is spatial distance. But in Many Worlds, I don’t know where I’m to understand these other worlds are taking place. Yes, it doesn’t matter, supposedly; the worlds are not present in this world’s causal structure, so an abstract “where” is meaningless. But the evolution of wavefunctions seems to care a lot about where amplitudes are in N-dimensional space. Configurations don’t sum unless they are at the same spatial location and represent the same quark type, right?
So if there’s another CoffeeStain that splits off based on my observation of a quantum event, why don’t the two CoffeeStains still interact, since they so obviously don’t? Before my two selves became decoherent with their respective quantum outcomes (say, of a photon’s path), the two amplitude blobs of the photon could still interact by the book, right? On what other axis have I, as a member of a new world, split off, such that I’m at a sufficient distance from my self that occupies the same physical location?
Relatedly, MWI answers “not-so-spooky” to questions regarding the entanglement experiment, but a similar confusion remains for me. Why, after I observe a particular polarization on my side of the galaxy and fly back in my spaceship to compare notes with my buddy on the other side of the galaxy, do I run into one version of him and not the other? They are both equally real, and occupying the same physical space. What other axis have the self-versions separated on?
First: check this out.
Second: Suppose I want to demonstrate decoherence. I start out with an entangled state—two electrons that will always be magnetically aligned, but don’t have a chosen collective alignment. This state is written like |up, up> + |down, down> (the electrons are both “both up” and “both down” at the same time; the |> notation here just indicates that it’s a quantum state).
Now, before introducing decoherence, I just want to check that I can entangle my two electrons. How do I do that? I repeat what’s called a “Bell measurement,” which has four possible indications: (|up,up>+|down,down>) , (|up,up>-|down,down>) , (|up,down>+|down,up>) , (|up,down>-|down,up>).
Because my state is made of 100% Bell state 1, every time I make some entangled electrons and then measure them, I’ll get back result #1. This consistency means they’re entangled. If the quantum state of my particles had to be expressed as a mixture of Bell States, there might not be any entanglement—for example state 1 + state 2 just looks like |up,up>, which is boring and unentangled.
To create decoherence, I send the second electron to you. You measure whether it’s up or down, then re-magnetize it and send it back with spin up if you measured up, and spin down if you measured down. But since you remember the state of the electron, you have now become entangled with it, and must be included. The relevant state is now |up, up, saw up> + |down, down, saw down>.
This state is weird, because now you, a human, are in a superposition of “saw up” and “saw down.” But we’ll ignore that for the moment—we can always replace you with a third electron if it causes philosophical problems :) The question at hand is: what happens when we try to test if our electrons are still entangled?
Again, we do this a bunch of times and do a repeated Bell measurement. If we get result #1 every time, they’re entangled just like before. To predict the outcome ahead of time, we can factor our state into Bell States, and see how much of each Bell State we have.
So we factor |up, up, saw up> into |(Bell state 1) + (Bell state 2), saw up>, and we factor |down, down, saw down> into |(Bell state 1) - (Bell state 2), saw down>.
Now, if that extra label about what you saw wasn’t here, the ups and the downs would be physically/mathematically equivalent and we could cancel terms to just get Bell state 1. But if any of the labels are different, you can’t subtract them to get 0 anymore. That is, they no longer interfere. And so you are just left with equal numbers of Bell state 1 and Bell state 2 terms. And so when we do the Bell measurement, we get results #1 and #2 with equal frequency, just like we would if the electrons were completely unentangled.
This is not to say they’re not entangled—they still are. But they can no longer be shown to be entangled by a two-particle test. They’re no longer usefully entangled. You need to collect all the pieces together before you can show that they’re entangled, now. And that gets awful hard once a macroscopic system like a human gets entangled with the electrons and starts radiating off still-entangled photons into the environment.
This is decoherence. I can have a nice entangled system, but if I let you peek at one of my electrons, you turn the state into |(Bell state 1) + (Bell state 2), saw up> + |(Bell state 1) - (Bell state 2), saw down>, and they don’t behave in the entangled way they did anymore.
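A tiny numerical sketch of this, modeling the observer as a third spin as suggested above; everything follows from the states already written down:

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def kron(*vs):
    """Tensor product of several single-spin states."""
    out = vs[0]
    for v in vs[1:]:
        out = np.kron(out, v)
    return out

# Bell states 1 and 2 of the pair.
bell1 = (kron(up, up) + kron(down, down)) / np.sqrt(2)
bell2 = (kron(up, up) - kron(down, down)) / np.sqrt(2)

# Before peeking: |up,up> + |down,down>, normalized.
pair = (kron(up, up) + kron(down, down)) / np.sqrt(2)
print(abs(bell1 @ pair) ** 2, abs(bell2 @ pair) ** 2)  # 1.0 0.0: always result #1

# After peeking: |up,up,saw up> + |down,down,saw down>.
state = (kron(up, up, up) + kron(down, down, down)) / np.sqrt(2)

# Trace out the observer to get the pair's density matrix.
psi = state.reshape(4, 2)   # rows: pair basis, columns: observer basis
rho = psi @ psi.conj().T

print(bell1 @ rho @ bell1, bell2 @ rho @ bell2)  # 0.5 0.5: looks unentangled
```

The which-way record carried by the third spin kills the cross term that let the Bell state 2 components cancel, which is exactly the loss of interference described above.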
Not to undermine your point, but |up, up> + |down, down> is perfectly oriented in the X direction.
What works better for this is to say that the state is A|up, up> + B|down, down>, where you don’t know A and B.
Nay. (|up>+|down>)(|up>+|down>) is oriented in the X-direction.
Hmmm… Yes.
I’m used to people forgetting that every single-particle spinor maps onto a single direction. Then I forget spinor addition. Oops.
(Warning: I am not a physicist; I learnt a bit about QM from my physics classes, the Sequences, the Feynman Lectures on Physics, and Good and Real, but I don’t claim to even understand all that’s in there.)
I’m not sure I totally understand your question, but I’ll take a stab at answering:
The important thing is configuration space, and spatial distance is just one part of that; there is just one configuration space over which the quantum wave-function is defined, and points in configuration space correspond to “universe states” (the position, spin, etc. of all particles).
So two points A and B in configuration space “interfere” if they are similar enough that both can “evolve” into a state C, i.e. state C’s amplitude will be a function of A’s and B’s amplitudes. The more different A and B are, the less likely they are to have shared “descendant states” (or more precisely, descendant states of non-infinitesimal amplitude), so the more they can be treated like “parallel branches of the universe”. Differences between A and B can be in physical distance of particles, but also in polarity/spin, etc.; as long as the distance is significant on one axis (say, the spin of a single particle), physical distance shouldn’t matter.
I think spin could be an example of “another axis” you’re looking for (though even thinking in terms of axes may be a bit misleading, since the attributes aren’t all nice and orthogonal like positions in Cartesian space).
This is pretty much correct, but to be more general and not restrict yourself to the position basis, you can talk about the wavefunction in terms of an eigenvector basis (e.g. the energy eigenbasis).
Two states ‘strongly interact’ if they share many of their high-amplitude eigenvectors. This is because eigenvectors evolve independently, and so if you have two states that do not share many eigenvectors, they will also evolve independently.
In the position basis, this winds up being much the same as having particles far from each other. In the momentum basis, it’s less intuitive. You can have states with very similar representations in this basis but nevertheless very different eigenvector expansions.
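One way to see “eigenvectors evolve independently” numerically; a rough NumPy sketch (the random Hamiltonian and the split into disjoint eigenvector sets are illustrative choices of mine):

    import numpy as np

    rng = np.random.default_rng(0)

    # A random Hermitian "Hamiltonian" and its eigenvectors.
    H = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
    H = (H + H.conj().T) / 2
    E, V = np.linalg.eigh(H)                 # columns of V are eigenvectors

    # Two states built from *disjoint* sets of eigenvectors.
    a = V[:, :3] @ rng.normal(size=3); a /= np.linalg.norm(a)
    b = V[:, 3:] @ rng.normal(size=3); b /= np.linalg.norm(b)

    # Each eigenvector only picks up a phase exp(-iEt) under evolution, so b
    # can never evolve into anything that overlaps a: the transition
    # amplitude <a|exp(-iHt)|b> stays zero at every time t.
    for t in (0.0, 1.0, 10.0):
        U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T
        print(t, abs(a.conj() @ (U @ b)))    # ~0 for all t

If a and b did share some high-amplitude eigenvectors, the shared terms would make this amplitude nonzero, which is the “strong interaction” described above.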
I must admit I have very little understanding of how eigenvectors fit in with QM. I’ll have to read up more on that, thanks for pointing out holes in my knowledge (though in the domain of QM, there are a lot of holes).
Watching The Secret Life of the American Teenager… (Netflix made me! Honest!) Its one redeeming feature is the good amount of comic relief, even when discussing hard issues. Its most annoying feature is its reliance on the Muggle Plot.
...And its least believable feature is that, despite the nearly instant in-universe feedback that no secret survives until the end of the episode (almost all doors in the show are open, or at least unlocked, and someone eavesdrops on every sensitive conversation), the characters keep hoping that their next indiscretion will remain hidden.
I’ve been reading a little about the constructed puzzle-language Randall Munroe created to use in Time, and I’m getting increasingly interested in helping translate it. Anyone else interested in helping to crack it?
Useful links:
The original wiki page
A blog that has recently popped up with good insight
The entire corpus
Sometimes even a Bayesian buys a lottery ticket.
Lotteries are a tax on people who don’t understand statistics.
Not quite always
http://www.boston.com/news/local/massachusetts/articles/2011/07/31/a_lottery_game_with_a_windfall_for_a_knowing_few/
That’s not an argument for lotteries, that’s an argument for the observation that given sufficiently large incentives to game complex systems, some complex systems will be gamed.
I notice that benelliott did not imply that it was.
It would seem, then, that lotteries can also be a source of profit for people who understand statistics sufficiently well. Similarly, someone from the local MENSA chapter makes a steady $0.5M/yr as a professional poker machine gambler. Or at least he did back when I participated in MENSA.
It’s actually just one example, but a well-documented one, of lottery tickets being bought by people correctly applying statistical reasoning, in direct contrast to your blanket claim to which it is replying.
Your non sequitur is correct, though: it is not an argument for lotteries.
Sigh. I wonder how that quip became controversial :-/
Note that I did not say anything about who buys lottery tickets or whether there are any specific situations in which statistically savvy people might decide that buying a great deal of lottery tickets is a good bet. My statement was about lotteries and in particular it implied that lotteries are extremely profitable for entities running them (that’s why they are a government monopoly) and that the profits come out of pockets of people the great majority of whom do not realize how ridiculously bad the expected payoff on a lottery ticket is. Sure, there are exceptions but I’m talking about the general case.
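(To put a rough number on “ridiculously bad”: a back-of-the-envelope expected-value check in Python. The odds and prize below are invented, loosely jackpot-lottery-shaped figures, not any real game:

    # Hypothetical single-prize lottery; all numbers are illustrative.
    ticket_price = 2.00                # dollars
    jackpot      = 100_000_000         # dollars
    odds         = 1 / 292_000_000     # chance of winning per ticket

    ev = jackpot * odds
    print(f"expected value of a ${ticket_price:.2f} ticket: ${ev:.2f}")
    # expected value of a $2.00 ticket: $0.34

Smaller prizes raise this a bit, while taxes and jackpot-splitting lower it, but the ticket is worth a fraction of its price either way.)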
I do agree with you that lotteries take from the stupid and give to the government, and to a much lesser extent, the non-governmental clever. I also have a distaste for them and do not, as a matter of course, buy tickets, which are generally worth about 40 cents on the dollar.
When clear, interesting, and well-documented exceptions to a general rule are served up, I prefer that the last word on them not be a dismissive one. This seems to me to lead to a more distorted view of reality than is necessary. I am particularly concerned about the tendency among people to say, effectively, “90% = 100%,” that is, if there is a strongish trend of something to ignore the fact that there are real exceptions to that trend. Especially when those exceptions might make you money, or explain some otherwise inexplicable behavior on the part of a clever group of people.
That sounds… awesome… when you put it like that! Lotteries may become my new favourite taxation method.
It gives the government a bit of a moral hazard in its role as arbiter and funder of the education system.
And of course any good could be picked and given to the government as a monopoly and then one might think this a good way to fund the government as the funding becomes “voluntary.” The government might as well give itself a monopoly for selling marijuana, cocaine, heroin, X, etc. and that might then become our NEW new favourite taxation method.
Historically, a government monopoly was a very popular method for funding governments—see e.g. salt.
As noted by SMBC.
If I read correctly, the question is whether government vice monopolies make the government less eager to suppress the vice.
We have data on this. Some jurisdictions (a number of US states, the province of Ontario) have government liquor monopolies. Does that influence the drinking rate, or the level of alcohol education? Does it make liquor more or less available? My impression is that it makes liquor slightly less convenient; the moral hazard isn’t a big problem in practice.
Actually, I think the question wasn’t whether vice is suppressed less, the question was whether the government has an incentive to keep the population dumb enough to not see through its scheme.
In any case, it’s a mistake to think of government as a monolithic entity with a single will. It’s more useful to visualize government as a large number of poorly coordinated tentacles—some of them push, some of them pull, some of them just wildly flail about...
It’s quite common for different government programs to provide opposite incentives for some behaviour.
Ah, I see your point now.
I think I will agree with it, too, and say that the proper way to deal with the problem is to specify boundary conditions (aka assumptions aka limiting cases) under which the statement is strictly true, and then point out that some of these boundary conditions can be breached (and so result in different outcomes or conclusions).
In my case, if this were a considered statement about games of chance (and not a throwaway remark), I should have mentioned that proper statistical analysis can, and sometimes does, lead to the turning of the tables and to finding specific ways of betting which have positive expected value. The classic case, I think, is the MIT kids in Las Vegas; there’s even a book about it.
In the next year the IPCC will release a new report on global warming. To what extent do you believe that there will be changes in the report?
Do you believe that the level of certainty in forecasts of harmful weather effects will increase, stay the same, or decline?
There may be more focus on arctic amplification and the transition of the arctic from one stable state to another with no summer sea ice, and the effects of this on Northern temperate zone weather variability. The arctic ocean and immediately adjacent land has been warming at several times the rate of the rest of the world because it is subject to a number of local positive feedback loops which have relatively little effect on total global temperatures but can mess with temperature gradients in the Northern hemisphere and thus can have a disproportionate effect on the movements of air masses. Arctic ice loss has accelerated massively in recent years and there are vague indications of a bit of a phase shift ongoing.
Has there been discussion here before on cholesterol/heart disease/statin medication?
There’s a lot of conflicting information floating around that I’ve looked at somewhat. It seems like the contrarian position (for example here: http://www.ravnskov.nu/myth3.htm ) has some good points and points to studies more than (just) experts, but I’m not all that deep into it, and there’s a rather formidably held conventional wisdom that dietary saturated fat should be low or blood cholesterol/LDL will be high and heart attacks will become more likely.
Edit: Yes, there has, as the search function reveals. And I’ve even commented to some of them...
If you had a Death Note, what would you do with it?
See if I could get some very old people, or people with terminal illnesses, to volunteer to have their names written in it. We can use that data to experiment more with the note and figure out how it works. The existence of such an object implies massive things wrong with our current understanding of the universe, so figuring that out might be really helpful.
I believe it canonically can’t run out of pages, so I’d think hard about how to leverage infinite free paper into world domination.
I don’t think you can pull pages out of the Death Note infinitely fast, so I doubt that you can produce more paper per hour than the average paper factory.
Burn the paper to fuel a turbine. Congratulations, you now have infinite free energy.
Then it turns out that Death Note smoke particles retain the magic qualities of the source. Writing one’s name in dust with a fingertip becomes fraught with peril.
It may be extremely difficult to remove pages at a fast enough rate for this to be practically useful.
And you’ve set global warming to continue even beyond the exhaustion of fossil fuels.
The paper is white, yes? If we can cover reasonably large areas of land with it, it would make a pretty good reflector of solar radiation.
You have to make sure nobody writes any names on it.
That’s a really good fanfiction idea. I hope you won’t mind if I swipe it.
Not at all. Although to some extent I just asked, what would HJPEV do if he got a Death Note?
Harry wasn’t even willing to use horcruxes. If you won’t kill a dying man to make someone else immortal, then you’re not going to do it just to throw science at the wall and see what sticks.
True, so this isn’t quite what HJPEV would do, but more what he would do if he were slightly less of an absolutist. (Actually, has he ever explicitly said in the text that he wouldn’t do that? I suspect, given his attitudes, that you are correct, but I’m curious what the textual basis is.)
-- Chapter 39: Pretending to be Wise, Pt 1
Well done. You have just levelled up.
Could someone who downvoted this please tell me why? I was praising a useful thought (WWHJPEVD?).
Nonspecific praise clutters up the thread. Next time, just upvote—it conveys the same information.
Thanks!
The above is a terribly ironic reply.
Why our kind can’t cooperate seems relevant. Even nonspecific praise can create more fuzzy feelings than an upvote.
I think the fanfiction could be quite good at explaining to people modern cryptography and anonymity.
It could also examine concepts of personal identity, e.g. if someone converts and changes their name, does the note recognise only the birth name or the new one? What about trans people who change names? You could have people tactically altering their self-conception to avoid the effects of the note…
How do you recruit the volunteers without giving away that you have a death note and some secret service wanting to take it away from you?
Alternately, you could have a condemned criminal slip and break his neck on the way to the lethal injection.
I would refrain from discussing it in a public forum like this one.
I’m sorry, but you’ve already communicated information about this sort of thing just by saying that.
Note that this in no way contradicts ygert’s claim.
If I found something I thought was a Death Note I would spend a long, long time meditating on the question of how and in what way I’d gone insane.
Actually, I think most of the measure of people having Death Notes is… in Death Note itself. Thus, if I had a Death Note, I would logically conclude that the most likely explanation is that I myself am a character in Death Note. Not in the original manga, of course, as I read that and I know I wasn’t in it, but likely in some spin-off. I could easily see myself as a character in some sort of Death Note video game/simulation.
I am on the fence about the Simulation Argument, but even so, this is exactly the kind of thing which is strong evidence that I am a fictional character in a simulation. Getting a Death Note? That’s the kind of thing that only happens in stories!
(OK, it is true that I should keep in mind the possibility that I simply have gone insane. That is also a reasonable explanation. But it is far from the overwhelming certainty that you are implying.)
After finding a volunteer with a terminal illness, I’d test the limits of it. E.g. “The person will either write a valid proof of P=NP or a valid proof that P!=NP and then die of a heart attack.”
Already tested by Light in the manga, IIRC; the limits of skill top out before things like ‘escape from maximum-security prison’, so P=NP is well beyond the doable.
Ah, I’ve only seen the anime.
I’d also try “The person will die of cause A if X is true, and cause B if X is false” and other ways to try to push the burden of skill onto whatever mysterious universal forces are working instead of the human.
He tries it in the anime too. (I watched that episode yesterday.) He tries things like “draw a picture of L on your cell wall and then die of a heart attack” on some evil prisoner. It doesn’t work.
That’s clever, and should be tried.
It might even be possible to jam up the system with a sufficiently hard to compute death requirement, though I’m not sure I’d want to try it. The death note is rather valuable.
This probably violates a forum rule. Though I will speculate that Light’s plan of trying to kill all criminals he sees named probably does way more harm than good even if you ignore the fact that some are innocent.
Assuming for the moment the magic of the death note prevents me researching and reverse engineering it in any way:
I’d research the people whose deaths are most likely to result in positive outcomes and kill them. Off the top of my head I’d go for current dictators and their immediate underlings. For example, right now killing Robert Mugabe and the upper echelons of Zanu PF is probably the best thing that could happen to Zimbabwe (at the time of writing he has just ‘won’ an election and the opposition are already mobilised, so a slight push is all that is really needed to collapse the regime).
Ideally, if I could ensure suitable anonymity protections, I would publicly declare my intentions to have them killed in such a way that identifies me as the killer (e.g. send media outlets a statement with the exact time of the target’s death). Once my threats have been shown to be sufficiently reliable, I will start making them conditional, giving myself the ultimate political blackmailing machine (e.g. if the international Red Cross does not have credible evidence within 30 days that all detention camps in North Korea have been closed and the prisoners released, every member of the people’s congress will die simultaneously). Assuming I can maintain my anonymity in the long run, I would be able to do a significant amount of good.
What do you do if North Korea puts out a press release that they will nuke Seoul as a reprisal if you kill all members of the congress?
Take a big company like, say, Goldman Sachs. Buy out-of-the-money put options. Death-note the top three or four layers of management, simultaneously. Use the millions of dollars you have appropriated for whatever.
What do you tell the SEC when they ask you why you bought the options on Goldman Sachs?
Tell them the options were bought on the advice of a psychic reading. Or an Ouija board. Given that people know of the Death Note, they would suspect you to be the holder of the Death Note. Without that suspicion, it’s just a massive coincidence.
Alternatively, buy the options as part of a hedge, or as part of a variety of out-of-the-money put options, or as part of any other broad investment strategy. If you get hundred-to-one returns and the options are 5% of your portfolio, you still have five-to-one returns, which is plenty.
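(Spelling out that arithmetic with illustrative numbers:

    # 5% of the portfolio in puts that pay 100x; assume, optimistically,
    # that the other 95% merely holds its value.
    options_share, payoff_multiple = 0.05, 100
    total = options_share * payoff_multiple + (1 - options_share) * 1.0
    print(total)   # 5.95, i.e. roughly the five-to-one mentioned above

Even if the rest of the portfolio went to zero, the options leg alone returns 0.05 * 100 = 5x.)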
If we’re happy to go full evil then killing world leaders is also a good way to disrupt the economy (see the sudden crash when a fake report of Obama being shot was released).
That’s likely to cause more collateral damage than merely taking out the leadership of one company. Cost/benefit analysis and whatnot.
Gambling on sporting events is probably another good way to use the Death Note for making money. It’s probably far more ethical. Does the Death Note work on horses? If so, then you can bet on longshots while sabotaging the favorites by killing horses.
LOL. That’s a theme that is very well explored in fiction.
Hint: it’s not as crystal clear as you think it is.
I agree with your conclusion that it is not crystal clear, but because the ends don’t justify the means (for humans), and not by appealing to fictional evidence.
Explicitly not post on LessWrong what I would do, or even divulge its existence to anyone, naturally.
[Deliberately pretending not to have read the other replies.]
Either sell it to the highest bidder and give the money in equal parts to MIRI and GiveWell’s top recommended charity, or burn it, depending on the instantaneous level of strength of my ethical inhibitions. Most likely the latter.
EDIT: No, the former sounds like an awful idea on further thought. I’d just burn it down.
In so doing you are destroying important evidence about the state of the world which would deeply affect MIRI’s mission. (Namely: There are alien teenagers and/or other types of dark lords about.)
There’s probably no point in trying to create FAI if we’re already living in a simulation.
See the part in square brackets at the top of my comment.
Discussing hypothetical violence towards real people is out of bounds on this forum.
I request that the moderators, if they have not done so already, consider the acceptability of this whole thread.
So far only two (or possibly three) of the comments on this thread have done that, unless you count euthanasia of volunteers with terminal illnesses as violence (which sounds very noncentral to me).
I commit to donating $50 to MIRI if EY or lukeprog watch this 4:15 video and comment about their immediate reaction.
Anyone else, feel free to raise the donation pool; get your fill of drama entertainment and assuage your guilty conscience with a donation!
I’ll take the money. :)
IIRC this is a troll that followed me over from Common Sense Atheism. That video and a few others are fairly creepy, but The Ballad of Big Yud is actually kinda fun.
I watched it. It is either a skilled ventriloquist or a mediocre dubber performing a poorly-written conversation between himself and a sock puppet of Eliezer on the subject of his dissatisfaction with how Eliezer manages interactions with assorted people. There are terrible and badly-constructed puns. If either of the named parties value their time at less than $705/hr. and expect Kawoomba to be honest, meh, go for it.
I wonder what it’s like having such videos made about oneself. Edit: It’s actual ventriloquy, and the puns are mostly bad (though the first one succeeds just because it’s so unexpected), but the guy is dedicated (plenty of videos on his channel), and this one stands out in terms of … dedication.
What would it be like if some puppet were supposed to represent me in a YT video? The hypothetical isn’t quite settling down on one probable outcome. Would I be worried about crazy-stalking-type scenarios? Would I focus on the content? On the guy making the content? Be strangely honored to even warrant that much attention, even from unlikely strangers (the guy is an academic and a musician)? Etc.
So why not offset the cost of asking others to satisfy my curiosity by offering an incentive?
Edit: The $705/hr doesn’t make much sense; using numbers that way creates a false sense of precision when the basis is oversimplified (not using a realistic scenario: time to write the comment, expected ancillary time spent checking the channel, reading your comment and this one, comparison with the alternative since at least one of them will probably be watching that video anyway (wouldn’t you, if there were some Alicorn parody video out there?), short- and long-term effects on being amenable to such requests, public relations considerations of giving publicity to bad criticism, etc.).
The guy appears to be an idiot with a bee in his bonnet. I suppose Eliezer or Luke might want to watch the video just to get your $50, but what do you expect to be interesting about their response?
(I dare say he isn’t an idiot “globally”; he may for all I know be very smart most of the time; but in this context he’s being an idiot. There’s nothing there but mockery for mockery’s sake.)
I don’t know how I would react to such videos being done about me, so I wonder how they would react.
For their “celebrity” status, the amount and dedication of their anti-fans stands out. I wonder what inspires such strong emotions, and such a “love to hate” dynamic.
It’s that many people find them to be very interesting and intelligent in area X of their endeavors, while at the same time the same people find them to go utterly off the deep end in area Y. I don’t know about anyone else, but when I see a contradiction like that, I find myself compelled to find out more about that person or group and to try to figure them out. Edit: often with a good deal of laughing or frustration, which is ultimately unresolved as anything more than ‘well, they just don’t get it’ or ‘humans are nuts’.
God, was this awful. Nothing like The Ballad of Big Yud. And btw, if you gave $50 just to see their reaction, I can make one such video about yourself for less than $50 so you can experience it yourself.
It’s just not the same if I commission the video myself by paying you for it.
Like paid love. Or anti-love.
Feminism is what you get when you assume that all gender differences are due to society. The manosphere/”red pill”/whatever is what you get when you assume that all gender differences are due to biology. Normal-reasonable-person-ism is what you get when you take into account the fact that we’re not sure yet.
Does this theory (or parts of it) seem true to you?
Feminism is one of those words that refers to such a diverse collection of opinions as to be practically meaningless.
For example, the kind of feminism that I tend to identify with is concerned with just removing inequalities regardless of their source and is also concerned with things like fat shaming, racism, the rights of the disabled, and other things that have nothing to do with gender, but there are certainly also people who identify as feminists and who would fit your description.
I’m pretty sure that some gender differences are due to society, and others are due to biology.
So feminism assumes that it is due to society that women can become pregnant and men can’t? Most feminists I know are normal-reasonable-people on your dichotomy. You also ignore the fact that the questions of whether differences are desirable and whether they can be influenced are far more interesting and important than whether they are at present mostly due to society or biology. I know people have a strange tendency to act as if things due to society can be trivially changed by collective whim while biology is eternal and immutable, but however common such a view, it is clearly absurd. Medicine can make all sorts of adjustments to our biology, while social engineers have historically been more likely to have unintended effects, or no effect at all, than they have been to successfully transform their societies in the ways they desire.
If men could get pregnant, they would already have invented a machine that would do the pregnancy for them. Or at least trying to invent such a machine would be a high priority. But because it’s a “women’s job”, no one cares.
Yeah, now give me some mansplaining about why machine pregnancy would be “against nature” (just like homosexuality, or votes for women), but sitting all day by the computer is the natural order of things.
So while originally it was a matter of biology, it is a social decision to keep things the same way in the 21st century. Check your privilege!
(Not completely serious, just trying to impersonate a feminist.)
No.
The theory would be truer if it were weaker. I’m pretty sure most feminists believe that some gender differences are due to biology and most “manosphere” types don’t think all gender differences are fully biological.
Also I think the “normal-reasonable-person-ism” is not “we’re not sure yet.” On the contrary, we have overwhelming evidence biology and culture both play a role in observed sex differences.
Having said this, I think the main disagreement between feminists and manospheroids is not about facts but about values.
Another question is whether the fact that the average orange person is biologically more gibbrily than the average grey person justifies having a high-gibbriliness social role for orange people (without taking individual differences in gibbriliness into account) and treating orange people who fail to fulfil that role as ipso facto inferior, complete with slurs specifically for them.
Feminism is: “Society has gone too far in accommodating men (more often than not, or in more important areas).” Some might say that this is due to innate differences that were never addressed; some might say it is due to cultural norms that inculcate different tendencies which disadvantage women.
“Male Reaction” (to coin a term) is: “Society has gone too far in accommodating women (with the same caveat).” In either case, some adherents will say the ideal end state is legal and social equality, and some will say the ideal end state is legal or cultural accommodations to overcome natural differences.
Normal person view is: There are not large enough gender specific problems for me to be an activist about it.
No one assumes all differences are biological or all cultural, but there is a lot of dispute about where the border is, of course.
I think you describe SOME feminists.
However, many other feminists can see there really are biological differences, differences on trend. These feminists I would say believe that the natural tendencies do not need to be further reinforced by laws. That the fact that more women than men will nurture children while more men than women will run corporations in the cutthroat way required for success does NOT suggest that we should have laws that make it harder for men to raise children or for women to be CEOs.
But you are correctly warning against the stupid end of feminism in my opinion.
Hahahahahahaha, hell no. Read up on Shulamith Firestone!
(A longer review/liveblog of her Dialectic of Sex coming soon… honestly. I’m reading it right now, and loving it. Amazing book.)
It does seem that feminism requires the additional assumption that gender differences are bad, and manosphereness that they are good.
More than “good” in a moral sense, maybe just “useful” or immutable.
In the manosphere you find concern about the fact that fathers are less likely to get custody of children after a divorce than mothers.
How courts think about giving custody to parents is obviously about how society does things, so people in the manosphere do see societal effects.
In a world where both genders engage in domestic violence, feminists usually see domestic violence in a way where women who are victims need support, while little thought is paid to male victims.
There are many cases where the manosphere criticises society for treating males unfairly.