Saturday.
To be clear, I liked the book, though I otherwise don’t like the guy’s writing.
Genre people and litfic people love flinging shit at each other, and it rarely makes much sense to a person actually familiar with the writing. Far as I can make out, it’s because of generalising from a little evidence—a lot of the insults make more sense when you look at the more-likely-to-be-recommended stuff (for example, Ian McEwan wrote a whole book which can be very easily strawmanned into “these poor people are really badly off; but you shouldn’t give in to the temptation to therefore dismiss all rich people”).
Even positive reviews that cross the divide are horribly condescending.
I usually deal with people who don’t have strong opinions either way, so I try to convince them. Given total non-compatibilists, what you do makes sense.
Also, it struck me today that this gives a way of one-boxing within CDT. If you naively blackbox prediction, you would get an expected utility table {{1000,0},{1e6+1e3,1e6}} where two-boxing always gives you 1000 dollars more.
But, once you realise that you might be the simulated version, the expected utility of one-boxing is 1e6 while that of two-boxing is now 5e5+1e3. So, one-box.
A similar analysis applies to the counterfactual mugging.
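The calculation above can be sketched in a few lines. All numbers are the ones in the comment (big box 1e6, small box 1e3, and, by the you-completeness argument, a 50% chance that "you" are the Predictor's simulation):

```python
# Expected utilities in Newcomb's problem once you admit you might be
# the Predictor's simulation. Payoffs and the 50% figure are the
# comment's, not canonical.

BIG, SMALL = 1_000_000, 1_000
p_sim = 0.5

# One-boxing: whether you are the simulation or the flesh-and-blood
# player, the Predictor fills the big box, so the (real) you gets BIG.
eu_one_box = p_sim * BIG + (1 - p_sim) * BIG          # 1e6

# Two-boxing: if you are the simulation, the big box ends up empty and
# the real you gets only SMALL; if you are the real player (and the box
# was filled), you walk away with BIG + SMALL.
eu_two_box = p_sim * SMALL + (1 - p_sim) * (BIG + SMALL)   # 5e5 + 1e3

print(eu_one_box, eu_two_box)
```

So within CDT, once the simulation possibility is priced in, one-boxing dominates by roughly half a million dollars.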
Further, this argument actually creates immunity to the response ‘I’ll just find a qubit arbitrarily far back in time and use the measurement result to decide.’ I think a self-respecting TDT would also have this immunity, but there’s a lot to be said for finding out where theories fail—and Newcomb’s problem (if you assume the argument about you-completeness) seems not to be such a place for CDT.
Disclaimer: My formal knowledge of CDT is from wikipedia and can be summarised as ‘choose A that maximises
Care to elaborate? Because otherwise I can say “it totally is!”, and we can leave it at that.
Basically, signals take time to travel. If it is ~.1 s, then predicting it that much earlier is just the statement that your computer has faster wiring.
However, if it is a minute earlier, we are forced to consider the possibility—even if we don’t want to—that something contradicting classical ideas of free will is at work (though we can’t throw out travel and processing time either).
It’s less than 85 pages, and his main argument is in sections 3 and 4, ~20 pages.
No, his thesis is that it is possible that even a maximal upload wouldn’t be human in the same way. His main argument goes like this:
a) There is no way to find out the universe’s initial state, thanks to no-cloning, the requirement of low entropy, and there being only one copy.
b) So we have to talk about uncertainty about wavefunctions—something he calls Knightian uncertainty (roughly, a probability distribution over probability distributions).
c) It is conceivable that particles in which the Knightian uncertainties linger (i.e., they have never interacted with anything macroscopic enough for decoherence to happen) mess around with us, and it is likely that our brain, and only our brain, is sensitive enough to a single photon for that to change how it would otherwise behave (he proposes Na-ion pathways).
d) We define “non-free” as something that can be predicted by a superintelligence without destroying the system (i.e., the predictor can mess around with everything else if it wants, though within reasonable bounds, inside which it can observe extensively).
e) Because of Knightian uncertainty it is impossible to predict people, if such an account is true.
My disagreements (well, not quite—more, why I’m still compatibilist after reading this):
a) predictability is different from determinism—his argument never contradicts determinism (modulo prob dists, but we never gave a shit about that anyway) unless we consider Knightian uncertainties ontological rather than epistemic (and I should warn you that physics has a history of things making a jump from one to the other rather suddenly). And if it’s not deterministic, according to my interpretation of the word, we wouldn’t have free will any more.
b) this freedom is still basically random. It has more to do with your identification of personality than anything Penrose ever said, because these freebits only hit you rarely and only at one place in your brain; but when they do affect it, they affect it randomly among considered possibilities.
I’d say I was rather benefitted by reading it, because it is a stellar example of steelmanning a seemingly (and really, I can say now that I’m done) incoherent position (well, or being the steel man of said position). Here’s a bit of his conclusion that seems relevant here:
To any “mystical” readers, who want human beings to be as free as possible from the mechanistic chains of cause and effect, I say: this picture represents the absolute maximum that I can see how to offer you, if I confine myself to speculations that I can imagine making contact with our current scientific understanding of the world. Perhaps it’s less than you want; on the other hand, it does seem like more than the usual compatibilist account offers! To any “rationalist” readers, who cheer when consciousness, free will, or similarly woolly notions get steamrolled by the advance of science, I say: you can feel vindicated, if you like, that despite searching (almost literally) to the ends of the universe, I wasn’t able to offer the “mystics” anything more than I was! And even what I do offer might be ruled out by future discoveries.
Yes, I agree with you—but when you tell some people this, the question arises of what is in the big-money box after Omega leaves… and the answer is “if you’re considering this, nothing.”
A lot of others (non-LW people) I tell this to say it doesn’t sound right. The bit just shows you that the seeming closed loop is not actually a closed loop, in a very simple and intuitive way** (oh, and it actually agrees with ‘there is no free will’), and it also made me see the whole thing in a new light (maybe other things that look like closed loops can be shown not to be, in similar ways).
** Anna Salamon’s cutting argument is very good too but a) it doesn’t make the closed-loop-seeming thing any less closed-loop-seeming and b) it’s hard to understand for most people and I’m guessing it will look like garbage to people who don’t default to compatibilist.
I like his causal answer to Newcomb’s problem:
In principle, you could base your decision of whether to one-box or two-box on anything you like: for example, on whether the name of some obscure childhood friend had an even or odd number of letters. However, this suggests that the problem of predicting whether you will one-box or two-box is “you-complete.” In other words, if the Predictor can solve this problem reliably, then it seems to me that it must possess a simulation of you so detailed as to constitute another copy of you (as discussed previously). But in that case, to whatever extent we want to think about Newcomb’s paradox in terms of a freely-willed decision at all, we need to imagine two entities separated in space and time—the “flesh-and-blood you,” and the simulated version being run by the Predictor—that are nevertheless “tethered together” and share common interests. If we think this way, then we can easily explain why one-boxing can be rational, even without backwards-in-time causation. Namely, as you contemplate whether to open one box or two, who’s to say that you’re not “actually” the simulation? If you are, then of course your decision can affect what the Predictor does in an ordinary, causal way.
While reading books. Always particular voices for every character. So much so, I can barely sit through adaptations of books I’ve read. And my opinion of a writer always drops a little bit when I meet him/her, and the voice in my head just makes more sense for that style.
I’d wager people who do well on tests are apt to be the same ones who get high marks on Cognos reports—i.e., the same prejudices affect what’s deemed valuable for both.
Well, fair enough.
What would that prove? That a society that values high IQ rewards people with high IQs.
You do realise that it’s rare for co-workers to know each other’s IQs? Obviously there’s a third thing that both IQ and success correlate with.
Is there reason to believe that social skills are more difficult to teach than math, or rationality?
Yes. They’re very hard to understand. It’s hard to teach something you don’t understand.
Auxiliarily, I’d expect that common sense would kick in and people would feel confident in contradicting teachers.
(That said, it seems likely to me that they can be taught.)
Doesn’t affect your main point, but this is SO not what the Buddhists were talking about (at least Indian and Tibetan Buddhism, which are the strains with which I have a passing familiarity).
The Buddhist position on this stuff would be, “stop maximising a personal utility function; the fact that you feel you have to do that will lead you to bigger problems later on. [Why that is so is an involved discussion and I’ll get it horribly wrong if I try.] Instead, learn to let go of the whole dichotomy of me and not-me, regard your consciousness being a local* phenomenon as a transient accident, and then you’ll attain Nirvana—you’ll get outside the cycle of birth and death, which is the only reason your consciousness has this local manifestation.” (Yes, I see the obvious mistake here, but think of the whole identifying-the-listener-as-someone-separate as a stepping stone to a point where that becomes superfluous.)
(And all three keys are related. I imagine that if you fill in the thoughts about why they’re related, you’ll get as good an understanding as I, so I won’t go into it.)
*Local in the sense of ‘in your body,’ not the way it’s used in physics.
The hundreds cancel out.
Wow, that was stupid of me. Of course they do! And thanks.
Just tried one today: how safe are planes?
Last time I was at an airport, the screen had five flights over a three-hour period. It was peak time, so I multiplied only by five, giving 25 flights from Chennai airport per day.
~200 countries in the world, so guessed 500 adjusted airports (effective no. of airports of size of Chennai airport), giving 12500 flights a day and 3*10^6 flights a year.
One crash a year from my news memories gives the probability of a plane crash per flight as 1/(3×10^6) ≈ 3×10^-7.
Probability of dying in a plane crash is 3×10^-7 (source). At a hundred dead passengers a flight, fatal crashes are ~10^-5. Off by two orders of magnitude.
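The estimate above, written out in code. Every number is the comment’s guess, not real data, and the disputed last step (multiplying by passengers) is left out since the thread itself debates it:

```python
# Fermi estimate of flights per year and per-flight crash probability,
# using only the comment's guessed inputs.

flights_per_3h_peak = 5
peak_scaling = 5                    # scale down: 3h window was peak time
flights_per_day_chennai = flights_per_3h_peak * peak_scaling      # 25

effective_airports = 500            # ~200 countries -> 500 Chennai-sized airports
flights_per_day = flights_per_day_chennai * effective_airports    # 12,500

flights_per_year = 3_000_000        # the comment's rounding
                                    # (12,500 * 365 is closer to ~4.6e6)

crashes_per_year = 1                # from news memory
p_crash_per_flight = crashes_per_year / flights_per_year          # ~3.3e-7

print(flights_per_day, p_crash_per_flight)
```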
PEP is not a force, in the sense that it’s not ‘dynamical’: it can’t actually affect the Hamiltonian/Lagrangian of the world. And it’s not a symmetry either; it’s a consequence of the behaviour of ‘fields’ under rotations: see the spin-statistics theorem. (Explanation of the field business: modern physics postulates that at every point in space and time there are a certain number of degrees of freedom; we call these fields, and ‘quantising’ them gives us particles—and particles are just spatially localised excitations when you don’t look closely at them.)
The rest of the forces, however, do come from symmetries called local gauge symmetries: roughly, since the wavefunction is a complex number, you change its phase by an amount that depends on the point, and then require that physics be invariant under this. (Even gravity, though only in classical field theory as of now: it can be found by performing a Lorentz transformation by a different amount at every point.)
This explanation is horrible, so sorry; but on the bright side, the math is simple enough that you may actually understand wikipedia on these things.
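To make the phase-change business concrete: the standard U(1) gauge transformation of electromagnetism (textbook convention, not spelled out above) is

```latex
\psi(x) \;\to\; e^{i\alpha(x)}\,\psi(x),
\qquad
A_\mu(x) \;\to\; A_\mu(x) - \tfrac{1}{e}\,\partial_\mu \alpha(x),
```

and demanding invariance under a point-dependent α(x) is exactly what forces the existence of the field A_μ, since the covariant derivative (∂_μ + ieA_μ)ψ then transforms with the same overall phase as ψ.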
Oh, and can I use LaTeX in the comments?
Well, hello. I’m a first-year physics PhD student in India. Found this place through Yvain’s blog, which I found when I was linked there from a feminist blog. It’s great fun, and I’m happy I found a place where I can discuss stuff with people without anyone regularly playing with words (or, more accurately, where it’s acceptable to stop and define your words properly). So, one of my favourite things about this place is the fact that it’s based on the map to territory idea of truth and beliefs; I’ve been using it to insult people ever since I read it.
The post says I should say why I identify as a rationalist; I wouldn’t, personally, ’cause I never feel like being better at rationality is the point, and whatever you or I say the word means it stands to be misunderstood in this way. But as for why I’m interested in this place at all: better calibration, and the possibility of better communication.
Anyway, still going through the sequences (personally, I would prefer reading something more mathematical, but I can understand why these posts aren’t). I have a whole tab group in Firefox for LW right now, because it got out of hand.
As for special personal interests, I’m ridiculously scatterbrained, and so haven’t garnered any non-trivial understanding of anything. One vaguely interesting thing I do enjoy doing is trying to charitably understand some mystical-looking stuff, like the Tao te Ching or Maya or art criticism (warning: my blog is a review blog, but you won’t find much of this if you click through, as there I just use the conventions post-justification and modify them whenever). My methodology: ask what questions they were thinking about to posit the answers they did, and then think about the questions myself. Collect more information, and update. Maybe I’ll even write about some of this once I have a better grasp of how to explicitly use the tools presented here.
Also, I have a question about Anki: is the web part defunct or something? I can’t find anything there. Whatever I search for, I get a blank page. (I was going to post in that page, but this is more likely to be replied to.)
Eliezer (if you see this): is there a reason you feel the need to talk about Everett branches or Tegmark duplicates every time you speak about the interpretation of probability, or is it just a physically realisable way to talk about an ensemble? (Though I’m not sure if you can call them physically realisable if you can’t observe them.)
In response to this, I want to roll back to saying that while you may not actually be simulated, having the programming to one-box is what causes there to be a million dollars in there. But, I guess that’s the basic intuition behind one-boxing/the nature of prediction anyway so nothing non-trivial is left (except the increased ability to explain it to non-LW people).
Also, the calculation here is wrong.