Doesn’t seem wrong to say that it’s at least the income lost, though, which is all you need to overdetermine an answer.
Is income before or after taxes?
Yeah, wouldn’t stay selected.
Done
I think you’re being oversensitive—if I said the NYC Swing Dancing Club had two babies, I don’t think anyone would bat an eye.
This is a really good post.
If I can bother your mathematical logician for just a moment...
Hey, are you conscious in the sense of being aware of your own awareness?
Also, now that Eliezer can’t ethically deinstantiate you, I’ve got a few more questions =)
You’ve given a not-isomorphic-to-numbers model for all the prefixes of the axioms. That said, I’m still not clear on why we need the second-to-last axiom (“Zero is the only number which is not the successor of any number.”) -- once you’ve got the final axiom (induction), I can’t seem to visualize any not-isomorphic-to-numbers models.
Also, how does one go about proving that a particular set of axioms has all its models isomorphic? The fact that I can’t think of any alternatives is (obviously, given the above) not quite sufficient.
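(On reflection, maybe I can half-answer the first question myself, so take this with salt: if I keep successor-injectivity and induction but drop that axiom, a three-element loop seems to sneak in as a model. Sketched below; this is my own reconstruction, not anything from the post, so someone please correct me if I’m misapplying the induction axiom.)

```latex
% Candidate model: three "numbers" arranged in a loop, keeping
% successor-injectivity and induction, dropping only the axiom about zero.
\[
  N = \{0, 1, 2\}, \qquad S(0) = 1, \quad S(1) = 2, \quad S(2) = 0.
\]
% S is injective, and induction still goes through: any property that holds
% of 0 and is preserved by S sweeps up 1 and then 2. But 0 = S(2), so this is
% a finite loop rather than an endless chain -- not isomorphic to the numbers.
```

On the second question, my possibly-garbled understanding is that the usual move is to build a map between any two models by sending zero to zero and successor-of-x to successor-of-the-image-of-x, then use induction inside each model to show that map is everywhere defined, one-to-one, and onto.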
Oh, and I remember this story somebody on LW told, there were these numbers people talked about called...um, I’m just gonna call them mimsy numbers, and one day this mathematician comes to a seminar on mimsy numbers and presents a proof that all mimsy numbers have the Jabberwock property, and all the mathematicians nod and declare it a very fine finding, and then the next week, he comes back, and presents a proof that no mimsy numbers have the Jabberwock property, and then everyone suddenly loses interest in mimsy numbers...
Point being, nothing here definitively justifies thinking that there are numbers, because someone could come along tomorrow and prove ~(2+2=4) and we’d be done talking about “numbers”. But I feel really, really confident that that won’t ever happen, and I’m not quite sure how to say whence this confidence. I think this might be similar to your last question, but it seems to dodge RichardKennaway’s objection.
I’ve seen some (old) arguments about the meaning of axiomatizing which did not resolve into the answer, “Because otherwise you can’t talk about numbers as opposed to something else,” so AFAIK it’s theoretically possible that I’m the first to spell out that idea in exactly that way, but it’s an obvious-enough idea and there’s been enough debate by philosophically inclined mathematicians that I would be genuinely surprised to find this was the case.
If memory serves, Hofstadter uses roughly this explanation in GEB.
Central planning means pushing the planners’ goals into everyone’s individual incentives. Humans aren’t IGF maximizers, and will respond to financial incentives.
With central planning, more women than men makes sense, and this system has central planning. Not everyone is just trying to maximize IGF.
The Fable of the Dragon-Tyrant would make a good animated short, I think.
OK, let’s say you’re looking down at a full printout of a block universe. Every physical fact for all times specified. Then let’s say you do Solomonoff induction on that printout—find the shortest program that will print it out. Then for every physical fact in your printout, you can find the nearest register in your program that it was printed out of. And then you can imagine causal surgery—what happens to your program if cosmic rays change that register at that moment in the run. That gives you a way to construe counterfactuals, from which you can get causality.
ETA: There are still some degrees of freedom in how this gets construed, though. Like, what if the printout I’m compressing has all its info time-reversed—it starts out with details about what we’d call the future, then the present, then the past. Then I’d imagine that the shortest program that’d print that out would process everything forward, store it in an accumulator, then run a reversal on that accumulator to print it out, the problem being that the register a fact gets printed out of might be downstream from where the value originally was. It seems like you need some extra magic to be sure of what you mean by “pretend this fact here had gone the other way”.
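To make the causal-surgery step concrete, here’s a toy sketch in Python. It cheats by skipping the Solomonoff part entirely and just handing ourselves the generating program (a one-dimensional cellular automaton standing in for the universe); the rule number, grid size, and flip location are all arbitrary choices for illustration, not anything canonical. The point is just the shape of the operation: run the program, flip one cell at one time step, re-run, and read off which downstream facts come out differently.

```python
# Toy "causal surgery" on a deterministic universe-generating program.
# The "block universe printout" is the full space-time history of a 1-D
# cellular automaton; a counterfactual is produced by flipping one cell at
# one time step (the cosmic ray) and letting the same rules run forward.

RULE = 110          # arbitrary deterministic update rule
WIDTH, STEPS = 31, 20

def step(cells):
    """One update of the automaton (edges wrap around)."""
    new = []
    for i in range(len(cells)):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % len(cells)]
        neighborhood = (left << 2) | (center << 1) | right
        new.append((RULE >> neighborhood) & 1)
    return new

def run(initial, steps, flip=None):
    """Evolve for `steps` steps. If flip=(t, x), force cell x at time t to
    take the opposite value and let everything downstream follow from that."""
    cells = list(initial)
    history = []
    for t in range(steps + 1):
        if flip is not None and t == flip[0]:
            cells = list(cells)
            cells[flip[1]] ^= 1      # the surgical intervention
        history.append(list(cells))
        cells = step(cells)
    return history

initial = [0] * WIDTH
initial[WIDTH // 2] = 1              # one live cell in the middle

factual = run(initial, STEPS)
counterfactual = run(initial, STEPS, flip=(10, WIDTH // 2))

# "Had that register gone the other way, which later facts would differ?"
for t, (a, b) in enumerate(zip(factual, counterfactual)):
    changed = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if changed:
        print(f"t={t}: cells {changed} differ")
```

(And the time-reversal worry above doesn’t bite here only because I’ve hard-coded which direction the program runs in; in the general case that’s exactly the freedom that needs pinning down.)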
This question seems decision-theory complete. If you can reify causal graphs in situations where you’re in no state of uncertainty, then you should be able to apply them to questions like “what is the output of this computation here”, and then you can properly specify a wins-at-Newcomb’s-problem decision theory.
I am still trying to figure out how to Have Computers correctly, because they suffer from this weird constraint where they’re only really useful if I can carry them all over, but if I do that I lose them all the time.
(Symptomatically, I’m typing this on your broken/cast-off macbook =P)
a 15-minute break every 90 minutes
People can work for 90 minutes?! Like… without stopping?
Ah, gotcha =)
Sorry, what do you mean by “pass an ideological Turing test”? The version I’m familiar with gets passed by people, not definitions.
“Sexism” is a short code. Not only that, it’s a short code which has already been given a strong negative affective valence in modern society. Fights about its definition are fights about how to use that short code. They’re fights over a resource.
That code doesn’t even just point to a class of behaviors or institutions—it points to an argument, an argument of the form “these institutions favor this gender and that’s bad for these reasons”. Some people would like it to point more specifically to an argument that goes something like “If, on net, society gives more benefits to one gender, and puts more burdens on the other, then that’s unfair, and we should care about fairness.” Others would like it to point to “If someone makes a rule that applies differently to men and women, there’s a pretty strong burden of proof that they’re not making a suboptimal rule for stupid reasons. Someone should probably change that rule”. The fight is over which moral argument will come to mind quickly, will seem salient, because it has the short code “sexism”.
If I encounter a company where the men have a terrible dress code applied to them, but there’s one women’s restroom for every three men’s restrooms, the first argument might not have much to say, but the second might move me to action. Someone who wants me to be moved to action would want me to have the second argument pre-cached and available.
In particular, I’m not a fan of the first definition, because it motivates a great big argument. If there’s a background assumption that “sexism” points to problems to be solved, then the men and the women in the company might wind up in a long, drawn-out dispute over whose oppression is worse, and who is therefore a target of sexism, and deserving of aid. The latter definition pretty directly implies that both problems should be fixed if possible.
I’ll take a shot.
What we choose to measure affects what we choose to do. If I adopt the definition above, and I ask a wish machine to “minimize sexism”, maybe it finds that the cheapest thing to do is to ensure that for every example of institutional oppression of women, there’s an equal and opposite oppression of men. That’s...not actually what I want.
So let’s work backwards. Why do I want to reduce sexism? Well, thinking heuristically, if we accept as a given that men and women are interchangeable for many considerations, we can assume that anyone treating them differently is behaving suboptimally. In the office example, the dress code can’t be all that helpful to the work environment, or the women would be subject to it too. Sexism can be treated as a pointer to “cheap opportunities to improve people’s lives”. The given definition cuts off that use.
Welcome ^_^