I highly recommend Hanabi. It’s a cooperative game about common knowledge.
Rules are here: http://www.cocktailgames.com/upload/produit/hanabi/regles/regle_en_hanabi.pdf
There’s another effect of “unpacking”, which is that it gets us around the conjunction/planning fallacy. Minimally, I would think that unpacking both the paths to failure and the paths to success is better than unpacking neither.
Well, for one thing, we don’t know how many rounds there are ahead of time.
I don’t think that’s true. You may not believe that the set of functions is unique (in which case the notion of sets in bijection is no longer unique).
Oops, sorry! I misread. My bad. I would agree that they are all equivalent.
You reject the claim, but can you point out a flaw in their argument?
I claim that the answers to E, F, and G should indeed be the same, but H is not equivalent to them. This should be intuitive. Their line of argument does not claim H is equivalent to E/F/G—do the math out and you’ll see.
I agree that you should pay the same amount.
It feels as though you should be willing to pay twice as much in case 2, since you remove twice as much “death mass”. At this point, one might be confused by the apparent contradiction. Some are chalking it up to intuition being wrong (and the problem being misinformed) and others are rejecting the argument. But both seem clearly correct to me. And the resolution is simple—notice that your money is worth half as much in case 1, since you are living half as often!
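To make the cancellation concrete, here is a minimal sketch of the arithmetic, under the illustrative assumptions that death has utility 0, being alive with wealth w has utility w, and paying x removes death mass Δ, raising your survival probability from p to p + Δ. You’re indifferent when (p + Δ)·(w − x) = p·w, i.e. when x = Δ·w / (p + Δ). If case 2 removes twice the death mass (2Δ) but also leaves you alive twice as often (2(p + Δ)), the two factors cancel: x = 2Δ·w / (2(p + Δ)) = Δ·w / (p + Δ), the same price as in case 1.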
Harry’s Patronus also blocks a Killing Curse in Azkaban (in HPMoR).
I think that, realistically, most people burn out if they don’t spend some time relaxing. If your argument had been more extreme, it might say that people should sacrifice a couple of hours of sleep each day as well, right? But it’s plausible that for most people, going on 6 hours of sleep per day would drastically decrease their cognitive ability and productivity, or that exercising an hour a day has sufficient physical and mental health benefits to justify it. Could some amount of recreation be worth it? I think so. I don’t think I could function without some recreation, although I agree that I should actively work to decrease the dependency.
But of course, you have a point. Everyone is wasting some time. Nobody is being perfectly altruistic, and making the best choices at all times. I don’t think anyone ever thought they were...
I think you can make $5/hr with Mechanical Turk.
I was in a similar situation as you, 4 years ago. I worked a fair bit harder, learned WAY more, and had a WAY better time in college. To be fair, my work ethic is still not very good, but I pretty much get A’s in the classes I care about and B’s in the ones I don’t.
I suspect that you will be fine—you’re probably smart enough that college won’t be as hard as you might think, and you’ll also be more motivated to dig up good work habits if you really need/want to.
I’m in my last year of studying CS/Math as an undergraduate at MIT (I’m going to do a Master’s next year, though). I’d really like some advice about what I should do after I graduate—Grad school? Industry? Any alternative?
I care a fair amount about reducing xrisk, but I am also fairly skeptical that there is much I can do about it right now.
I have job offers from Google and some tech start-ups, and I suspect I could get a job in finance if I tried. I personally have some desire to start a tech company one day. I’m not sure what the tradeoff between doing good work and making money is, but I suspect my main goal should be maximizing expected income. I’d try to use most of my money to support people doing good things. (Though I’m not sure money is the limiting factor here. Perhaps discovering what to do would be better...)
I’m not sure whether I can get into a top graduate school—I have a 5.0 technical GPA (4.8 overall), but no research record or particularly good recommendations at the moment. I believe I am much better than most mathematicians/computer scientists, but I am also not sure what sort of research I could do that I would consider worthwhile. Realistically, I am at least 2 huge leaps down from being as good as, say, John von Neumann (that is, people far from as good as him are far better than me). I also have a really bad attention span, and tend not to think about problems for extended periods of time. I’m not sure this is worth factoring in, as it can probably be remedied. Even if I am not suited for solving the really hard theory problems out there, I can at least code and do math better than most people in most labs. I’m basically open to going to graduate school in any field, as long as I’d make an impact, and hopefully have a comparative advantage.
I’ve talked to a number of people about this, but I’m still pretty uncertain about what to do. I’d love to hear people’s thoughts. Thanks in advance!
I’m in the Cambridge area, and haven’t been attending meetups, but this seems like the most awesome possible way to start. What is the mailing list/point of contact to work things out with Cambridge meet-up guys?
Soundness: a semantic claim that given a specific notion of “true” as applies to a statement, e.g. truth in the model N of natural numbers, all the axioms of the theory are true. Automatically implies both consistency and omega-consistency. Requires a notion of the “intended model” or a “standard model” for the theory in which we consider the truth of propositions. For example, soundness is meaningless to talk about in the case of ZFC, which doesn’t have an intended model.
I looked it up, and it seems like what you’re referring to as soundness is called “arithmetic soundness.” The soundness I know doesn’t require a notion of a standard model. It simply says that anything provable syntactically is also true in every model/interpretation. This is automatically true for any theory in first-order logic. (Note that my version of soundness is the correct analogue of completeness, which is also true for any FOL theory, as Gödel showed, less trivially.)
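To spell out the distinction in symbols (with ⊢ for syntactic provability and ⊨ for truth in every model of the theory T): soundness says that if T ⊢ φ then T ⊨ φ, and Gödel’s completeness theorem is the converse, that if T ⊨ φ then T ⊢ φ. Arithmetic soundness is the stronger claim that every theorem of T is true in one particular intended model, e.g. the standard natural numbers N.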
“If the oracle says yes, you know that the statement is true for standard integers because they’re one of the models of PA, therefore N is a standard integer, therefore T halts.”
So here, the oracle says something of the form “exists x such that T halts after x steps”. Omega-consistency guarantees that there is some actual standard number N so that “T halts after N steps” is provable/true. The existence of a standard model implies omega-consistency, so I might as well have gone with that, but I was just trying to be minimalist.
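To spell out the step (a sketch, assuming the theory in question is PA, or at least proves every true statement of the form “T has not halted after n steps”): suppose the theory proves “exists x such that T halts after x steps” but T never actually halts. Then for each standard n, “T has not halted after n steps” is a true, mechanically checkable fact, so the theory proves “not (T halts after n steps)” for every standard n. Proving “exists x, phi(x)” while also proving “not phi(n)” for every standard n is exactly what omega-consistency forbids, so T must halt after some standard number N of steps.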
Sorry for the miscommunication. Are we on the same page now? I do think that saying “soundness” generally refers to the notion I said, though.
Oops sorry! Ignore what I said there. Anyways, the axioms aren’t necessarily r.e., but as far as I can tell, they don’t need to be.
I’m a little out of my depth here, so sorry if my comments don’t make sense.
I’m not an expert either, so I’m probably just being unclear
That’s supposed to be an r.e. set of axioms, not a single axiom, right? I can easily imagine the program that successively prints the axioms R(x) for all x in L, but how do you enumerate the axioms not R(x) for all x not in L, given that L is only r.e. and not recursive? Or am I missing some easy way to have the whole thing as a single axiom without pulling in the machinery for running arbitrary programs and such?
The axioms don’t need to be r.e. If they were, the set of theorems would also be r.e. (just enumerate all proofs), so the oracle would never be more helpful than a halting oracle, no?
I don’t completely understand why there won’t be an accidental smart thing among all the silly things...
I don’t either. It’s just a strong intuition which I’m not sure I can justify, and which might be wrong.
ETA: By silly, I don’t necessarily mean as simple as the examples I gave. Basically, if you have a formula phi(S(x), T, F) which holds for arbitrary sentences S(x), provably true T, and provably false F, then you can replace S(x) with R(x), T with R(x) for x in L, and F with R(x) for x not in L. Not sure if that was well explained, but yeah.
At least it looks like my answer is correct :). Also my proof should generalize, if it does work. So I would have guessed that Feferman’s (stronger) result was true, and I wouldn’t be surprised if the argument was along these lines, though maybe the details are harder.
I believe the answer to your question is yes. I’m going to just interpret “formal system” as “first order theory”, and then try to do the most straightforward thing.
Take a language L of intermediate degree, as constructed via the priority method. I’d like to just take the strings (or numbers) in this language to be the theory’s axioms. So let the theory have some 1-ary relation, call it R, as well as +, and constants 0 and 1. Assert that everything has a successor, just to get the “natural numbers” (without having multiplication though). Then just include the axiom that says R(x) for all x in L, and not R(x) for all x not in L.
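Written out, the axioms would be something like the following, where n abbreviates the numeral 1 + 1 + ... + 1 (n times): forall x, exists y, y = x + 1 (everything has a successor); R(n) for each n in L; and not R(n) for each n not in L.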
It seems pretty clear that the only things this theory proves are things that FOL proves, silly things about the successor function, and silly things about R, like “forall x, R(x) → R(x)” and “(R(14) and R(12)) → R(17)” where R(14) is false. So an oracle for the logical implications of this theory has the same degree as an oracle for L.
I don’t feel like thinking about how to state/prove this part formally, but maybe someone can help (or correct) me. Also, for reference, Presburger arithmetic is basically arithmetic without multiplication, and is decidable.
Sure. You actually need something a bit stronger than soundness, in that you want omega-consistency, right?
I still don’t agree with/understand what you two are saying about having the standard integers as a model, or interpreting PA with its own axioms, though (or anything along the lines of needing to contain PA). I think this argument holds as long as the other formal system is recursively enumerable and PA is omega-consistent.
ClipMenu lets you configure the history size (I set it to 1000). You can also save snippets to paste (accessed via a different hotkey), e.g. your email address, email templates, or common code snippets for the developer console.
Also, a possible alternative to Karabiner for Vim users is wasavi, which lets you use Vim in browser textareas.