The Pentium FDIV bug was actually discovered by someone writing code to compute prime numbers.
Suggestions for Slytherin: Sun Tzu’s Art of War and some Nietzsche, maybe The Will to Power?
Suggestion for Ravenclaw: An Enquiry Concerning Human Understanding, David Hume.
The post seems to confuse the law of non-contradiction with the principle of explosion. To understand this point, it helps to know about minimal logic, which is like intuitionistic logic but even weaker, as it treats falsity (⊥) the same way as any other primitive predicate. Minimal logic rejects the principle of explosion as well as the law of the excluded middle (LEM, which the main post called TND).
The law of non-contradiction (LNC) is just ¬(P ∧ ¬P). (In the main post this is called ECQ, which I believe is erroneous; ECQ should refer to the principle of explosion (especially the second form).) The principle of explosion is either ⊥ → Q or (P ∧ ¬P) → Q. These two forms are equivalent in minimal logic (due to the law of non-contradiction). As mentioned above, minimal logic has the law of non-contradiction but not the principle of explosion, so this shows that they’re not equivalent in every circumstance. Rejecting the principle of explosion (especially the second form) is the defining feature of paraconsistent logics (a class into which many logics fall). Some of these still validate the law of non-contradiction. Anti-intuitionistic logic does not, because LNC is dual to LEM, which is invalid intuitionistically.
Ok, so I ended up taking a lot of time researching that nitpick so I could say it correctly. Anyway, I’m curious to see where this is going.
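To make the nitpick concrete, here is a small Lean sketch (my own illustration, not from the post), treating falsum as an opaque atom F and defining negation as P → F, the way minimal logic does. LNC is then provable with no assumptions about F, and the two schematic forms of explosion imply each other:

```lean
-- Minimal-logic reading: falsum is just an opaque atom F, and ¬P is P → F.
-- (Illustrative sketch; the names are mine, not from the post.)
variable (F : Prop)

def mneg (P : Prop) : Prop := P → F

-- The law of non-contradiction holds with no assumptions about F.
theorem lnc (P : Prop) : mneg F (P ∧ mneg F P) :=
  fun h => h.2 h.1

-- The two schematic forms of explosion are interderivable in this setting.
theorem explosion_of_efq {P Q : Prop} (efq : F → Q) : P ∧ mneg F P → Q :=
  fun h => efq (h.2 h.1)

theorem efq_of_explosion {Q : Prop} (ex : ∀ P, P ∧ mneg F P → Q) : F → Q :=
  fun f => ex (F → F) ⟨fun x => x, fun _ => f⟩
```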
Super-upvoted.
I’m not going to say they haven’t been exposed to it, but I think very few mathematicians have ever developed a basic appreciation and working understanding of the distinction between syntactic and semantic proofs.
Very occasionally, model theory is successfully applied to solve a well-known problem outside logic, but you would have to sample many random mathematicians before you found one who could tell you exactly how, even if you restricted yourself to asking only mathematical logicians.
I’d like to add that in the overwhelming majority of academic research in mathematical logic, the syntax-semantics distinction is not at all important, and syntax is suppressed as much as possible as an inconvenience. This is true even in model theory. It is often necessary to discuss formulas and theories, but a syntactic proof need never be considered. First-order logic is dominant, and the completeness theorem (together with soundness) shows that syntactic implication is equivalent to semantic implication.
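(For concreteness, the equivalence being invoked: for a first-order theory T and sentence φ, soundness and completeness together give T ⊢ φ ⟺ T ⊨ φ, with soundness supplying the left-to-right direction and completeness the converse, so nothing is lost by arguing semantically.)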
If I had to summarize what modern research in mathematical logic is like, I’d say that it’s about increasingly elaborate notions of complexity (of problems or theorems or something else), and proving that certain things have certain degrees of complexity, or that the degrees of complexity themselves are laid out in a certain way.
There are however a healthy number of logicians in computer science academia who care a lot more about syntax, including proofs. These could be called mathematical logicians, but the two cultures are quite different.
(I am a math PhD student specializing in logic.)
The explanation for the “number of partners” question is problematic right now. It reads “0 for single, 1 for monogamous relationship, >1 for polyamorous relationship”, which makes it sound like you must be monogamous if you happen to have exactly one partner. I am polyamorous, have one partner, and am looking for more.
In fact, I started wondering if it really meant “ideal number of partners”, in which case I’d be tempted to put the name of a large cardinal.
I continue to be surprised (I believe I commented on this last year) that under “Academic fields” pure mathematics is not listed on its own; it is also not clear to me that pure mathematics is a hard science; relatedly, are non-computer science engineering folk expected to write in answers?
I second this: please include pure mathematics. I imagine there are a fair few of us, and there’s no agreed upon way to categorize it. I remember being annoyed about this last year. (I’m pretty sure I marked “hard sciences”.)
I wonder how it would go if you asked “When should we say a statement is true?” instead of “What is truth?”, and whether your classmates would think them the same (or at least closely related) questions.
I think this hypothesis is worth bearing in mind. However, it doesn’t explain advancedatheist’s observation that wealthy cryonicists are eager to put a lot of money in revival trusts (whose odds of success are dubious, even if cryonics works) rather than donate to improve cryonics research or the financial viability of cryonics organizations.
I was mainly worried that she would suffer information-theoretic death (or substantial degradation) before she could be cryopreserved.
What about the brain damage her tumor is causing?
This seems important and I’m a little surprised no one’s asked. How will her brain damage impact her chances of revival? (From the blog linked in the reddit post, it sounds like she is already experiencing symptoms.) Obviously she is quite mentally competent right now, but what about when she is declared legally dead? I am far from an expert and simply would like to hear some authoritative commentary on this. I am interested in donating but only if there’s a reasonable chance brain damage won’t make it superfluous.
This is a really good exposition of the two envelopes problem. I recall reading a lot about that when I first heard it, and didn’t feel that anything I read satisfactorily resolved it, which this does. I particularly liked the more precise recasting of the problem at the beginning.
(It sounds like some credit is also due to VincentYu.)
I haven’t read the article, but I want to point out that prisons are enormously costly. So there is potentially still much to gain even if the new system is only equally effective at deterrence and rehabilitation.
The fact that prisons are inhumane is another issue, of course.
I had long ago (but after being heavily influenced by Overcoming Bias) thought that signaling could be seen simply as a corollary to Bayes’ theorem. That is, when one says something, one knows that its effect on a listener will depend on the listener’s rational updating on the fact that one said it. If one wants the listener to behave as if X is true, one should say something that the listener would only expect in case X is true.
Thinking in this way, one quickly arrives at conclusions like “oh, so hard-to-fake signals are stronger” and “if everyone starts sending the same signal in the same way, that makes it a lot weaker”, which test quite well against observations of the real world.
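A toy numerical sketch of those two conclusions (my own illustration; the numbers are arbitrary): the listener’s update is driven by the likelihood ratio P(signal | X) / P(signal | not X), so a hard-to-fake signal (large ratio) moves the posterior a lot, while a signal everyone sends (ratio near 1) barely moves it.

```python
# Toy model of signaling as Bayesian updating (illustrative numbers only).
# The listener's posterior belief in X after seeing a signal depends on the
# likelihood ratio P(signal | X) / P(signal | not X).

def posterior(prior: float, p_signal_given_x: float, p_signal_given_not_x: float) -> float:
    """Listener's belief in X after observing the signal, by Bayes' theorem."""
    joint_x = prior * p_signal_given_x
    joint_not_x = (1 - prior) * p_signal_given_not_x
    return joint_x / (joint_x + joint_not_x)

prior = 0.5

# Hard-to-fake signal: almost never sent when X is false -> strong update.
print(posterior(prior, 0.9, 0.1))   # ~0.90

# Everyone sends the signal regardless of X -> the signal is nearly worthless.
print(posterior(prior, 0.9, 0.85))  # ~0.51
```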
Powerful corollary: we should expect signaling, along with these basic properties, to be prominent in any group of intelligent minds. For example, math departments and alien civilizations. (Non-example: solitary AI foom.)
I’m really glad you pointed out that SI’s strategy is not predicated on a hard take-off. I don’t recall if this has been discussed elsewhere, but it’s something that has always bothered me, since I think a hard take-off is relatively unlikely. (Admittedly, a soft take-off still considerably diminishes my estimate of the expected impact of SI and of donating to it.)
But this elegant simplicity was, like so many other things, ruined by the Machiguenga Indians of eastern Peru.
Wait, is this a joke, or have the Machiguenga really provided counterexamples to lots of social science hypotheses?
These phrases are mainly used in near mode, or when trying to induce near mode. The phenomenon described in the quote is a feature (or bug) of far mode.
I have direct experience of someone highly intelligent, a prestigious academic type, dismissing SI out of hand because of its name. I would support changing the name.
Almost all the suggestions so far attempt to work the idea of safety or friendliness into the name. I think this might be a mistake, because for people who haven’t thought about it much, it invokes images out of Hollywood. Instead, I propose having the name imply that SI does some kind of advanced, technical research involving AI and is prestigious, perhaps affiliated with a university (think IAS).
Center for Advanced AI Research (CAAIR)
Summary: Expanding on what maia wrote, I find it plausible that many people could produce good technical arguments against cryonics but don’t, simply because they’re not writing about cryonics at all.
I was defending maia’s point that there are many people who are uninterested in cryonics and don’t think it will work. This class probably includes lots of people who have relevant expertise as well. So while there are a lot of people who develop strong anti-cryonics sentiments (and say so), I suspect they’re only a minority of the people who don’t think cryonics will work. So the fact that the bulk of anti-cryonics writing lacks a tenable technical argument is only weak evidence that no one can produce one right now. It’s just that the people who can produce them aren’t interested enough to bother writing about cryonics at all.
I wholeheartedly agree that we should encourage people who may have them to write up strong technical arguments why cryonics won’t work.
I was disappointed to see my new favorite “pure” game Arimaa missing from Bostrom’s list. Arimaa was designed to be intuitive for humans but difficult for computers, making it a good test case. Indeed, I find it to be very fun, and computers do not seem to be able to play it very well. In particular, computers are nowhere close to beating top humans despite the fact that there has arguably been even more effort to make good computer players than good human players.
Arimaa’s branching factor dwarfs that of Go (which in turn beats every other commonly known example). Since a super-high branching factor is also a characteristic feature of general AI test problems, I think it remains plausible that simple, precisely defined games like Arimaa are good test cases for AI, as long as the branching factor keeps the game out of reach of brute force search.
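To see how quickly brute force becomes hopeless, here is a rough back-of-the-envelope comparison (the branching factors are commonly cited ballpark averages, not figures from Bostrom’s list):

```python
# Rough comparison of game-tree sizes under brute-force lookahead.
# Branching factors are commonly cited ballpark averages (assumption, not from the comment).
branching = {"chess": 35, "go": 250, "arimaa": 17_000}

depth = 4  # plies of lookahead
for game, b in branching.items():
    print(f"{game}: ~{b ** depth:.1e} positions at depth {depth}")
# chess: ~1.5e+06
# go: ~3.9e+09
# arimaa: ~8.4e+16
```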