Witty to be sure, but obviously false. The causal connection between baseball and the content (as opposed to the name) of the law is probably fairly tenuous. The number three is ubiquitous in all areas of human culture.
Exactly. In fact, it was well known at the time that the Earth is round, and most educated people even knew the approximate size (which was calculated by Eratosthenes in the third century BCE). Columbus, on the other hand, used a much less accurate figure, which was off by a factor of 2.
The popular myth that Columbus was right and his contemporaries were wrong is the exact opposite of the truth.
Wouldn’t explaining why the statement is misleading be more productive than suppressing the misleading statement?
If a person somehow loses the associated good feelings, ice cream also ceases to be desirable. I still don’t see the difference between Monday and Tuesday.
I think I might have some idea what you mean about masochists not liking pain. Let me tell a different story, and you can tell me whether you agree...
Masochists like pain, but only in very specific environments, such as roleplaying fantasies. Within that environment, masochists like pain because of how it affects the overall experience of the fantasy. Outside that environment, masochists are just as pain-averse as the rest of the world.
Does that story jibe with your understanding?
The difference is between amateur and professional ratings. Amateur dan ratings, just like kyu ratings, are designed so that a difference of n ranks corresponds to the suitability of an n-stone handicap, but pro dan ratings are bunched more closely together.
See Wikipedia:Go pro.
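As a rough sketch of that amateur-rank arithmetic (the function names and the linearization are mine, assuming the usual convention that kyu ranks count down to 1k and amateur dan ranks continue upward from there):

    def rank_value(rank):
        """Map an amateur Go rank like '5k' or '2d' onto a single
        linear scale, so that consecutive ranks differ by exactly 1."""
        n, kind = int(rank[:-1]), rank[-1]
        if kind == "k":       # kyu ranks: 30k (weakest) up to 1k
            return -n
        if kind == "d":       # amateur dan ranks: 1d sits just above 1k
            return n - 1
        raise ValueError("unknown rank: " + rank)

    def handicap_stones(stronger, weaker):
        """Handicap for a fair game: one stone per rank of difference."""
        return rank_value(stronger) - rank_value(weaker)

    print(handicap_stones("2d", "3k"))  # 4 ranks apart -> 4 stones

The point of the comment is that no such linear mapping works for professional dan ranks, since those are packed much more tightly in playing strength.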
I would be very interested if anyone has good examples of this phenomenon.
There are a few “triads” mentioned in the intellectual hipster article, but the only one that really seems to me like a good example of this phenomenon is the “don’t care about Africa / give aid to Africa / don’t give aid to Africa” triad.
This advice is worse than useless. But coming from someone who was instrumental in the “Physicists have figured out a way to efficiently eradicate humanity; let’s tell the politicians so they may facilitate!” movement, it’s not surprising.
Protip: the maxim “That which can be destroyed by the truth, should be” does not mean we should publish secrets that have a chance of ending global civilization.
So I should interpret Will’s “Omega = objective morality” comment as meaning “sufficiently wise agents sometimes cooperate, when cooperation is the best way to achieve their ends”? I don’t think so.
It’s also completely ridiculous, with a sample size of ~10 questions, to give the success rate and probability of being well calibrated as percentages with 12 decimal places. Since the uncertainty in such a small sample is on the order of ten percentage points, just round to the nearest percentage point.
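To put a number on that uncertainty, here is a quick sanity check (assuming a simple binomial model, which is my assumption but the natural one for a fixed set of independent questions):

    import math

    n = 10
    for p in (0.5, 0.7, 0.9):
        se = math.sqrt(p * (1 - p) / n)   # binomial standard error
        print("p = %.1f: standard error ~ %.0f%%" % (p, se * 100))

    # p = 0.5: standard error ~ 16%
    # p = 0.7: standard error ~ 14%
    # p = 0.9: standard error ~ 9%

Since the standard error shrinks only as 1/sqrt(n), getting meaningful digits past the decimal point would take a sample many orders of magnitude larger.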
No, it’s an annual rate. You quote it as an annual rate, and it matches the annual rate I found by repeating your search. So you need to multiply by seven to get the rate at which people commit suicide during the years they would, as Hogwarts students, be attending Hogwarts.
Except that students stay at Hogwarts for 7 years, not one, which would put the suicide rate at Hogwarts at one per ~14 years, not one per century (if wizards commit suicide at the same rate as muggles). If you assumed that wizarding suicide attempts were 5 times as likely to be successful, that would put the rate at one suicide every ~3 years.
Of course, it’s entirely possible that the wizarding resilience to illness and injury also makes them more resilient to mental illness, and that’s why suicide rates are lower.
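For concreteness, here is the arithmetic of the exchange above (taking the one-suicide-per-century figure as the annual baseline being corrected):

    baseline = 1 / 100         # suicides per year implied by the
                               # (mistaken) one-year exposure window
    seven_year = baseline * 7  # students attend for 7 years, not 1
    print(round(1 / seven_year, 1))  # -> 14.3 years between suicides

    lethal = seven_year * 5    # if wizarding attempts were 5x as lethal
    print(round(1 / lethal, 1))      # -> 2.9 years between suicides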
Excellent.
It is trivial* to see that this game reduces to/is equivalent to a simple two-party prisoners’ dilemma with full mutual information.
It only reduces to/is equivalent to a prisoner’s dilemma for certain utility functions (what you’re calling “values”). The prisoner’s dilemma is characterized by the fact that there is a dominant strategy equilibrium which is not Pareto optimal. But if the utility functions of the agents are such that the game is zero-sum, then this can’t be the case, as every outcome is Pareto optimal in a zero-sum game.
Furthermore, in a zero-sum game, no cooperation between all of the agents is possible. So it’s crazy to believe that an arbitrary set of sufficiently intelligent agents will cooperate to achieve a single “overgoal”. Collaboration is only possible if the agents’ preferences are such that collaboration can be mutually beneficial.
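A concrete way to check both claims (a sketch; the payoff numbers are the standard textbook values, not anything from the parent comment):

    # Each entry maps (row action, column action) to
    # (row player's utility, column player's utility).
    pd = {("C", "C"): (3, 3), ("C", "D"): (0, 5),    # prisoner's dilemma
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
    zs = {("C", "C"): (0, 0), ("C", "D"): (-2, 2),   # a zero-sum game
          ("D", "C"): (2, -2), ("D", "D"): (0, 0)}

    def pareto_optimal(game):
        """Outcomes that no other outcome weakly improves for both
        players while strictly improving for at least one."""
        return [o for o, u in game.items()
                if not any(v[0] >= u[0] and v[1] >= u[1] and v != u
                           for v in game.values())]

    print(pareto_optimal(pd))  # (D,D) is missing: the dominant-strategy
                               # equilibrium is not Pareto optimal
    print(pareto_optimal(zs))  # all four outcomes: nothing for
                               # cooperation to improve on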
If you have beliefs about the matter already, push the “reset” button and erase that part of your map. You must feel that you don’t already know the answer.
It seems like a bad idea to intentionally blank part of your map. If you already know things, you shouldn’t forget what you already know. On the other hand, if you have reason to doubt what you think you know, you should blank the suspect parts of your map at the point when that doubt arises, and not artificially as part of a procedure for generating curiosity.
I think what you may be trying to say is that it is good practice to periodically rethink what you think you know, and make sure that A) you remember how you came to believe what you believe, and B) your conclusions still make sense in light of current evidence. However, when you do this, it is important not to get into the habit of quickly returning to the same conclusions for the same reasons. If you never change your conclusions while rethinking them, that’s probably a sign that you are too resistant to changing your mind.
The decisions produced by any decision theory are not objectively optimal; at best they might be optimal relative to a specific utility function. A different utility function will produce different “optimal” behavior, such as tiling the universe with paperclips. (Why do you think Eliezer et al. are spending so much effort trying to figure out how to design a utility function for an AI?)
I see the connection between omega and decision theories related to Solomonoff induction, but as the choice of utility function is more-or-less arbitrary, it doesn’t give you an objective morality.
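A toy sketch of that point (all names and numbers invented for illustration): hold the decision rule fixed, maximize utility, and watch the “optimal” action change with the utility function.

    # Outcomes of each action, scored along two dimensions.
    actions = {
        "help_humans":   {"humans_happy": 1.0, "paperclips": 0.0},
        "build_factory": {"humans_happy": 0.1, "paperclips": 1.0},
    }

    def decide(utility):
        """The fixed decision rule: pick the utility-maximizing action."""
        return max(actions, key=lambda a: utility(actions[a]))

    human_values = lambda outcome: outcome["humans_happy"]
    paperclipper = lambda outcome: outcome["paperclips"]

    print(decide(human_values))  # -> help_humans
    print(decide(paperclipper))  # -> build_factory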
The latter.
I’m very confused* about the alleged relationship between objective morality and Chaitin’s omega. Could you please clarify?
*Or rather, if I’m to be honest, I suspect that you may be confused.
It is bad luck to be superstitious.
-Andrew W. Mathis
If a bad law is applied in a racist way, surely that’s a problem with both the law itself and the justice system’s enforcement of it?
Outside of mathematical logic, some familiar examples include:
compactness vs. sequential compactness—generalizing from metric to topological spaces
product topology vs. box topology—generalizing from finite to infinite product spaces
finite-dimensional vs. finitely generated (and related notions, e.g. finitely cogenerated)—generalizing from vector spaces to modules
pointwise convergence vs. uniform convergence vs. norm convergence vs. convergence in the weak topology vs....—generalizing from sequences of numbers to sequences of functions
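For the first pair, two standard witnesses (my examples, not the parent comment’s) showing that the notions genuinely come apart once you leave metric spaces:

    % Sequentially compact but not compact: the first uncountable
    % ordinal with the order topology. Every sequence is trapped in a
    % countable (compact, metrizable) initial segment, but the open
    % cover below has no finite subcover.
    [0, \omega_1), \qquad \bigl\{ [0, \alpha) : \alpha < \omega_1 \bigr\}

    % Compact but not sequentially compact: an uncountable product,
    % compact by Tychonoff's theorem, in which the sequence of binary
    % digit functions has no convergent subsequence.
    \{0, 1\}^{[0,1]} = \prod_{t \in [0,1]} \{0, 1\}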