Stephenson remains one of my favorites, even though I failed at several attempts to enjoy his Baroque Cycle series. Anathem is as good as his pre-Baroque-Cycle work.
simpleton
Poor kid. He’s a smart 12-year-old who has some silly ideas, as smart 12-year-olds often do, and now he’ll never be able to live them down because some reporter wrote a fluff piece about him. Hopefully he’ll grow up to be embarrassed by this, instead of turning into a crank.
His theories as quoted in the article don’t seem to be very coherent—I can’t even tell if he’s using the term “big bang” to mean the origin of the universe or a nova—so I don’t think there’s much of a claim to be evaluated here.
Of course, it’s very possible that the reporter butchered the quote. It’s a human interest article and it’s painfully obvious that the reporter parsed every word out of the kid’s mouth as science-as-attire, with no attempt to understand the content.
For those of you who watch Breaking Bad, the disaster at the end of Season 3 probably wouldn’t have happened if the US had adopted a similar system.
When I saw that episode, my first thought was that it would be extraordinarily unlikely in the US, no matter how badly ATC messed up. TCAS has turned mid-air collisions between airliners into an almost nonexistent type of accident.
This does happen a lot among retail investors, and people don’t think about the reversal test nearly often enough.
There’s a closely related bias which could be called the Sunk Gain Fallacy: I know people who believe that if you buy a stock and it doubles in value, you should immediately sell half of it (regardless of your estimate of its future prospects), because “that way you’re gambling with someone else’s money”. These same people use mottos like “Nobody ever lost money taking a profit!” to justify grossly expected-value-destroying actions like early exercise of options.
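A rough sketch of the early-exercise point, in case numbers help (the spot price, strike, volatility, rate, and time to expiry below are all invented, and the valuation is just the textbook Black-Scholes formula for a plain tradable call on a non-dividend-paying stock; employee options have extra wrinkles this ignores):

```python
# Minimal sketch: why exercising a call early tends to destroy expected value.
# For a call on a non-dividend-paying stock, the option is worth more alive
# than exercised, because exercising forfeits the remaining time value.
# Illustrative numbers only; standard Black-Scholes pricing.
from math import log, sqrt, exp
from statistics import NormalDist

def black_scholes_call(spot, strike, vol, rate, years):
    """Black-Scholes price of a European call (no dividends)."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * years) / (vol * sqrt(years))
    d2 = d1 - vol * sqrt(years)
    N = NormalDist().cdf
    return spot * N(d1) - strike * exp(-rate * years) * N(d2)

spot, strike = 200.0, 100.0          # the stock has doubled past the strike
vol, rate, years = 0.30, 0.03, 1.0   # a year of optionality left

exercise_now = max(spot - strike, 0.0)                     # intrinsic value
hold_value = black_scholes_call(spot, strike, vol, rate, years)

print(f"exercise now:    {exercise_now:8.2f}")
print(f"value if held:   {hold_value:8.2f}")
print(f"value forfeited: {hold_value - exercise_now:8.2f}")
```

Cashing the option in early pays only the intrinsic value and throws away whatever time value is left, which is the expected-value leak the motto papers over.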
However, a bias toward holding what you already own may be a useful form of hysteresis for a couple of reasons:
1) There are expenses, fees, and tax consequences associated with trading. Churning your investments is almost always a bad thing, especially since the market is mostly efficient and whatever you’re holding will tend to have the same expected value as anything else you could buy.
2) Human decision-making is noisy. If you wake up every morning and remake your investment portfolio de novo, the noise will dominate. If you discount your first-order conclusions and only change your strategy at infrequent intervals, after repeated consideration, or when you have an exceptionally good reason, your strategy will tend toward monotonic improvement.
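Here’s a toy simulation of that second point (every parameter is invented; it’s only meant to illustrate how acting on accumulated evidence beats reacting to each day’s noise):

```python
# Toy model of noisy daily decisions vs. infrequent, deliberate changes.
# Assumptions are invented for illustration: a fixed "true" ideal stock
# weight, and daily estimates of it corrupted by independent noise.
import random

random.seed(0)
TRUE_WEIGHT = 0.60      # the (unknown) ideal stock allocation
NOISE = 0.20            # std. dev. of each day's estimate error
DAYS = 1000

def daily_estimate():
    return TRUE_WEIGHT + random.gauss(0, NOISE)

# Strategy A: remake the portfolio from scratch every morning.
# Strategy B: keep the current allocation; only change it when the average
# of the last 30 days' estimates disagrees with it by more than 0.10.
a_err, b_err = 0.0, 0.0
b_weight = 0.50                     # arbitrary starting allocation
recent = []

for _ in range(DAYS):
    est = daily_estimate()
    a_weight = est                  # strategy A follows today's estimate
    recent.append(est)
    recent = recent[-30:]
    avg = sum(recent) / len(recent)
    if len(recent) == 30 and abs(avg - b_weight) > 0.10:
        b_weight = avg              # strategy B moves only on strong evidence
    a_err += abs(a_weight - TRUE_WEIGHT)
    b_err += abs(b_weight - TRUE_WEIGHT)

print(f"mean error, daily rebalancer:   {a_err / DAYS:.3f}")
print(f"mean error, infrequent changer: {b_err / DAYS:.3f}")
```

The daily rebalancer carries the full noise of a single estimate every day, while the sluggish strategy’s error shrinks toward the much smaller noise of a 30-day average.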
It’s common in certain types of polemic. People hold (or claim to hold) beliefs to signal group affiliation, and the more outlandishly improbable the beliefs become, the more effective they are as a signal.
It becomes a competition: Whoever professes beliefs which most strain credibility is the most loyal.
Sorry, I thought that post was a pretty good statement of the Friendliness problem, sans reference to the Singularity (or even any kind of self-optimization), but perhaps I misunderstood what you were looking for.
Argh. I’d actually been thinking about getting a 23andme test for the last week or so but was put off by the price. I saw this about 20 minutes too late (it apparently ended at midnight UTC).
In practice, you can rarely use GPLed software libraries for development unless you work for a nonprofit.
That’s a gross overgeneralization.
Yes.
The things Shalmanese is labeling “reason” and “evidence” seem to closely correspond to what have previously been called the inside view and outside view, respectively (both of which are modes of reasoning, under the more common definition).
Quite the opposite, under the technical definition of simplicity in the context of Occam’s Razor.
MWI completely fails if any such non-linearities are present, while other theories can handle them. [...] It can collapse with one experiment, and I’m not betting against such experiment happening in my lifetime at odds higher than 10:1.
So you’re saying MWI tells us what to anticipate more specifically (and therefore makes itself more falsifiable) than the alternatives, and that’s a point against it?
And the best workaround you can come up with is to walk away from the money entirely? I don’t buy it.
If you go through life acting as if your akrasia is so immutable that you have to walk away from huge wins like this, you’re selling yourself short.
Even if you’re right about yourself, you can just keep $1000 [edit: make that $3334, so as to have a higher expected value than a sure $500] and give the rest away before you have time to change your mind. Or put the whole million in an irrevocable trust. These aren’t even the good ideas; they’re just the trivial ones which are better than what you’re suggesting.
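(For the arithmetic behind that edit: the $3334 figure seems to assume the gamble on offer is roughly a 15% shot at the million, since that’s where the numbers balance.)

```latex
\[
0.15 \times \$3334 \approx \$500.10 > \$500
\]
```

So keeping $3334 and giving the rest away still leaves the gamble (barely) ahead of the sure $500 in expectation.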
Being aware of that tendency should make it possible to avoid ruination without forgoing the money entirely (e.g. by investing it wisely and not spending down the principal on any radical lifestyle changes, or even by giving all of it away to some worthy cause).
Well, I wouldn’t rule out any of:
1) I and the AI are the only real optimization processes in the universe.
2) I-and-the-AI is the only real optimization process in the universe (but the AI half of this duo consistently makes better predictions than “I” do).
3) The concept of personal identity is unsalvageably confused.
If we have this incapability, what explains the abundant fiction in which nonhuman animals (both terrestrial and non) are capable of speech, and childhood anthropomorphization of animals?
That’s not anthropomorphization.
Can you teach me to talk to the stray cat in my neighborhood?
Sorry, you’re too old. Those childhood conversations you had with cats were real. You just started dismissing them as make-believe once your ability to doublethink was fully mature.
All of the really interesting stuff, from before you could doublethink at all, has been blocked out entirely by infantile amnesia.
I would believe that human cognition is much, much simpler than it feels from the inside—that there are no deep algorithms, and it’s all just cache lookups plus a handful of feedback loops which even a mere human programmer would call trivial.
I would believe that there’s no way to define “sentience” (without resorting to something ridiculously post hoc) which includes humans but excludes most other mammals.
I would believe in solipsism.
I can hardly think of any political, economic, or moral assertion I’d regard as implausible, except that one of the world’s extant religions is true (since that would have about as much internal consistency as “2 + 2 = 3”).
The actual quote didn’t contain the word “beat” at all. It was “Count be wrong, they fuck you up.”
The fact that we find ourselves in a world which has not ended is not evidence.
I don’t think Turing-completeness implies that.
Consider the similar statement: “If you loaded a Turing machine with a sufficiently long random tape, and let it run for enough clock ticks, an AI would be created.” This is clearly false: Although it’s possible to write an AI for such a machine, the right selection pressures don’t exist to produce one this way; the machine is overwhelmingly likely to just end up in an uninteresting infinite loop.
Likewise, the physics of Life are most likely too impoverished to support the evolution of anything more than very simple self-replicating patterns.
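A toy experiment along these lines (just a sketch, with an invented grid size, soup density, and step count): seed a small Life universe with random noise and watch the live-cell count decay. Random soups like this typically settle toward a sparse ash of small still lifes and oscillators rather than anything open-endedly complex.

```python
# Toy illustration: seed Conway's Life with a random soup and watch activity
# die down. Grid size, density, and step count are arbitrary choices.
import random

random.seed(2)
N = 48          # grid is N x N with wraparound edges
STEPS = 400

grid = [[random.random() < 0.35 for _ in range(N)] for _ in range(N)]

def step(g):
    """One generation of Conway's Life on a toroidal grid."""
    nxt = [[False] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            live = sum(
                g[(i + di) % N][(j + dj) % N]
                for di in (-1, 0, 1)
                for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)
            )
            nxt[i][j] = live == 3 or (g[i][j] and live == 2)
    return nxt

prev_pop = sum(map(sum, grid))
print(f"step    0: {prev_pop:5d} live cells")
for t in range(1, STEPS + 1):
    grid = step(grid)
    if t % 100 == 0:
        pop = sum(map(sum, grid))
        print(f"step {t:4d}: {pop:5d} live cells (change {pop - prev_pop:+d})")
        prev_pop = pop
```

Nothing about this is a proof, of course; it just shows the kind of uninteresting near-fixed-point that random initial conditions tend to reach.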