I have the impression that Harry actually has some kind of censor inside his head that prevents him from thinking about the sense of doom concerning Quirrell. He is never shown remembering it and reflecting on it, even though it should be a pretty damn conspicuous and important fact. EDIT: not never, as seen below, but the amount of thought he expends on the matter still seems weirdly small.
I’m probably coming.
Also, will the meetup’s language be English? AlexeyM’s username suggests so.
The presentation and exercise booklets seem to be pretty awesome.
1) Here is a nice prove of Pythagorean theorem:
Typo: proof.
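(The booklet's proof itself isn't reproduced in the comment; for reference only, here is one standard rearrangement argument, which is a generic proof and not necessarily the one the booklet uses. Arrange four copies of a right triangle with legs $a$, $b$ and hypotenuse $c$ inside a square of side $a+b$; the uncovered region is a tilted square of side $c$. Comparing areas:

\[
(a+b)^2 = 4\cdot\tfrac{1}{2}ab + c^2
\;\Longrightarrow\;
a^2 + 2ab + b^2 = 2ab + c^2
\;\Longrightarrow\;
a^2 + b^2 = c^2.
\]
)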
Most people you know are probably weak skeptics, and I would probably fit this definition in several ways. “Strong skeptics” are the people who write The Skeptics’ Encyclopedia, join the California Skeptics’ League, buy the Complete Works of James Randi, and introduce themselves at parties saying “Hi, I’m Ted, and I’m a skeptic!”. Of weak skeptics I approve entirely. But strong skeptics confused me for a long while. You don’t believe something exists. That seems like a pretty good reason not to be too concerned with it.
Yvain, 2004, source
Edit: the quote's attribution specified by popular demand.
Welcome to Less Wrong!
You might want to post your introduction in the current official “welcome” thread.
… then I am an ex-rationalist.
LW’s notion of rationality differs greatly from what you described. You may find our version more palatable.
Do you have evidence besides the username and the programming skill that it’s Norvig? I also entertained the idea that it’s him. At first I didn’t examine his code deeply, but its conciseness inspired me to create a 12-line semi-obfuscated Python solution. I posted a clarified version of that in the thread. What do you think about it? Also, could you tell me your Euler username so I could look for your solutions (provided that you actually post there)?
Now that you mentioned Norvig's solution, I investigated it, and after correcting some typos I got it to run on my PC. I concluded that it works pretty much the same way as my solution (but mine's considerably faster :) ).
By the way, it seems that the usual end-of-the-year SI fundraiser is live now.
Evangelion
Maybe someone should do a study of that peculiar group of depressed and/or psychopathological people who were given a significant mental kick by NGE. Of course it's all anecdotal right now, but I really have the impression (especially after spending some time at EvaGeeks… ) that NGE produces a recurring pattern of effects in a cluster of people, and moreover that the effect is much more dramatic than what is usual in art.
GEB is great at many things: as an introduction to formal systems, self-reference, several computer science topics, Gödel's First Incompleteness Theorem, and other stuff. It is also often a unique and very entertaining hybrid of art and nonfiction. Without denying any of those merits, the book's weakest point is actually its core message, quoted in the OP as
GEB is a very personal attempt to say how it is that animate beings can come out of inanimate matter… GEB approaches [this question] by slowly building up an analogy that likens inanimate molecules to meaningless symbols, and further likens selves… to certain special swirly, twisty, vortex-like, and meaningful patterns that arise only in particular types of systems of meaningless symbols.
What Hofstadter does is the following: he identifies self-awareness and self-reference as core features of consciousness and/or intelligence, and he embarks on a long journey across various fields in search of phenomena that also have something to do with self-reference. This is some kind of weird essentialism; Hofstadter tries to reduce extremely high-level features of complex minds to (superficially) similar features that arise in enormously simpler formal and physical systems. Hofstadter doesn't believe in ontologically fundamental mental entities, so he's far from classic essentialism, yet he believes in very low-level "essences" of consciousness that percolate up to high-level minds. This abrupt jumping across levels of organization reminds me a bit of those people who try to derive practical everyday epistemic implications from the First Incompleteness Theorem (or get dispirited because of some implied "inherent unknowability" of the world).
Now, to be fair, GEB does consider medium levels of organization in its two chapters on AI, but it is far less elaborate on those matters than on formal systems, for instance. The AI chapters are also the most outdated now, and even there Hofstadter isn't really trying to do any noteworthy reduction of minds; instead he briefly ponders then-contemporary AI topics such as Turing tests, computer chess, SHRDLU, Bongard problems, symbol grounding, etc.
To be even fairer, valid reduction of high-level features of human minds is extremely difficult. Ev-psych and cognitive science can do it occasionally, but they don't yet attempt a reduction of general intelligence and consciousness itself. It is probably understandable that Hofstadter couldn't see that far ahead into the future of cogsci, ev-psych and AI. Eliezer Yudkowsky's Levels of Organization in General Intelligence is the only reductionist work I know of that tries to wrestle with all of it at once, and while it is of course not definitive or even fully fleshed out, I think it represents the kind of mode of thinking that could possibly yield genuine insights about the mysteries of consciousness. In contrast, GEB never really enters that mode.
Wow, thanks. That's probably the subjectively best-feeling thing anyone's said to me in 2011 so far.
In September I picked up programming. Following many people's recommendations, I chose the Project Euler + Python combination. So far it seems to be quite addictive (and effective). I'm currently at 90 solved problems, although I'm starting to feel a bit out of my (rather non-deep) depth, so I'm considering temporarily switching to investigating PyGame for a while and coding remakes of simple old games, while getting hold of several CS and coding textbooks.
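For concreteness, here's roughly the kind of minimal PyGame skeleton such remakes would grow out of. This is just a sketch under my own assumptions; the window size, frame rate, and the bouncing-ball stand-in for actual game logic are made up for illustration:

```python
# Minimal PyGame game loop sketch: a window with a ball bouncing horizontally.
import pygame

WIDTH, HEIGHT = 640, 480  # hypothetical window size

pygame.init()
screen = pygame.display.set_mode((WIDTH, HEIGHT))
pygame.display.set_caption("Old-game remake skeleton")
clock = pygame.time.Clock()

x, vx = WIDTH // 2, 4  # ball position and horizontal velocity

running = True
while running:
    # Input: handle window events (here, just the close button).
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    # Update: move the ball and bounce it off the left/right edges.
    x += vx
    if x < 10 or x > WIDTH - 10:
        vx = -vx

    # Draw: clear the screen, draw the ball, show the new frame.
    screen.fill((0, 0, 0))
    pygame.draw.circle(screen, (255, 255, 255), (x, HEIGHT // 2), 10)
    pygame.display.flip()

    clock.tick(60)  # cap at 60 frames per second

pygame.quit()
```

The input/update/draw split is the standard structure; any Pong- or Breakout-style remake mostly amounts to replacing the update and draw sections.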
Pollan's book is horrible. It actually argues against nutrition science per se, continuously bringing up the supposed holistic irreducibility of diets and emphasizing "common sense", "tradition" and "what our grandparents ate" as primary guidelines. Pollan presents several cherry-picked past mistakes of nutrition science, and from that concludes that nutrition science in general is evil.
I am not fundamentally against heuristics derived from tradition and/or evolution, but Pollan seems to use such heuristics whimsically, mostly to push a personal agenda of vegetarianism, organic foods and an extremely warm-and-fuzzy philosophical baggage of food culture and lifestyle. Also, Pollan's arguments are almost exclusively based on affect (nature = good, artificial = evil, people selling artificial food = monstrous, etc.). Actually, looking back into the book to refresh my memory, Pollan doesn't even use traditions to make inferences about foods' healthiness; they are merely convenient sources of positive affect.
I am a bit worried by the fact that this trailer has a robot squad infiltrating a warehouse with mannequins and antique recording devices, as opposed to things more unambiguously AI-Box-related. The synopsis also sounds rather wooey. Anyway, the full movie will be the judge of my worries.
My central point is contained in the sentence after that. A positive Singularity seems extremely human to me when contrasted to paperclip Singularities.
Re: Preface and Contents
I am somewhat irked by the phrase "the human era will be over". It is not a given that current-type humans cease to exist after any Singularity. Also, a positive Singularity could be characterised as the beginning of the humane era, in which case it is somewhat inappropriate to refer to the era afterwards as non-human. In contrast, negative Singularities typically result in universes devoid of human-related things.
2008: Life extension → Immortality Institute → OB
That's withholding potentially important information. Also, you still have to address other people's erroneous beliefs about their points.
13 years off, 50% confidence.
-- Isuna Hasekura, Spice and Wolf vol. 5 (“servant” is justified by the medieval setting).