Apparently a team at Penn is doing this as well:
Wolfram 2002 argues that spacetime may actually be a discrete causal network and writes:
The idea that space might be defined by some sort of causal network of discrete elementary quantum events arose in various forms in work by Carl von Weizsäcker (ur-theory), John Wheeler (pregeometry), David Finkelstein (spacetime code), David Bohm (topochronology) and Roger Penrose (spin networks; see page 1055).
Later, in section 10.9, he discusses using graphical causal models to fit observed data via Bayes’ rule. I don’t know whether he ever connects the two points, though.
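For concreteness, here is a minimal sketch (mine, not Wolfram’s) of that second point: Bayes’ rule used to score candidate causal graphs against observed data. The graph names and likelihood numbers below are made-up placeholders, not anything from the book.

```python
# Toy Bayes'-rule comparison of two hypothetical causal structures for
# binary variables A and B. The likelihoods are assumed values standing in
# for P(observed data | graph) computed from a real dataset.
priors = {"A->B": 0.5, "A independent of B": 0.5}
likelihoods = {"A->B": 0.020, "A independent of B": 0.005}

unnormalized = {g: priors[g] * likelihoods[g] for g in priors}
total = sum(unnormalized.values())
posteriors = {g: p / total for g, p in unnormalized.items()}
print(posteriors)  # {'A->B': 0.8, 'A independent of B': 0.2}
```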
Philosophy posts are useful if they’re interesting, whereas how-tos are only useful if they work. While I greatly enjoy these posts, their effectiveness is admittedly speculative.
Doing hacker exercises every morning
Taking a cold shower every morning
Putting on pants
Lying flat on my back and closing my eyes until I consciously process all the things that are nagging at me and begin to feel more focused
Asking someone to coach me through getting started on something
Telling myself that doing something I don’t want to do will make me stronger
Squeezing a hand grip exerciser for as long as I can (inspired by Muraven 2010; mixed results with this one)
You?
It’s been two weeks. Can you post it now?
Has anyone seriously suggested you invented MWI? That possibility never even occurred to me.
The main insight of the book is very simple to state. However, the insight was so fundamental that it required me to update a great number of other beliefs I had, so I found being able to read a book’s worth of examples of it being applied over and over again was helpful and enjoyable. YMMV.
Unlike, say, wedrifid, whose highly-rated comment was just full of facts!
It seems a bit bizarre to say I’ve dismissed LessWrong given how much time I’ve spent here lately.
FWIW, I don’t think the Singularity Institute is woo and my current view is that giving money to lukeprog is probably a better idea than the vast majority of charitable contributions.
You started with an intent to associate SIAI with self-delusion and then tried to find a way to package it as some kind of rationality-related general point.
No, I’d love another example to use so that people don’t have this kind of emotional reaction. Please suggest one if you have one.
UPDATE: I thought of a better example on the train today and changed it.
Offhand, can you think of a specific test that you think ought to be applied to a specific idiosyncratic view?
Well, for example, if EY is so confident that he’s proven “MWI is obviously true—a proposition far simpler than the argument for supporting SIAI”, he should try presenting his argument to some skeptical physicists. Instead, it appears the physicists who have happened to run across his argument found it severely flawed.
How rational is it to think that you’ve found a proof that most physicists are wrong and then never run it by any physicists to see if you’re right?
My read on your comment is: LWers don’t act humble, therefore they are crackpots.
I do not believe that.
As for why SI’s approach is dangerous, I think Holden put it well in the most upvoted post on the site.
I’m not trying to be inflammatory, I just find it striking.
Self-skepticism: the first principle of rationality
I think the biggest reason Less Wrong seems like a cult is because there’s very little self-skepticism; people seem remarkably confident that their idiosyncratic views must be correct (if the rest of the world disagrees, that’s just because they’re all dumb). There’s very little attempt to provide any “outside” evidence that this confidence is correctly-placed (e.g. by subjecting these idiosyncratic views to serious falsification tests).
Instead, when someone points this out, Eliezer fumes “do you know what pluralistic ignorance is, and Asch’s conformity experiment? … your smart friends and favorite SF writers are not remotely close to the rationality standards of Less Wrong”.
What’s especially amusing is that EY is able to keep this stuff up by systematically ignoring every bit of his own advice: telling people to take the outside view and then taking the inside one, telling people to look into the dark while he studiously avoids it, emphasizing the importance of AI safety while he embarks on an extremely dangerous way of building AI—you can do this with pretty much every entry in the sequences.
These are the sorts of things that make me think LessWrong is most interesting as a study in psychoceramics.
Yvain’s argument was that “x-rationality” (roughly the sort of thing that’s taught in the Sequences) isn’t practically helpful, not that nothing is. I certainly have read lots of things that have significantly helped me make better decisions and have a better map of the territory. None of them were x-rational. Claiming that x-rationality can’t have big effects because the world is too noisy just seems like another excuse for avoiding reality.
I really enjoyed The Seven Habits of Highly Effective People. (By contrast, I tried reading some @pjeby stuff yesterday and it had all the problems you describe cranked up to 11 and I found it incredibly difficult to keep reading.)
I don’t think the selection bias thing would be a problem if the community were focused on high-priority instrumental rationality techniques, since at any level of effectiveness becoming more effective should be a reasonably high priority. (By contrast, if the community is focused on low-priority techniques, it’s not that big a deal; that was my attitude toward OvercomingBias at the beginning. And when it gets focused on stuff like cryo/MWI/FAI, I find that an active turnoff.)
I think there’s a decent chance epistemic rationality, ceteris paribus, makes you less likely to be traditionally successful. My general impression from talking to very successful people is that very few of them are any good at figuring out what’s true; indeed, they often seem to have set up elaborate defense mechanisms to make sure no one accidentally tells them the truth.
Carol Dweck’s Mindset. While unfortunately it has the cover of a self-help book, it’s actually a summary of some fascinating psychology research which shows that a certain way of conceptualizing self-improvement tends to be unusually effective at it.
My suspicion isn’t because the recommended strategy has some benefits, it’s because it has no costs. It would not be surprising if an asteroid-prevention plan used NASA and nukes. It would be surprising if it didn’t require us to do anything particularly hard. What’s suspicious about SIAI is how often their strategic goals happen to be exactly the things you might suspect the people involved would enjoy doing anyway (e.g. writing blog posts promoting their ideas) instead of difficult things at which they might conspicuously fail.
Two people have been confused by the “arguing about ideas” phrase, so I changed it to “thinking about ideas”.
Why doesn’t Jackman get a Brier score? He claims it’s .00991: http://jackman.stanford.edu/blog/?p=2602
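(For reference: the Brier score is just the mean squared difference between forecast probabilities and the 0/1 outcomes, so lower is better. A minimal sketch, with made-up forecasts rather than Jackman’s actual numbers:)

```python
def brier_score(forecasts, outcomes):
    # Mean squared difference between forecast probabilities and 0/1 outcomes.
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Three near-certain forecasts that all came true give a very low (good) score.
print(brier_score([0.95, 0.99, 0.97], [1, 1, 1]))  # ~0.00117
```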