My expectation is that there would be a significant degree of similarity. This may be a testable hypothesis, but only if we were already gathering the data.
DuncanF
Ah. I took the explicit rules for section 7 and my natural tendencies and didn’t pick up on the intent for section 8 until it was too late.
I took the survey. Skipped out at the “unreasonably long” section. Will it handle things properly if I return to it another day?
Note: if you ask me a question that I can look up in two seconds flat, and the next question is "without checking sources, assess the probability of the last answer being correct", then I'm not sure you're going to get the results you're looking for. I consider the Internet part of my partly-trustable memory, which I reference whenever I want to achieve success in the world, i.e. all the time; but it's not clear that's a commonly held opinion.
Hmmm. My unease with this idea would be entirely resolved if the upvotes were cached until the user reached 1000 karma rather than merely prohibited/lost.
Consider EY's article on how we fail to co-operate; I'd like to be able to stand up and say "yes, more of this please". I don't mind at all if the effect of that upvoting is delayed, but if I reach 1000 karma I don't expect to find the energy to go back over all the old threads to upvote the comments I liked in the past; so in that world my expression of support will be forever missing.
That said, something really is necessary: the comments on more recent posts have had such a disheartening effect that I was beginning to decide I should only read the articles.
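The caching idea could be sketched as follows. This is a hypothetical design of my own, not the site's actual code; the names (`User`, `upvote`, `on_karma_change`) and the threshold behaviour are all assumptions made for illustration.

```python
# Minimal sketch (hypothetical, not the site's real implementation) of
# caching a low-karma user's upvotes instead of discarding them.
KARMA_THRESHOLD = 1000

class User:
    def __init__(self):
        self.karma = 0
        self.pending_votes = []  # post ids cached until the threshold is reached

    def upvote(self, post_id, apply_vote):
        if self.karma >= KARMA_THRESHOLD:
            apply_vote(post_id)          # eligible: vote counts immediately
        else:
            self.pending_votes.append(post_id)  # cache it rather than lose it

    def on_karma_change(self, new_karma, apply_vote):
        self.karma = new_karma
        if self.karma >= KARMA_THRESHOLD:
            for post_id in self.pending_votes:
                apply_vote(post_id)      # release every cached vote at once
            self.pending_votes.clear()
```

The point of the design is exactly the one above: the expression of support is recorded at the moment it is felt, and merely takes effect later.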
Hello everyone
I’ve been lurking here for a while now but I thought it was about time I said “Hi”.
I found Less Wrong through HPMOR, which I read even though I never read Rowling’s books.
I’m currently working my way through the Sequences at a few a day. I’m about 30% through the 2006-2010 collection, and I can heartily recommend reading them in time order and on something like Kindle on your iPhone. ciphergoth’s version suited me quite well. I’ve been making notes as I go along and sooner or later there’ll be a huge set of comments and queries arriving all at once.
I have a long-standing love of expressing my beliefs with respect to probability, but reading through those first sequences has really sharpened my appreciation for the art.
I’ve been reading quite a lot of papers recently and had got to the point where I had read enough to be really worried about p ~ 0.05, which I reasoned at the time meant there was a good chance something I’d read recently was wrong… and now I need to take into account that the p-value might be a complete mess in the first place. Does anyone have a figure for how many papers published at p ~ 0.05 have a Bayesian probability of being correct that is lower than 95%?
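One way to get a rough handle on that question is a base-rate calculation: the probability that a "significant" finding is true depends not just on alpha but on how many tested hypotheses are true in the first place, and on study power. The numbers below (10% of hypotheses true, 80% power) are purely illustrative assumptions of mine, not figures from any survey of the literature:

```python
# Positive predictive value of a result significant at alpha = 0.05.
# prior = assumed fraction of tested hypotheses that are actually true,
# power = assumed chance a real effect reaches significance.
def ppv(prior, power, alpha):
    true_pos = prior * power           # true effects that come out significant
    false_pos = (1 - prior) * alpha    # null effects that come out significant
    return true_pos / (true_pos + false_pos)

print(ppv(0.10, 0.80, 0.05))  # ~0.64, i.e. roughly a third of such findings false
```

So under these (made-up) assumptions, far more than 5% of p ~ 0.05 results would be wrong; the answer moves a lot as the prior and power assumptions move.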
What else can I tell you? I was raised in the Church of England, but I imagine I was fortunate in that representatives of the church told me whilst I was still young that it wasn’t possible to answer my questions. Compared with the rest of the world, that alone made the whole belief structure seem to be on pretty shaky ground.
I’m in the Cambridge area in the UK and have been lurking on their mailing list for a while but haven’t said hello there yet.
I’m in my late thirties and soon expecting to become a father for the first time. There is a shocking lack of rationality in and around childbirth, and significant low-hanging fruit to be picked by being rational. I’ll post about this later. Have any other parents found easy gains by reading the science? I’d love to hear about it.
I’m a software engineer and until recently a project manager for bespoke software projects for small businesses. Right now I’m trying to get some iPhone apps off the ground to add to the passive income flow so that I can spend as much time with my new child as possible.
Topics of interest to me at the moment are:
The rationality and practicalities of changing to a passive income stream.
The practicalities of home schooling.
The practicalities of setting up some better memes for my child than the ones I finished my own childhood with.
Box B is already empty or already full [and will remain the same after I’ve picked it]
Do I have to believe that statement is completely and utterly true for this to be a meaningful exercise? It seems to me that I should treat that as dubious.
It seems to me that Omega is achieving a high rate of success by some unknown good method. If I believe Omega’s method is a hard-to-detect remote-controlled money vaporisation process then clearly I should one-box.
A super intelligence has many ways to get the results it wants.
I am inclined to think that I don’t know the mechanism with sufficient certainty that I should reason myself into two-boxing against the evidence to date.
Does it matter which undetectable, unbelievable process Omega is using for me to pick my strategy? I don’t think it does: I have to acknowledge that I’m out of my depth with this alien, and arguments against causality defiance or the impossibility of undetectable money vaporisers are not going to help me take the million.
Another tack: Omega isn’t a superintelligence; he’s got a ship, a plan, and a lot of time on his hands. He turns up on millions of worlds to play this game. His guesses are pretty lousy; he guesses right only x percent of the time. We are the only planet on which he’s consistently guessed right. We don’t know what x is over the full sample. Looking at his results here, they look good. Does it really seem rational to second-guess the sample we see?
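A quick back-of-the-envelope check on that "lucky planet" story, with illustrative numbers of my own choosing (x = 0.6 per-guess accuracy, a streak of 100 correct guesses):

```python
# How likely is a perfect streak by luck, and how many planets would
# Omega need to visit before one planet got such a streak by chance?
# x and n are illustrative assumptions, not part of the thought experiment.
x, n = 0.6, 100
p_streak = x ** n          # chance one planet sees n correct guesses in a row
planets_needed = 1 / p_streak

print(p_streak)        # ~6.5e-23
print(planets_needed)  # ~1.5e22 planets for one lucky streak
```

Even at a fairly generous per-guess accuracy, a long observed streak is astronomically unlikely to be selection luck across "millions of worlds", which is some support for trusting the record we actually see rather than second-guessing it.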
It seems to me that we have to accept some pretty wild statements and then start reasoning based on them for us to come to a losing strategy. If we doubt the premises to some degree then does it become clear that the most reasonable strategy is one-boxing?
That strikes me as really … odd.
To whom is the advice addressed? If something is actually untrue, and one has determined it to be untrue, then the task of being skeptical about it is finished.
I could probably find a loophole in the preceding statement, but it couldn’t possibly be what Bill James was referring to.
As for directing skepticism at [claims depending upon] things that are difficult to measure, well, that seems one step away from directing skepticism at claims depending on little evidence, which is surely what we want to do. Again, there’s a loophole, but clearly not something Bill James was trying to point out.