A simple fix would be to not bother publishing a top contributors list.
It seems to me that the arguments so lucidly presented elsewhere on Less Wrong would say that the machine is conscious whether or not it is run, and indeed whether or not it is built in the first place: if the Turing machine outputs the same kind of philosophical paper on consciousness that human philosophers write, we’re supposed to take it as conscious.
For me, Go helped to highlight certain temptations to behave irrationally, which I think can carry over to real life.
One was the temptation to avoid thinking about parts of the board where I’d recently made a mistake.
And if I played a poor move and my opponent immediately refuted it, there was a temptation to try to avoid seeming foolish by dreaming up some unlikely scheme I might have had which would have made the exchange part of the plan.
Fair enough. I should have said “there are ideas which are useful heuristics in Go, but not in real life”, rather than talking about “sound reasoning”.
The “I’m committed now” one can be a genuinely useful heuristic in Go (though it’s better if you’re using it in the form “if I do this I will be committed”, rather than “oh dear, I’ve just noticed I’m committed”). “Spent so much effort” is in the sense of “given away so much”, rather than “taken so many moves trying”.
It also teaches “if you’re behind, try to rock the boat”, which probably isn’t great life advice.
You can think of “don’t play aji-keshi” as saying “leave actions which will close down your future options as late as possible”, which I think can be a useful lesson for real life (though of course the tricky part is working out how late ‘as possible’ is).
The first is certainly valid reasoning in Go, and I phrased it in a way that should make that obvious. But you can also phrase it as “I’ve spent so much effort trying to reach goal X that I’m committed now”, which is almost never sound in real life.
For the second, I’m not thinking so much of tewari as a fairly common kind of comment in professional game commentaries. I think there’s an implicit “and I surely haven’t made a mistake as disastrous as a two point loss” in there.
It’s probably still not sound reasoning, but for most players the best strategy for finding good moves relies more on ‘feel’ and a bag of heuristics than on reasoning. I’m not sure I’d count that as a way that Go differs from real life, though.
Seven stones is a large handicap. Perhaps they’re better than the average club player in English-speaking countries, but I think the average Korean club player is stronger than Zen.
On the other hand, there are some ways of thinking which are useful for Go but not for real life. One example is that damaging my opponent is as good as working for myself.
Another example is that, between equal players, the sunk costs fallacy is sometimes sound reasoning in Go. One form is “if I don’t achieve goal X I’ve lost the game anyway, so I might as well continue trying even though it’s looking unlikely”. Another form (for stronger players than me) is “if I play A, I will get a result that’s two points worse than I could have had if I played B earlier, so I can rule A out.”
Do you have a reference for the ‘discover that the previous version was incomplete’ part?
I’m not sure that’s a safe assumption (and if they were told, the article really should say so!).
If you did the experiment that way, you wouldn’t know whether or not the effect was just due to deficient arithmetic.
One group was told that under race-neutral conditions, the probability of a black student being admitted would decline from 42 percent to 13 percent and the probability of a white student being admitted would rise from 25 percent to 27 percent. The other group was told that under race-neutral admissions, the number of black students being admitted would decrease by 725 and the number of white students would increase by 725. These two framings were both saying the same thing, but you can probably guess the outcome: support for affirmative action was much higher in the percentage group.
These two framings aren’t saying the same thing at all. The proposed policy might be the same in both cases, but the information available to the two groups about its effects is different.
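As a back-of-envelope check (assuming, which the article doesn’t actually state, that both framings describe a single applicant pool), here is how much extra information you would need to turn one framing into the other:

```python
# Toy reconciliation of the two framings, using only the numbers quoted above.
# Assumption (not stated in the article): both framings describe the same
# applicant pool, so the percentage change and the count change must match.

black_drop_pp = 0.42 - 0.13   # fall in admission probability for black applicants
white_rise_pp = 0.27 - 0.25   # rise in admission probability for white applicants
count_change = 725            # absolute change quoted in the second framing

# Pool sizes implied if you force the two framings to agree:
black_applicants = count_change / black_drop_pp   # ~2,500
white_applicants = count_change / white_rise_pp   # ~36,250

print(f"Implied black applicant pool: {black_applicants:,.0f}")
print(f"Implied white applicant pool: {white_applicants:,.0f}")
```

The implied pool sizes (roughly 2,500 black and 36,250 white applicants) appear in neither framing, so subjects given only the percentages couldn’t work out the counts, and vice versa: the two groups really did receive different information.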
Further, it’s plausible that if you had a ‘budget’ of N prison places and M police officers for drink-driving deterrence, the most effective way to deploy it would be to arrange for a highish probability of an offender getting a short prison sentence, plus a low probability of getting a long sentence (because we know that a high probability of being caught has a large deterrent effect, and also that people overestimate the significance of a small chance of ‘winning the lottery’).
So the ‘high sentence only if you kill’ policy might turn out to be an efficient one (I don’t suppose the people who set sentencing policy are really thinking along these lines, though).
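A toy way to make that concrete, with entirely made-up numbers and functional forms (a Tversky–Kahneman-style probability weighting to capture overweighting of small chances, and a crude concave ‘felt severity’ of sentence length): compare three policies that each cost roughly the same expected prison-years per offence.

```python
import math

# Toy model only: the rates, sentence lengths, and functional forms are
# illustrative assumptions, not taken from the article or from sentencing research.

def weight(p, gamma=0.61):
    """Tversky-Kahneman style probability weighting: overweights small p."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def perceived_severity(years):
    """Crude concave 'felt badness' of a sentence (diminishing in length)."""
    return math.sqrt(years)

def perceived_deterrence(outcomes):
    """outcomes: list of (probability, sentence_years) pairs for one offence."""
    return sum(weight(p) * perceived_severity(s) for p, s in outcomes)

def expected_prison_years(outcomes):
    return sum(p * s for p, s in outcomes)

policies = {
    "rare long sentences":       [(0.01, 10.0)],
    "common short sentences":    [(0.20, 0.5)],
    "mixed (short + rare long)": [(0.19, 0.263), (0.01, 5.0)],
}

for name, outcomes in policies.items():
    print(f"{name:27s} expected prison-years {expected_prison_years(outcomes):.3f}"
          f"  perceived deterrence {perceived_deterrence(outcomes):.3f}")
```

Under these assumptions the mixed policy gets the most perceived deterrence out of roughly the same 0.1 expected prison-years per offence, which is the intuition above; different parameter choices can obviously reverse the ranking.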
The article is saying that you can’t affect your sentence by showing skill at drunk driving, other than by using the (very indirect) evidence provided by showing that nobody died as a result.
I think it’s a sound point, given that the question is about identical behaviour giving different sentences.
If you’re told that two people have once driven over the limit, that A killed someone while B didn’t, and nothing more, what’s your level of credence that B is the more skilled drunk driver?
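One way to see how weak that evidence is: a toy Bayes update with purely illustrative per-incident fatality rates (the 0.0005/0.002 and 0.0008/0.0012 figures below are assumptions, not data).

```python
# Toy Bayes update. Data: A and B each drove over the limit once; A killed
# someone, B didn't. Hypotheses: H1 = B is the more skilled of the two, H2 = A is.

def posterior_b_more_skilled(p_kill_skilled, p_kill_unskilled, prior=0.5):
    like_h1 = p_kill_unskilled * (1 - p_kill_skilled)   # A (unskilled) kills, B (skilled) doesn't
    like_h2 = p_kill_skilled * (1 - p_kill_unskilled)   # A (skilled) kills, B (unskilled) doesn't
    return prior * like_h1 / (prior * like_h1 + (1 - prior) * like_h2)

# If skill makes a 4x difference to the (tiny) per-trip fatality rate:
print(posterior_b_more_skilled(0.0005, 0.002))   # ~0.80
# If skill makes only a 1.5x difference:
print(posterior_b_more_skilled(0.0008, 0.0012))  # ~0.60
```

Almost all of the update comes from the single death, filtered through whatever you assume about how much skill changes the tiny per-trip fatality rate; the ‘nobody died’ half of the data is nearly uninformative.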
I think Hofstadter could fairly be described as an AI theorist.
When writing on the internet, it is best to describe children’s ages using years, not their position in your local education system.
So Nabarro explicitly says that he’s talking about a possibility and not making a prediction, and ABC News reports it as a prediction. This seems consistent with the media-manufactured scare model.