Fantastic post.
To push this thread in one particular direction within this parent post, let us consider interestingness as it relates to LWers.
Given that saying what everyone agrees with (astrology is bullshit for the secular masses; something more sophisticated, but still broadly agreeable, here on LW) is likely to attract karma, and something more cutting-edge is likely to attract both upvotes and downvotes, I suspect that among those who already have your high regard for the content of their posts, the ones with karma scores closest to zero are the ones to keep an eye on if you have been here a while already and want to learn and upgrade your paradigms.
Getting a karma score close to zero by balancing agreement and disagreement with the LW herd would mean having (or at least expressing) opinions uncorrelated with theirs. Do you think LWers’ opinions are as often wrong as right?
That’s ignoring the fact that one gets or loses karma for the perceived quality of what one writes, not simply for agreement or disagreement. For sure that doesn’t work perfectly, but it certainly looks to me as if comments with interesting ideas in them, and comments that are expressed particularly well, and meatier-than-average comments, all tend to get positive scores. Looking for someone with near-zero karma means preferring people who don’t systematically write interesting, well-expressed, meaty comments.
Thanks for the added sophistication of your contribution.
If we consider “right” to mean accurately predicting future events, I don’t have any compelling reason to believe a given LW karma-giver is any better at predicting the future than not. And this is after considering the LW annual surveys. From my subjective perspective, I don’t intuit any residual karma, left over after accounting for the other factors I believe are predictive of karma, that could be attributed to the predictive accuracy of a post. However, given the paucity of data and the informality of my intuition, I could probably be swayed by a well-designed experiment with even a small sample size.
Stylistic discrimination is a choice individual users can make, at their peril or to their merit; I suppose they will find out. To someone like me who hasn’t found most elements of style predictive of personally useful posts on LW (the exceptions are notable, and cluster around certain linguistic patterns of a number of prominent users that I would be interested to explore at some stage), the observation you have made re: karma and style adds credence to my position that a tendency towards zero can be positively informative. Of course, I’m not blind to the many other dynamics that could explain why else a given user’s karma may tend towards zero.
Through some quasi-experimental, factorially designed posts, I hold the (particularly weak) position that content must always be expressed particularly well, but does not have to be meatier than average, to get a positive score beyond a courtesy +1 to +3 or so, among other considerations that are far more sophisticated than they are useful, let alone compelling enough to post.
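To make the “factorial” framing concrete, here is a minimal sketch with entirely invented scores (none of these numbers come from the thread; a real version would use actual post data): cross “expressed well” with “meatier than average” and compare mean scores per cell.

```python
from statistics import mean

# Hypothetical post scores in a 2x2 factorial layout keyed by
# (well_expressed, meaty). All numbers are invented for illustration only.
scores = {
    (True,  True):  [7, 9, 6],
    (True,  False): [4, 5, 3],
    (False, True):  [1, 0, 2],
    (False, False): [0, -1, 1],
}

for (well_expressed, meaty), cell in scores.items():
    print(f"well_expressed={well_expressed!s:<5} meaty={meaty!s:<5} "
          f"mean score={mean(cell):.1f}")

# The position above corresponds to the pattern where the well-expressed
# cells score above the courtesy +1 to +3 range whether or not the posts
# are meaty, while the poorly expressed cells do not.
```

Obviously this is only a caricature of such an analysis; the real posts, scores, and cell assignments would have to come from LW itself.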
Tomorrow, sunrise where I am will be somewhere around 7.30. In the next year, there will not be a nuclear war. The human race will still exist a century from now. If I roll a hundred ordinary 6-sided dice, some of them will come up 1 but not more than 1⁄3 of them. The best computer I can buy two years from now for $1000 will be faster than the best I can currently buy for $1000, but not as much as twice as fast. Lawrence Lessig will not be the Democratic nominee for the US presidency in 2016. If you take two objects of similar shape, structure and material, one twice the size of the other, and thump both of them, the larger one will make a lower-pitched sound. Within the next month there will be articles in major UK newspapers saying unflattering things about both David Cameron and Jeremy Corbyn, and articles saying flattering things about both. This time next year, the total value of my pension funds will be between half and double what it is now.
I will be very surprised (and you should be, too) if more than one of those predictions is wrong. None of them is trivial. None of them was difficult to make. I am sure you would have no difficulty making a similar set of predictions with similar accuracy.
Even if you “consider right to be accurately predicting future events” (which is, at best, a controversial definition), LW readers—and people generally, in fact—are pretty good at being right.
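A minimal sketch of the arithmetic behind “very surprised if more than one of those predictions is wrong”, with a per-prediction reliability that is an illustrative assumption rather than anything claimed above: if each of the nine predictions independently has about a 97% chance of being right, the chance that more than one fails is only around 3%.

```python
from math import prod

# Illustrative assumption: nine independent predictions, each ~97% likely
# to be correct. The 0.97 figure is made up for the sake of the example.
p_right = [0.97] * 9

def prob_more_than_one_wrong(ps):
    """P(two or more predictions fail), assuming independence."""
    p_none_wrong = prod(ps)
    p_exactly_one_wrong = sum(
        (1 - ps[i]) * prod(p for j, p in enumerate(ps) if j != i)
        for i in range(len(ps))
    )
    return 1 - p_none_wrong - p_exactly_one_wrong

print(prob_more_than_one_wrong(p_right))  # roughly 0.028
```

Even at a more pessimistic 90% per prediction, the chance of more than one failure is only about 23%, which is roughly the sense in which none of these predictions needs to be difficult for the overall claim to hold.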
What are those other factors, that predict karma well enough that knowing whether what’s said is right yields no further improvement in karma prediction on top of them?
Very well said, but you’re missing my point. I wanted to emphasise any given LW user: although particular LW users are very good at predicting, they do not appear, on aggregate, to be the same ones who take the effort to vote.
In my model, karma is already overdetermined. It’s not a very good model, but factors similar to those that describe human behaviour, as is canon on LW, go into it. I may elaborate in the future but, like I said, it’s probably not worth anyone’s time, and I’d rather do something else than clarify it further myself.
“The human race will still exist a century from now”—could easily be wrong thanks to nuclear weapons
Nuclear war could do us a lot of damage, but it’s pretty unlikely to drive us completely extinct. And I think nuclear war—especially the sort of really big nuclear war that has any chance of driving the human race near to extinction—is fairly unlikely because it’s so obviously not in anyone’s interest.
(Note that I didn’t claim that those predictions are certainly right.)
Tangentially, it occurs to me that large-scale nuclear annihilation might make for interesting bullet-biting test cases for exotic decision theories. Suppose, e.g., that you’re interacting with some other agent and you can see one another’s source code (or have other pretty reliable insight into one another’s behaviour). A situation might arise in which your best course of action is to make a credible threat that in such-and-such circumstances you will destroy the world (meaning, e.g., launch a large-scale nuclear attack that will almost certainly result in almost everyone on both sides dying, etc.). Of course those circumstances have to be very unlikely given your threat. Theories like TDT then say that in those circumstances you should in fact destroy the world, even though at that point there is no possible way for doing so to help you. So, do you do it?
(UDT, which I think is the generally preferred TDT-like theory these days, says more precisely that you should arrange to be governed by an algorithm that in those circumstances will destroy the world. What you do if those circumstances then arise isn’t a separate question. I think that takes some of the psychological sting out of it—though deliberately programming yourself so that in some foreseeable situations you will definitely destroy the world is still quite a bullet to be biting.)
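As a toy illustration of why the ex-ante policy comparison can come out that way, here is a minimal expected-utility sketch. All utilities and probabilities below are made-up placeholders, not anything asserted in the comments above; the point is only the structure of the comparison.

```python
# Compare two policies, evaluated before you know whether the bad
# circumstance arises: "commit" (credibly threaten to destroy the world
# in circumstance C, and actually do it if C occurs) vs "don't commit".

U_GOOD      = 100         # hypothetical value of the cooperation a credible threat buys
U_EXPLOITED = 0           # hypothetical value if you make no threat and get exploited
U_DOOM      = -1_000_000  # hypothetical value of actually destroying the world

# Assumed: given the credible commitment, the triggering circumstance C
# is very unlikely to arise.
p_trigger_given_commit = 1e-5

def eu_commit(p_trigger):
    # With probability p_trigger, C arises and the policy fires (U_DOOM);
    # otherwise the threat has done its work (U_GOOD).
    return p_trigger * U_DOOM + (1 - p_trigger) * U_GOOD

def eu_no_commit():
    return U_EXPLOITED

print(eu_commit(p_trigger_given_commit))  # ~90.0
print(eu_no_commit())                     # 0

# Committing wins ex ante iff
#   p_trigger < (U_GOOD - U_EXPLOITED) / (U_GOOD - U_DOOM),
# here about 1e-4, which is why the circumstances "have to be very
# unlikely given your threat".
print((U_GOOD - U_EXPLOITED) / (U_GOOD - U_DOOM))
```

The uncomfortable part, as above, is that the expected-utility case only goes through if the policy really would fire in the trigger case; that is the bullet being bitten.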
Stanislav Petrov may be relevant here.
I agree with you about the probability of extinction and of nuclear war.
Regarding the issue of threats to destroy the world, during the Cold War the US and Russia both implied or made threats of that sort. For example, during the Cuban Missile Crisis Kennedy explicitly announced that an attack by even a single nuclear weapon (in or from Cuba) would mean full-scale nuclear war with Russia.
Kennedy planned the invasion of Cuba, not being aware that Cuba was in possession of tactical nukes which they would have the physical power to use in response to an invasion.
My estimates are: more than 50% chance Cuba would have used at least one tactical nuke in the case of an invasion, and more than 50% chance Kennedy would have made good on his threat to destroy the world.
Links? Did Russia actually release the control of their (local) nukes to Cubans? I didn’t hear about this before.
The paper here says:
“As terrified as the world was in October 1962, not even the policy-makers had realized how close to disaster the situation really was. Kennedy thought that the likelihood of nuclear war was 1 in 3, but the administration did not know many things. For example, it believed that none of the missiles were in Cuba yet, and that 2-3,000 of Soviet service personnel was in place. Accordingly, they planned the air strike for the 30th, before any nuclear warheads could be installed. In 1991-92, Soviet officials revealed that 42 IRBMs were in place and fully operational. These could obliterate US cities up to the Canadian border. These sites were guarded by 47,000 Soviet combat troops. Further, 9 MRBMs were ready to be used against the Americans in case of an invasion. The Soviets had tactical nuclear weapons that the local commanders were authorized to use to repel an attack. After he learned of this in 1992, a shaken McNamara told reporters, “This is horrifying. It meant that had a US invasion been carried out. . . there was a 99 percent probability that nuclear war would have been initiated.””
Of course there’s no guarantee that’s accurate.
That’s not the issue. The issue is control. I don’t think the Russians ceded control of the nuclear weapons to the Cubans. Even if the Cubans had overrun the missile bases and got physical control over the missiles, they still wouldn’t have been able to launch them.
Going by Wikipedia, that’s false.
It’s pretty hard to eliminate all humans with our current nuclear weapons.
In particular, look at Hiroshima and Nagasaki today.
Even when everybody on LW agrees that astrology is bullshit, I would expect a post to the open thread or to the rationality quotes thread that has that as its only content to be downvoted.