I challenge you to define them, and will donate $10 to a charity of your choice if your definition gets a karma score of at least 3 points.
No cheating by naming your charity before you reach the target, or by sock-puppeting.
You should also specify a time limit from when the entry is posted, since there’s no length of time beyond which comments can’t be voted up. Edit: You should also probably specify that I can’t ask anyone to vote up the definition (and, similarly, that I can’t promise any specific activity on my part if it gets upvoted beyond a certain point). And you should specify that I can’t put the definitions in a post that contains other material (and thus attract upvotes that aren’t connected to them).
Let’s just say I trust you to outthink yourself...
I’ll give you two weeks and change to gain the karma—deadline is Noon GMT, June 28th, 2010.
Ok, then. Here’s my attempt.
Intrinsically interesting topics are topics which satisfy the following criteria:
1) The topic cannot be discussed by an adult human of average intelligence without putting in some cognitive effort and attention. (If you can be busy thinking about another topic while discussing it, then it probably isn’t intrinsically interesting.) If the topic cannot be discussed by a human of average intelligence at all, then this condition is considered to be met.
2) The topic must have objective aspects, and those objective aspects must be a primary part of the topic.
3) The topic must have some overarching theories connecting it, or at least admit the possibility of overarching theories that explain it. Thus, for example, celebrity divorces would not fall into this category because they are separate, unconnected data points. But differing divorce rates in different income brackets would be ok because one could potentially have interesting sociological explanations for the data.
4) The topic must have bridges to many other topics that aren’t simply a variation of the topic itself. For example, AI bridges to programming, psychology, the nature of human morality, evolution, neurobiology, and epistemology. In contrast, D&D rules don’t connect to other topics in any strong way. There are some minimally interesting probability questions that you can ask if you are writing a quiz for an undergraduate probability course (a toy sketch of one such question follows this list), but that’s about it. Most of the other topics that it is connected to are still variations of the same topic, such as, say, what a society would look like in a universe that functioned under standard 3.5 D&D rules.
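To make the parenthetical concrete, here is the sort of “minimally interesting” question I have in mind, worked as a toy sketch in Python (the attack bonus, armor class, and damage dice are invented for illustration, not taken from any particular rulebook):

    from fractions import Fraction

    # Chance that a d20 roll plus a +5 attack bonus meets or beats armor class 16:
    # the natural roll must be 11 or higher.
    p_hit = Fraction(sum(1 for roll in range(1, 21) if roll + 5 >= 16), 20)

    # Expected damage of 2d6 + 3 on a hit.
    expected_damage_on_hit = 2 * Fraction(sum(range(1, 7)), 6) + 3

    # Expected damage per attack, folding in the miss chance.
    expected_damage_per_attack = p_hit * expected_damage_on_hit

    print(p_hit, expected_damage_on_hit, expected_damage_per_attack)  # 1/2 10 5

That sort of calculation is about as far as the rules by themselves take you, which is the point of the criterion.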
I doubt that. Before I had finished the paragraph, things that came to mind included board games, what underlying skills transfer between different board games and RPGs (from empirical evidence, they exist and are large), what the appeal of roleplaying a fictional character is, which different desires roleplaying versus powergaming satisfy, what makes a character attractive to roleplay, what makes a roleplay performance fun, what makes a D&D setting enticing, how to create an enticing D&D setting, whether the most fun is had when the DM does a good job of almost killing the characters (as someone told me), and more. These, of course, give hooks to combinatorial game theory, personality, improv acting, fiction writing, and fun theory. With the possible exception of personality (though it’s a small leap to MUDs and the Bartle 4, so probably not an exception), all of these play quite important roles in D&D.
I suppose I’m muddying it a bit since some of those things are connected to D&D but not directly to D&D rules, though your original post simply mentioned D&D.
Knowledge is connected enough that I’d be quite impressed if anyone could find (or, heck, invent) a topic which fails criterion 4.
Even D&D rules connect to the general problem of creating games which are understandable and playable and the problem of creating reasonable facsimiles of reality—these contrast in an interesting way with the scientific problem of creating computationally-tractable models which predict reality, for example.
Do you exclude D&D content from “D&D rules”? I’d agree that, say, attack of opportunity intricacies don’t connect well to anything; but something like how D&D handles werecreatures could connect to all kinds of other stuff in fantasy lit.
When I said rules I was thinking of something very narrow, like the actual content of the 3.5 SRD, which is more or less flavorless. Your point seems to be related to Darmani’s criticism of the fourth criterion. This suggests that my criteria for “intrinsically interesting” as laid out above are seriously flawed, at least insofar as they fail to capture my intuition for what is intrinsically interesting: D&D rules shouldn’t be considered intrinsically interesting, for reasons similar to why the infield fly rule in baseball isn’t intrinsically interesting. This conversation makes me suspect that the distinction I am trying to make has no actual validity.
The key, I think, is to distinguish between topics that remind you of other topics, and topics that, upon being comprehended, actually help you understand other topics.
D&D rules remind you of D&D content, which helps you understand fantasy literature. D&D rules, by themselves, though, don’t help you understand much of anything else.
Likewise, baseball helps me understand antitrust law enforcement, because baseball has a Congressional exemption to antitrust laws. The exemption has virtually nothing to do with the infield fly rule, though. The infield fly rule reminds me of baseball, but by itself it sheds no light on antitrust law enforcement.
The influence of Charisma on social discourse, and things like intimidation and bluffing.
The role of strength vs dexterity, the difference between ‘intelligence’ and ‘wisdom’.
Most natural traits, including brain makeup, personality, and body type, are determined by genetics, but some small changes can be made over time.
When it comes to performance of skills, natural talent plays some part, but the overwhelming majority of influence comes from which skills you learn.
Sometimes things boil down to sheer dumb luck. All you can do is make the best decisions you can under uncertainty: don’t take it personally when something improbably bad happens, but also minimize the expected consequences if you roll a zero. (A toy sketch of this kind of reasoning follows this list.)
Most things boil down to the judgement of the guy in charge. (It’s not what you know, it is who you know, and whether you are sleeping with the GM.)
It is really hard to do stuff when it is dark.
The best way to improve your social skills is to go around killing lots of people and apply what you learn from that to diplomacy, bluff and intimidate...
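Expanding on the “sheer dumb luck” item above, here is a toy sketch of weighing two plans by expected outcome while also watching the worst case (the plans, payoffs, and roll thresholds are entirely invented for illustration):

    # Each plan maps a d20 roll to an outcome score; higher is better.
    def sneak_past_guards(roll):
        # High payoff if it works, disastrous on a natural 1.
        if roll == 1:
            return -20
        return 10 if roll >= 10 else -2

    def take_the_long_way(roll):
        # Modest payoff, but nothing terrible can happen.
        return 4 if roll >= 5 else 0

    def summarize(plan):
        outcomes = [plan(roll) for roll in range(1, 21)]
        return sum(outcomes) / 20, min(outcomes)  # (expected value, worst case)

    print(summarize(sneak_past_guards))   # (3.7, -20)
    print(summarize(take_the_long_way))   # (3.2, 0)

Here the riskier plan has slightly higher expected value, but if a natural 1 means something you genuinely can’t afford, the safer plan is the better decision under uncertainty.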
All right, time to beat a strategic retreat. I’m going to stop defending my thesis that JoshuaZ’s definition is rigorous.
:) I haven’t followed the conversation closely so I don’t have a firm opinion on that. Looking back…
I would accept it as a useful definition up to and including the first two sentences of “4”. I would replace the remainder with an acknowledgment that what qualifies as an inferential ‘bridge’ to another topic, and even what qualifies as a topic proper, is subjective. I, for example, read the counterexample and it prompted all sorts of curious and potentially fascinating subjects, and even prompted pleasant memories of numerous conversations I have had that were connected using basic probability as a stepping stone.
Even if the evolution of the infield fly rule has been used as an example of how common law naturally forms? No, I’m not making that up. Not anti-trust law, but still pretty close to legal matters.
It seems like your rules 2) and 3) would disqualify literature as an interesting topic.
Right, but we’re looking for flaws with his criteria.
Thanks! Feel free to name your charity whenever you like.
Ok. Thought about this. The standard charity here seems to be the SIAI. I’m not convinced by Eliezer’s estimates of the expected return on donations to SIAI (primarily because I put the probability of a Singularity in the foreseeable future at low). Moreover, if donations to SIAI are always the result of LW bets and contests, the incentive to bet will go down, so one should try to have a variety of charities that people here will not mind even if they aren’t anyone’s highest-priority charity. But I’d also like to ensure that I don’t cause you negative utility by making you donate to an organization of which you don’t approve. So, my solution is to list four organizations and let you choose which of the four the donation goes to:
The four are the James Randi Educational Foundation, the National Center for Science Education, Alcor Life Extension Foundation, or the SENS Foundation.
And I’ll match your earlier offer as follows: if you make a post here explaining why you chose the one you did, and that post gets at least three karma upvotes by 12 AM GMT on July 1st, I’ll also donate $10 to that organization. (And presumably the same rules against obvious silliness apply as before.)
I choose the SENS Foundation, and have donated the $10 via Paypal. The transaction ID is #8YL863192L9547414, although I’m not sure how or whether that helps you verify payment. Maybe somebody can teach me how to provide public proof of private payment.
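One possibility, sketched here purely as an illustration (the receipt string below is an invented stand-in, not the actual PayPal record), is a commit-and-reveal scheme: publish a hash of the receipt text at donation time, then reveal the text later so anyone can recompute the hash and compare. This only shows that the text existed when the digest was posted, not that the payment itself cleared, but it narrows the amount of trust required.

    import hashlib

    # Invented stand-in for the real receipt text.
    receipt = "PayPal transaction 8YL863192L9547414: $10.00 to the SENS Foundation"

    # Publish this digest publicly at donation time.
    commitment = hashlib.sha256(receipt.encode("utf-8")).hexdigest()
    print(commitment)

    # Later, reveal the receipt text; anyone can verify it matches the commitment.
    assert hashlib.sha256(receipt.encode("utf-8")).hexdigest() == commitment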
The SENS Foundation, as I understand it, is in the business of curing aging.
The reason why I chose the SENS Foundation is that I believe that, of the four options, it will do the most to convince people that rational thinking and empirical observation are worthwhile. This, in turn, is my best guess at what will reduce existential risk. Because I can’t know, today, with any confidence what the most important existential risks will be over the next 50 years or so, I want my donations to nudge the world closer to a state where there are enough rational empiricists to successfully address whatever turn out to be the big existential crises.
Why do I think the SENS Foundation will promote science-y views? Basically, I think the most effective technique for getting irrational people to change their worldview is to present them with overwhelmingly compelling evidence that the world is hugely different from the way they imagined it to be. Ideally, the evidence would be emotionally uplifting and clearly attributable to the work of scientists. A manned flight to the Moon fits that bill. So would a cure for aging.
Although spiritualists and fundamentalists of all stripes have tremendous resources in terms of stubbornness, denial, and rationalization, it is harder to rationalize away a central fact of life than it is to rationalize away a peer-reviewed study from Nature that you read an excerpt of in the USA Today. You see the moon every night; people went there. It’s hard to escape. More to the point, you don’t want to escape. It’s somehow really cool to believe that people can fly to the moon. So you maybe let go of your suspicion that the Earth is the center of the Universe and let your friend tell you about Newton and Galileo for a moment.
Same thing with aging. Your parents’ friends are right there, 80 years old and still acting like they’re 30. You can’t help but be aware of the anti-aging cure. You can’t help but be impressed, and think it’s cool. You might still believe that mortality is a good thing, or that there’s an afterlife, but you at least welcome medical science into your pantheon of interesting and legitimate things to believe in.
James Randi is a pretty bad-ass mythbuster, and I’m glad NCSE is fighting the good fight to keep “creation science” out of America’s public schools. However many people they manage to convince of the importance of critical thinking, though, I think a cure for aging will convince even more. There’s nothing quite like being WRONG about something you’ve always assumed was indisputably correct to make critical thinking look worthwhile. In this case, the bad assumption is “I will die.”
As for Alcor, it’s also a worthwhile cause, but it’s an uphill battle to convince people that freezing themselves and waiting for the future is a way to cheat death. Curing aging is more straightforward, more user-friendly, and more useful in the event of a partial success—if cryonics partially fails, you’re probably still dead, but if an anti-aging cure fails, you’re probably going to get another few decades of healthy life.
Thanks for the opportunity to choose, and to explain!
Ok. Matched donation. Receipt ID is 4511-9941-6738-9681.
Incidentally, I’m not convinced that major scientific accomplishments actually will serve to increase rationality. To examine the example you gave of the Moon landings, there is in fact a sizable fraction of the US which considers them to be a hoax. Depending on the exact question asked, 5% to about 20% of the US population doesn’t believe that people have gone to the Moon, and the percentage is larger outside the US. See this Gallup poll and this British poll showing that 25% of people in Britain doubt that we went to the Moon. Unfortunately, that article just summarizes the poll and I can’t seem to find free access to the poll itself. But it also contains the noteworthy remark that “Further revelations concerning the British public’s perception of the historic event include 11 per cent who believe the Moon-landing occurred during the 1980s and 1 per cent who believe the first man on the Moon was Buzz Lightyear.” I suspect the 1 per cent can get thrown out, but the 11% looks genuine.
Americans at least cared more about rationality, critical thinking and science when it looked like they were losing the space race after Sputnik. A lot of improvements to our high school curricula occurred after that.
It isn’t obvious to me that SENS will do the best job improving rationality.
I mean, if you want, we could switch our donations to fund a program that makes sure the Russians discover a cure for aging...
OK, but that’s the wrong statistic. What percent of the U.S. population insists that the Earth is flat and/or the center of the Universe? How does that compare to the percent of the U.S. population that insists that the Earth is less than 10,000 years old?
Perhaps not directly, in the sense I originally claimed. Nevertheless, major scientific accomplishments should help solve the problem of expecting short inferential distances. If you have just flown to the moon or cured aging, even people who expect short inferential distances will not assume you are crazy when you boldly assert things that don’t immediately seem intuitive. They will give you a moment to explain, which is what I really want to happen when, e.g., scientists are proposing solutions to the existential crisis du jour.
Well, around 40% of the US thinks the Earth is less than 10,000 years old. But you seem to have a valid point, in that the fraction of the US population which believed in geocentrism dropped drastically in the 1960s, and the same for the flat-earth percentage, which dropped from tiny to negligible. But that seems directly connected to what was actively accomplished in the Moon landings. Young Earth Creationism, by contrast, was not very popular from 1900 to 1960 or so (even William Jennings Bryan was an old earth Creationist). It made a comeback in the 1960s, starting when Henry Morris wrote “The Genesis Flood” in 1961, and it continued to pick up speed through the Moon landings (this incidentally undermines my earlier argument about Sputnik).
Are you sure that they will be more willing to listen to long inferential distance claims? I suspect that people may be more likely to simply take something for granted and add that to their worldview. I don’t for example see the common presence of computers or other complicated technologies as substantially increasing the inferential distance people are willing to tolerate.
This leads to an interesting idea: improve science and rationality in one area by helping fund science for rivals. I wonder if that would work...
Part of that change could perhaps be attributed to the waning effectiveness of hiding behind ‘old earth’ as a way to keep on side with ‘science’. Once the option of alliance with ‘science’ lost viability, the natural approach is to stake the in-group identity on being opposed to any attempt whatsoever to conform historic beliefs to actual evidence. If you can’t have an image of ‘sane’ then you go for an image of ‘confident and uncompromising’ - it is usually more attractive anyway.
I had an idea I call the “evil genius theory” that goes something like this: challenge can produce growth and strength; great challenge, on the verge of existential, can produce tremendous amounts of growth in short periods of time; therefore, fund an evil genius to do great and terrible things, and the challenge / response will better the world on net.
It feels very plausible to me.
This needs to be tested for predictive power, but I believe the main reason you lost so quickly is that you bet money against another form of utility with no direct convertibility. Having equally fungible forfeits on both sides of the bet makes it more symmetrical.
To venture onto fuzzier ground: the other reason I believe the asymmetry of the bet made you lose so quickly is that the average LWer can predict with high confidence that JoshuaZ will choose one of their top 5 charities.
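To put rough numbers on that asymmetry (every figure here is invented for illustration; nobody stated actual probabilities or utilities), the structure of the exchange looks something like this:

    # Toy model of the challenge from JoshuaZ's side; all numbers are invented.
    p_reach_3_karma = 0.9          # LWers were likely to upvote any reasonable attempt
    charity_value_to_joshua = 10.0 # the $10 goes to a charity he plausibly endorses
    effort_cost = 1.0              # rough cost of writing the definition

    joshua_expected_gain = p_reach_3_karma * charity_value_to_joshua - effort_cost
    print(joshua_expected_gain)    # 8.0: clearly worth taking up

    # The challenger stakes real money against nothing fungible coming back the
    # other way, which is what makes the "bet" lopsided by construction.
    challenger_expected_outlay = p_reach_3_karma * 10.0
    print(challenger_expected_outlay)  # 9.0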
A decent analysis, but it’s premised on a bad assumption. I didn’t bet; I issued a challenge. Notice that, unlike in a bet, if JoshuaZ failed, he would not necessarily have forfeited karma to me or anyone else. I certainly agree with you that it would be foolish to bet money against karma. I see my actions more as offering a prize for the successful completion of a task than as betting that JoshuaZ would be unable to complete the task.
Sure, but they’re still unlikely to vote up a bullshit post. Maybe that gives JoshuaZ a moderate handicap, but my primary purpose was to inspire JoshuaZ to produce a useful analysis that interested me, and not to inspire the LW crowd to precisely assess the worth of that analysis. I suppose in the future I might set a slightly higher threshold—maybe 7 or 8 karma points.
Having read JoshuaZ’s previous contributions to the conversation, and having read the challenge, I was pretty much intending to vote up his response as long as it wasn’t completely inane (it had already crossed the threshold when I read it, so I didn’t bother).
I wonder if any of the (presumably three) people who did upvote it before it crossed the threshold had similar thought processes...?
I’m one of the people who upvoted it, and I think I had a similar thought process. I wasn’t motivated by a belief that JoshuaZ would choose a charity I liked, though. I just read his post and thought his attempted definition was a good try, and (more importantly) that it was an interesting clarification that would provoke good discussion.
My analysis assumes that any challenge like that is a bet of money against some social value; if there were no utility on one side the challenge would not be taken up; if there were no utility on the other side the challenge would not be offered.
I’m sorry; I don’t understand.
There is, as you say, utility on both sides of the transaction. What does that have to do with whether a bet has been placed?
Is it a challenge, or a bet? I’m just saying that examining it as a bet offers some insight into the unexpectedly lopsided results.