Call me a chicken, but yes: I would not risk going out empty-handed, even with only a 1-in-100,000 chance of that, if I could instead have left with $100M.
This kind of super-cautious mindset can’t be modeled with any real-valued mapping of the type money X (current state of the world) → utility.
As Vladimir Nesov pointed out, that is false—not the preference being expressed, of course, but the statement that the preference can’t be modeled with such a mapping.
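To make the point concrete, here is a minimal sketch (my own hypothetical illustration, not Nesov’s exact argument): a sufficiently concave, bounded utility function does reproduce the super-cautious preference. The particular function and the $500 sure amount are assumptions chosen for illustration.

```python
import math

def utility(dollars, scale=10.0):
    """A bounded, concave utility function: rises steeply near zero wealth,
    then saturates toward 1. The scale parameter is an arbitrary choice."""
    return 1.0 - math.exp(-dollars / scale)

# Sure thing: walk away with a modest $500.
sure_thing = utility(500)

# Gamble: $100M with probability 99,999/100,000, nothing otherwise.
p_ruin = 1.0 / 100_000
gamble = (1 - p_ruin) * utility(100_000_000) + p_ruin * utility(0)

# The sure $500 has strictly higher expected utility than the gamble,
# because the gamble's tiny chance of ruin costs p_ruin of the bounded
# utility range, while the sure $500 already sits near the bound.
print(sure_thing > gamble)  # True
```

The trick is boundedness: once utility saturates, a 1-in-100,000 chance of total ruin subtracts more expected utility than the jump from $500 to $100M adds, so the “chicken” preference is perfectly representable.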
Now first let me make it clear that I disapprove of the atmosphere you find in some academic science departments where making a false statement is taken to be a mortifying sin. That kind of attitude is a big barrier to teaching and to learning. Since teaching and learning is a big part of what we want to do here, we should not think poorly of a participant for making a false statement.
But I am a little worried that in the 88 hours since the false statement was made, no one downvoted the false statement (or if they did, the vote was canceled out by an upvote). And I am a little worried that in the 81 hours since his reply, no one upvoted Nesov’s reply in which he explains why the statement is false. (I have just cast my votes on these 2 comments.)
It is good to have an atmosphere of respect for people even if they make a mistake, but it is bad IMHO when most readers ignore a false statement like the one we have here when there is no doubt about its falseness (it is not open to interpretation) and it involves knowledge central to the mission of the community (like the one we have here, about the most elementary decision theory). Note that elementary decision theory is central to the rationality mission of Less Wrong and to the improve-the-global-situation-with-AI mission of Less Wrong.
Moreover, if you not only read a comment, but also decide to reply to it, well, then IMHO, you should take particular care to make sure you understand the comment, especially when the comment is as short and unnuanced as the one under discussion. But before Nesov’s reply, two people replied to the comment under discussion without showing any sign that they recognized that the one statement of fact made in the comment is false. One reply (upvoted 3 times) reads, ‘The technical term is “risk-averse”, not “chicken”’. The other introduces the Allais paradox, which is irrelevant to why the statement is false.
I do not mean to single out this comment and these 2 replies or the people who wrote them: the only reason I am drawing attention to them is to illustrate something that happens regularly. And I definitely realize that it probably happens a lot less here on Less Wrong than it does in any other conversation on the internet that ranges over as many subjects relevant to the human condition as the conversation on Less Wrong does. And a significant reason for that is the hard work Eliezer and others put into the development of the software behind the site.
But I suspect that one of the best opportunities for creating a conversation that is even better than the conversation we are all in right now is to make the response by the community to false statements (the kind not open to interpretation) more salient and more consistent. Wikipedia’s response to false statements gives me the impression of rising to the level of saliency and consistency I am talking about, but of course the software behind Wikipedia does not support conversation as well as the software behind Less Wrong does. (And more importantly but more subtly, Wikipedia is badly governed: much of the goodwill and reputation enjoyed by Wikipedia will probably be captured by the ideological and personal agendas of Wikipedia’s insiders.)
I disagree that false statements are the sorts of things that should be downvoted. I’m all about this being a place where people can happily be false and get corrected, and that means the “I want to see fewer comments like this” interpretation suggests that I should not downvote comments merely for containing falsehoods.
“I’m all about this being a place where people can happily be false and get corrected.”
I am, too, until the false statements start to drown out the relevant true information so that the most rational readers decide to stop coming here, or until the volume of false statements overwhelms the community’s ability to respond to them. But, yeah, I am with you.
And you make me realize that downvoting is probably not the right response to a false statement. I just think that there should be a response that is not as demanding of the reader’s time and attention as reading the false statement, then reading the responses to the false statement. (Also, it would be nice to give a prospective responder a way to respond that is less demanding of their time than the only way currently available, namely, to compose a comment in reply to the false statement.)
My original statement was mathematically true. Maybe Vladimir was sloppy reading it (his utility function satisfied only half of the requirements), but I would not downvote him for that.