We are status-oriented creatures, especially with regard to social activities. Science is one of those social activities, so it is to be expected that science is infected with status seeking. However, it is also one of the more efficient ways we have of getting at truths, so it must be doing some things correctly. I think some of the ideas that surround it reduce the problems of its being a social enterprise.
One of the problems is the social stigma of being wrong, which most people on the edge of knowledge probably are. Being wrong does not signal your attractive qualities; people don't like other people who tell them lies or give them false information. I suspect that falsifiability is popular among scientists because it allows them to pre-commit to changing their minds without taking too high a status hit. This is a bit stronger than leaving a line of retreat: it says when you will retreat, not just that you may, and it is a public admission. They can say that they currently believe idea X, but that if experiment Y shows Z they will abandon X. That statement is also useful to other people, as it allows them to see the boundaries of the idea.
This can also be seen as working to oppose confirmation bias. If you think you are right, there is no reason to look for data that tests your assumptions. If you want to pre-commit to changing your mind, you need to think about how your idea might be wrong, and you are allowed to go looking for the data that would test it.
I would like to see this community adopt this approach.
In the spirit of this: I would cease advocating this approach if it were shown that people who pre-committed to changing their minds suffered as large a status hit as those who didn't, when they were shown to be wrong.
Upvoted. Although I am curious as to how you will measure the status hits that various people take from being wrong.
I'd assumed there were standard ways of measuring it, along the lines of a typical psychology experiment: involve two groups of people in two different scenarios (wrong, and wrong with retreat). Then quiz the audience on their opinion of the person: their intelligence, whether you would work with them, trust them to perform in their area of expertise, be their friend, and so on.
However, I can't find much with a bit of googling. I'll have a look into it later.
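For concreteness, here is a rough sketch of how the audience ratings from the two scenarios might be compared. This is purely illustrative, not from any existing study: the condition names, ratings, and scale are placeholder assumptions.

```python
# Sketch: compare audience ratings of a person who was simply wrong vs. one
# who was wrong but had publicly prepared a line of retreat.
# Ratings are assumed to be on a numeric scale (e.g. 1-7); the values below
# are placeholders for illustration only.
from statistics import mean
from scipy import stats  # independent-samples t-test


def compare_conditions(wrong_only, wrong_with_retreat):
    """Return the difference in mean rating and a two-sided p-value."""
    t_stat, p_value = stats.ttest_ind(wrong_with_retreat, wrong_only)
    return mean(wrong_with_retreat) - mean(wrong_only), p_value


if __name__ == "__main__":
    # Placeholder answers to e.g. "how much would you trust this person?"
    wrong_only = [3, 4, 2, 3, 4, 3]
    wrong_with_retreat = [5, 4, 5, 6, 4, 5]
    diff, p = compare_conditions(wrong_only, wrong_with_retreat)
    print(f"Mean difference: {diff:.2f}, p-value: {p:.3f}")
```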
Thanks. That sounds good, but it is an experimental program, not something you’d observe on Less Wrong.
I expect that you could get more complex results than a simple yes or no. For example, with some primes or some observers, preparing a retreat would help; with others it wouldn't; and in some contexts you'd lose status and credibility directly for trying to prepare a retreat.
True. We are interested in communities where truth-tracking is high-status, so that cuts down the number of contexts. We would also probably need to evaluate it against other ways of coping with being incorrect (dissociation, e.g. Eliezer (1999), apology, etc.) and see whether it is a good strategy on average.