The ethic of hand-washing and community epistemic practice
Related to: Use the Native Architecture
When cholera moves through countries with poor drinking water sanitation, it apparently becomes more virulent. When it moves through countries that have clean drinking water (more exactly, countries that reliably keep fecal matter out of the drinking water), it becomes less virulent. The theory is that cholera faces a tradeoff between rapidly copying within its human host (so that it has more copies to spread) and keeping its host well enough to wander around infecting others. If person-to-person transmission is cholera’s only means of spreading, it will evolve to keep its host well enough to spread it. If it can instead spread through the drinking water (and thus spread even from hosts who are too ill to go out), it will evolve toward increased lethality. (Critics here.)
I’m stealing this line of thinking from my friend Jennifer Rodriguez-Mueller, but: I’m curious whether anyone’s gotten analogous results for the progress and mutation of ideas, among communities with different communication media and/or different habits for deciding which ideas to adopt and pass on. Are there differences between religions that are passed down vertically (parent to child) vs. horizontally (peer to peer), since the former do better when their bearers raise more children? Do mass media such as radio, TV, newspapers, or printing presses decrease the functionality of the average person’s ideas, by allowing ideas to spread in a manner that is less dependent on their average host’s prestige and influence? (The intuition here is that prestige and influence might be positively correlated with the functionality of the host’s ideas, at least in some domains, while the contingencies determining whether an idea spreads through mass media instruments might have less to do with functionality.)
Extending this analogy—most of us were taught as children to wash our hands. We were given the rationale, not only of keeping ourselves from getting sick, but also of making sure we don’t infect others. There’s an ethic of sanitariness that draws from the ethic of being good community members.
Suppose we likewise imagine that each of us contains a variety of beliefs, some well-founded and some not. Can we make an ethic of “epistemic hygiene” to describe practices that will selectively cause our more accurate beliefs to spread, and cause our less accurate beliefs to stay contained, even in cases where the individuals spreading those beliefs don’t know which is which? That is: (1) is there a set of simple, accessible practices (analogous to hand-washing) that will help good ideas spread and bad ideas stay contained; and (2) is there a nice set of metaphors and moral intuitions that can keep the practices alive in a community? Do we have such an ethic already, on OB or in intellectual circles more generally? (Also, (3) we would like some other term besides “epistemic hygiene” that would be less Orwellian and/or harder to abuse—any suggestions? Another wording we’ve heard is “good cognitive citizenship”, which sounds relatively less prone to abuse.)
Honesty is an obvious candidate practice, and honesty has much support from human moral intuitions. But “honesty” is too vague to pinpoint the part that’s actually useful. Being honest about one’s evidence and about the actual causes of one’s beliefs is valuable for distinguishing accurate from mistaken beliefs. However, a habit of focussing attention on evidence and on the actual causes of one’s own as well as one’s interlocutor’s beliefs would be just as valuable, and such a practice is not part of the traditional requirements of “honesty”. Meanwhile, I see little reason to expect a socially-endorsed practice of “honesty” about one’s “sincere” but carelessly assembled opinions (about politics, religion, the neighbors’ character, or anything else) to selectively promote accurate ideas.
Another candidate practice is the practice of only passing on ideas one has oneself verified from empirical evidence (as in the ethic of traditional rationality, where arguments from authority are banned, and one attains virtue by checking everything for oneself). This practice sounds plausibly useful against group failure modes where bad ideas are kept in play, and passed on, in large part because so many others believe the idea (e.g. religious beliefs, or the persistence of Aristotelian physics in medieval scholasticism; this is the motivation for the scholarly norm of citing primary literature such as historical documents or original published experiments). But limiting individuals’ sharing to the (tiny) set of beliefs they can themselves check sounds extremely costly. Rolf Nelson’s suggestion that we find words to explicitly separate “individual impressions” (impressions based only on evidence we’ve ourselves verified) from “beliefs” (which include evidence from others’ impressions) sounds promising as a means of avoiding circular evidence while also benefiting from others’ evidence. I’m curious how many here are habitually distinguishing impressions from beliefs. (I am. I find it useful.)
Are there other natural ideas? Perhaps social norms that accord status for reasoned opinion-change in the face of new good evidence, rather than norms that dock status from the “losers” of debates? Or social norms that take care to leave one’s interlocutor a line of retreat in all directions—to take care to avoid setting up consistency and commitment pressures that might wedge them toward either your ideas or their own? (I’ve never seen this strategy implemented as a community norm. Some people conscientiously avoid “rhetorical tricks” or “sales techniques” for getting their interlocutor to adopt their ideas; but I’ve never seen a social norm of carefully preventing one’s interlocutor from having status- or consistency pressures toward entrenchedly keeping their own pre-existing ideas.) These norms strike me as plausibly helpful, if we could manage to implement them. However, they appear difficult to integrate with human instincts and moral intuitions around purity and hand-washing, whereas honesty and empiricism fit comparatively well into human purity intuitions. Perhaps this is why these social norms are much less practiced.
In any case:
(1) Are ethics of “epistemic hygiene”, and of the community impact of one’s speech practices, worth pursuing? Are they already in place? Are there alternative moral frames that one might pursue instead? Are human instincts around purity too dangerously powerful and inflexible for sustainable use in community epistemic practice?
(2) What community practices do you actually find useful, for creating community structures where accurate ideas are selectively promoted?
Norms to protect against consistency and commitment pressures would be very valuable. One possible mechanism would be to make public ‘Red Team’ analyses: designate a forum where you will present the strongest case you can against one of your favored ideas, along these lines:
http://www.overcomingbias.com/2007/07/introducing-ram.html
This could be improved with rewards for success, which the speaker could provide herself using a mechanism like http://www.stickk.com/
With respect to religion, here’s some support for the vertical versus horizontal spread idea:
Catholicism: celibate priests; early spread by evangelization.
Buddhism: celibate monks; early spread by evangelization.
Islam: polygamy for believers; early spread by evangelization and violence, with capture of women for followers.
Judaism: priests and rabbis marry; a tribal religion.
Hinduism: contains vast diversity, but religious leaders have generally married; the religion is generally inherited and does not seek converts.
Carl, that sounds like it could be really useful for increasing the rate of alternate idea-generation and of idea-shift.
“Do you think our group should use the public “Red Team” analyses idea?” LessWrong or SIAI?
“Have you tried it?” Yes, I have tried a version of it and found it useful for improving my model of the issue in question at the margin.
I meant SIAI / people working on existential risk.
I’d be interested in hearing any details of what you did and didn’t find useful, or what you’d recommend, as far as public “Red Team” analyses go.
That should probably go to email rather than comments.
A great idea. Just like ‘idea markets’. Will people use them?
I am concerned that “taking sides”, even self-consciously taking the “opposite” side, might lead to polarization and emotional attachment to factual beliefs.
However, I agree that the idea of red-teaming is interesting and should be tried, as part of an effort to develop some rationalist community best practices.
Yes, this is a good point, one that Hopefully Anonymous correctly raises frequently. Rather, one should defend a point of view one rejects or has not considered, not specifically the reversal of one’s current view.
I look forward to the debate between geocentrism and epicycles.
Here’s a practice that might help: “why do I think that” monologues. This would be a group but not oppositional activity. The idea is to elaborate on a thing you currently believe to be true by specifying the reasons you believe it, the reasons you believe the reasons, etc., and trying to dig out the whole epistemological structure. The purpose of this is not so much to tear apart someone else’s epistemological structure (it wouldn’t work, nobody learns from that), but rather to learn to see for yourself the points of divergence—which might be far, far upstream of an individual idea.
Good idea.
Making thinking visible, by your suggested “why do I actually think that” monologues, would also help with the transfer of useful evidence-gathering or reasoning tricks, so that if e.g. you and I are talking, and you did something useful that I don’t know how to do in coming to a particular conclusion, I can see how it worked and maybe copy your trick in general.
I know math/science tutoring works better when people spell out more of their thinking than is common.
Jack: The idea of having citations everywhere is nice but unpragmatic. It would slow down conversation and dialogue tremendously.
One possible alternative is to have nested dialogues. Each sentence that makes some sort of claim links to another which explains the idea more thoroughly if that is what you disagree with. If you do not disagree with that point, then you can continue reading the main chain. This is similar to the idea of hypertext dialogue: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.40.3246 , and it is similar to what Eliezer has done at OB by being so self-referential.
I think the idea of a nested dialogue is a great one. You could also incorporate reader voting, so that weak arguments get voted off of the dialogue while stronger ones remain, thus winnowing down the argument to its essence over time.
I wonder if our hosts, or any contributors, would be interested in trying out such a procedure as a way of exploring a future disagreement?
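A minimal sketch of what such a nested dialogue with reader voting might look like as a data structure (names like Claim and prune are hypothetical, not an existing system):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Claim:
    text: str
    score: int = 0                                         # net reader votes
    support: List["Claim"] = field(default_factory=list)   # nested elaborations

    def vote(self, delta: int) -> None:
        self.score += delta

    def prune(self, threshold: int = 0) -> "Claim":
        """Keep only supporting claims whose score meets the threshold,
        so weak branches get voted off while stronger ones remain."""
        kept = [c.prune(threshold) for c in self.support if c.score >= threshold]
        return Claim(self.text, self.score, kept)

# Example: one main claim with a strong and a weak supporting sub-claim.
root = Claim("Mass media weakens selection for functional ideas")
root.support.append(Claim("Spread no longer depends on the host's prestige", score=3))
root.support.append(Claim("TV is just bad", score=-2))
print([c.text for c in root.prune(0).support])   # only the stronger branch survives
```

A reader who disagrees with a particular claim would follow that branch; a reader who accepts it would continue along the main chain.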
A couple of things that I am aware of in a religious community context (orthodox judaism). Of course in this case they were ‘adopted’ due to religious duty, and followed with intermittent success, but still pretty good ideas, especially coming from a couple of millenia ago.
De-biasing decision-making in a legal context: 1) bribes are forbidden; 2) family relationships disqualify the court; 3) family relationships disqualify the witnesses; 4) someone who ‘lacks compassion’ is disqualified from judging capital cases, specifically someone who does not have children; 5) people with a criminal record are disqualified from being witnesses; 6) people who do not contribute to the world (including ‘one who makes a living from gambling’; occasional gambling is OK, so much for a lot of our finance industry) are disqualified.
Avoiding information cascades: specifically, peddling rumors is forbidden.
Two points...
POINT ONE: The cholera example is even more fascinating when you drill down. The bacteria involved is “Vibrio cholerae”.
http://www.textbookofbacteriology.net/cholera.html
It seems to actually have numerically common non-pathogenic forms, and the ones with enterotoxin genes appear to have received them from bacteria-targeting viruses (bacteriophages). If I understand correctly, the toxin genes are integrated (but dormant) within bacterial genomes and infection by bacteriophage CTX triggers their expression.
http://www.mrc-lmb.cam.ac.uk/genomes/madanm/articles/cholera.htm
POINT TWO: It is probably worth keeping in mind the fundamental attribution error.
http://www.jstor.org/pss/4545312
This point is mostly in response to the focus here on habits and norms. Not to say that someone couldn’t work on those productively, but I suspect environmental effects like “mere proximity” have a lot more influence than would be assumed without consciously factoring them in, even over people in this community. The cholera example comes bundled with the same “context focused” message: the authors cited in the OP mostly focus not on hand washing but on the design of water purification infrastructure.
I can’t imagine that this crowd is unaware of this sort of thing, but I’m not aware of a better example of the “location, location, location” message than Google’s results from studying their internal betting markets.
http://googleblog.blogspot.com/2008/01/flow-of-information-at-googleplex.html
For myself, I tend to assume that if changes to my habits are to have any significant influence on me, many of them must be focused around shaping and choosing environments that support the kinds of thinking and living that I want to do. I’m still working on this process for myself; I have few unambiguously positive results to report, and the negative results are too embarrassing to list and would take a lot of text to describe in useful detail :-P
For lack of such text I’ll recommend “Lady of Mazes” for its exploration of themes around technology, “spatial nearness”, social networks, medium-message distinctions, choice architecture, suggestion systems, personal character, political awareness, value-technology interactions, and having a life that is felt to be meaningful. This book is less accessible than “Accelerando” but, for me, it has had much more staying power.
http://www.amazon.com/Lady-Mazes-Karl-Schroeder/dp/0765312190
http://www.amazon.com/Accelerando-Singularity-Charles-Stross/dp/0441012841
Useful practice: Systematize credibility assessments. Find ways to track the sincerity and accuracy of what people have said in the past, and make such information widely available. (An example from the legal domain would be a database of expert witnesses, which includes the number of times courts have qualified them as experts on a particular subject, and the number of times courts adopted or rejected their conclusions.) To the extent such info is widely available, it both helps to “sterilize” the information coming from untrustworthy sources and to promote the contributions that are most likely to be helpful. It also helps improve the incentive structure of truth-seeking discussions.
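A minimal sketch of what such systematized credibility tracking might look like, assuming claims can be logged along with a later correct/incorrect judgment (the data and field names are illustrative, not an existing database schema):

```python
from collections import defaultdict

# Each entry: (source, claim, whether the claim later turned out to be correct).
claims = [
    ("expert_A", "the bridge design meets code", True),
    ("expert_A", "the foundation needs no reinforcement", False),
    ("expert_B", "the foundation needs no reinforcement", False),
]

def track_record(claims):
    record = defaultdict(lambda: [0, 0])   # source -> [correct, total]
    for source, _claim, was_correct in claims:
        record[source][0] += was_correct
        record[source][1] += 1
    return {s: f"{c}/{t} correct" for s, (c, t) in record.items()}

print(track_record(claims))   # {'expert_A': '1/2 correct', 'expert_B': '0/2 correct'}
```

Making even this crude kind of track record widely visible changes the incentives for careless assertion.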
A version of this kind of thinking is central to the book “Beginning of Infinity” by David Deutsch.
He thinks that most ideas (or memes) can be roughly categorized into (1) those that are vulnerable to criticism/testing and (2) those that can survive critical evaluation. In an environment where people blindly accept what they are told by authorities, nothing favours the ideas that survive criticism, and therefore they are unlikely to survive—except maybe by coincidence. Instead, ideas that perform well on other metrics (such as “seeming important” or “making me look good when I express them”) will take over.
In contrast, if we change to an environment where ideas are criticised, where people perform experiments or doubt the word of authorities, then the ideas that can survive criticism will have a survival advantage over their competition and will spread.
David Deutsch argues that the first kind of society (word from authority, the way things have always been done etc) has dominated most human societies for most of history, but that we are currently in a wonderfully exceptional time (starting around the enlightenment) where criticism, argument and experiment are accepted.
So, in his view, I think the hand-washing analogue for ideas is criticising and questioning them.
One concept sharply distinguishing common law legal systems from Roman law ones is their approach to evidence.
By separating jurist and fact-finder (judge and jury in America), systems such as ours compensate for human biases (among their other functions) and prevent the fact-finder from obtaining some information that would on average make them dumber. For example, the system can notice that people generally set too much store by past criminal history and hearsay evidence, so a judge restricts when such evidence can even be heard by jurors.
Ideally, a fact-finder would only use evidence appropriately and would not need to be shielded. Where there is no separate fact-finder such as a jury, as in inquisitorial Roman-law-derived systems, it makes no sense to have rules of evidence by which the judge restricts what a fact-finder may hear and consider, as the judge is the fact-finder as well. Systems with one judge and no jury are not disadvantaged, provided the judge can calibrate on the evidence at least as well as he or she could have distinguished which evidence to pass along to a jury, were there one.
I can use a similar practice by asking someone for their opinion and giving them only some of the evidence I have—namely, the evidence I think is of the type that will do them more good than harm to hear.
The advantage of hand-washing is that it severely reduces a specific, otherwise easy-to-use vector for disease, and public restrooms create social pressure to wash your hands. Can we find something similar for cognitive citizenship? A vector for transmitting bad knowledge, like forwarded emails—maybe in the future I shouldn’t just reply with “Nope, Snopes, also stop.” but instead ask that the sender include a disclaimer?
How about a vector for transmitting bad cognitive algorithms? That one would be far more valuable to block but I haven’t been able to think of a large extant vector at all, much less one that might be attackable.
Regarding the e-mail forwards, I usually reply with “Your address has been added to my list of spammers. Any future e-mails from you will automatically be blocked.” Take that, Grandma!
The most promising concrete suggestion I see here is to adopt verbal conventions for distinguishing direct and indirect evidence. I’m not sure the word “impression” really connotes direct evidence, though with enough consistent usage in that mode we might carve out a common meaning to that effect. But we actually have a whole range of indirection; where would the cutoff in that range be? If I actually looked something up recently in an encyclopedia, while someone else just vaguely remembers looking it up sometime long ago, is that my impression or my belief?
The indication of the (kind of) evidence for a statement is known as evidentiality in linguistics.
The wikipedia article referenced above gives the example of Eastern Pomo, in which a verb takes one of 4 evidential suffixes, corresponding to the type of evidence: nonvisual sensory, inferential, hearsay, or direct knowledge (probably visual).
Great post, food for thought. I sometimes distinguish between beliefs and impressions, but should do so more.
If ideas change incrementally by mutation, if the average false idea does more damage than the truth, and if ideas trend noisily toward doing less damage as they get closer to the truth, is that a general moral argument against spreading and believing specific false ideas that seem beneficial? (Both because the neighbors of beneficial-seeming false ideas regress to a more damaging mean than the neighbors of the truth, and because the truth gains some stability against mutations by being the truth.)
For religions, perhaps see Ben’s “Parasite Ecology and the Evolution of Religion”.
David Stove also talked about it a bit (not focusing on the transmission part but more on detection) in “What is Wrong with Our Thoughts?” ( http://web.maths.unsw.edu.au/~jim/wrongthoughts.html ). I’m not sure there is a good solution, as it is almost impossible to know whether or not you are in the grip of some irrationality.
To give an example, it is conceptually easy to kill germs—bacteria simply can’t handle wide swings in humidity, temperature, acidity, etc. Washing hands with soap and hot water, cooking food, using bleach, etc. are easy, reliable ways to kill bacteria. They have an extremely low failure rate (anthrax is the toughest bacterium I know of, and it can be killed with enough bleach and ingenuity).
These limitations are caused by limitations in the fundamental processes that make life work. Metabolism has to happen in particular temperature ranges. Cell walls can only be made out of a few sorts of materials, and all of those materials react violently to extremely basic or acidic substances.
The basic problem is that, if there are analogous limitations to “mind viruses”, we simply don’t know what they are (beyond the trivial making the host commit suicide instantly).
The best I have come up with is the advice that Feynman gave in his “Cargo Cult Science” talk ( http://wwwcdf.pd.infn.it/~loreti/science.html ) - cultivate a brutal sense of honesty so that you have a small edge on the detection side of things.
The example by David Stove gave me shivers. I only wish it was shorter—not fewer examples, but shorter author’s comments between them.
This discussion is about hand washing, but now I think more about vaccination. I feel like reading Stove’s article vaccinated me against most of philosophy.
A good epistemic practice might be the courage to say “this is nonsense” or “this is insane” when reading a thoughtless flow of words. Perhaps the karma system of LW should include a reason why someone voted text up or down. Reasons for an upvote could be things like “interesting” or “well referenced”; reasons for a downvote could be things like “useless”, “offensive” or “insane”.
If some text does not make sense, members of a rational community should have the courage to say “this does not make sense to me”. (People usually don’t do this, because they fear it will make them appear stupid.) It is always a useful signal: at best it means that the author should communicate more clearly, at worst it means that the author wrote nonsense.
Suggested implementation: Clicking upvote or downvote could make a tiny textbox next to the thumb appear where you can (but are NOT obliged to) type a maximum of 15 letters, explaining the vote in one word.
Reasons for upvotes appear in tiny green letters, reasons for downvotes appear in tiny red letters. Identical words are not repeated but a + can appear next to them.
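A minimal sketch of how the tagged votes could be aggregated for display (hypothetical data and function names, not the actual karma system):

```python
from collections import Counter

MAX_REASON_LEN = 15

def render_reasons(votes):
    """votes: list of (direction, reason) pairs; direction is +1 or -1,
    reason an optional one-word string of at most 15 letters."""
    up = Counter(r[:MAX_REASON_LEN] for d, r in votes if d > 0 and r)
    down = Counter(r[:MAX_REASON_LEN] for d, r in votes if d < 0 and r)

    def fmt(counts):
        # Repeated words are collapsed into "word +n" instead of being listed again.
        return ", ".join(w if n == 1 else f"{w} +{n - 1}" for w, n in counts.most_common())

    return {"green": fmt(up), "red": fmt(down)}

votes = [(+1, "interesting"), (+1, "interesting"), (+1, "well referenced"),
         (-1, "insane"), (-1, None)]
print(render_reasons(votes))
# {'green': 'interesting +1, well referenced', 'red': 'insane'}
```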
That doctor would probably want to replace my broken parts with functional parts, rather than treat my diseases. The horror.
Just a very few of the labels used on this site are passwords I am thinking of, labels of the very few ways the forty are wrong. The resources enabling one to see underlying problems among the forty are on this website. However, it is better not to simply declare: “The problem behind most of them all is X”, where X is a label. Someone might believe me!
We can think of cholera transmission (or actually, any memetic spread) as consisting of a feedback loop.
There are positive and negative feedback loops, depending on what properties we’re examining: positive loops lead to a greater and greater value of the property, while negative loops converge on some set value.
Ideally we want to set up our mental environments so that error is trapped in negative feedback loops and reduced as much as possible, while correctness is amplified. In terms of assigned probability, wrongness should go to zero and correctness to one.
The methods for bringing this about are widely known but, oddly, not widely recognized and even less widely applied. They’re called logic.
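One concrete way to picture “wrongness goes to zero and correctness to one” is repeated Bayesian updating on independent evidence, which behaves like the negative feedback loop described above. A minimal sketch, using a made-up coin-bias example of my own rather than anything from the comment:

```python
import random

random.seed(0)
p_false = 0.5               # prior probability of the wrong hypothesis "the coin is fair"
for _ in range(100):
    heads = random.random() < 0.7         # the data actually come from a 70%-heads coin
    like_fair = 0.5                       # P(observation | fair coin)
    like_biased = 0.7 if heads else 0.3   # P(observation | 70%-heads coin)
    p_false = p_false * like_fair / (p_false * like_fair + (1 - p_false) * like_biased)

print(round(p_false, 4))   # typically very close to 0: the error is damped out
```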
First, I’d caution against reflexively questioning appeals to authority. Arguments from authority are not fallacies despite their traditional classification as such. There is no way for an individual to experimentally verify even a small fraction of the things she counts as knowledge: it would be an absurd and unnecessary barrier. Indeed, I think cautioning against arguments from authority is a kind of keeping kosher, an outdated purity norm that is no longer necessary given modern science and method. Once upon a time it made great sense to distrust experts because the experts were often bullshitting and there were few checks to prevent them from doing so. Similarly, now we know how to cook our shellfish and so you’re not likely to get sick from eating scallops.
The problem, rather, is claims being passed off as if the maker of the claim had in fact read the experts when they have not. In particular, false claims that do not contradict common sense go by undetected, and do not die. I’m thinking here of something like “Eskimos have an extraordinary number of words for snow because they’re around it all the time” (http://en.wikipedia.org/wiki/Eskimo_words_for_snow). Snopes is obviously a fantastic resource in this regard, but if we want to stop the spread of empirically false beliefs I might suggest dramatically expanding the use of Wikipedia’s “citation needed” demand. What if, instead of citing claims on occasion or as requested, every comment was just assumed to need a citation? If a claim lacked a citation, a dozen Less Wrong commenters would immediately respond with just the words “citation needed?”. If the original poster wants to avoid this, she simply includes a citation or gives a reason why she didn’t: “I’m just guessing”, “There are no empirical claims here”, etc. Eventually we’d just come to expect a citation or some sort of explanation, and if we didn’t see one we’d know to immediately question the claim.
(I don’t believe I’ve made any non-obvious empirical claims, but if someone wants to see evidence regarding the superiority of modern science as compared to medieval scholarship I can find that)
While I think you might be on the right track with respect to Wikipedia, this wouldn’t really work in casual (or even scholarly) discourse. There are a lot of things of which I’m confident and don’t have an immediately available justification, and tracking them down would be so time-consuming that I just wouldn’t bother to comment on anything.
Also, there is a disanalogy between Wikipedia and other kinds of scholarship; Wikipedia does not allow original research, in which the appropriate citation for a claim might be the preceding argument, and so should not be explicitly stated.
There are two cases where argument from authority is still clearly fallacious:
- respecting the authority of someone who is not an expert in the appropriate field—for instance, taking the Pope’s word on evolutionary biology
- regarding the authority as itself what gives truth to the claim. This happens, for instance, when one makes appeals to one’s own authority. If someone asks me for a citation and I say “I’m an expert, and I say so”, then that’s insufficient.
P.S. You should change that URL to a link so MarkDown doesn’t eat it.
There is a notion of an “information cascade”, which I think is relevant to this question.
As I understand it, if individuals have private information (individual impressions) and also observe other individuals’ public actions (beliefs), then it is possible for the group to “cascade” to a worse result than one might at first expect.
I don’t understand the idea well; my summary may be inaccurate or clumsy.
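For what it’s worth, the standard cascade story can be simulated directly. A minimal sketch of a naive counting version of the usual model (my own illustration, not the commenter’s or the formal economics treatment):

```python
import random

def run_cascade(n_agents=50, signal_accuracy=0.6, truth=1):
    """Each agent gets a private signal that matches the truth with probability
    signal_accuracy, sees all earlier public choices, and picks whichever option
    is favoured by (earlier choices + own signal), breaking ties toward the signal."""
    choices = []
    for _ in range(n_agents):
        signal = truth if random.random() < signal_accuracy else 1 - truth
        for_one = sum(choices) + (signal == 1)
        for_zero = (len(choices) - sum(choices)) + (signal == 0)
        if for_one == for_zero:
            choices.append(signal)
        else:
            choices.append(1 if for_one > for_zero else 0)
    return choices

random.seed(1)
wrong = sum(run_cascade()[-1] != 1 for _ in range(1000)) / 1000
print(wrong)   # a noticeable fraction of runs end locked onto the wrong answer
```

Once early choices happen to line up against the truth, later agents’ private signals can no longer outweigh the observed consensus, which is the “worse than expected” group outcome.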
The trouble with only passing on verified ideas is that it stops you from passing on ideas that you would like to get verified but that need significant resources, and the help of others, to test. E.g. the Higgs boson, an AI theory, a new low-level computer design.
So perhaps we need a way of coding ideas, e.g. “in need of testing”, “tested myself”, “tested second-hand”, “publicly available test data”.
Colour coding comments on a forum might be a good place to test this kind of scheme. Then people can easily discriminate what is verified and what is speculative.
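A minimal sketch of such an evidence-status coding scheme (the labels and colours here are hypothetical, just to make the idea concrete):

```python
from enum import Enum

class EvidenceStatus(Enum):
    NEEDS_TESTING = "grey"
    TESTED_MYSELF = "green"
    TESTED_SECOND_HAND = "yellow"
    PUBLIC_TEST_DATA = "blue"

def render(comment: str, status: EvidenceStatus) -> str:
    # Prefix the comment with its verification tag and display colour.
    return f"[{status.name.lower().replace('_', ' ')} / {status.value}] {comment}"

print(render("The new cache layout halves lookup time.", EvidenceStatus.TESTED_MYSELF))
# [tested myself / green] The new cache layout halves lookup time.
```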
Good point, pearson.
I wonder how “playing devil’s advocate” fits into the epistemic hygiene / good cognitive citizenship world view.
On the one hand, it can reduce groupthink and broaden the range of areas considered. On the other hand, it’s called devil’s advocate because you are advocating what are presumably bad ideas. If they are advocated too well, or you are not ‘flagged’ as operating in the devil’s advocate role, you might actually be spreading bad ideas.
I was thinking about this subject because I tend to slip into the devil’s advocate role in IRL conversations and was pondering whether the fact that I spend a lot of time advocating ideas I don’t support might be epistemically harmful (or at least a low-value use of time).
Edit: I distinguish this role in casual conversation from a more formal red team approach (which would be known to all team members and so not at risk of mistaking the motivation behind advocacy)
The term originated with the canonization of saints. The Devil’s Advocate was the lawyer tasked with making the argument that the person up for sainthood didn’t actually deserve to be recognized as a saint: either the miracles associated with them were faked or not actually miraculous, they did something during their life that the Catholic Church wouldn’t want associated with a saint, and so on. Another lawyer, God’s Advocate, was tasked with making the case for sainthood.
The practice was abolished in 1983, which opened the floodgates for granting sainthood, as it made the process faster and less difficult. Every now and then, someone will still be asked to testify against a potential saint (as Christopher Hitchens famously was with Mother Teresa), and his investigation of the claimed ‘miracle’ seemed like a pretty clear debunking to me.
In its original form, the Devil’s Advocate basically represents not extending the benefit of the doubt to proposed ideas, but examining them critically, and seems like a perfect example of good epistemic hygiene and formal red teaming.
A somewhat more productive interpretation of the conversational approach is probably steel manning, the inversion of straw manning.
Thanks for the information. Though seeing how formal the original “devil’s advocate” was again makes me worry about the wisdom of doing the same informally. Searching for patterns, it seems like the lauded examples of this are all formal and well flagged.
Peter Richerson here says:
He goes on to give some data about that.
My favourite example of this sort of thing is the demographic transition in meme-rich Japan. The native humans there live for a long time—but they are practically sterile.
To quote from my 2011 book on memetics:
Much the same point applies to the internet.
It’s not clear to me that high-rise apartment and office blocks foster meme spreading more than a hub-and-spokes model of broadcasters and tight-knit communities. (This may just be a nitpick about ‘maximizes’, or it may lead to a more subtle point. I’m not quite sure which it is.)