TL;DR: “Infohazard” means any kind of information that could be harmful in some fashion. Let’s use “cognitohazard” to describe information that could specifically harm the person who knows it.
Some people in my circle like to talk about the idea of information hazards, or infohazards – dangerous information. This isn’t a fictional concept – Nick Bostrom characterizes a number of different types of infohazards in his 2011 paper that introduces the term (PDF available here). Lots of kinds of information can be dangerous or harmful in some fashion – detailed instructions for making a nuclear bomb. A signal or hint that a person is a member of a marginalized group. An extremist ideology. A spoiler for your favorite TV show. (Listen, an infohazard is a kind of hazard, not a measure of intensity. A papercut is still a kind of injury!)
I’ve been in places where “infohazard” is used in the Bostromian sense casually – to talk about, say, dual-use research of concern in the biological sciences, and to describe the specific dangers that might come from publishing procedures or results.
I’ve also been in more esoteric conversations where people use the word “infohazard” to talk about a specific kind of Bostromian information hazard: information that may harm the person who knows it. This is a stranger concept, but there are still lots of apparent examples – a catchy earworm. “You just lost the game.” More seriously, an easy method of committing suicide for a suicidal person. A prototypical fictional example is the “basilisk” fractal from David Langford’s 1988 short story BLIT, which kills you if you see it.
This is a subset of the original definition because it is harmful information, but it’s expected to harm the person who knows it in particular. For instance, detailed schematics for a nuclear weapon aren’t really expected to bring harm to a potential weaponeer – the danger is that the weaponeer will use them to harm others. But fully internalizing the information that Amazon will deliver you a 5-pound bag of Swedish Fish whenever you want is specifically a danger to you. (…Me.)
This disparate use of terms is confusing. I think Bostrom and his intellectual kith get the broader definition of “infohazard”, since they coined the word and are actually using it professionally.*
I propose we call the second thing – information that harms the knower – a cognitohazard.
Pictured: Instantiated example of a cognitohazard. Something something red herrings.
This term is shamelessly borrowed from the SCP Foundation, which uses it the same way in fiction. I figure the usage can’t make the concept sound any more weird and sci-fi than it already does.
(Cognitohazards don’t have to be hazardous to everybody. Someone who hates Swedish Fish is not going to spend all their money buying bags of Swedish Fish off of Amazon and diving into them like Scrooge McDuck. For someone who loves Swedish Fish – well, no comment. I’d call this “a potential cognitohazard” if you were to yell it into a crowd with unknown opinions on Swedish Fish.)
Anyways, hope that clears things up.
*For a published track record of this usage, see: an academic paper from Future of Humanity Institute and Center for Health Security staff, another piece by Bostrom, an opinion piece by esteemed synthetic biologist Kevin Esvelt, a piece on synthetic biology by FHI researcher Cassidy Nelson, a piece by Phil Torres.
(UPDATE: The version I initially published proposed the term “memetic hazard” rather than “cognitohazard.” Commenter MichaelA kindly pointed out that “memetic hazard” already meant a different concept that better suited that name. Since I had only just put out the post, I decided to quickly backpedal and switch out the word for another one with similar provenance. I hate having to do this, but it sure beats not doing it. Sorry for any confusion, and thank you, MichaelA!)
I agree that it’s valuable to note that information hazards can sometimes hurt the very person who receives the information. And I agree that Bostrom’s sense of information hazards is definitely broader than just that, so if people are using “infohazards” to mean just information that harms the person who knows it specifically, then clearing up their confusion seems good.
But I don’t know if memetic hazards is a great term for that, because it seems most natural to use the label “memetic hazards” for a superset of information hazards, not a subset. “Memes” are ideas or units of culture, of which true information is just one type. So it seems most natural to use the term “memetic hazards” for something like “harms that result from ideas” (or perhaps “ideas that spread”, or “ideas that evolve”), rather than just from true information, and rather than just harms for the knower (or just for the holder of the idea).
I think the fact that memetic hazards is already used in some places the way you propose using it is one reason to accept the term anyway. But I’m not sure it’s a strong enough reason, given 1) how unintuitive the term seems to be for what we want it to capture, and 2) the fact that the term seems like it is intuitive for a separate concept that would also be worth talking about (so perhaps we should hesitate to use up the term for something else). And it seems somewhat hard to come up with alternative terms for that separate concept—in particular, “idea hazards” is already used in a different way by Bostrom, so that’s not a good candidate.
In fact, “meme hazards” has already been used in roughly the way I suggest above, and I’m currently helping revamp the ideas in the post that uses it, and was hoping to use the term “memetic hazards” for that purpose. (And this was going to be published this week, ironically enough—we’ve been scooped!) We did notice that the term “memetic hazards” was already used in the way you suggest, but thought that that use was sufficiently non-mainstream and sufficiently non-intuitive that it might make sense to stick with our proposed usage.
I don’t have great ideas for an alternative term for the concept you wish to point to, but perhaps something in the direction of “knower-harming infohazards”, “self-affecting infohazards”, or “internalised infohazards”?
Aw, carp, you’re totally right. It had been pointed out to me while I was getting feedback that “memetic hazard” doesn’t clearly gesture at the thing, but I hadn’t realized that there was a coherent and reasonable definition of “memetic hazard” that’s the thing it sounds like it should mean.
I do actually have one more term up my sleeve, which is “cognitohazard”, which comes about the same way and more clearly indicates the danger. (Which is from thinking / “cognitizing” (?) about it.)
I’m trying to think of a way to switch this out now that doesn’t cause people to get confused or think that the [infohazard vs. knowledge that harms the knower] distinction doesn’t matter. Hmmm. Let me think if I should just edit these posts now.
Update: I have swapped this out. I appreciate your feedback, because the distinction you point to seems like a valuable one, and I don’t want to step on a great term. Hopefully this resolves the issue?
“Memetic hazards” is a fairly well-established term for the thing referred to as “cognitohazard” here. If you google it you can find its use in several places, not just SCP (where I think it arose). I honestly object to trying to establish “meme hazard” to mean something different, especially since I don’t think that concept (a superset of “infohazard” that also includes falsehoods) is very useful (most people agree that falsehoods are bad, and the harms of spreading false information are well-known).
To say that meme hazards has already been used in that sense is technically true, but the term’s usage in that post was defecting from common usage, and its use in other draft posts has been objected to by several people, including (but not limited to) me. I’ve been working on info-hazardy stuff for a while, and have been asked by several people about the relationship between info hazards and memetic hazards, with the latter being used in the original “harm to the knower” sense. I take this as evidence that the term is in (somewhat) common usage, and as such should not be repurposed in a way that is virtually guaranteed to cause confusion and derail conversations with lengthy explanations.
As an analogy, the fact that the Council of Europe, Council of the European Union, and European Council are all existent and different things is widely perceived as silly and bad. Similarly, given that the term “memetic hazard” is already taken to mean one thing (which is kind of but not exactly a subset of information hazards), introducing “meme hazard” as a term for a related but importantly different thing (which is a superset of info hazards) seems to me to be clearly a bad move. Just find a different term already, and leave “memetic hazard” where it is.
On reflection, I think I maybe need to give some justification for why I object so strongly to muddying the terminological waters. Also, this and the preceding comment are directed at MichaelA and Convergence Analysis, not at eukaryote (I put it in the wrong thread, sorry).
Anyone who’s been educated in a technical field knows what it’s like to encounter a really nasty terminological tangle. Over decades, lots of different terms build up for lots of related but distinct concepts; many of these terms are similar even though their referents are importantly different, or different even though their referents are the same. Teachers spend a lot of time untangling these terminological difficulties, and students spend a lot of time being confused by them. They also make explaining the issues to laypeople much more difficult than it needs to be. Even though a better, simpler terminology would clearly be preferred, the costs of switching are nearly always greater than the costs of sticking with convention, and so terminological confusion tends to get worse over time, like junk DNA accumulating in a genome.
This will almost inevitably happen with any intellectually tricky field, but we can at least do our best to mitigate it by being aware of the terminology that has gone before and making sure we pick terms that are minimally likely to cause confusion. We certainly shouldn’t deliberately choose terms that are extremely similar to existing terms when their meanings are very different. Especially if the issue has been brought to your attention, since this provides additional evidence that confusion is likely. Deliberately trying to repurpose a term to mean something importantly different from its original meaning is even worse.
In the case of the various Europe-associated councils, it would clearly have been desirable for the namers of later ones to have stopped and tried to come up with a better name (e.g. one that doesn’t involve the word “council”, or provides some additional distinguishing information). Instead, they decided (perhaps with some justice, I don’t know) that their usage was better, ploughed ahead, and now we’re stuck with a horrible confusing tangle.
Ditto this case with “meme hazards” and “memetic hazards”. The meaning of “memetic hazard” is somewhat established (insofar as anything in this field is established). But those proposing “meme hazard” think (with some justice) that their usage makes more sense, and so want to try and override the existing usage. If they fail, we will have two extremely similar terms persisting in the culture, meaning importantly but confusingly different things (one roughly a subset of info hazards, the other a superset). We’ll all have to spend time first understanding and then explaining the difference, and even then someone will occasionally use “meme hazard” to refer to (the established meaning of) “memetic hazard” or vice-versa, and confusion will result. And all this will have been avoidable with just slightly more considerate choice of new terminology.
There are plenty of other terms one could use for the superset of information hazards that includes false information. I’ve suggested some in the past (communication hazard, concept hazard); I’m sure more could be generated with a little effort. I’m not convinced the superset concept is important enough to be worth crystallising into a term at all, but I wouldn’t be too surprised if I’m wrong about that. Even in that world, though, I think one still has a duty to pick terms that are optimised to avoid confusion, rather than (as in this case) to cause it.
[Edited to remove “idea hazard” as a suggestion, since MichaelA correctly pointed out above that it has a different meaning, and to remove inflammatory language I don’t endorse.]
I strongly agree with the need for people to be more careful when inventing terminology, and have been puzzled why many don’t seem to be as concerned about it (e.g., often refuse to or are reluctant to change confusing terminology when the potential for confusion is pointed out to them). I think it’s probably worth writing a top-level post about this.
So I think there’s an interesting distinction here between bad terminology you just made up, and bad terminology you’re inheriting from others.
If you just invented a new term and several people think it’s not a good term, they’ll probably seem wrong to you, and there’s a good chance you’ll be wrong about that and should change it — before your new term has time to take root. There should definitely be a duty on people to make sure their new terms are not confusing.
On the other hand, if you (and at least some other people) think an existing term is bad you have two choices: you can accept it for the purposes of consistency or try to change it to something less confusing before its reach grows further. Both strategies are trying to avoid confusion in some sense, but differ in their variance; the first is accepting the existing confusion for the sake of not creating further confusion, and the second is risking further confusion to try and reduce existing confusion.
Which of these is the correct course of action probably depends on how problematic the old name is, how widespread it is, and how much power you have to change it. Personally, I think it’s less confusing to memorise one bad term than to remember the relationship between one bad term and one better term, so I think the risk of proliferating terminology is probably not worth taking most of the time. But sometimes there’ll be a pretty compelling reason to make the change, especially if you can co-ordinate enough top people in the field to make it stick.
So far I’ve mostly been talking about the situation where something is called X, and for some reason you think it should be called Y. This is pretty common (see e.g. the debates around what to call clean/cultured/cultivated/… meat). I think this current disagreement over “memetic hazard” is worse than that, though, because rather than trying to change the name of a thing, the goal is to change the thing a name refers to. So we have a sort of shuffle proposed, where the name X is transferred from thing A to thing B and a new name Y is proposed for thing A. This seems much more likely to cause confusion to me.
(Personal, quick-fire views—not Convergence’s)
Ok, your comments have definitely updated me towards thinking the non-intuitive (in my view) usage of “memetic hazards” is more established than I’d thought, and thus towards being less confident about trying to appropriate the term for what I still do think is a more intuitive usage. I also definitely agree that conflicting terminology is worth making efforts to avoid, where possible.
One thing I should add is that we’re ideally aiming to not just point at a superset of infohazards, but also to emphasise the “memetic” aspects (mutations, proliferation of more catchy ideas, etc.). I think I agree that there’s not that much value in a term just staking out the idea of “true or false info that causes harm”, but a term that also highlights those “memetic” aspects does seem (to me) worthwhile. And unfortunately, “communication/concept hazard” doesn’t seem to capture that (although I did consider “concept hazard” for a moment, when I started reading your comments and thus started again on trying to find alternatives).
And reading Dagon’s comment has also updated me further towards thinking that a term that highlights the memetic aspects is useful, and that the current usage of “memetic hazards” is not ideal!
So ultimately I mostly feel unsure how best to proceed. I do feel pretty confident that it’d be better to use a term other than “memetic hazard” for an info hazard where it’s the knower who’s hurt by knowing. I think several of the suggestions that have been made seem workable, including “direct information hazard” (thanks for highlighting that option). “Direct information hazard” could also intuitively mean other things, but it doesn’t seem to have one obvious intuitive meaning that conflicts with what we want to use it for, so it beats “memetic hazards” on that front. (So this paragraph is me suggesting that, while perhaps we shouldn’t repurpose the term “memetic hazards”, we should still avoid spreading or further entrenching the current, confusing usage of that term, and should jump aboard a different term for that concept instead.)
But I feel less sure what to call this other thing we want to talk about. It seems the options are:
Charge ahead with “meme hazard” or “memetic hazard”—but now that I’m aware it seems to be used more widely than I thought, I’m less inclined to go with that. (It also maybe seems like a somewhat bad plan from a sort of outside view or epistemic humility perspective, if there are people giving reasonable-seeming arguments against that usage.)
Try to come up with another term that also seems good for this concept. I think this option is ideal, but I don’t have any suggestions right away.
“Idea mutation hazard” isn’t quite right, as it’s also about things like more catchy or simplified ideas dominating over time. “Idea selection hazard” isn’t quite right for the inverse reason.
Do you (or someone else) have any good suggestions?
Give up on having a specific term for this concept, and just talk about it in more longhanded ways. That’s probably acceptable if we can’t think of another term, I guess.
Thanks for this. I think that even with the edits I was probably too confrontational above, so sorry about that. I’m not sure why this issue is emotional for me; that seems weird.
To start off, I agree that, ceteris paribus, the current usage of “memetic hazard” is strange. It has the advantage over e.g. “direct IH” of sounding cool and scary, which was probably desirable for SCP-like uses but is perhaps not ideal for people actually trying to do serious work on info-hazardy concepts.
I notice a conflict in my thoughts here, where I want to be able to refer to knower-harming hazards with a term that is (a) distinctive, evocative and catchy (such that it seems compelling and is actually used in real situations) and (b) sober, precise and informative (such that it can be used productively in technical writing on the subject). “Memetic hazard” satisfies (a) but not (b); “direct information hazard” satisfies (b) but not (a). This is not ideal.
I think for academic-ish work the term “direct info hazard” or something similarly bland is a fine descriptor for “knower-harming information”. I’m not sure what sort of term we would want to use for more popular work. “Knowledge hazard” seems okay to me? But I agree more suggestions here would be valuable.
Insofar as “memetic hazard” is being used simply to mean “knower-harming information hazard”, this seems reasonable. The term is still obscure enough that if enough people jumped on a new term it could probably gain more traction, and “memetic hazard” can be left as an obscure and kinda-confusing synonym of [whatever the new term is] that people bring up from time to time but isn’t widely used outside SCP.
[One counter-consideration. Having skimmed some existing usage of “memetic hazard” on the internet, it seems some people are using it to mean a directly (?) harmful idea that also encourages its bearers to spread it. The blandest form of this would be an idea that is harmful to know but fun to talk about; sci-fi (including SCP) contains many much more extreme instances. This usage does seem to make more use of the “memetic” aspect of the name. It also seems to (a) be hard to really capture precisely and (b) deviate from how I typically use (and how eukaryote originally used) the term, so it might be better to just leave that aspect alone for now.]
The question remains of what to call the concept you are trying to capture in your work. At present I don’t think I have a good enough understanding of what it is you’re going for to offer great suggestions. From my limited understanding, I do think “communication hazard” could do the trick – it seems to me to capture (a) the generality, i.e. that we’re not focusing on true or false info; (b) the selection idea, i.e. that part of the hazard arises from how well different ideas spread via communication, and (c) part of the mutation idea, namely the part that arises from imperfect person-to-person communication (rather than within-mind mutations).
Assuming you still think “communication hazard” is no good, I might suggest making a top-level post explaining the concept you want to capture and looking for more/better suggestions? That seems like it could generate some new ideas; we could also use a similar approach to look for suggested replacements of the current usage of “memetic hazard”. Regardless, I would also suggest that, while it’s definitely worth putting in some time and effort (and gathering of multiple opinions) to optimise terminology, it may still sometimes be worth adopting a term that is less ideal at describing what you want in order to avoid cross-term confusion.
I’ve just posted an introduction to/summary of/clarification of the concept of information hazards on the EA Forum. I was halfway through writing that when I came across this post and made these comments. You and eukaryote helped bring to my attention that it’d be valuable to note the “direct information hazards”/”knowledge hazards”/whatever subset, and that it’s just a subset, so that’s in there (and thanks for that!). The post doesn’t touch on the “memetic” side of things—Convergence will get to that later, and will think carefully about what term to use when we do.
Anyway, I’d be interested in your thoughts on that post, since you seem to have thought a lot about these topics :)
(ETA: I found that latest comment of yours interesting food for thought, and upvoted it, but don’t have specific things to add as a reply at the moment.)
It also seems worth noting that (as I imagine you’re aware, and as you partly allude to) Bostrom does outline several types of the sort of infohazard where the knower of the information is the one who is harmed, and gives examples. E.g.:
Spoiler hazards (which you give an example of)
“Knowing-too-much hazard: Our possessing some information makes us a potential target or object of dislike.” For example, “In the witch hunts of the Early Modern period in Europe, a woman’s alleged possession of knowledge of the occult or of birth control methods may have put her at increased risk of being accused of witchcraft”
“Commitment hazard: There is a risk that the obtainment of some information will weaken one’s ability credibly to commit to some course of action.”
(There are more, and examples of the types, but I’m just writing this quickly.)
This could suggest that no new label is needed. That said, I can see an argument for some quick way of expressing something broader than any one of those types, but narrower than all infohazards. My nitpick/discussion-prompt is more about what term to use, if we do wish to refer to that concept.
(Just to be super extra clear, I don’t mean this as at all argumentative or as countering what I see as your main aims.)
Regardless of scuffles over the name, I do want to express support for the idea of spreading awareness of memetic hazard (/whatever) as its own distinct concept. I’ve definitely been in conversations where I’ve said something mildly memetic-hazardy (e.g. “hey, that thing kinda reminds me of this other, unpleasant thing”) and got the response of “hey, info hazard”. And I think having a more precise term for that kind of knower-harming information in slightly wider parlance would be helpful.
I’d recommend just using https://wiki.lesswrong.com/wiki/Information_hazard as a base ontology. The knowledge of Swedish Fish availability would be a temptation or distraction hazard.
I’d reserve “memetic hazard” for information hazards related to beliefs passed via a memetic route (as a metaphor from genetic information). These may be true or false (or may be models or belief-systems that are neither true nor false), but are “catchy” in terms of propagating the ideas in humans. To my mind, it’s about the transmission and encoding of the information, not the effect on the receiver. There can be memetic temptation hazards and memetic biasing hazards, for instance.
This is roughly the usage of the term that seems to make sense to me, but Will Bradshaw seems to present some good reasons against using that term for that concept. I replied in that thread.
Do you happen to have any other good ideas of terms to capture that concept (i.e., to highlight how ideas can cause harm after/through mutating or being “selected” memetically)?
What do you think of the change? (I think Bostrom’s terms are fine, but it’s still useful to have a word for the broad category of “knowing this may hurt you”.)
It is an unfortunate fact that everyone who starts to work on info hazards at some point decides to come up with their own typology. :P
As a result, there is a surfeit of terms here. Anders Sandberg has proposed “direct information hazard” as a broad category of info hazards that directly harm the knower, and I’ve largely adopted his usage. It does seem desirable to have a term for any kind of communication/information that harms the knower, regardless of whether it is true or false or neither.
“Cognition hazard” kind of gestures at this but doesn’t really capture it for me. I would guess a cognition hazard would be something that (a) is hazardous because it causes you to think about it a lot (brooding/obsessing/etc) or (b) is hazardous if you do so. This feels like a smaller/more technical category than what is usually captured by “memetic hazard”. Maybe “knowledge hazard” would do the trick, if you definitely want to abandon “standard” usage (such as it is)?
Some quick musings on alternatives for the “self-affecting” info hazard type:
Personal hazard
Self info hazard
Self hazard
Self-harming hazard
I’d say the first, third, and fourth of those options sound too broad—they don’t make it clear that this is about info. But I think something in that direction could be good (e.g., I proposed in a top-level comment “self-affecting info hazards”). I also think the term Anders Sandberg uses is acceptable.
Mostly I’d just want to steer away from using a term that sounds like it obviously should mean some other specific thing (which I’d personally say is the case for “memetic hazards”).
Brainstorming:
Cognition hazard
Knowledge hazard
Awareness hazard
Knower hazard (sounds too much like “Noah hazard”?)
Realisation hazard
Comprehension hazard
...
I also thought of “culture hazard” but that sounds like a different thing.
I think it’s probably okay for the term to not be immediately intensely evocative of the thing we’re going for, as long as it’s (a) catchy and (b) makes enough sense once explained to be memorable. I do think “memetic hazard” meets both of these criteria, though perhaps (a) more than (b).
Ironically, and perhaps unfortunately, the current usage of “memetic hazard” does seem to be very memetically fit. :P
I do want to flag that, following my own advice above, I would switch to “cognition hazard”/”cognitohazard” if that has the most consensus and we can’t come up with a better term, as long as we also find some new term for the other competing meaning of “memetic hazard”; this seems to be the strategy that minimises total confusion/conflict.