I strongly agree with the need for people to be more careful when inventing terminology, and have been puzzled why many don’t seem to be as concerned about it (e.g., they’re often reluctant, or simply refuse, to change confusing terminology when the potential for confusion is pointed out to them). I think it’s probably worth writing a top-level post about this.
So I think there’s an interesting distinction here between bad terminology you just made up, and bad terminology you’re inheriting from others.
If you just invented a new term and several people think it’s not a good one, they’ll probably seem wrong to you, but there’s a good chance you’re the one who’s wrong and should change it before your new term has time to take root. There should definitely be a duty on people to make sure their new terms are not confusing.
On the other hand, if you (and at least some other people) think an existing term is bad you have two choices: you can accept it for the purposes of consistency or try to change it to something less confusing before its reach grows further. Both strategies are trying to avoid confusion in some sense, but differ in their variance; the first is accepting the existing confusion for the sake of not creating further confusion, and the second is risking further confusion to try and reduce existing confusion.
Which of these is the correct course of action probably depends on how problematic the old name is, how widespread it is, and how much power you have to change it. Personally, I think it’s less confusing to memorise one bad term than to remember the relationship between one bad term and one better term, so I think the risk of proliferating terminology is probably not worth taking most of the time. But sometimes there’ll be a pretty compelling reason to make the change, especially if you can co-ordinate enough top people in the field to make it stick.
So far I’ve mostly been talking about the situation where something is called X, and for some reason you think it should be called Y. This is pretty common (see e.g. the debates around what to call clean/cultured/cultivated/… meat). I think this current disagreement over “memetic hazard” is worse than that, though, because rather than trying to change the name of a thing, the goal is to change the thing a name refers to. So we have a sort of shuffle proposed, where the name X is transferred from thing A to thing B and a new name Y is proposed for thing A. This seems much more likely to cause confusion to me.
(Personal, quick-fire views—not Convergence’s)
Ok, your comments have definitely updated me towards thinking the non-intuitive (in my view) usage of “memetic hazards” is more established than I’d thought, and thus towards being less confident about trying to appropriate the term for what I still think is a more intuitive usage. I also definitely agree that conflicting terminology is worth making efforts to avoid, where possible.
One thing I should add is that we’re ideally aiming to not just point at a superset of infohazards, but also to emphasise the “memetic” aspects (mutations, proliferation of more catchy ideas, etc.). I think I agree that there’s not that much value in a term just staking out the idea of “true or false info that causes harm”, but a term that also highlights those “memetic” aspects does seem (to me) worthwhile. And unfortunately, “communication/concept hazard” doesn’t seem to capture that (although I did consider “concept hazard” for a moment, when I started reading your comments and thus started again on trying to find alternatives).
And reading Dagon’s comment has also updated me further towards thinking that a term that highlights the memetic aspects is useful, and that the current usage of “memetic hazards” is not ideal!
So ultimately I mostly feel unsure how best to proceed. I do feel pretty confident that it’d be better to use a term other than “memetic hazard” for an info hazard where it’s the knower who’s hurt by knowing. I think several of the suggestions that have been made seem workable, including “direct information hazard” (thanks for highlighting that option). “Direct information hazard” could also intuitively mean other things, but it doesn’t seem to have one obvious intuitive meaning that conflicts with what we want to use it for, so it beats “memetic hazards” on that front. (So this paragraph is me suggesting that, while perhaps we shouldn’t repurpose the term “memetic hazards”, we should still avoid spreading or further entrenching the current, confusing usage of that term, and should jump aboard a different term for that concept instead.)
But I feel less sure what to call this other thing we want to talk about. It seems the options are:
1. Charge ahead with “meme hazard” or “memetic hazard”—but now that I’m aware it seems to be used more widely than I thought, I’m less inclined to go with that. (It also maybe seems like a somewhat bad plan from a sort of outside-view or epistemic-humility perspective, if there are people giving reasonable-seeming arguments against that usage.)
2. Try to come up with another term that also seems good for this concept. I think this option is ideal, but I don’t have any suggestions right away.
   - “Idea mutation hazard” isn’t quite right, as the concept is also about things like more catchy or simplified ideas dominating over time. “Idea selection hazard” isn’t quite right for the inverse reason.
   - Do you (or someone else) have any good suggestions?
3. Give up on having a specific term for this concept, and just talk about it in more longhanded ways. That’s probably acceptable if we can’t think of another term, I guess.
Thanks for this. I think that even with the edits I was probably too confrontational above, so sorry about that. I’m not sure why this issue is emotional for me; that seems weird.
To start off, I agree that, ceteris paribus, the current usage of “memetic hazard” is strange. It has the advantage over e.g. “direct IH” of sounding cool and scary, which was probably desirable for SCP-like uses but is perhaps not ideal for people actually trying to do serious work on info-hazardy concepts.
I notice a conflict in my thoughts here, where I want to be able to refer to knower-harming hazards with a term that is (a) distinctive, evocative and catchy (such that it seems compelling and is actually used in real situations) and (b) sober, precise and informative (such that it can be used productively in technical writing on the subject). “Memetic hazard” satisfies (a) but not (b); “direct information hazard” satisfies (b) but not (a). This is not ideal.
I think for academic-ish work the term “direct info hazard” or something similarly bland is a fine descriptor for “knower-harming information”. I’m not sure what sort of term we would want to use for more popular work. “Knowledge hazard” seems okay to me? But I agree more suggestions here would be valuable.
So this paragraph is me suggesting that, while perhaps we shouldn’t repurpose the term “memetic hazards”, we should still avoid spreading or further entrenching the current, confusing usage of that term, and should jump aboard a different term for that concept instead.
Insofar as “memetic hazard” is being used simply to mean “knower-harming information hazard”, this seems reasonable. The term is still obscure enough that if enough people jumped on a new term it could probably gain more traction, and “memetic hazard” can be left as an obscure and kinda-confusing synonym of [whatever the new term is] that people bring up from time to time but isn’t widely used outside SCP.
[One counter-consideration. Having skimmed some existing usage of “memetic hazard” on the internet, it seems some people are using it to mean a directly (?) harmful idea that also encourages its bearers to spread it. The blandest form of this would be an idea that is harmful to know but fun to talk about; sci-fi (including SCP) contains many much more extreme instances. This usage does seem to make more use of the “memetic” aspect of the name. It also seems to (a) be hard to really capture precisely and (b) deviate from how I typically use (and how eukaryote originally used) the term, so it might be better to just leave that aspect alone for now.]
The question remains of what to call the concept you are trying to capture in your work. At present I don’t think I have a good enough understanding of what you’re going for to offer great suggestions. From my limited understanding, I do think “communication hazard” could do the trick – it seems to me to capture (a) the generality, i.e. that we’re covering both true and false info; (b) the selection idea, i.e. that part of the hazard arises from how well different ideas spread via communication; and (c) part of the mutation idea, namely the part that arises from imperfect person-to-person communication (rather than within-mind mutations).
Assuming you still think “communication hazard” is no good, I might suggest making a top-level post explaining the concept you want to capture and looking for more/better suggestions? That seems like it could generate some new ideas; we could also use a similar approach to look for suggested replacements of the current usage of “memetic hazard”. Regardless, I would also suggest that, while it’s definitely worth putting in some time and effort (and gathering of multiple opinions) to optimise terminology, it may still sometimes be worth adopting a term that is less ideal at describing what you want in order to avoid cross-term confusion.
I’ve just posted an introduction to/summary of/clarification of the concept of information hazards on the EA Forum. I was halfway through writing that when I came across this post and made these comments. You and eukaryote helped bring to my attention that it’d be valuable to note the “direct information hazards”/”knowledge hazards”/whatever subset, and that it’s just a subset, so that’s in there (and thanks for that!). The post doesn’t touch on the “memetic” side of things—Convergence will get to that later, and will think carefully about what term to use when we do.
Anyway, I’d be interested in your thoughts on that post, since you seem to have thought a lot about these topics :)
(ETA: I found that latest comment of yours interesting food for thought, and upvoted it, but don’t have specific things to add as a reply at the moment.)