We already have a Schelling point for “infohazard”: Bostrom’s paper. Redefining “infohazard” now is needlessly confusing. (And most of the time I hear “infohazard” it’s in the collectively-destructive smallpox-y sense, and as Buck notes this is more important and common.)
If Bostrom’s paper is our Schelling point, ‘infohazard’ encompasses much more than just the collectively-destructive smallpox-y sense.
Here’s the definition from the paper.
Information hazard: A risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.
‘Harm’ here does not mean ‘net harm’. There’s a whole section on ‘Adversarial Risks’, cases where information can harm one party by benefitting another party:
In competitive situations, one person’s information can cause harm to another even if no intention to cause harm is present. Example: The rival job applicant knew more and got the job.
ETA: localdeity’s comment below points out that it’s a pretty bad idea to have a term that colloquially means ‘information we should all want suppressed’ but technically also means ‘information I want suppressed’. This isn’t just pointless pedantry.
Yeah, that concept is literally just “harmful info,” which takes no more syllables to say than “infohazard,” and barely takes more letters to write. Please do not use the specialized term if your actual meaning is captured by the English term, the one which most people would understand immediately.
I kinda agree. I still think Bostrom’s “infohazard” is analytically useful. But that’s orthogonal. If you think other concepts are more useful, make up new words for them; Bostrom’s paper is the Schelling point for “infohazard.”
In practice, I’m ok with a broad definition because when I say “writing about that AI deployment is infohazardous” everyone knows what I mean (and in particular that I don’t mean the ‘adversarial risks’ kind).