
Information Hazards


An Information Hazard (or infohazard for short) is true information that could harm people, or other sentient beings, if known. Determining policies on information hazards is tricky: some information might genuinely be dangerous, but excessive control of information carries its own perils.

This tag is for discussing the phenomenon of Information Hazards and what to do about them, not for posting actual Information Hazards themselves.

An example might be a formula for easily achieving cold fusion in one's garage, which would be very dangerous in the wrong hands. Alternatively, it might be an idea that causes great mental harm to anyone who learns it.

Bostrom’s Typology of Information Hazards

Nick Bostrom coined the term information hazard in a 2011 paper [1] in Review of Contemporary Philosophy. He defines it as follows:

Information hazard: A risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.

Bostrom points out that this is in contrast to the generally accepted principle of information freedom and that, while rare, the possibility of information hazards needs to be considered when making information policies. He proceeds to categorize and define a large number of sub-types of information hazards. For example, he defines artificial intelligence hazard as:

Artificial intelligence hazard: There could be computer-related risks in which the threat would derive primarily from the cognitive sophistication of the program rather than the specific properties of any actuators to which the system initially has access.

The following typology is reproduced from Bostrom 2011 [1].

TYPOLOGY OF INFORMATION HAZARDS

I. By information transfer mode

- Data hazard
- Idea hazard
- Attention hazard
- Template hazard
- Signaling hazard
- Evocation hazard

II. By effect

- Adversarial risks
  - Competitiveness hazard: enemy hazard; intellectual property hazard; commitment hazard; knowing-too-much hazard
- Risks to social organization and markets
  - Norm hazard: information asymmetry hazard; unveiling hazard; recognition hazard
- Risks of irrationality and error
  - Ideological hazard
  - Distraction and temptation hazard
  - Role model hazard
  - Biasing hazard
  - De-biasing hazard
  - Neuropsychological hazard
  - Information-burying hazard
- Risks to valuable states and activities
  - Psychological reaction hazard: disappointment hazard; spoiler hazard; mindset hazard
  - Belief-constituted value hazard
  - (mixed): embarrassment hazard
- Risks from information technology systems
  - Information system hazard: information infrastructure failure hazard; information infrastructure misuse hazard; artificial intelligence hazard
- Risks from development
  - Development hazard
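
Bostrom's typology is, in effect, a small tree of categories, types, and subtypes. For readers who want to work with it programmatically (say, to tag items in a reading list by hazard type), the sketch below encodes the table as a nested mapping in Python. This is a minimal illustration, not anything from Bostrom's paper: the names follow the table above, while the BOSTROM_TYPOLOGY constant and the types_under helper are hypothetical conveniences.

```python
# A minimal sketch encoding Bostrom's (2011) typology as a nested mapping.
# The category/type/subtype names follow the table above; the constant and
# helper function are hypothetical and only illustrate one way to query it.

BOSTROM_TYPOLOGY = {
    "by information transfer mode": [
        "data hazard", "idea hazard", "attention hazard",
        "template hazard", "signaling hazard", "evocation hazard",
    ],
    "by effect": {
        "adversarial risks": {
            "competitiveness hazard": [
                "enemy hazard", "intellectual property hazard",
                "commitment hazard", "knowing-too-much hazard",
            ],
        },
        "risks to social organization and markets": {
            "norm hazard": [
                "information asymmetry hazard", "unveiling hazard",
                "recognition hazard",
            ],
        },
        "risks of irrationality and error": {
            "ideological hazard": [],
            "distraction and temptation hazard": [],
            "role model hazard": [],
            "biasing hazard": [],
            "de-biasing hazard": [],
            "neuropsychological hazard": [],
            "information-burying hazard": [],
        },
        "risks to valuable states and activities": {
            "psychological reaction hazard": [
                "disappointment hazard", "spoiler hazard", "mindset hazard",
            ],
            "belief-constituted value hazard": [],
            "(mixed)": ["embarrassment hazard"],
        },
        "risks from information technology systems": {
            "information system hazard": [
                "information infrastructure failure hazard",
                "information infrastructure misuse hazard",
                "artificial intelligence hazard",
            ],
        },
        "risks from development": {
            "development hazard": [],
        },
    },
}


def types_under(category: str) -> list[str]:
    """Return the hazard types listed under an effect category."""
    return list(BOSTROM_TYPOLOGY["by effect"][category].keys())


if __name__ == "__main__":
    print(types_under("adversarial risks"))  # ['competitiveness hazard']
```

A flat mapping like this captures both axes of the typology; a reverse index from subtype to category could be built from it if needed.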

References

  1. Bostrom, N. (2011). “Information Hazards: A Typology of Potential Harms from Knowledge”. Review of Contemporary Philosophy 10: 44-79.

Terrorism, Tylenol, and dangerous information

Davis_Kingsley · 12 May 2018 10:20 UTC · 146 points · 46 comments · 3 min read

Bioinfohazards

Spiracular · 17 Sep 2019 2:41 UTC · 87 points · 14 comments · 18 min read

What are information hazards?

MichaelA · 18 Feb 2020 19:34 UTC · 41 points · 15 comments · 4 min read

MIRI announces new “Death With Dignity” strategy

Eliezer Yudkowsky · 2 Apr 2022 0:43 UTC · 339 points · 545 comments · 18 min read

Don’t Share Information Exfohazardous on Others’ AI-Risk Models

Thane Ruthenis · 19 Dec 2023 20:09 UTC · 67 points · 11 comments · 1 min read

Don’t use ‘infohazard’ for collectively destructive info

Eliezer Yudkowsky · 15 Jul 2022 5:13 UTC · 85 points · 33 comments · 1 min read
(www.facebook.com)

Mapping downside risks and information hazards

20 Feb 2020 14:46 UTC · 22 points · 0 comments · 9 min read

Accurate Models of AI Risk Are Hyperexistential Exfohazards

Thane Ruthenis · 25 Dec 2022 16:50 UTC · 31 points · 38 comments · 9 min read

Conjecture: Internal Infohazard Policy

29 Jul 2022 19:07 UTC · 131 points · 6 comments · 19 min read

Needed: AI infohazard policy

Vanessa Kosoy · 21 Sep 2020 15:26 UTC · 66 points · 16 comments · 2 min read

Information hazards: Why you should care and what you can do

23 Feb 2020 20:47 UTC · 18 points · 4 comments · 15 min read

“Infohazard” is a predominantly conflict-theoretic concept

jessicata · 2 Dec 2021 17:54 UTC · 45 points · 17 comments · 14 min read
(unstableontology.com)

Thoughts on the Scope of LessWrong’s Infohazard Policies

Ben Pace · 9 Mar 2020 7:44 UTC · 48 points · 5 comments · 8 min read

The Fusion Power Generator Scenario

johnswentworth · 8 Aug 2020 18:31 UTC · 142 points · 30 comments · 3 min read

Occupational Infohazards

jessicata · 18 Dec 2021 20:56 UTC · 35 points · 134 comments · 47 min read

Memetic Hazards in Videogames

jimrandomh · 10 Sep 2010 2:22 UTC · 136 points · 164 comments · 3 min read

Sexual Abuse attitudes might be infohazardous

Pseudonymous Otter · 19 Jul 2022 18:06 UTC · 255 points · 71 comments · 1 min read

A few misconceptions surrounding Roko’s basilisk

Rob Bensinger · 5 Oct 2015 21:23 UTC · 90 points · 135 comments · 5 min read

The problems with the concept of an infohazard as used by the LW community [Linkpost]

Noosphere89 · 22 Dec 2023 16:13 UTC · 75 points · 43 comments · 3 min read
(www.beren.io)

Knowing About Biases Can Hurt People

Eliezer Yudkowsky · 4 Apr 2007 18:01 UTC · 220 points · 82 comments · 2 min read

A point of clarification on infohazard terminology

eukaryote · 2 Feb 2020 17:43 UTC · 51 points · 21 comments · 2 min read
(eukaryotewritesblog.com)

Gradient Descent on the Human Brain

1 Apr 2024 22:39 UTC · 52 points · 5 comments · 2 min read

Some background for reasoning about dual-use alignment research

Charlie Steiner · 18 May 2023 14:50 UTC · 126 points · 21 comments · 9 min read

Winning vs Truth – Infohazard Trade-Offs

eapache · 7 Mar 2020 22:49 UTC · 12 points · 11 comments · 2 min read

Consume fiction wisely

RomanS · 21 Jan 2022 20:23 UTC · −9 points · 56 comments · 5 min read

Salvage Epistemology

jimrandomh · 30 Apr 2022 2:10 UTC · 98 points · 119 comments · 1 min read

Memetic downside risks: How ideas can evolve and cause harm

25 Feb 2020 19:47 UTC · 21 points · 3 comments · 15 min read

Notes on Innocence

David Gross · 26 Jan 2024 14:45 UTC · 13 points · 21 comments · 19 min read

Principles of Privacy for Alignment Research

johnswentworth · 27 Jul 2022 19:53 UTC · 72 points · 31 comments · 7 min read

Please stop publishing ideas/insights/research about AI

Tamsin Leake · 2 May 2024 14:54 UTC · 0 points · 61 comments · 4 min read

publishing alignment research and exfohazards

Tamsin Leake · 31 Oct 2022 18:02 UTC · 80 points · 12 comments · 1 min read
(carado.moe)

Good and bad ways to think about downside risks

11 Jun 2020 1:38 UTC · 19 points · 12 comments · 11 min read

[Question] Self-censoring on AI x-risk discussions?

Decaeneus · 1 Jul 2024 18:24 UTC · 17 points · 2 comments · 1 min read

Infohazards vs Fork Hazards

jimrandomh · 5 Jan 2023 9:45 UTC · 68 points · 16 comments · 1 min read

A brief history of ethically concerned scientists

Kaj_Sotala · 9 Feb 2013 5:50 UTC · 103 points · 143 comments · 14 min read

[Question] Is acausal extortion possible?

sisyphus · 11 Nov 2022 19:48 UTC · −20 points · 34 comments · 3 min read

“Inftoxicity” and other new words to describe malicious information and communication thereof

Jáchym Fibír · 23 Dec 2023 18:15 UTC · −1 points · 6 comments · 3 min read

The Journal of Dangerous Ideas

rogersbacon · 3 Feb 2024 15:40 UTC · −25 points · 4 comments · 5 min read
(www.secretorum.life)

[Question] Is there a known method to find others who came across the same potential infohazard without spoiling it to the public?

hive · 17 Oct 2024 10:47 UTC · 4 points · 6 comments · 1 min read

AI Safety via Luck

Jozdien · 1 Apr 2023 20:13 UTC · 81 points · 7 comments · 11 min read

[Link and commentary] The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?

MichaelA · 16 Feb 2020 19:56 UTC · 24 points · 4 comments · 3 min read

AI as Super-Demagogue

RationalDino · 5 Nov 2023 21:21 UTC · 0 points · 11 comments · 9 min read

SlateStarCodex deleted because NYT wants to dox Scott

Rudi C · 23 Jun 2020 7:51 UTC · 89 points · 93 comments · 1 min read

[META] Building a rationalist communication system to avoid censorship

Donald Hobson · 23 Jun 2020 14:12 UTC · 36 points · 33 comments · 2 min read

Hashmarks: Privacy-Preserving Benchmarks for High-Stakes AI Evaluation

Paul Bricman · 4 Dec 2023 7:31 UTC · 12 points · 6 comments · 16 min read
(arxiv.org)

Lessons from the Cold War on Information Hazards: Why Internal Communication is Critical

Gentzel · 24 Feb 2018 23:34 UTC · 47 points · 10 comments · 4 min read

USA v Progressive 1979 excerpt

RyanCarey · 27 Nov 2017 17:32 UTC · 22 points · 2 comments · 2 min read

[Question] AI interpretability could be harmful?

Roman Leventov · 10 May 2023 20:43 UTC · 13 points · 2 comments · 1 min read

[Question] How not to write the Cookbook of Doom?

brunoparga · 16 Jun 2023 13:37 UTC · 17 points · 5 comments · 1 min read

Shock Level 5: Big Worlds and Modal Realism

Roko · 25 May 2010 23:19 UTC · 39 points · 158 comments · 4 min read

Staying Split: Sabatini and Social Justice

Duncan Sabien (Deactivated) · 8 Jun 2022 8:32 UTC · 152 points · 28 comments · 21 min read

Slowing down AI progress is an underexplored alignment strategy

Norman Borlaug · 24 Jul 2023 16:56 UTC · 42 points · 27 comments · 5 min read

Signaling Guilt

Krieger · 8 Oct 2022 20:40 UTC · 21 points · 6 comments · 1 min read

Private alignment research sharing and coordination

porby · 4 Sep 2022 0:01 UTC · 62 points · 13 comments · 5 min read

[Question] What is our current best infohazard policy for AGI (safety) research?

Roman Leventov · 15 Nov 2022 22:33 UTC · 12 points · 2 comments · 1 min read

Who should write the definitive post on Ziz?

Nicholas / Heather Kross · 15 Dec 2022 6:37 UTC · 3 points · 45 comments · 3 min read

[Question] Has private AGI research made independent safety research ineffective already? What should we do about this?

Roman Leventov · 23 Jan 2023 7:36 UTC · 43 points · 5 comments · 5 min read

[Question] Is religion locally correct for consequentialists in some instances?

Robert Feinstein · 8 Mar 2023 4:02 UTC · 4 points · 8 comments · 1 min read