
Information Hazards


An Information Hazard (or infohazard for short) is true information that could harm people, or other sentient beings, if known. Determining policies on information hazards is tricky: some information might genuinely be dangerous, but excessive controls on information have their own perils.

This tag is for discussing the phenomenon of Information Hazards and what to do about them, not for actual Information Hazards themselves.

An example might be a formula for easily achieving cold fusion in your garage, which would be very dangerous if widely known. Alternatively, an information hazard might be an idea that causes great mental harm to those who learn it.

Bostrom’s Typology of Information Hazards

Nick Bostrom coined the term information hazard in a 2011 paper [1] in the Review of Contemporary Philosophy. He defines it as follows:

Information hazard: A risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.

Bostrom points out that this stands in contrast to the generally accepted principle of information freedom, and that, while information hazards are rare, their possibility needs to be considered when making information policies. He proceeds to categorize and define a large number of subtypes of information hazard. For example, he defines artificial intelligence hazard as:

Artificial intelligence hazard: There could be computer-related risks in which the threat would derive primarily from the cognitive sophistication of the program rather than the specific properties of any actuators to which the system initially has access.

The following table is reproduced from Bostrom 2011 [1].

TYPOLOGY OF INFORMATION HAZARDS

I. By information transfer mode
  Data hazard
  Idea hazard
  Attention hazard
  Template hazard
  Signaling hazard
  Evocation hazard

II. By effect (types, with subtypes indented beneath them)
  ADVERSARIAL RISKS
    Competitiveness hazard
      Enemy hazard
      Intellectual property hazard
      Commitment hazard
      Knowing-too-much hazard
  RISKS TO SOCIAL ORGANIZATION AND MARKETS
    Norm hazard
      Information asymmetry hazard
      Unveiling hazard
      Recognition hazard
  RISKS OF IRRATIONALITY AND ERROR
    Ideological hazard
    Distraction and temptation hazard
    Role model hazard
    Biasing hazard
    De-biasing hazard
    Neuropsychological hazard
    Information-burying hazard
  RISKS TO VALUABLE STATES AND ACTIVITIES
    Psychological reaction hazard
      Disappointment hazard
      Spoiler hazard
      Mindset hazard
    Belief-constituted value hazard
    (mixed)
      Embarrassment hazard
  RISKS FROM INFORMATION TECHNOLOGY SYSTEMS
    Information system hazard
      Information infrastructure failure hazard
      Information infrastructure misuse hazard
      Artificial intelligence hazard
  RISKS FROM DEVELOPMENT
    Development hazard
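Because the typology is a simple two-level tree (category → type → subtype), it can be encoded directly for use in, say, a tagging or content-review tool. The following is a minimal sketch in Python, based only on the table above; the names BY_EFFECT and find_category are hypothetical, introduced here for illustration rather than taken from Bostrom's paper or any existing tool.

```python
from typing import Optional

# A minimal sketch encoding the "by effect" branch of Bostrom's typology
# as a nested dict: risk category -> hazard type -> list of named subtypes
# (an empty list means the table names no subtypes for that type).
BY_EFFECT = {
    "ADVERSARIAL RISKS": {
        "Competitiveness hazard": [
            "Enemy hazard",
            "Intellectual property hazard",
            "Commitment hazard",
            "Knowing-too-much hazard",
        ],
    },
    "RISKS TO SOCIAL ORGANIZATION AND MARKETS": {
        "Norm hazard": [
            "Information asymmetry hazard",
            "Unveiling hazard",
            "Recognition hazard",
        ],
    },
    "RISKS OF IRRATIONALITY AND ERROR": {
        "Ideological hazard": [],
        "Distraction and temptation hazard": [],
        "Role model hazard": [],
        "Biasing hazard": [],
        "De-biasing hazard": [],
        "Neuropsychological hazard": [],
        "Information-burying hazard": [],
    },
    "RISKS TO VALUABLE STATES AND ACTIVITIES": {
        "Psychological reaction hazard": [
            "Disappointment hazard",
            "Spoiler hazard",
            "Mindset hazard",
        ],
        "Belief-constituted value hazard": [],
        "(mixed)": ["Embarrassment hazard"],
    },
    "RISKS FROM INFORMATION TECHNOLOGY SYSTEMS": {
        "Information system hazard": [
            "Information infrastructure failure hazard",
            "Information infrastructure misuse hazard",
            "Artificial intelligence hazard",
        ],
    },
    "RISKS FROM DEVELOPMENT": {
        "Development hazard": [],
    },
}


def find_category(hazard_name: str) -> Optional[str]:
    """Return the top-level risk category containing a hazard, or None."""
    for category, types in BY_EFFECT.items():
        for type_name, subtypes in types.items():
            if hazard_name == type_name or hazard_name in subtypes:
                return category
    return None


print(find_category("Spoiler hazard"))  # -> RISKS TO VALUABLE STATES AND ACTIVITIES
```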


References

  1. Bostrom, N. (2011). “Information Hazards: A Typology of Potential Harms from Knowledge”. Review of Contemporary Philosophy 10: 44–79.

Terrorism, Tylenol, and dangerous information
Davis_Kingsley, May 12, 2018, 10:20 AM
149 points, 46 comments, 3 min read, LW link

Bioinfohazards
Spiracular, Sep 17, 2019, 2:41 AM
87 points, 14 comments, 18 min read, LW link, 2 reviews

What are information hazards?
MichaelA, Feb 18, 2020, 7:34 PM
41 points, 15 comments, 4 min read, LW link

MIRI announces new “Death With Dignity” strategy
Eliezer Yudkowsky, Apr 2, 2022, 12:43 AM
354 points, 545 comments, 18 min read, LW link, 1 review

Don’t use ‘infohazard’ for collectively destructive info
Eliezer Yudkowsky, Jul 15, 2022, 5:13 AM
86 points, 33 comments, 1 min read, LW link, 2 reviews
(www.facebook.com)

Don’t Share Information Exfohazardous on Others’ AI-Risk Models
Thane Ruthenis, Dec 19, 2023, 8:09 PM
68 points, 11 comments, 1 min read, LW link

Mapping downside risks and information hazards
Feb 20, 2020, 2:46 PM
23 points, 0 comments, 9 min read, LW link

Accurate Models of AI Risk Are Hyperexistential Exfohazards
Thane Ruthenis, Dec 25, 2022, 4:50 PM
32 points, 38 comments, 9 min read, LW link

Needed: AI infohazard policy
Vanessa Kosoy, Sep 21, 2020, 3:26 PM
68 points, 16 comments, 2 min read, LW link

Thoughts on the Scope of LessWrong’s Infohazard Policies
Ben Pace, Mar 9, 2020, 7:44 AM
48 points, 5 comments, 8 min read, LW link

Conjecture: Internal Infohazard Policy
Jul 29, 2022, 7:07 PM
131 points, 6 comments, 19 min read, LW link

Information hazards: Why you should care and what you can do
Feb 23, 2020, 8:47 PM
18 points, 4 comments, 15 min read, LW link

“Infohazard” is a predominantly conflict-theoretic concept
jessicata, Dec 2, 2021, 5:54 PM
45 points, 17 comments, 14 min read, LW link
(unstableontology.com)

Occupational Infohazards
jessicata, Dec 18, 2021, 8:56 PM
38 points, 134 comments, 47 min read, LW link

Memetic Hazards in Videogames
jimrandomh, Sep 10, 2010, 2:22 AM
137 points, 164 comments, 3 min read, LW link

The Fusion Power Generator Scenario
johnswentworth, Aug 8, 2020, 6:31 PM
155 points, 30 comments, 3 min read, LW link

Knowing About Biases Can Hurt People
Eliezer Yudkowsky, Apr 4, 2007, 6:01 PM
224 points, 82 comments, 2 min read, LW link

The problems with the concept of an infohazard as used by the LW community [Linkpost]
Noosphere89, Dec 22, 2023, 4:13 PM
75 points, 43 comments, 3 min read, LW link
(www.beren.io)

Notes on Innocence
David Gross, Jan 26, 2024, 2:45 PM
13 points, 21 comments, 18 min read, LW link

Gradient Descent on the Human Brain
Apr 1, 2024, 10:39 PM
59 points, 5 comments, 2 min read, LW link

[Question] Self-censoring on AI x-risk discussions?
Decaeneus, Jul 1, 2024, 6:24 PM
17 points, 2 comments, 1 min read, LW link

Memetic downside risks: How ideas can evolve and cause harm
Feb 25, 2020, 7:47 PM
27 points, 3 comments, 15 min read, LW link

Good and bad ways to think about downside risks
Jun 11, 2020, 1:38 AM
19 points, 12 comments, 11 min read, LW link

A brief history of ethically concerned scientists
Kaj_Sotala, Feb 9, 2013, 5:50 AM
104 points, 143 comments, 14 min read, LW link

A few misconceptions surrounding Roko’s basilisk
Rob Bensinger, Oct 5, 2015, 9:23 PM
91 points, 135 comments, 5 min read, LW link

A point of clarification on infohazard terminology
eukaryote, Feb 2, 2020, 5:43 PM
52 points, 21 comments, 2 min read, LW link
(eukaryotewritesblog.com)

Some background for reasoning about dual-use alignment research
Charlie Steiner, May 18, 2023, 2:50 PM
126 points, 22 comments, 9 min read, LW link, 1 review

Winning vs Truth – Infohazard Trade-Offs
eapache, Mar 7, 2020, 10:49 PM
12 points, 11 comments, 2 min read, LW link

Consume fiction wisely
RomanS, Jan 21, 2022, 8:23 PM
−9 points, 56 comments, 5 min read, LW link

Salvage Epistemology
jimrandomh, Apr 30, 2022, 2:10 AM
101 points, 119 comments, 1 min read, LW link

Principles of Privacy for Alignment Research
johnswentworth, Jul 27, 2022, 7:53 PM
73 points, 31 comments, 7 min read, LW link

Infohazards vs Fork Hazards
jimrandomh, Jan 5, 2023, 9:45 AM
68 points, 16 comments, 1 min read, LW link

Sexual Abuse attitudes might be infohazardous
Pseudonymous Otter, Jul 19, 2022, 6:06 PM
256 points, 72 comments, 1 min read, LW link

Private alignment research sharing and coordination
porby, Sep 4, 2022, 12:01 AM
62 points, 13 comments, 5 min read, LW link

Can Knowledge Hurt You? The Dangers of Infohazards (and Exfohazards)
Feb 8, 2025, 3:51 PM
20 points, 0 comments, 5 min read, LW link
(www.youtube.com)

Lessons from the Cold War on Information Hazards: Why Internal Communication is Critical
Gentzel, Feb 24, 2018, 11:34 PM
47 points, 10 comments, 4 min read, LW link

USA v Progressive 1979 excerpt
RyanCarey, Nov 27, 2017, 5:32 PM
22 points, 2 comments, 2 min read, LW link

[Question] Is acausal extortion possible?
sisyphus, Nov 11, 2022, 7:48 PM
−20 points, 34 comments, 3 min read, LW link

[Question] AI interpretability could be harmful?
Roman Leventov, May 10, 2023, 8:43 PM
13 points, 2 comments, 1 min read, LW link

[Question] How not to write the Cookbook of Doom?
brunoparga, Jun 16, 2023, 1:37 PM
17 points, 5 comments, 1 min read, LW link

[Question] Is there a known method to find others who came across the same potential infohazard without spoiling it to the public?
hive, Oct 17, 2024, 10:47 AM
4 points, 6 comments, 1 min read, LW link

Shock Level 5: Big Worlds and Modal Realism
Roko, May 25, 2010, 11:19 PM
39 points, 158 comments, 4 min read, LW link

[Question] What is our current best infohazard policy for AGI (safety) research?
Roman Leventov, Nov 15, 2022, 10:33 PM
12 points, 2 comments, 1 min read, LW link

Who should write the definitive post on Ziz?
Nicholas / Heather Kross, Dec 15, 2022, 6:37 AM
4 points, 45 comments, 3 min read, LW link

CyberEconomy. The Limits to Growth
Feb 16, 2025, 9:02 PM
−3 points, 0 comments, 23 min read, LW link

“Inftoxicity” and other new words to describe malicious information and communication thereof
Jáchym Fibír, Dec 23, 2023, 6:15 PM
−1 points, 6 comments, 3 min read, LW link

[Question] Has private AGI research made independent safety research ineffective already? What should we do about this?
Roman Leventov, Jan 23, 2023, 7:36 AM
43 points, 5 comments, 5 min read, LW link

Staying Split: Sabatini and Social Justice
Duncan Sabien (Deactivated), Jun 8, 2022, 8:32 AM
153 points, 28 comments, 21 min read, LW link

Slowing down AI progress is an underexplored alignment strategy
Norman Borlaug, Jul 24, 2023, 4:56 PM
42 points, 27 comments, 5 min read, LW link

[Question] Is religion locally correct for consequentialists in some instances?
Robert Feinstein, Mar 8, 2023, 4:02 AM
4 points, 8 comments, 1 min read, LW link

AI Safety via Luck
Jozdien, Apr 1, 2023, 8:13 PM
81 points, 7 comments, 11 min read, LW link

Signaling Guilt
Krieger, Oct 8, 2022, 8:40 PM
21 points, 6 comments, 1 min read, LW link

[Link and commentary] The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?
MichaelA, Feb 16, 2020, 7:56 PM
24 points, 4 comments, 3 min read, LW link

AI as Super-Demagogue
RationalDino, Nov 5, 2023, 9:21 PM
11 points, 12 comments, 9 min read, LW link

SlateStarCodex deleted because NYT wants to dox Scott
Rudi C, Jun 23, 2020, 7:51 AM
89 points, 93 comments, 1 min read, LW link

[META] Building a rationalist communication system to avoid censorship
Donald Hobson, Jun 23, 2020, 2:12 PM
36 points, 33 comments, 2 min read, LW link

The Journal of Dangerous Ideas
rogersbacon, Feb 3, 2024, 3:40 PM
−25 points, 4 comments, 5 min read, LW link
(www.secretorum.life)

Hashmarks: Privacy-Preserving Benchmarks for High-Stakes AI Evaluation
Paul Bricman, Dec 4, 2023, 7:31 AM
12 points, 6 comments, 16 min read, LW link
(arxiv.org)