
High Reliability Organizations

Last edit: Mar 17, 2023, 9:52 PM by Raemon

High Reliability Organizations (HROs) are organizations that operate in high-risk domains but reliably avoid catastrophic failures. Examples include nuclear plants, air traffic control, and aircraft carriers. Research into HROs aims to determine how they achieve extreme reliability and whether these lessons apply to AI companies working on dangerous technologies.

Key HRO insights include: tracking failures to learn; avoiding oversimplification; staying operationally sensitive; committing to resilience; deferring to expertise; and an “informed culture” that reports issues, avoids blame, and fosters flexibility and learning.

HRO literature may offer useful principles for AI companies, but differences in feedback loops and job functions limit how directly they transfer. Research into fields like biotech may also be relevant.

(Written by Claude, using “High Reliability Orgs, and AI Companies” as input. Feel free to rewrite.)

High Reliability Orgs, and AI Companies

Raemon · Aug 4, 2022, 5:45 AM
86 points
7 comments · 12 min read · LW link · 1 review

“Carefully Bootstrapped Alignment” is organizationally hard

Raemon · Mar 17, 2023, 6:00 PM
262 points
23 comments · 11 min read · LW link · 1 review

What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)

Andrew_Critch · Mar 31, 2021, 11:50 PM
282 points
65 comments · 22 min read · LW link · 1 review

Robust Artificial Intelligence and Robust Human Organizations

Gordon Seidoh Worley · Jul 17, 2019, 2:27 AM
17 points
2 comments · 2 min read · LW link
(arxiv.org)

Do we have a plan for the “first critical try” problem?

Christopher King · Apr 3, 2023, 4:27 PM
−3 points
14 comments · 1 min read · LW link