
Security Mindset

Last edit: Feb 16, 2022, 12:36 AM by abramdemski

Security Mindset is a predisposition to think about the world in a security-oriented way. A large part of this way of thinking involves always being on the lookout for exploits.

Uncle Milton Industries has been selling ant farms to children since 1956. Some years ago, I remember opening one up with a friend. There were no actual ants included in the box. Instead, there was a card that you filled in with your address, and the company would mail you some ants. My friend expressed surprise that you could get ants sent to you in the mail.

I replied: “What’s really interesting is that these people will send a tube of live ants to anyone you tell them to.”

Security requires a particular mindset. Security professionals — at least the good ones — see the world differently. They can’t walk into a store without noticing how they might shoplift. They can’t use a computer without wondering about the security vulnerabilities. They can’t vote without trying to figure out how to vote twice. They just can’t help it.

SmartWater is a liquid with a unique identifier linked to a particular owner. “The idea is for me to paint this stuff on my valuables as proof of ownership,” I wrote when I first learned about the idea. “I think a better idea would be for me to paint it on your valuables, and then call the police.”

Really, we can’t help it.

-- Bruce Schneier, "The Security Mindset", Schneier on Security

[I’m unsure of the origin of the term, but Schneier is at least an outspoken advocate. --Abram]

In 2017, Eliezer Yudkowsky wrote a pair of posts on the security mindset: Security Mindset and Ordinary Paranoia and Security Mindset and the Logistic Success Curve (both listed below).

Amongst other things, these posts advanced the idea that true security mindset is not just the tendency to spot lots and lots of security flaws. Spotting security flaws is not in itself enough to build secure systems, because you could go on spotting flaws in a design forever, patching each specific weak point only to find yet more flaws.

Building secure systems requires coming up with strong positive arguments for the security of a system. These positive arguments have several important features:

  1. They have as few assumptions as possible, because each assumption is an additional chance to be wrong.

  2. Each assumption is individually very certain.

  3. The conclusion of the argument is a meaningful security guarantee.

The mindset required to build tight security arguments like this is different from the mindset required to find security holes.
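
To see why the first two features matter, consider a toy calculation (an illustration of this page's reasoning, not something from the posts themselves, and assuming, unrealistically, that assumptions fail independently): the argument's guarantee only holds if every one of its assumptions holds, so reliability decays exponentially with the number of assumptions.

```python
# Toy sketch: why "few, highly certain assumptions" matters.
# If a security argument rests on n assumptions, each holding
# independently with probability p_each, the conclusion is only
# guaranteed when all of them hold at once.

def argument_reliability(p_each: float, n_assumptions: int) -> float:
    """Probability that all n independent assumptions hold."""
    return p_each ** n_assumptions

# A few very certain assumptions leave the guarantee mostly intact;
# many merely-plausible ones quietly destroy it.
print(f"{argument_reliability(0.99, 3):.3f}")   # 0.970
print(f"{argument_reliability(0.99, 20):.3f}")  # 0.818
print(f"{argument_reliability(0.90, 20):.3f}")  # 0.122
```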

Security Mindset and Ordinary Paranoia
Eliezer Yudkowsky · Nov 25, 2017, 5:53 PM
132 points · 25 comments · 29 min read · LW link

POC || GTFO culture as partial antidote to alignment wordcelism
lc · Mar 15, 2023, 10:21 AM
147 points · 13 comments · 7 min read · LW link · 2 reviews

Security Mindset and the Logistic Success Curve
Eliezer Yudkowsky · Nov 26, 2017, 3:58 PM
106 points · 49 comments · 20 min read · LW link

Do yourself a FAVAR: security mindset
lemonhope · Jun 18, 2022, 2:08 AM
20 points · 2 comments · 2 min read · LW link

Six Dimensions of Operational Adequacy in AGI Projects
Eliezer Yudkowsky · May 30, 2022, 5:00 PM
310 points · 66 comments · 13 min read · LW link · 1 review

Circumventing interpretability: How to defeat mind-readers
Lee Sharkey · Jul 14, 2022, 4:59 PM
114 points · 15 comments · 33 min read · LW link

Security Mindset: Lessons from 20+ years of Software Security Failures Relevant to AGI Alignment
elspood · Jun 21, 2022, 11:55 PM
362 points · 42 comments · 7 min read · LW link · 1 review

Assessment of AI safety agendas: think about the downside risk
Roman Leventov · Dec 19, 2023, 9:00 AM
13 points · 1 comment · 1 min read · LW link

Training of superintelligence is secretly adversarial
quetzal_rainbow · Feb 7, 2024, 1:38 PM
15 points · 2 comments · 5 min read · LW link

Duct Tape security
Isaac King · Apr 26, 2024, 6:57 PM
68 points · 11 comments · 5 min read · LW link

Notes on Caution
David Gross · Dec 1, 2022, 3:05 AM
14 points · 0 comments · 19 min read · LW link

Predicting AI Releases Through Side Channels
Reworr R · Jan 7, 2025, 7:06 PM
16 points · 1 comment · 1 min read · LW link

Reliability, Security, and AI risk: Notes from infosec textbook chapter 1
Akash · Apr 7, 2023, 3:47 PM
34 points · 1 comment · 4 min read · LW link

[Interview w/ Jeffrey Ladish] Applying the 'security mindset' to AI and x-risk
fowlertm · Apr 11, 2023, 6:14 PM
12 points · 0 comments · 1 min read · LW link

Even if human & AI alignment are just as easy, we are screwed
Matthew_Opitz · Apr 13, 2023, 5:32 PM
35 points · 5 comments · 5 min read · LW link

Legitimising AI Red-Teaming by Public
VojtaKovarik · Apr 19, 2023, 2:05 PM
10 points · 7 comments · 3 min read · LW link

Sensor Exposure can Compromise the Human Brain in the 2020s
trevor · Oct 26, 2023, 3:31 AM
17 points · 6 comments · 10 min read · LW link

5 Reasons Why Governments/Militaries Already Want AI for Information Warfare
trevor · Oct 30, 2023, 4:30 PM
32 points · 0 comments · 10 min read · LW link

Balancing Security Mindset with Collaborative Research: A Proposal
MadHatter · Nov 1, 2023, 12:46 AM
9 points · 3 comments · 4 min read · LW link

Helpful examples to get a sense of modern automated manipulation
trevor · Nov 12, 2023, 8:49 PM
33 points · 4 comments · 9 min read · LW link

My Objections to "We're All Gonna Die with Eliezer Yudkowsky"
Quintin Pope · Mar 21, 2023, 12:06 AM
358 points · 232 comments · 39 min read · LW link · 1 review

Biosecurity Culture, Computer Security Culture
jefftk · Aug 30, 2023, 4:40 PM
103 points · 11 comments · 2 min read · LW link (www.jefftk.com)

Fixing Insider Threats in the AI Supply Chain
Madhav Malhotra · Oct 7, 2023, 1:19 PM
20 points · 2 comments · 5 min read · LW link

AI Safety is Dropping the Ball on Clown Attacks
trevor · Oct 22, 2023, 8:09 PM
73 points · 82 comments · 34 min read · LW link

Security Mindset and Takeoff Speeds
DanielFilan · Oct 27, 2020, 3:20 AM
55 points · 23 comments · 8 min read · LW link (danielfilan.com)

"Just hiring people" is sometimes still actually possible
lc · Aug 5, 2022, 9:44 PM
38 points · 11 comments · 5 min read · LW link

Conjecture: Internal Infohazard Policy
Jul 29, 2022, 7:07 PM
131 points · 6 comments · 19 min read · LW link

Builder/Breaker for Deconfusion
abramdemski · Sep 29, 2022, 5:36 PM
72 points · 9 comments · 9 min read · LW link

It's time to worry about online privacy again
Malmesbury · Dec 25, 2022, 9:05 PM
67 points · 23 comments · 6 min read · LW link

Why do we post our AI safety plans on the Internet?
Peter S. Park · Nov 3, 2022, 4:02 PM
4 points · 4 comments · 11 min read · LW link

A potentially high impact differential technological development area
Noosphere89 · Jun 8, 2023, 2:33 PM
5 points · 2 comments · 2 min read · LW link

AI can exploit safety plans posted on the Internet
Peter S. Park · Dec 4, 2022, 12:17 PM
−15 points · 4 comments · 1 min read · LW link

PoMP and Circumstance: Introduction
benatkin · Dec 9, 2024, 5:54 AM
1 point · 1 comment · 1 min read · LW link

Back to the Past to the Future
Prometheus · Oct 18, 2023, 4:51 PM
5 points · 0 comments · 1 min read · LW link

Soft Nationalization: how the USG will control AI labs
Aug 27, 2024, 3:11 PM
76 points · 7 comments · 21 min read · LW link (www.convergenceanalysis.org)

Can Large Language Models effectively identify cybersecurity risks?
emile delcourt · Aug 30, 2024, 8:20 PM
18 points · 0 comments · 11 min read · LW link

Where Does Adversarial Pressure Come From?
quetzal_rainbow · Dec 14, 2023, 10:31 PM
17 points · 1 comment · 2 min read · LW link

Security Mindset—Fire Alarms and Trigger Signatures
elspood · Feb 9, 2023, 9:15 PM
23 points · 0 comments · 4 min read · LW link

Cautions about LLMs in Human Cognitive Loops
Alice Blair · Mar 2, 2025, 7:53 PM
38 points · 9 comments · 7 min read · LW link

On Seeing Through 'On Seeing Through: A Unified Theory': A Unified Theory
gwern · Jun 15, 2019, 6:57 PM
26 points · 0 comments · 1 min read · LW link (www.gwern.net)

AI infosec: first strikes, zero-day markets, hardware supply chains, adoption barriers
Allison Duettmann · Apr 1, 2023, 4:44 PM
41 points · 0 comments · 9 min read · LW link

Advice Needed: Does Using a LLM Compromise My Personal Epistemic Security?
Naomi · Mar 11, 2024, 5:57 AM
17 points · 7 comments · 2 min read · LW link

Protecting agent boundaries
Chipmonk · Jan 25, 2024, 4:13 AM
11 points · 6 comments · 2 min read · LW link

Boundaries-based security and AI safety approaches
Allison Duettmann · Apr 12, 2023, 12:36 PM
43 points · 2 comments · 6 min read · LW link

Apply to the Conceptual Boundaries Workshop for AI Safety
Chipmonk · Nov 27, 2023, 9:04 PM
50 points · 0 comments · 3 min read · LW link

Cryptographic and auxiliary approaches relevant for AI safety
Allison Duettmann · Apr 18, 2023, 2:18 PM
7 points · 0 comments · 6 min read · LW link

Safety Data Sheets for Optimization Processes
StrivingForLegibility · Jan 4, 2024, 11:30 PM
15 points · 1 comment · 4 min read · LW link

The Security Mindset, S-Risk and Publishing Prosaic Alignment Research
lukemarks · Apr 22, 2023, 2:36 PM
39 points · 7 comments · 5 min read · LW link

Interpreting the Learning of Deceit
RogerDearnaley · Dec 18, 2023, 8:12 AM
30 points · 14 comments · 9 min read · LW link

LW Meetup @ DEFCON (Las Vegas) − 5-7pm Thu. Aug. 11 at Forum Food Court (Caesars)
jchan · Aug 8, 2022, 2:57 PM
6 points · 0 comments · 1 min read · LW link

(retired article) AGI With Internet Access: Why we won't stuff the genie back in its bottle.
Max TK · Mar 18, 2023, 3:43 AM
5 points · 10 comments · 4 min read · LW link

Is AI Safety dropping the ball on privacy?
markov · Sep 13, 2023, 1:07 PM
50 points · 17 comments · 7 min read · LW link