
Mindcrime


Mindcrime occurs when a computational process which has moral value is mistreated. For example, an advanced AI trying to predict human behavior might create simulations of humans so detailed that they are conscious observers; those simulated humans would then suffer through whatever hypothetical scenarios the AI wanted to test, and be discarded afterward.

Mindcrime on a large scale constitutes a risk of astronomical suffering.

Mindcrime is different from other AI risks in that the AI need not even affect anything outside its box for the catastrophe to occur.

The term was coined by Nick Bostrom in Superintelligence: Paths, Dangers, Strategies.

Mindcrime is not the same as thoughtcrime, a term for holding beliefs considered unacceptable by society.

Nonperson Predicates

Eliezer Yudkowsky, 27 Dec 2008 1:47 UTC
66 points
177 comments, 6 min read, LW link

The AI in a box boxes you

Stuart_Armstrong, 2 Feb 2010 10:10 UTC
170 points
389 comments, 1 min read, LW link

Mental Models Of People Can Be People

Nox ML, 25 Apr 2023 0:03 UTC
12 points
55 comments, 8 min read, LW link

Is it possible to prevent the torture of ems?

NancyLebovitz, 29 Jun 2011 7:42 UTC
14 points
31 comments, 1 min read, LW link

The Aliens have Landed!

TimFreeman, 19 May 2011 17:09 UTC
46 points
158 comments, 3 min read, LW link

Thoughts on Human Models

21 Feb 2019 9:10 UTC
126 points
32 comments, 10 min read, LW link, 1 review

Superintelligence 12: Malignant failure modes

KatjaGrace, 2 Dec 2014 2:02 UTC
15 points
51 comments, 5 min read, LW link

A Models-centric Approach to Corrigible Alignment

J Bostock, 17 Jul 2021 17:27 UTC
2 points
0 comments, 6 min read, LW link

Theodicy and the simulation hypothesis, or: The problem of simulator evil

philosophybear, 26 Dec 2022 18:55 UTC
12 points
12 comments, 19 min read, LW link
(philosophybear.substack.com)