Mindcrime occurs when a computational process that has moral value is mistreated. For example, an advanced AI trying to predict human behavior might create simulations of humans so detailed that they are conscious observers; these simulated people would then suffer through whatever hypothetical scenarios the AI wanted to test and be discarded afterward.
Mindcrime on a large scale constitutes a risk of astronomical suffering.
Mindcrime is different from other AI risks in that the AI need not even affect anything outside its box for the catastrophe to occur.
The term was coined by Nick Bostrom in Superintelligence: Paths, Dangers, Strategies.
Mindcrime is not the same as thoughtcrime, a term for holding beliefs considered unacceptable by society.
This is different from a thought crime, right? I would distinguish the two in the page description. Otherwise, if it’s not already an accepted term, I would consider changing it to avoid confusion.
“Mind Crime” was the term Bostrom used in Superintelligence. I don’t know of a better term that covers the same things.
Usually when people talk about mind crime they’re talking about torture simulations or something similar, which is different from the usual use of “thought crime”. My sense is that if you really believed that thinking certain thoughts was immoral, thought crime would be a type of mind crime, but I’m not sure if anyone has used the term in that way.
Edit: https://www.lesswrong.com/posts/BKjJJH2cRpJcAnP7T/thoughts-on-human-models says:
so maybe the accepted meaning is narrower than I thought and this wiki page should be updated accordingly.
Edit x2: I reread the relevant section of Superintelligence, which is in line with that, and have rewritten the page.