There’s an important problem here which Jessica described in some detail in a more grounded way than the “demons” frame:
As a brief model of something similar to this (not necessarily the same model as the Leverage people were using): people often pick up behaviors (“know-how”) and mental models from other people, through acculturation and imitation. Some of this influence could be (a) largely unconscious on the part of the receiver, (b) partially intentional on the part of the person having mental effects on others (where these intentions may include behaviorist conditioning, similar to hypnosis, causing behaviors to be triggered under certain circumstances), and (c) overall harmful to the receiver’s conscious goals. According to IFS-like psychological models, it’s common for a single brain to contain multiple sub-processes with different intentions. While the mental subprocess implantation hypothesis is somewhat strange, it’s hard to rule out based on physics or psychology.
If we’re confused about a problem like Friendly AI, the field is preparadigmatic, and therefore most people trying to talk about it are using words wrong. Jessica is reporting a perverse optimization where people are penalized more for talking confusedly about important problems they’re confused about, than for simply ignoring the problems.
> Jessica is reporting a perverse optimization where people are penalized more for talking confusedly about important problems they’re confused about, than for simply ignoring the problems.
I feel like “talking confusedly” here means “talking in a way that no one else can understand”. If no one else can understand, they cannot give feedback on your ideas. That said, it is not clear that penalizing confused talk is a solution to this problem.
At least some people were able to understand, though. This led to a sort of social division where some people were much more willing/able to talk about certain social phenomena than others were.