When conscious intent is selectively discouraged more than unconscious intent, the result is rule by unconscious intent. Those who can conveniently forget, who can maintain narcissistic fantasies, who can avoid introspection, who can be ruled by emotions with hidden causes, will be the only ones able to deceive (or otherwise to violate norms) blamelessly.
Only a subset of lies may be detected by any given justice process, but “conscious/unconscious” does not correspond to the boundary of such a subset. In fact, due to the flexibility and mystery of mental architecture, such a split is incredibly hard to pin down by any precise theory.
“Your honor, I know I told the customer that the chemical I sold to them would cure their disease, and it didn’t, and I had enough information to know that, but you see, I wasn’t conscious that it wouldn’t cure their disease, as I was selling it to them, so it isn’t really fraud” would not fly in any court that is even seriously pretending to be executing justice.
When conscious intent is selectively discouraged more than unconscious intent, the result is rule by unconscious intent. Those who can conveniently forget, who can maintain narcissistic fantasies, who can avoid introspection, who can be ruled by emotions with hidden causes, will be the only ones able to deceive (or otherwise to violate norms) blamelessly.
Conscious intent being selectively discouraged more than unconscious intent does not logically imply that unconscious intent to deceive will be blameless or “free from or not deserving blame”, only that it will be blamed less.
(I think you may have an unconscious motivation to commit this logical error in order to further your side of the argument. Normally I wouldn’t say this out loud, or in public, but you seem to be proposing a norm where people do state such beliefs freely. Is that right? And do you think this instance also falls under “lying”?)
I think conscious intent being selectively discouraged more than unconscious intent can make sense for several reasons:
Someone deceiving with conscious intent can apply more compute / intelligence and other resources for optimizing and maintaining the lie, which means the deception can be much bigger and more consequential, thereby causing greater damage to others.
Deceiving with conscious intent implies that the person endorses lying in that situation which means you probably need to do something substantially different to dissuade that person from lying in a similar situation in the future, compared to someone deceiving with unconscious intent. In the latter case, it might suffice to diplomatically (e.g., privately) bring up the issue to that person’s conscious awareness, so they can consciously override their unconscious motivations.
Conscious lies tend to be harder to detect (due to more optimizing power applied towards creating the appearance of truth). Economics research into optimal punishment suggests that (all else equal) crimes that are harder to detect should be punished more (see the sketch just after this list).
Unconscious deception is hard to distinguish from innocent mistakes. If you try to punish what you think are cases of unconscious deception, you’ll end up making a lot of people feel like they were punished unfairly, either because they’re truly innocent, or because they’re not consciously aware of any deceptive intent and therefore think they’re innocent. You inevitably make a lot of enemies, either of yourself personally or of the norm you’re proposing.
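A minimal sketch of the standard reasoning behind point 3, following Becker’s (1968) deterrence model and assuming a risk-neutral deceiver (the variables b, p, and F are illustrative, not from the comment above):

```latex
% Deterrence sketch (Becker 1968), assuming a risk-neutral deceiver.
% b = benefit of a successful lie (illustrative variable),
% p = probability the lie is detected and sanctioned,
% F = size of the sanction if detected.
% Lying is worthwhile exactly when the benefit exceeds the
% expected penalty:
\[
  \text{lie worthwhile} \iff b > pF .
\]
% To hold a fixed deterrence threshold b^* as detectability varies,
% the sanction must satisfy
\[
  pF = b^{*} \quad\Longrightarrow\quad F = \frac{b^{*}}{p} \propto \frac{1}{p},
\]
% so lies that are harder to detect (smaller p) require larger
% sanctions F to sustain the same level of deterrence.
```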
(There are some issues in the way I stated points 1-4 above that I can see but don’t feel like spending more time to fix. I would rather spend my time on other topics but nobody is bringing up these points so I feel like I have to, given how much the parent comment has been upvoted.)
Conscious intent being selectively discouraged more than unconscious intent does not logically imply that unconscious intent to deceive will be blameless or “free from or not deserving blame”, only that it will be blamed less.
Yes, I was speaking imprecisely. A better phrasing is “when only conscious intent is blamed, …”
you seem to be proposing a norm where people do state such beliefs freely. Is that right?
Yes. (I think your opinion is correct in this case)
And do you think this instance also falls under “lying”?
It would fall under hyperbole. I think some but not all hyperboles are lies, and I weakly think this one was.
Regarding the 4 points:
I think 1 is true
2 is generally false (people dissuaded from unconsciously lying once will almost always keep unconsciously lying; not lying to yourself is hard and takes work; and someone who’s consciously lying can also stop lying when called out privately if that’s more convenient)
3 is generally false: people who are consciously lying will often subconsciously give signals that they are lying that others can pick up on (e.g. seeming nervous, taking longer to answer questions), compared to people who subconsciously lie, who usually feel safer, as there is an internal blameless narrative being written constantly.
4 is irrelevant due to the point about conscious/unconscious not being a boundary that can be pinned down by a justice process; if you’re considering this you should mainly think about what the justice process is able to pin down rather than the conscious/unconscious split.
In general I worry more about irrational adversariality than rational adversariality, and I especially worry about pressures towards making people have lower integrity of mind (e.g. pressures to destroy one’s own world-representation). I think someone who worries more about rational adversariality could more reasonably worry more about conscious lying than unconscious lying. (Still, that doesn’t tell them what to do about it; telling people “don’t consciously lie” doesn’t work, since some people will choose not to follow that advice; so a justice procedure is still necessary, and will have issues with pinning down the conscious/unconscious split)
> I think you may have an unconscious motivation to commit this logical error in order to further your side of the argument. Normally I wouldn’t say this out loud, or in public, but you seem to be proposing a norm where people do state such beliefs freely. Is that right?
Yes. (I think your opinion is correct in this case)
Wow. Thanks for saying so explicitly, I wouldn’t have guessed that, and am surprised. How do you imagine that it plays out, or how it properly ought to play out when someone makes an accusation / insinuation of another person like this?
Treat it as a thing that might or might not be true, like other things? Sometimes it’s hard to tell whether it’s true, and in those cases it’s useful to be able to say something like “well, maybe, can’t know for sure”.
I’m trying to understand why this norm seems so crazy to me...
I definitely do something very much like this with people that I’m close with, in private. I was once in a heated multi-person conversation, and politely excused myself and a friend to step into another room. In that context, I then looked the friend in the eye, and said “it seems to me that you’re rationalizing [based on x evidence]. Are you sure you really believe what you’re saying here?”
And friends have sometimes helped me in similar ways, “the things that you’re saying don’t quite add up...”
(Things like this happen more often these days, now that rationalists have imported more Circling norms of sharing feelings and stories. Notably these norms include a big helping of NVC norms: owning your experience as your own, and keeping interpretation separate from observation.)
All things considered, I think this is a pretty radical move. But it seems like it depends a lot on the personal trust between me and the other person. I would feel much less comfortable with that kind of interaction with a random stranger, or in a public space.
Why?
Well for one thing, if I’m having a fight with someone, having someone else question my motivations can cause me to lose ground in the fight. It can be an aggressive move, used to undercut the arguments that one is trying to make.
For another, engaging with a person’s psychological guts like that is intimate, and vulnerable. I am much less likely to be defensive if I trust that the other person is sincerely looking out for my best interests.
I guess I feel like it’s basically not any of your business what’s happening in my mind. If you have an issue with my arguments, you can attack those; those are public. And you are, of course, free to have your own private opinion about my biases, but only the actual mistakes in reasoning that I make are in the common domain for you to correct.
In general, it seems like a bad norm to have “psychological” evidence be admissible in discourse, because it biases the disagreements towards whoever is more charismatic / has more rhetorical skill in pointing out biases, as opposed to the person who is more correct.
The arbital page on Psychoanalyzing is very relevant.
Also, it just doesn’t seem like it helps very much. “I have a hypothesis that you’re rationalizing.” The other party is like, “Ok. Well, I think my position is correct.” and then they go back to the object level (maybe with one of them more defensive). I can’t know what’s happening in your head, so I can’t really call you out on what’s happening there, or enforce norms there. [I would want to think about it more, but I think that might be a crux for me.]
. . .
Now I’m putting those feelings next to my sense of what we should do when one has someone like Gleb Tsipursky in the mix.
I think all of the above still stands. It is inappropriate for me to attack him at the level of his psychology, as opposed to pointing to specific bad actions (including borderline actions), and telling him to stop, and if that fails, telling him that he is no longer welcome here.
This was mostly for my own thinking, but I’d be glad to hear what you think, Jessica.
The concept of “not an argument” seems useful; “you’re rationalizing” isn’t an argument (unless it has evidence accompanying it). (This handles point 1)
I don’t really believe in tabooing discussion of mental states on the basis that they’re private; that seems like being intentionally stupid and blind, and puts a (low) ceiling on how much sense can be made of the world. (Truth is entangled!) Of course it can derail discussions but again, “not an argument”. (Eliezer’s post says it’s “dangerous” without elaborating; that’s basically giving a command rather than a model, which I’m suspicious of)
There’s a legitimate concern about blame/scapegoating but things can be worded to avoid that. (I think Wei did a good job here, noting that the intention is probably subconscious)
With someone like Gleb it’s useful to be able to point out to at least some people (possibly including him) that he’s doing stupid/harmful actions repeatedly in a pattern that suggests optimization. So people can build a model of what’s going on (which HAS to include mental states, since they’re a causally very important part of the universe!) and take appropriate action. If you can’t talk about adversarial optimization pressures you’re probably owned by them (and being owned by them would lead to not feeling safe talking about them).
Someone deceiving with conscious intent can apply more compute / intelligence and other resources for optimizing and maintaining the lie, which means the deception can be much bigger and more consequential, thereby causing greater damage to others.
Unconscious deception is hard to distinguish from innocent mistakes.
Surely someone consciously intending to deceive can apply some of that extra compute to making it harder to distinguish their behavior from an innocent mistake.
“Your honor, I know I told the customer that the chemical I sold to them would cure their disease, and it didn’t, and I had enough information to know that, but you see, I wasn’t conscious that it wouldn’t cure their disease, as I was selling it to them, so it isn’t really fraud” would not fly in any court that is even seriously pretending to be executing justice.
(just to provide the keyword: the relevant legal doctrine here is that the seller “knew or should have known” that the drug wouldn’t cure the disease)
“Your honor, I know I told the customer that the chemical I sold to them would cure their disease, and it didn’t, and I had enough information to know that, but you see, I wasn’t conscious that it wouldn’t cure their disease, as I was selling it to them, so it isn’t really fraud” would not fly in any court that is even seriously pretending to be executing justice.
Yet, oddly, something called ‘criminal intent’ is indeed required in addition to the crime itself.
It seems that ‘criminal intent’ is not interpreted as conscious intent. Rather, the actions of the accused must be incompatible with those of a reasonable person trying to avoid the crime.
Note that criminal intent is *not* required for a civil fraud suit, which could be brought simultaneously with or after a criminal proceeding.
Can you say more about this? I’ve been searching for a while about the differences between civil and criminal fraud, and my best guess (though I am really not sure) is that both have an intentional component. Here for example is an article on intent in the Texas Civil Law code:
https://www.dwlawtx.com/issue-intent-civil-litigation/
[I’m not a lawyer and it’s been a long time since law school. Also apologies for length]
Sorry—I was unclear. All I meant was that civil cases don’t require *criminal intent.* You’re right that they’ll both usually have some intent component, which will vary by the claim and the jurisdiction (which makes it hard to give a simple answer).
---
tl;dr: It’s complicated. Often reckless disregard for the truth or deliberate ignorance is enough to make a fraud case. Sometimes a “negligent misrepresentation” is enough for a civil suit. But overall both criminal and civil cases usually have some kind of intent/reckless indifference/deliberate ignorance requirement. Securities fraud in NY is an important exception.
Also I can’t emphasize enough that there are 50 versions in 50 states, and securities fraud, mail fraud, wire fraud, etc. can all be defined differently in each state.
----
After a quick Google, it looks to me like the criminal and civil standards are usually pretty similar.
It looks like criminal fraud typically (but not always) requires “fraudulent intent” or “knowledge that the fraudulent claim was false.” However, it seems “reckless indifference to the truth” is enough to satisfy this in many jurisdictions.[1]
New York is famous for the Martin Act, which outlaws both criminal and civil securities fraud without having any intent requirement at all.[2] (This is actually quite important because a high percentage of all securities transactions go through New York at some point, so NY gets to use this law to prosecute transactions that occur basically anywhere).
The action most equivalent to civil fraud is misrepresentation of material facts/fraudulent misrepresentation. This seems a bit more likely than criminal law to accept “reckless indifference” as a substitute for actually knowing that the relevant claim was false.[3] For example, the Federal False Claims Act makes you liable if you display “deliberate ignorance” or “reckless disregard of the truth” even if you don’t knowingly make a false claim.[4]
However, in at least some jurisdictions you can bring a civil claim for negligent misrepresentation of material facts, which seems to basically amount to fraud but with a negligence standard, not an intent standard.[5]
P.S. Note that we seem to be discussing the aspect of “intent” pertaining to whether the defendant knew the relevant statement was false. There’s also often a required intent to deceive or harm in both the criminal and civil context (I’d guess the requirement is a bit weaker in civil law).
------
[1] “Fraudulent intent is shown if a representation is made with reckless indifference to its truth or falsity.” https://www.justice.gov/jm/criminal-resource-manual-949-proof-fraudulent-intent
[2] “Notably, in order to secure a conviction, the state is not required to prove scienter (except in connection with felonies) or an actual purchase or sale or damages resulting from the fraud.”
***
“In 1926, the New York Court of Appeals held in People v. Federated Radio Corp. that proof of fraudulent intent was unnecessary for prosecution under the Act. In 1930, the court elaborated that the Act should “be liberally and sympathetically construed in order that its beneficial purpose may, so far as possible, be attained.””
https://en.wikipedia.org/wiki/Martin_Act#Investigative_Powers
[3] “In some instances, particularly those involving civil actions for fraud and securities cases, the intent requirement is met if the prosecution or plaintiff is able to show that the false statements were made recklessly—that is, with complete disregard for truth or falsity.”
[4] https://en.wikipedia.org/wiki/False_Claims_Act#1986_changes
[5] “Although a misrepresentation fraud case may not be based on negligent or accidental misrepresentations, in some instances a civil action may be filed for negligent misrepresentation. This tort action is appropriate if a defendant suffered a loss because of the carelessness or negligence of another party upon which the defendant was entitled to rely. Examples would be negligent false statements to a prospective purchaser regarding the value of a closely held company’s stock or the accuracy of its financial statements.” https://www.acfe.com/uploadedFiles/Shared_Content/Products/Self-Study_CPE/Fraud-Trial-2011-Chapter-Excerpt.pdf
Thank you, this was a good clarification and really helpful!
I feel torn because I agree that unconscious intent is incredibly important to straighten out, but also think
1. everyone else is relatively decent at blaming people for their poor intent in the meantime (though there are some cases I’d like to see people catch onto faster), and
2. this is mostly between the person and themselves.
It seems like you’re advocating for people to be publicly shamed more for their unconscious bad intentions. That seems both super bad for the social fabric (and witch-hunt-permitting) and, imo, unlikely to add much capacity to change, due to point (2); the goal would be much better accomplished by a culture of forgiveness such that the elephant lets people look at it. Are there parts of this you strongly disagree with?
I’m not in favor of shaming people. I’m strongly in favor of forgiveness. Justice in the current context requires forgiveness because of how thoroughly the forces of deception have prevailed, and how motivated people are to extend coverups to avoid punishment. Law fought fraud, and fraud won.
It’s important to be very clear on what actually happened (incl. about violations), AND to avoid punishing people. Truth and reconciliation.
Justice in the current context requires forgiveness because of how thoroughly the forces of deception have prevailed, and how motivated people are to extend coverups to avoid punishment. Law fought fraud, and fraud won.
This seems really important for understanding where you’re at, and I don’t get it yet.
I would love a concrete example of people being motivated to extend coverups to avoid punishment.
Do you have writings I should read?
Jeffrey Epstein
It’s important to be very clear on what actually happened (incl. about violations), AND to avoid punishing people. Truth and reconciliation.
I think this is a very much underrated avenue to improve lots of things. I’m a little sad at the thought that neither is likely without the looming threat of possible punishment.
This is an excellent point. The more relevant boundary seems like the one we usually refer to with the phrase “should have known”—and indeed this is more or less the notion that the courts use.
The question, then, is: do we have a satisfying account of “should have known”? If so: can we describe it sensibly and concisely? If not: can we formulate one?
I roughly agree with this being the most promising direction. In my mind the problem isn’t “did so-and-so lie, or rationalize?”; the question is “was so-and-so demonstrably epistemically negligent?”. If so, and if you can fairly apply disincentives (or positive incentives for being epistemically non-negligent), then the first question just doesn’t matter.
In actual law, we have particular rules about what people are expected to know. It is possible we could construct such rules for LessWrong and/or the surrounding ecosystems, but I think doing so is legitimately challenging.
I disagree that answering the first question doesn’t matter—that’s a very extreme “mistake theory” lens.
If someone is actively adversarial vs. biased but open to learning, that changes quite a bit about how leaders and others in the community should approach the situation.
I do agree that it’s important to have the “are they actively adversarial” hypothesis and corresponding language. (This is why I’ve generally argued against the conflation of lying and rationalization).
But I also think, at least in most of the disagreements and conflicts I’ve seen so far, much of the problem has had more to do with rationalization (or, in some cases, different expectations of how much effort to put into intellectual integrity).
I think there is also an undercurrent of genuine conflict (as people jockey for money/status) that manifests primarily through rationalization, and in some cases duplicity.*
*where the issue is less about people lying but is about them semi-consciously presenting different faces to different people.
Indeed, I agree that it would be more challenging for us, and I have some thoughts about why that would be and how to mitigate it. That said, I think the most productive and actionable way to make progress on this is to look into the relevant legal standards: what standards are applied in criminal proceedings (in the U.S.? elsewhere?) to “should have known”? to cases of civil liability? contract law? corporate law? etc. By looking at what constraints these sorts of situations place on people, and what epistemic obligations are assumed, we can get some insight into how our needs might be similar and/or different, compared to those contexts, which should give us ideas on how to formulate the relevant norms.
I think we, and others too, are already constructing rules, though not as a single grand taxonomy completed as a single grand project, but piecemeal, e.g. like common law.
There have been recent shifts in ideas about what counts as ‘epistemically negligent’ [and that’s a great phrase by the way!], at least among some groups of people with which I’m familiar. I think the people of this site, and the greater diaspora, have much more stringent standards today in this area.