Entangled Truths, Contagious Lies
One of your very early philosophers came to the conclusion that a fully competent mind, from a study of one fact or artifact belonging to any given universe, could construct or visualize that universe, from the instant of its creation to its ultimate end . . .
—First Lensman
If any one of you will concentrate upon one single fact, or small object, such as a pebble or the seed of a plant or other creature, for as short a period of time as one hundred of your years, you will begin to perceive its truth.
—Gray Lensman
I am reasonably sure that a single pebble, taken from a beach of our own Earth, does not specify the continents and countries, politics and people of this Earth. Other planets in space and time, other Everett branches, would generate the same pebble.
On the other hand, the identity of a single pebble would seem to include our laws of physics. In that sense the entirety of our Universe—all the Everett branches—would be implied by the pebble.¹
From the study of that single pebble you could see the laws of physics and all they imply. Thinking about those laws of physics, you can see that planets will form, and you can guess that the pebble came from such a planet. The internal crystals and molecular formations of the pebble developed under gravity, which tells you something about the planet’s mass; the mix of elements in the pebble tells you something about the planet’s formation.
I am not a geologist, so I don’t know to which mysteries geologists are privy. But I find it very easy to imagine showing a geologist a pebble, and saying, “This pebble came from a beach at Half Moon Bay,” and the geologist immediately says, “I’m confused,” or even, “You liar.” Maybe it’s the wrong kind of rock, or the pebble isn’t worn enough to be from a beach—I don’t know pebbles well enough to guess the linkages and signatures by which I might be caught, which is the point.
“Only God can tell a truly plausible lie.” I wonder if there was ever a religion that developed this as a proverb? I would (falsifiably) guess not: it’s a rationalist sentiment, even if you cast it in theological metaphor. Saying “everything is interconnected to everything else, because God made the whole world and sustains it” may generate some nice warm ’n’ fuzzy feelings during the sermon, but it doesn’t get you very far when it comes to assigning pebbles to beaches.
A penny on Earth exerts a gravitational acceleration on the Moon of around 4.5 × 10⁻³¹ m/s², so in one sense it’s not too far wrong to say that every event is entangled with its whole past light cone. And since inferences can propagate backward and forward through causal networks, epistemic entanglements can easily cross the borders of light cones. But I wouldn’t want to be the forensic astronomer who had to look at the Moon and figure out whether the penny landed heads or tails—the influence is far less than quantum uncertainty and thermal noise.
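A back-of-the-envelope check of that order of magnitude, assuming a roughly 2.5 g penny and the mean Earth–Moon distance (the exact figure depends on which mass and distance you plug in):

```python
# Newtonian sanity check: the gravitational acceleration a penny on Earth
# exerts on the Moon, via a = G * m / r^2.
# Assumed figures (not from the essay): penny mass ~2.5 g,
# mean Earth-Moon distance ~3.844e8 m.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
m_penny = 2.5e-3    # kg
r = 3.844e8         # m

a = G * m_penny / r**2
print(f"{a:.2e} m/s^2")  # on the order of 1e-30 m/s^2: negligible, but not zero
```

With these assumptions the result lands near 10⁻³⁰ m/s², the same ballpark as the essay's figure; either way it is swamped by thermal noise and quantum uncertainty.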
If you said, “Everything is entangled with something else,” or, “Everything is inferentially entangled and some entanglements are much stronger than others,” you might be really wise instead of just Deeply Wise.
Physically, each event is in some sense the sum of its whole past light cone, without borders or boundaries. But the list of noticeable entanglements is much shorter, and it gives you something like a network. This high-level regularity is what I refer to when I talk about the Great Web of Causality.
I use these Capitalized Letters somewhat tongue-in-cheek, perhaps; but if anything at all is worth Capitalized Letters, surely the Great Web of Causality makes the list.
“Oh what a tangled web we weave, when first we practise to deceive,” said Sir Walter Scott. Not all lies spin out of control—we don’t live in so righteous a universe. But it does occasionally happen that someone lies about a fact, and then has to lie about an entangled fact, and then another fact entangled with that one:
“Where were you?”
“Oh, I was on a business trip.”
“What was the business trip about?”
“I can’t tell you that; it’s proprietary negotiations with a major client.”
“Oh—they’re letting you in on those? Good news! I should call your boss to thank him for adding you.”
“Sorry—he’s not in the office right now . . .”
Human beings, who are not gods, often fail to imagine all the facts they would need to distort to tell a truly plausible lie. “God made me pregnant” sounded a tad more likely in the old days before our models of the world contained (quotations of) Y chromosomes. Many similar lies, today, may blow up when genetic testing becomes more common. Rapists have been convicted, and false accusers exposed, years later, based on evidence they didn’t realize they could leave. A student of evolutionary biology can see the design signature of natural selection on every wolf that chases a rabbit; and every rabbit that runs away; and every bee that stings instead of broadcasting a polite warning—but the deceptions of creationists sound plausible to them, I’m sure.
Not all lies are uncovered, not all liars are punished; we don’t live in that righteous a universe. But not all lies are as safe as their liars believe. How many sins would become known to a Bayesian superintelligence, I wonder, if it did a (non-destructive?) nanotechnological scan of the Earth? At minimum, all the lies of which any evidence still exists in any brain. Some such lies may become known sooner than that, if the neuroscientists ever succeed in building a really good lie detector via neuroimaging. Paul Ekman (a pioneer in the study of tiny facial muscle movements) could probably read off a sizeable fraction of the world’s lies right now, given a chance.
Not all lies are uncovered, not all liars are punished. But the Great Web is very commonly underestimated. Just the knowledge that humans have already accumulated would take many human lifetimes to learn. Anyone who thinks that a non-God can tell a perfect lie, risk-free, is underestimating the tangledness of the Great Web.
Is honesty the best policy? I don’t know if I’d go that far: Even on my ethics, it’s sometimes okay to shut up. But compared to outright lies, either honesty or silence involves less exposure to recursively propagating risks you don’t know you’re taking.
¹ Assuming, as seems likely, there are no truly free variables.
It’s amazing how many lies go undetected because people simply don’t care. I can’t tell a lie to fool God, but I can certainly achieve my aims by telling even blatant, obvious lies to human beings, who rarely bother trying to sort out the lies and when they do aren’t very good at it.
It sounds to me like you’re overreaching for a pragmatic reason not to lie, when you either need to admit that honesty is an end in itself or admit that lies are useful.
Honesty is an end in itself, but because the benefits involve unknown unknowns and black-swan bets, they are underrated.
And this doesn’t sound like a teleological argument to yourself?
Can someone expand on this? I don’t understand how Eliezer’s comment was a teleological argument.
I think you enormously over-state the difficulty of lying well, as well as the advantages of honesty.
I agree with Nominull, a good number of lies are undetectable without having access to some sort of lie detector or the agent’s source code. If an AI wanted to lie “my recursive modification of my goal systems hasn’t led me to accept a goal that involves eventually destroying all human life” I don’t see any way we could bust that lie via the ‘Web’ until the AI was actively pursuing that goal. I value honesty not for the trouble it saves me but because I find (sometimes only hope) that the real world free of distortion is more interesting than any misrepresentation humans can conjure for selfish means.
@”Oh what a tangled web we weave, when first we practise to deceive,” said Shakespeare.
Hopefully, the FAI will know that the author was Sir Walter Scott.
“Human beings, who are not gods, often fail to imagine all the facts they would need to distort to tell a truly plausible lie.”
One of my pet hobbies is constructing metaphors for reality which are blatantly, factually wrong, but which share enough of the deep structure of reality to be internally consistent. Suppose that you have good evidence for facts A, B, and C. If you think about A, B, and C, you can deduce facts D, E, F, and so forth. But given how tangled reality is, it’s effectively impossible to come up with a complete list of humanly-deducible facts in advance; there’s always going to be some fact, Q, which you just didn’t think of. Hence, if you map A, B, and C to A’, B’, and C’, use A’, B’, and C’ to deduce Q’, and map Q’ back to Q, how accurate Q is is a good check for how well you understand A, B, and C.
It’s definitely a check, but not a very good check. There are too many in-between facts in this case. It really depends on whether Q is solely dependent on Q′ or whether it depends on a number of other things (Q″, Q‴, …), provided of course that Q″ and Q‴ are not in themselves dependent on A, B, and C.
If a lie is defined as the avoidance of truthfully satisfying interrogative sentences (this includes remaining silent), then it wouldn’t be honest, under request, to withhold details of a referent. But privacy depends on the existence of some unireferents, as opposed to none and to coreferents. If all privacy shouldn’t be abolished, then it isn’t clear that the benefits of honesty as an end in itself are underrated.
Personally, I prefer “Great Romance of Determinism.”
“Other planets in space and time, other Everett branches, would generate the same pebble.”
But not very likely! At least some of them would not. Which tells you something about the Multiverse, if you buy the idea.
A new method of ‘lie detection’ is being perfected using functional near infrared imaging of the prefrontal cortex:
http://www.biomed.drexel.edu/fNIR/Contents/deception/
In this technique the device actually measures whether or not a certain memory is being recalled or is being generated on the spot. For example, if you are interrogating a criminal who denies ever being at a crime scene, and you show them a picture of the scene, you can deduce whether he/she has actually seen it or not by measuring if their brain is recalling some sensory data from memory or newly creating and storing it.
@Retired: Huh, I thought I checked that, but I guess I only checked the text instead of the attribution. Fixed.
Tom, I can’t visualize your technique: example?
It seems doubtful to me that a pebble includes in it the law of gravity in the sense of determining it. The internal structure of the pebble, the reason it stays solid, locations of its atoms in relation to each other, are all due to electromagnetism (and strong/weak interactions inside the nucleus). Gravity is completely dominated by other forces, to such a degree that it seems plausible to me that an essentially indistinguishable pebble could exist in a universe with a very different gravity law (although in absence of planets it might be more difficult to explain its formation).
@Nominull: “I can certainly achieve my aims by telling even blatant, obvious lies to human beings”
You are leaving digital crumb trails that the technology of the present day can follow and the technology of 20 years hence will be able to fluidly integrate into a universal public panopticon / rewind button. I don’t personally bank on keeping any secret at all in that sort of time-frame.
It is in any case a good general heuristic to never do anything that people would still be upset about twenty years later.
So a single pebble probably does not imply our whole Earth. But a single pebble implies a very great deal. From the study of that single pebble you could see the laws of physics and all they imply. Thinking about those laws of physics, you can see that planets will form, and you can guess that the pebble came from such a planet. The internal crystals and molecular formations of the pebble formed under gravity, which tells you something about the planet’s mass; the mix of elements in the pebble tells you something about the planet’s formation.
Call me sceptical about this. We can deduce a lot from a pebble ourselves because we know a lot about our universe, and about our earth.
But are you sure that there are no exotic laws of physics, across all possible universes, that would give rise to the same structure? Or, more simply, with the powers of a god, could you not lie—change the laws of physics and the structure of the universe, until you produce exactly the same pebble in completely different circumstances?
Oh what a tangled web we weave,
when first we practice to deceive.
But: practice makes perfect. Soon, fair youth,
Your lies will seem as pure as truth.
I thought quite hard before I came up with an answer to Sir Walter which rhymed and scanned. The hero of that poem, whose name I cannot remember at the moment, is fair haired. Perhaps it is not also true, but perhaps that is the point.
Oh what a tangled web we weave, when first we practise to deceive. But if we practise for a bit, we tend to get quite good at it.
Doesn’t this depend heavily upon the sensitivity and discrimination of our observing phenomena, as well as whether we examine the pebble as a static, frozen moment or as a phenomenon occurring in time?
For the pebble to truly be completely identical, you might need for it to be embedded in a completely identical cosmos. How small does the difference have to be before it distinguishes one from the other, and do the effects of any one thing on the rest of the cosmos (and vice versa) ever drop to nothing?
No gravity—matter wouldn’t have coalesced. It wouldn’t have become stars, or fused or been caught up in supernovas, and so a pebble would be an unrealized theoretical possibility.
Caledonian, quantum mechanics may limit the sensitivity and discrimination of our observations. Also, if gravity’s so weak on the atomic level in the pebble that its effects would cause a shift in the arrangement of the atoms smaller than the Planck length, it’s not even clear that such a shift exists at all, or what meaning it has.
Julian, I suggested that a very different gravity law might be compatible with the existence of a pebble, not no gravity at all.
In fact, all kinds of things might be different about the laws of physics and the pebble could still exist. E.g. the second Newton’s law could be wrong (look up MOND), which would change the story on galaxies in a big way, but not affect the pebble at all.
It seems plausible that a small familiar object like a pebble already has all the fundamental physical laws baked into it, so to speak, and that these laws could be deduced from its structure. But it isn’t necessarily true. It’s easy to overestimate how entangled the tangled web is, too.
Nothing is lost; the universe is honest,
Time, like the sea, gives all back in the end,
But only in its own way, on its own conditions:
Empires as grains of sand, forests as coal,
Mountains as pebbles. Be still, be still, I say;
You were never the water, only a wave;
Not substance, but a form substance assumed.

—Elder Olson, 1968
“Every shrub, every tree -
if one has not forgotten
where they were planted -
has beneath the fallen snow
some vestige of its form.”
—Shōtetsu
Flower in the crannied wall,
I pluck you out of the crannies;—
Hold you here, root and all, in my hand,
Little flower—but if I could understand
What you are, root and all, and all in all,
I should know what God and man is.

—Tennyson
I dare not confess that, lest I should compare with him in excellence; but, to know a man well, were to know himself.
“ ‘God made me pregnant’ sounded a tad more likely in the old days before our models of the world contained (quotations of) Y chromosomes.”
I don’t know about that; the whole point about the “virgin birth” was that it was miraculous, i.e. physically impossible. Had they known about DNA, the story would have included God creating some DNA for “his” side of the deal. Saying that knowledge of DNA would have made the virgin birth less believable is like saying greater knowledge of classical physics would have made people more skeptical of Jesus walking on water. Impossible == Impossible.
“So wait, that means … Samson the TallDarkHandsome Bard is God!” *worships*
What’s a light cone?
A future light cone is the part of space-time that can be affected by our actions in the present. Its boundaries are defined by the speed of light. If you imagine the Universe as having only two dimensions in space, then the area of space that you can affect 5 years in the future is a circle with a radius of 5 light-years; if you drew many such circles at different points in time, they would look like a cone. To affect a point in space outside your future light cone, you would have to send out some kind of order or projectile or information faster than the speed of light, and current physics says that this is impossible.
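The geometry in that description can be made concrete with a toy membership test, in the flattened two-space-dimension picture, using units of years and light-years so that c = 1 (the function name is my own):

```python
# Whether a spacetime event (t, x, y) lies inside the future light cone of
# the origin event (0, 0, 0), in a 2+1-dimensional spacetime with c = 1
# (e.g. time in years, distances in light-years).
import math

def in_future_light_cone(t, x, y):
    """True if a signal from the origin, travelling no faster than light,
    could reach the spatial point (x, y) by time t."""
    return t >= 0 and math.hypot(x, y) <= t

# Five years from now, you can affect any point within 5 light-years:
print(in_future_light_cone(5, 3, 4))   # distance 5 ly, exactly on the cone: True
print(in_future_light_cone(5, 4, 4))   # distance ~5.66 ly, outside: False
```

The circles of affectable points at successive times are exactly the cross-sections of the cone described above.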
Upvote for people asking simple questions!
There is a way to flawlessly lie, at least for the moment: to lie about what goes on in your mind. Specifically, lie about the motivations for past actions, especially when those motivations were nebulous in the first place and the lie is more plausible than the actual truth.
Lies requiring new lies and having a risk of growing out of control is indeed a very fundamental reason for which “don’t lie” is part of my ethics. But it’s not an “absolute” ethical rule like “don’t kill”, “don’t torture” or “don’t use violence against someone you just disagree with”. Because there are many situations in which lying is worth the risk and the “inherent” badness of not following the truth.
When my grandmother hid Jews during the Nazi occupation, and answered “no, there is no one here” to the Gestapo officer asking her, she lied, and she indeed took a great risk—she was risking her life. But she was definitely right to do so. Sure, arguing from WW2 to general ethics is, well… not the wisest. WW2 was an exceptional situation, which justified exceptional means.
But I have a similar example in my own personal life. During the Rwandan genocide (I was then a teenager), my family hid in my home (for a few weeks) a Hutu whose whole family was killed, and who was himself threatened, because his family had been helping the Tutsi to avoid the genocide. This guy was an “illegal alien”, and he could have been legally expelled from France, since, according to the legal authorities, he was a Hutu, and only Tutsi were endangered. To protect him I had to lie: making excuses not to invite friends to my home, for instance (teenagers tend to talk a lot; if the secret started to spread, it would quickly spread out of control, so even my friends were not allowed to know).
Lying is ethically bad, yes. But not near the level of endangering someone’s life, or risking exposing him to torture. Sadly, we live in a world in which we sometimes have to lie to protect.
I find it much more convenient, instead of lying, simply to use ambiguous phrases to plant the false idea in someone else’s mind. The important part is to make the phrase ambiguous in such a way that it can be plausibly interpreted truthfully. Say you don’t want someone to know you went up the stairs; then you say “I didn’t walk up the stairs”, because you in fact ran up the stairs. Even if your lie is found out, this reduces the social cost, since, if you are political enough, you can convince others that you didn’t actually lie. And if you are very good at it, you can tailor the deception so that only a minority of people (which includes the addressee) would interpret it falsely; you can then let the majority construe it as a misunderstanding on the part of the deceived.
So, where do you give a way to select the ethical premises that create the moral system in which honesty and lies are meaningful distinctions?
Counter-argument:
The truth can just as easily “spin out of control” as a lie, if people are sufficiently powerful to create the appearance of a lie. It may sound absurd for a boss to go out of their way to cause an employee to appear to be lying to their spouse, but it does happen, and frighteningly regularly. Humans are masters of perception-manipulation for social gain; it’s been part of the evolutionary landscape we developed in for at least O(million years), and is theorized as one of the reasons for our big brains. A sufficiently constructed lie will make all truth-speakers that disagree with it sound like liars. The assertion that the probability for the truth to spin out of control is greater than the probability for any given lie to spin out of control in any given situation is amenable to evidence—is there some way that we could categorize situations, and then examine their tendencies to spin out of control when told the truth vs. told a lie, such that more specifically accurate theories could be developed?
My own meager evidence has suggested that the truth is more likely to spin out of control than a lie when the truth conflicts with a sufficiently-prepared lie told by a social superior, for example.
The “forensic astronomer” is a dead link, here’s the last version of it on archive.org.
A single pebble contains a lot of atoms, and those atoms interact via gravitational forces with the world around them. Heisenberg’s uncertainty principle might prevent you from knowing everything about Earth from a single pebble, but otherwise you would just have to measure the movement of the atoms in the pebble closely enough.
Many different things can cause similar movements: you could detect something pulling those atoms in one direction, and something else pushing them back at the edge of the pebble closest to Earth’s center of gravity. But you would not know what is causing that pull, only where it is coming from and how strong it is.
I don’t understand the meaning of the sentence “And since inferences can propagate backward and forward through causal networks, epistemic entanglements can easily cross the borders of light cones.”
Suppose I have two cards, A and B, that I shuffle and then blindly place in two spaceships, pointed at opposite ends of the galaxy. If they go quickly enough, it can be the case that they get far enough apart that they will never be able to meet again. But if you’re in one of the spaceships, and turn the card over to learn that it’s card A, then you learn something about the world on the other side of the light cone boundary.
Only if you value unblemished reputation over the short term gain provided by the lie. Fooling some of the people some of the time might be sufficient for an unscrupulous agent.