Epistemic status: Intended as a (half-baked) serious proposal
I’ve been thinking about ways to signal truth value in speech. In our modern society, we have no way to readily tell when a person is being 100% honest; we have to trust that a communicator is being honest, or otherwise verify for ourselves that what they are saying is true. And if I want to tell a joke, speak ironically, or communicate things which aren’t-literally-the-truth-but-point-to-the-truth, my listeners need to deduce this for themselves from the context in which I say something not-literally-true. This means that common knowledge of honesty almost never exists, which significantly slows down the positive effects of Aumann’s Agreement Theorem.
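For reference, the theorem’s core claim, stated informally with my own notation (this sketch is my addition, not part of the original post): two agents with a common prior $P$, each conditioning on their own information partition, cannot “agree to disagree” once their posteriors are common knowledge.

```latex
% Aumann (1976), informal statement: let agents i = 1, 2 share a
% common prior P and condition on private information \mathcal{P}_i(\omega).
q_i = P\bigl(A \mid \mathcal{P}_i(\omega)\bigr), \qquad i \in \{1, 2\}
% If the pair (q_1, q_2) is common knowledge at state \omega, then
q_1 = q_2
```

The relevance here: if listeners cannot establish common knowledge that a stated belief is honest, the theorem’s precondition fails, so exchanging stated beliefs doesn’t force convergence.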
In language, we speak in different registers: different ways of speaking, depending on the context of the speech. The way a salesman speaks to a potential customer is distinct from the way he speaks to his pals over a beer; he uses different registers in these different situations. But registers can also communicate information about the intentions of the speaker: when a speaker is being ironic, he intones his voice in a particular way, to signal to his listeners that he shouldn’t be taken 100% literally.
Two points come to my mind here: one, establishing a register of communication that is reserved for speaking literally true statements; and two, expanding the ability to use registers to communicate not-literally-true intent, particularly in text.
On the first point, a large part of the reason people speaking in a natural register cannot always be assumed to be saying something literally true is that there is no external incentive not to lie. Well, sometimes there are such incentives, but they are often weak, and especially in a society built upon free speech, it is hard to enforce a norm against lying in natural-register speech on a large scale. So my mind imagines a protected register of speech, perhaps copyrighted by some organization (a register that includes unique manners of speech distinctive enough to be eligible for copyright), where that organization vows to take action against anybody who speaks not-literally-true statements (i.e., statements that do not reliably communicate the actual state of the world) in that register. Anybody is free, according to a legally enforceable license, to speak whatever literally-true statements they want in that register, but may not speak non-truths in it, on pain of legal action.
If such a register were created, and reliably enforced, it would help create a society where people could readily trust strangers saying things they are not otherwise inclined to believe, so long as the statement is spoken in the protected register. I think such a society would look different from current society, and would have benefits compared to it. A less strict version could also be implemented by a single platform (perhaps LessWrong?), replacing legal action with the threat of suspension for speaking not-literal-truths in the protected register; I suspect even that would have a non-zero positive effect. It would also probably be cheaper, and on clearer legal footing with respect to speech.
I don’t currently have time to get into details on the second point, but I will highlight a few things. Poe’s law states that even the most extreme parody can be readily mistaken for a serious position. Whereas spoken language can clearly be inflected to indicate ironic intent, humor, or perhaps even not-literally-true-but-pointing-to-the-truth, the carriers of this inflection are not replicated in written language. Written language, which the internet is largely built upon, therefore lacks the richness of registers that allows a clear distinction between extreme-but-serious positions and humor. There are attempts to inflect writing in ways that provide this richness, but as far as I know, there is no widely understood standard that actually accomplishes this. This is worth exploring in the future. Finally, I think it is worthwhile to spend time reflecting on intentionally creating more registers that are explicitly intended to communicate varying levels of seriousness and intent.
> most extreme parody can be readily mistaken for a serious position
I may be doing just that by replying seriously. If this was intended as a “modest proposal”, good on you, but you probably should have included some penalty for being caught, like surgery to remove the truth-register.
Humans have been practicing lying for about a million years. We’re _VERY_ good at difficult-to-legislate communication and misleading speech that’s not unambiguously a lie.
Until you can get to a simple (simple enough for cheap enforcement) detection of lies, an outside enforcement is probably not feasible. And if you CAN detect it, the enforcement isn’t necessary. If people really wanted to punish lying, this regime would be unnecessary—just directly punish lying based on context/medium, not caring about tone of voice.
I assure you this is meant seriously.

> Until you can get to a simple (simple enough for cheap enforcement) detection of lies, an outside enforcement is probably not feasible.
There’s plenty of blatant lying out there in the real world that would be easily detectable by a person with access to reliable sources and their head screwed on straight. One important facet of my model of this proposal, not explicitly mentioned in the shortform, is that validating statements is relatively cheap, but still expensive enough that it is infeasible for every single person to validate every single sentence they hear. Having a central arbiter of truth that enforces honesty lets one person do the heavy lifting, saving a million people from each individually doing the same task.
> If people wanted to punish lying this regime would be unnecessary—just directly punish lying based on context/medium, not caring about tone of voice.
The point of having a protected register (in the general, not platform-specific, case) is that it would be enforceable even when the audience and platform are happy to accept lies. Since the identifiable features of the register would be protected as intellectual property, the organization that owned the IP could take action against violations of that intellectual property, even when there would be no legal basis for punishing violations of norms of honesty.
> The point of having a protected register (in the general, not platform-specific case), is that it would be enforceable even when the audience and platform are happy to accept lies
Oh, I’d taken that as a fanciful example, which didn’t need to be taken literally for the main point, which I thought was detecting and prosecuting lies. I don’t think that part of your proposal works: “intellectual property” isn’t an actual law or single concept; it’s an umbrella for trademark, copyright, patent, and a few other regimes, none of which apply to a category of communication as broad as a register or accent.
You probably _CAN_ trademark a phrase or word, perhaps “This statement is endorsed by TruthDetector(TM)”. It has the advantage that it applies in written or spoken media, has no accessibility issues, works for tonal languages, etc. And then prosecute uses that you don’t actually endorse.
Endorsing only true statements is left as an exercise, which I suspect is non-trivial on its own.
I suspect there’s a difference between what I see in my head when I say “protected register” and the image you receive when you hear it. Hopefully I’ll be able to write down a more specific proposal in the future, along with a legal analysis of whether what I envision would actually be enforceable. I’m not a lawyer, but it seems that what I’m thinking of (i.e., the model in my head) shouldn’t be dismissed out of hand (although I think you are correct to dismiss the version you envision I intended).