Thoth Hermes
You write in an extremely fuzzy way that I find hard to understand.
This does. This is a type of criticism that one can’t easily translate into an update to one’s practice. You’re not saying whether I always do this or just did it in this particular spot, nor whether it’s due to my “writing” (i.e., style) or to my actually using confused concepts. Also, it’s usually not the case that anyone is trying to be worse at communicating; that’s why it sounds like a scold.
You have to be careful using blanket statements like “this is false” or “I can’t understand any of this,” as these are inherently difficult to disentangle from moral judgements.
I’m sorry if it was hard to understand; you are always free to ask more specific questions.
To attempt to clarify it a bit more, I’m not trying to say that worse is better. It’s that you can’t consider rules (i.e. yes / no conditionals) to be absolutely indispensable.
It is probably indeed a crux, but I don’t see the reason for needing to scold someone over it.
(That’s against my commenting norms, by the way, which I’ll note you, TAG, and Richard_Kennaway have so far violated, but I am not going to ban anyone over it. I still appreciate getting comments on my posts at all, and I do hope that everyone keeps participating. In the olden days, it was Lumifer who used to come and do the same thing.)
I expect people not to continually mix up critique with scorn; please keep those two things separate as much as possible, and apply the latter only with solid justification.
You can see that, yes, one of the points I am trying to make is that an assertion of, or insistence on, consistency seems to generally make things worse. That itself isn’t that controversial, but what I’d like to do here is find better ways to articulate what the alternatives to it might be.
It’s true that one of the main implications of the post is that imprecision is not enough to kill us (but that precision is still a desirable thing). We don’t have rules that are simply tautologies or simply false anymore.
At least we’re not physicists. They have to deal with things like negative probability, and I’m not even anywhere close to that yet.
First, a question: am I correct in understanding that when you write ~(A and ~A), the first ~ is a typo and you meant to write A and ~A (without the first ~)? Because ~(A and ~A) is a tautology and thus maps to true rather than to false.
I thought of this shortly before you posted this response, and I think that we are probably still okay (even though strictly speaking yes, there was a typo).
Normally we have that ~A means: ~A --> A --> False. However, remember that I am now saying that we can no longer say that “~A” means that “A is False.”
So I wrote:
~(A and ~A) --> A or ~A or (A and ~A)
And it could / should have been:
~(A and ~A) --> (A and ~A) --> False (can omit)[1] or A or ~A or (A and ~A).
So, because of False now being something that an operator “bounces off of”, technically, we can kind of shorten those formulas.
Of course this sort of proof doesn’t capture the paradoxicalness that you are aiming to capture. But in order for the proof to be invalid, you’d have to invalidate one of the two rules it relies on, both of which seem really fundamental to logic. I mean, what do the operators “and” and “or” even mean, if they don’t validate this?
Well, I’d have to conclude that we no longer consider any rules indispensable, per se. However, I do think “and” and “or” are more indispensable and map to “not not” (two negations) and one negation, respectively.
[1] False can be re-added if we were to decide, for example, that whatever we just wrote was wrong and we needed to exit the chain there and restart. However, I don’t usually prefer that option.
Well, to use your “real world” example, isn’t that just the definition of a manifold (a space that when zoomed in far enough, looks flat)?
I think it satisfies the either-or-”mysterious third thing” formulae.
~(Earth flat and earth ~flat) --> Earth flat (zoomed in) or earth spherical (zoomed out) or (earth more flat-ish the more zoomed in and vice-versa).
So suppose I have ~(A and ~A). Rather than have this map to False, I say that “False” is an object that you always bounce off of; it causes you to reverse course, in the following way:
~(A and ~A) --> False --> A or ~A or (some mysterious third thing). What is this mysterious third thing? Well, if you insist that A and ~A is possible, then it must be an admixture of these two things, but you’d need to show me what it is for that to be allowed. In other words:
~(A and ~A) --> A or ~A or (A and ~A).
What this statement means in semantic terms is: Suppose you give me a contradiction. Rather than simply try really hard to believe it, or throw everything away entirely, I have a choice between believing A, believing ~A, or believing a synthesis between these two things.
The most important feature of this construction is that I am no longer faced with simply concluding “false” and throwing it all away.
Two examples:
Suppose we have the statement 1 = 2.[1] In most default contexts, this statement simply maps to “false,” because it is assumed that this statement is an assertion that the two symbols to the left and right of the equals sign are indistinguishable from one another.
But what I’m arguing is that “False” is not the end-all, be-all of what this statement can or will be said to mean in all possible universes forever unto eternity. “False” is one possible meaning which is also valid, but it cannot be the only thing that this means.
So, using our formula from above:
1 = 2 -->[2] 1 or 2 or (1 and 2). So if you tell me “1 = 2”, in return I tell you that you can have either 1, or 2, or some mysterious third thing which is somehow both 1 and 2 at the same time.
So you propose to me that (1 and 2) might mean something like 2 × (1⁄2), that is, two halves, which mysteriously are somehow both 1 and 2 at the same time when put together. Great! We’ve invented the concept of 1⁄2.
Second example:
We don’t know if A is T (and thus ~A is F) or vice-versa. Therefore we do not know whether the pair (A, ~A) is (T, F) or (F, T). Somehow, it’s got to be mysteriously both of these at the same time. And it’s totally fine if you don’t get what I’m about to say, because I haven’t really written it anywhere else yet, but this seems to produce two operators, call them “S” (for swap) and “2” (for two), each the dual of the other.
S is the Swaperator, and 2 is the Two...perator. These also buy you the concept of 1⁄2. But all that deserves more spelling out; I was just excited to bring it up.
[1] It is arguably appropriate to use 1 == 2 as well, but I want to show that a single equals sign “=” is open to more interpretations because it is more basic. It also has a slightly different meaning, which is that the symbols 1 and 2 are swappable with one another.
[2] You could possibly say “--> False or 1 or 2 or …” too, but then you’d probably not select False from those options, so I think it’s okay to omit it.
I give only maybe a 50% chance that any of the following adequately addresses your concern.
I think the succinct answer to your question is that it only matters if you happened to give me, e.g., a “2” (or anything else) and you asked me what it was and gave me your {0,1} set. In other words, you lose the ability to prove that 2 is 1 because it’s not 0, but I’m not that worried about that.
It appears to be commonly said (see the last paragraph of “Mathematical Constructivism”) that proof assistants like Agda or Coq rely on not assuming LoEM. I think this is because proof assistants rely on the principle of “you can’t prove something false, only true.” Theorems are the [return] types of proofs, and the “False” theorem has no inhabitants (proofs).
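A minimal sketch of that last point in Lean 4 syntax (my own illustration; the names MyFalse and myNot are made up, and Agda or Coq would express the same idea):

```lean
-- False is an inductive proposition with no constructors, so it has no
-- inhabitants: there is no way to construct a proof of it.
inductive MyFalse : Prop

-- Negation is then just "implies False": a proof of "not A" is a function
-- that would turn any proof of A into a proof of False.
def myNot (A : Prop) : Prop := A → MyFalse
```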
The law of the excluded middle also seems to me like an insistence that certain questions (like paradoxes) actually remain unanswered.
That’s an argument that it might not be true at all, rather than simply partially true or only not true in weird, esoteric logics.
Besides the one use-case for the paradoxical market “Will this market resolve to no?”, which resolves to 1⁄2 (I expect), there may also be:
Start with two-valued logic and negation, as well as a two-member set, e.g., {blue, yellow}. I suppose we could also include a middle value. So including the excluded middle might make this set no longer closed under negation: ~blue = yellow, and ~yellow = blue, but what about green, which is neither blue nor yellow, but somehow both, mysteriously? Additionally, we might not be able to say for sure that it is neither blue nor yellow, as there are greens which are close to blue and look bluish, or close to yellow and look yellowish. You can also imagine the pixels in a green square actually being tiled blue next to yellow next to blue, etc., or simply being green pixels; each seems to produce the same effect viewed from far away.
So a statement like “x = blue” evaluates to true in an ordinary two-valued logic if x = blue, and false otherwise. But in a {0, 1⁄2, 1} logic, that statement evaluates to 1⁄2 if x is green, for example.
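A minimal sketch of that evaluation rule (my own toy example; the dictionary of truth values and the function names are made up):

```python
# Truth values for the statement "x = blue" in a {0, 1/2, 1} logic:
# fully true for blue, fully false for yellow, the middle value for green.
TRUTH_OF_IS_BLUE = {"blue": 1.0, "yellow": 0.0, "green": 0.5}

def is_blue(x: str) -> float:
    """Truth value of the statement 'x = blue'."""
    return TRUTH_OF_IS_BLUE.get(x, 0.0)

def neg(v: float) -> float:
    """Negation sends 1 -> 0 and 0 -> 1, and leaves the middle value fixed."""
    return 1.0 - v

assert is_blue("blue") == 1.0
assert is_blue("yellow") == 0.0
assert is_blue("green") == 0.5
assert neg(is_blue("green")) == 0.5  # green is "not blue" only to degree 1/2
```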
I really don’t think I can accept this objection. They are clearly considered both of these, most of the time.
I would really prefer that if you really want to find something to have a problem with, first it’s got to be true, then it’s got to be meaningful.
I created this self-referential market on Manifold to test the prediction that the truth-value of such a paradox is in fact 1⁄2. Very few participated, but I think it should always resolve to around 50%. Rather than say such paradoxes are meaningless, I think they can be meaningfully assigned a truth-value of 1⁄2.
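A small sketch of why 1⁄2 is the self-consistent value (this framing is mine, not something stated on the market page): the market asks “Will this market resolve NO?”, so its truth value v has to equal the truth value of its own negation, i.e. v = 1 − v, whose only solution is 1⁄2.

```python
# Damped fixed-point iteration toward the self-referential constraint v = 1 - v.
v = 0.9  # start from an arbitrary belief
for _ in range(100):
    v = 0.7 * v + 0.3 * (1.0 - v)  # move partway toward satisfying v = 1 - v
print(round(v, 3))  # 0.5
```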
what I think is “of course there are strong and weak beliefs!” but true and false is only defined relative to who is asking and why (in some cases), so you need to consider the context in which you’re applying LoEM.
Like in my comment to Richard_Kennaway about probability, I am not just talking about beliefs, but about what is. Do we take it as an axiom or a theorem that A or ~A? Likewise for ~(A and ~A)? I admit to being confused about this. Also, does “A” mean the same thing as “A = True”? Does “~A” mean the same thing as “A = False”? If so, in what sense do we say that A literally equals True / False, respectively? Which things are axioms and which things are theorems, here? All of that confuses me.
Since we are often permitted to change our axioms and arrive at systems we either like or don’t like, or like better than others, I think it’s relevant to ask about our choice of axioms and whether or not logic is or should be considered a set of “pre-axioms.”
It seemed like tailcalled was implying that the law of non-contradiction was a theorem, and I’m confused about that as well. Under which axioms?
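For what it’s worth, here is how one common formalization sorts these (a Lean 4 sketch, offered only as a reference point rather than the final word on which axioms are “right”):

```lean
-- Non-contradiction is a theorem: ¬(A ∧ ¬A) unfolds to (A ∧ ¬A) → False,
-- and h.2 h.1 applies the proof of ¬A to the proof of A.
example (A : Prop) : ¬ (A ∧ ¬ A) := fun h => h.2 h.1

-- Excluded middle is not derivable in the underlying intuitionistic logic;
-- it is brought in as an axiom (here via Lean's classical axioms).
example (A : Prop) : A ∨ ¬ A := Classical.em A
```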
If I decide that ~(A and ~A) is not an axiom, then I can potentially have A and ~A be either true or at least not false. Then we would need some other arguments to support that choice. Without absolute truth and absolute falsehood, we’d have to move back to the concept of “we like it better or worse,” which would make the latter more fundamental. Does allowing A and ~A to mean something get us any utility?
In order for it to get us any utility, there would have to be things that we’d agree were validly described by A and ~A.
Admittedly, it does seem like these or’s and and’s and =’s keep appearing regardless of my choices, here (because I need them for the concept of choice).
In a quasi-philosophical and quasi-logical post I have not posted to LessWrong yet, I argue that negation seems likely to be the most fundamental thing to me (besides the concept of “exists / is”, which is what “true” means). “False” is thus not quite the same thing as negation, and instead means something more like “nonsense gibberish” which is actually far stronger than negation.
A succinct way of putting this would be to ask: if I were to swap the phrase “law of the excluded middle” in the piece for the phrase “principle of bivalence,” how much would its meaning change, and how much would its overall correctness change?
Additionally, suppose I changed the phrases in just “the correct spots.” Does the whole piece still retain any coherence?
If there are propositions or axioms that imply each other fairly easily under common contextual assumptions, then I think it’s reasonable to consider it not-quite-a-mistake to use the same name for such propositions.
One of the things I’m arguing is that I’m not convinced that imprecision is enough to render a work “false.”
Are you convinced those mistakes are enough to render this piece false or incoherent?
That’s a relevant question to the whole point of the post, too.
Indeed. (You don’t need to link the main wiki entry, thanks.)
There’s some subtlety, though: either P might be true or not-P, and p(P) expresses belief that P is true. So I think probability merely implies that the LoEM might be unnecessary, but it itself pretty much assumes it.
It is sometimes, but not always, the case that p(P) = 0.5 resolves to P being “half-true” once observed. It can also mean that P resolves to true half the time, or just that we only know that it might be true with 0.5 certainty (the default meaning).
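A toy illustration of those three readings (my own example; the coin scenario and variable names are made up):

```python
import random

# Reading 1 (default): epistemic uncertainty. P has a definite truth value
# already; we assign 0.5 credence only because we don't know which it is.
coin_is_heads = random.random() < 0.5  # a fixed fact, unknown to the bettor
credence = 0.5

# Reading 2: frequency. P resolves true on about half of repeated trials.
trials = [random.random() < 0.5 for _ in range(10_000)]
frequency = sum(trials) / len(trials)  # approximately 0.5

# Reading 3: "half-true" once observed, i.e. the observed value itself is the
# middle value of a {0, 1/2, 1} logic (as with the self-referential market).
observed_truth_value = 0.5

print(credence, round(frequency, 2), observed_truth_value)
```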
The issue that I’m primarily talking about is not so much the way that errors are handled; it’s more about the way of deciding what constitutes an exception to a general rule, as Google defines the word “exception”:
a person or thing that is excluded from a general statement or does not follow a rule.
In other words, does everything need a rule to be applied to it? Does every rule need there to be some set of objects under which the rule is applied that lie on one side of the rule rather than the other (namely, the smaller side)?
As soon as we step outside of binary rules, we are in Case-when-land where each category of objects is treated with a part of the automation that is expected to continue. There is no longer a “does not follow” sense of the rule. The negation there is the part doing the work that I take issue with.
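A small sketch of the contrast I mean (my own hypothetical example; the file categories and function names are made up):

```python
def binary_rule(filename: str) -> str:
    # One rule plus a negated "does not follow" side: everything is either
    # covered by the rule or lumped together as an exception.
    if filename.endswith(".txt"):
        return "archive"
    return "exception: does not follow the rule"

def case_when(filename: str) -> str:
    # Each category gets its own branch of the automation, which is expected
    # to continue; no single negation is doing the work of a catch-all.
    match filename.rsplit(".", 1)[-1]:
        case "txt":
            return "archive"
        case "csv":
            return "load into the spreadsheet pipeline"
        case "log":
            return "rotate and compress"
        case _:
            return "route to a new branch (extend the cases)"
```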
Raemon’s comment below indicates mostly what I meant by:
It seems from talking to the mods here and reading a few of their comments on this topic that they tend to lean towards them being harmful on average and thus needing to be pushed down a bit.
Furthermore, I think the mods’ stance on this is based primarily on Yudkowsky’s piece here. I think the relevant portion of that piece is this (emphases mine):
But into this garden comes a fool, and the level of discussion drops a little—or more than a little, if the fool is very prolific in their posting. (It is worse if the fool is just articulate enough that the former inhabitants of the garden feel obliged to respond, and correct misapprehensions—for then the fool dominates conversations.)
So the garden is tainted now, and it is less fun to play in; the old inhabitants, already invested there, will stay, but they are that much less likely to attract new blood. Or if there are new members, their quality also has gone down.
Then another fool joins, and the two fools begin talking to each other, and at that point some of the old members, those with the highest standards and the best opportunities elsewhere, leave...
So, it seems to me that the relevant issues are the following. Being more tolerant of lower-quality discussion will cause:
Higher-quality members’ efforts to be directed toward less fruitful endeavors than they otherwise would be.
Higher-quality existing members to leave the community.
Higher-quality potential members who would otherwise have joined the community, not to.
My previous comment refers primarily to the first bullet point in this list. But “harmful on average” covers all three.
The issue I have the most concern with is the belief that lower-quality members are capable of dominating the environment over higher-quality ones, all else being equal and with all members having roughly the same rights to interact with one another as they see fit.
This mirrors a conversation I was having with someone else recently about Musk’s Twitter / X. They have different beliefs than I do about what happens when you try to implement a system that is inspired by Musk’s ideology. But I encountered an obstacle in this conversation: I said I have always liked using it [Twitter / X], and that it also seems to be slightly more enjoyable to use post-acquisition. He said he did not really enjoy using it, and also that it seems to be less enjoyable to use post-acquisition. Unfortunately, if it comes down to a matter of pure preferences like this, then I am not sure how one ought to proceed with such a debate.
However, there is an empirical observation that one can make comparing environments that use voting systems or rank-based attention mechanisms: It should appear to one as though units of work that feel like more or better effort was applied to create them correlate with higher approval and lower disapproval. If this is not the case, then it is much harder to actually utilize feedback to improve one’s own output incrementally. [1]
On LessWrong, that seems to me to be less the case than it does on Twitter / X. Karma does not seem correlated to my perceptions about my own work quality, whereas impressions and likes on Twitter / X do seem correlated. But this is only one person’s observation, of course. Nonetheless I think it should be treated as useful data.
[1] That being said, it may be that the intention of the voting system matters: upvotes / downvotes here mean “I want to see more of / I want to see less of,” respectively. They aren’t explicitly used to provide helpful feedback, and that may be why they seem uncorrelated with useful signal.
Both views seem symmetric to me:
They were downvoted because they were controversial (and I agree with it / like it).
They were downvoted because they were low-quality (and I disagree with it / dislike it).
Because I can sympathize with both views here, I think we should consider remaining agnostic to which is actually the case.
It seems like the major crux here is whether we think that debates over claim and counter-claim (basically, other cruxes) are likely to be useful or likely to cause harm. It seems from talking to the mods here and reading a few of their comments on this topic that they tend to lean towards them being harmful on average and thus needing to be pushed down a bit.
omnizoid’s issue is not merely one of quality; it is both about quality and about making counter-claims to specific claims that have been dominant on LessWrong for some time.
The most agnostic side of the “top-level” crux that I mentioned above seems to point towards favoring agnosticism here, and furthermore, if we predict debates to be more fruitful than not, then one needn’t be too worried even if one is sure that one side of another crux is truly the lower-quality side of it.
It seems like a big part of this story is mainly about people who have relatively strict preferences kind of aggressively defending their territory and boundaries, and how when you have multiple people like this working together on relatively difficult tasks (like managing the logistics of travel), it creates an engine for lots of potential friction.
Furthermore, when you add the status hierarchy of a typical organization, combined with the social norms that dictate how people’s preferences and rights ought to be respected (and implicit agreements being made about how people have chosen to sacrifice some of those rights for altruism’s sake), you add even more fuel to the aforementioned engine.
I think complaints such as these are probably okay to post, as long as everyone mentioned is afforded the right to update their behavior after enough time has passed to reflect and discuss these things (since actually negotiating what norms are appropriate here might end up being somewhat difficult).
Edit: I want to clarify that when there is a situation in which people have conflicting preferences and boundaries as I described, I do personally feel that those in leadership positions / higher status probably bear the responsibility of satisfying their subordinates’ preferences to their satisfaction, given that the higher status people are having their own higher, longer-term preferences satisfied with the help of their subordinates.
I don’t want to make it seem as though the ones bringing the complaints are as equally responsible for this situation as the ones being complained about.
I think it might actually be better if you just went ahead with a rebuttal, piece by piece, starting with whatever seems most pressing and you have an answer for.
I don’t know if it is all that advantageous to put together a long mega-rebuttal post that counters everything at once.
Then you don’t have that demand nagging at you for a week while you write the perfect presentation of your side of the story.
I think it would be difficult to implement what you’re asking for without needing to make the decision, on behalf of others, about whether investing time in this (or other) subjects is worth anyone’s time.
If you notice in yourself that you have conflicting feelings about whether something is good for you to be doing, e.g., in the sense you’ve described (you feel pulled in by this, but have misgivings about it), then I recommend treating the situation as one in which you have uncertainty about what you ought to be doing, as opposed to being more certain that you should be doing something else and that you merely have some kind of addiction to drama or something like that.
It may in fact be that you feel pulled in because you actually can add value to the discussion, or at least that watching this is giving you some new knowledge in some way. It’s at least a possibility.
Ultimately, it should be up to you, so if you’re convinced it’s not for you, so be it. However, I feel uncomfortable not allowing people to decide that for themselves.
It seems plausible that there is no such thing as “correct” metaphilosophy, and humans are just making up random stuff based on our priors and environment and that’s it and there is no “right way” to do philosophy, similar to how there are no “right preferences”.
We can always fall back to “well, we do seem to know what we and other people are talking about fairly often” whenever we encounter the problem of whether-or-not a “correct” this-or-that actually exists. Likewise, we can also reach a point where we seem to agree that “everyone seems to agree that our problems seem more-or-less solved” (or that they haven’t been).
I personally feel that there are strong reasons to believe that when those moments have been reached they are indeed rather correlated with reality itself, or at least correlated well-enough (even if there’s always room to better correlate).
Relatedly, philosophy is incredibly ungrounded and epistemologically fraught. It is extremely hard to think about these topics in ways that actually eventually cash out into something tangible
Thus, for said reasons, I probably feel more optimistic than you do about how difficult our philosophical problems are. My intuition is that the more it is true that “there is no problem to solve,” the less we would feel that there is a problem to solve.
It was a mistake to reject this post. This seems like a case where the rule that was applied is a mis-rule and was also applied inaccurately, which makes the rejection even harder to justify. It is also not easy to determine which “prior discussion” is being referred to in the rejection reasons.
It doesn’t seem like the post was political...at all? Let alone “overly political,” which I think is perhaps kind of mind-killy to apply frequently as a reason for rejection. It is also about a subject that is fairly interesting to me, at least: sentiment drift on Wikipedia.
It seems the author is a 17-year-old girl, by the way.
This isn’t just about standards being too harsh, but about whether they are even being applied correctly to begin with.