Meta-rationality
I’ve seen there’s discussion on LW about rationality, namely about what it means. I don’t think a satisfactory answer can be found without defining what rationality is not, and this seems to be a problem. As far as I know, rationality on LW does not include systematic methods for categorizing and analyzing irrational things. Instead, the discussion seems to draw a circle around rationality. Everyone on LW is expected to be inside this circle—think of it as a set in a Venn diagram. On the border of the circle there is a sign saying: “Here be dragons”. And beyond the circle there is irrationality.
How can we differentiate the irrational from the rational, if we do not know what the irrational is?
But how can we approach the irrational, if we want to be rational?
It seems to me there is no way to give a satisfactory account of rationality from within rationality itself. If we presuppose rationality is the only way to attain justification, and then try to find justification for rationalism (the doctrine according to which we should strive for rationality), we are simply making a circular argument. We already presupposed rationalism before trying to find justification for doing so.
Therefore it seems to me we ought to make a metatheory of rationality in order to find out what is rational and what is irrational. The metatheory itself has to be as rational as possible. That would include having an analytically defined structure, which permits us at least to examine whether the metatheory is logically consistent or inconsistent. This would also allow us to examine whether the metatheory is mathematically elegant, or whether the same thing could be expressed in a simpler form. The metatheory should also correspond to our actual observations, so that we could figure out whether it contradicts empirical findings or not.
How much interest is there for such a metatheory?
None, unless you have compelling credentials, formal theorems, or empirical results so discussion is not wasted space & breath. Philosophers have been doing ‘meta-rationality’ forever… anytime they discuss epistemology or other standard topics.
Well I do. The following Venn diagram describes the basic concepts of the theory. As far as we are being rational, classical quality means references and romantic quality means referents. The referents are sense-data, and the references are language. You may ignore the rest of the graph for now.
The following directed graph expresses an overview of the categories the metatheory is about. Note how some of the categories are rational, and others are irrational. The different categories are created by using two binary variables: one denotes whether the category is internalistic or externalistic, and the other whether it is rational or irrational. The arrows denote set membership. I like to think of it as “strong emergence”, but formally it suffices to say it is set membership. In the theory, these categories are called continua.
Instead of using the graph we could define these relationships with formal logic. Let us denote a continuum by ${}^{k}_{l}S$, so that $k$ denotes external metaspace and $l$ denotes rationality.

$$k \veebar p \Rightarrow {}^{k}_{l}S \in {}^{p}_{q}S$$
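Read as a membership test, the formula admits a minimal Python sketch; the class and function names here are my own illustration, and I treat the implication as a definition, which is a simplification:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Continuum:
    """A continuum ^k_l S, identified by two binary variables."""
    external: bool  # k: externalistic (True) vs. internalistic (False)
    rational: bool  # l: rational (True) vs. irrational (False)

# The four continua generated by the two binary variables.
CONTINUA = [Continuum(k, l) for k in (True, False) for l in (True, False)]

def is_member(a: Continuum, b: Continuum) -> bool:
    """Membership arrow a in b, per k XOR p => ^k_l S in ^p_q S."""
    return a.external != b.external  # exclusive or on the metaspace variable

for a in CONTINUA:
    for b in CONTINUA:
        if is_member(a, b):
            print(a, "is a member of", b)
```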
Each continuum can be split into an arbitrary number of levels. The four continua also form reciprocal continuum pairs, which means that the referents of each continuum are the same as the referents of some other continuum, but this continuum orders the references to those referents differently. The ordering of references is modeled as subsethood in the following directed acyclic graph:
Note that in the graph I have split each continuum into four levels. This is arbitrary. The following formula defines m levels.
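One way to write such a formula, assuming the m levels are nested by subsethood as the graph suggests (this notation is my own guess at the intended definition):

$${}^{k}_{l}S_{1} \subseteq {}^{k}_{l}S_{2} \subseteq \cdots \subseteq {}^{k}_{l}S_{m} = {}^{k}_{l}S$$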
That is the structure of the theory. Now, as for theorems, what kind of theorems would you like? I’ve already arrived at the conclusion that knowledge by description consists of members of the rational continua, and knowledge by acquaintance (a.k.a. gnosis) consists of members of the irrational continua. But that is mainstream philosophy. Maybe you would be more interested in a formal model of “maps” and “territories”, as these concepts are used frequently by you. Yudkowsky says:
In the LW lingo, continua are “maps” and romantic quality is the “territory”. Maps that form reciprocal pairs are maps of the same territory, but the projection is different—compare it to polar coordinates as opposed to rectangular coordinates. Two maps that do not form reciprocal pairs are about different territories. The different territories could be called natural and transcendental. Insofar as we are being rational, the former is the domain of empirical science, the latter the domain of pure maths.
The merit of this theory is that irrational things, which are called subjective or mystical, are defined in relation to rational things. The ontology of irrational things is constructed by ordering the references to the referents in the opposite order to that of the ontology of rational things. You can see the inversion of order in the latter graph. As you can see, subjective references consist of various kinds of beliefs, and mystical references consist of various kinds of synchronicities. These are irrational, which roughly means that no argument suffices to justify their existence, but their existence is obvious.
How do you like it?
Your theory seems completely arbitrary to me and I can only stare in perplexity at the graphs you build on top of it, but moving on:
Really? Maybe you should restate it all in mainstream terms and you won’t look crazier than a bug in a rug.
Incidentally, would I be correct in guessing that Robert Pirsig never replied to you?
That quotation looked crazy to me too. But maybe it’s a way of saying “experience is analog, symbols are discrete”. Tuukka’s system looks like a case study in how a handful of potentially valid insights can be buried under a structure made of wordplay (multiple uses of “irrational”); networks of concepts in which formal structures are artificially repeated but the actual relations between concepts are fatally vague (his big flowchart); and a severe misuse of mathematical objects and propositions in an attempt to be rigorous.
When an ordinary crackpot does something like this, they’ve sealed themselves into an earnest personal theory of everything, and the only way out would be for someone smarter to come along, decode their work, and patiently isolate and explain all the methodological fallacies, something which never happens. Occasionally you get someone who constructs their system in the awareness that it’s a product of their own mind and not just an objective depiction of the facts as they were found—someone who knowingly creates a crackpot synthesis out of choice, rather than just being driven to do so by unexamined compulsions. That’s less pitiful, but it’s still annoying. I’m not sure where Tuukka lies on this spectrum.
ETA In retrospect I regret the somewhat abusive character of this description. But I believe the bitter fact to be that Tuukka needs help in precisely the sense that I said will never happen. Even though he talks about all sorts of very interesting topics, what he says about them is mostly idiosyncratic interlocking nonsense. The aspiration to discover and convey truth, in a hostile and uncomprehending environment, has produced, as if by perverse chemical reaction, a set of exterior traits which serve to repel precisely the people he wants to attract. Having written his sequel to Pirsig he now needs to outgrow that act as soon as possible, and acquire some genuine expertise in an intersubjectively recognized domain, so that he has people to talk with and not just talk at.
It doesn’t seem likely to me. The quotation contains “continua” twice (I assume that would be the “analog”) but I can’t find anything that could be plausibly interpreted as referring to either discreteness or experience. How did you arrive at your suggested interpretation?
The jargon of “knowledge by acquaintance” and “knowledge by description” comes from Bertrand Russell. Knowledge by acquaintance is “direct” or “experiential” knowledge, such as knowledge of a pain or other sensation that you’re having. Knowledge by description is second-hand knowledge obtained by processing a proposition, e.g. your knowledge of my pain on the basis of what I tell you about it.
What I was picking up on in Tuukka’s statement was that the irrationals are uncountable whereas the rationals are countable. So the rationals have the cardinality of a set of discrete combinatorial structures, like possible sentences in a language, whereas the irrationals have the cardinality of a true continuum, like a set of possible experiences, if you imagined qualia to be genuinely real-valued properties and e.g. the visual field to be a manifold in the topological sense. It would be a way of saying “descriptions are countable in number, experiences are uncountable”.
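In symbols (a standard cardinality fact, offered as my gloss on the analogy):

$$|\mathbb{Q}| = \aleph_0 \;<\; 2^{\aleph_0} = |\mathbb{R} \setminus \mathbb{Q}|$$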
Unless I missed something, I’m only seeing one of the three things he stated were necessary.
Must … not … respond …
If you respond to that letter, I will not engage in conversation, because the letter is a badly written outdated progress report of my work. The work is now done, it will be published as a book, and I already have a publisher. If you want to know when the book comes out, you might want to join this Facebook community.
As luck would have it, I always land on the following page when I start typing “less...” in my browser. http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/
I find it useful to consider epistemic rationality a subtype of instrumental rationality, and to identify other types of instrumental rationality, such as social rationality.
EDIT: I went on about this recently in: http://lesswrong.com/lw/eqn/the_useful_idea_of_truth/7jyn
Yudkowsky says:
A normative belief in rationality is, as far as I can tell, not possible for someone who does not have a clear conception of what rationality is. I am trying to present tools for forming such a conception. The theory I am presenting is, most accurately, a rationally constructed language, not a prescriptive theory on whether it is moral to be rational. The merit of this language is that it should allow you to converse about rationality with mysticists or religious people so that you both understand what you are talking about. It seems to me the ID vs. evolution debate remains unresolved among the general public (in the USA) because neither side has managed to speak the same language as the other. My language is not formally defined in the sense of being a formal language, but it has formally defined ontological types.
I think the most you can hope for is a model of rationality and irrationality that can model mysticists or religious people as well as rationalists. I don’t think you can expect everyone to grok that model. That model may not be expressible in a mysticist’s model of reality.
Irrationality is just less instrumentally rational—less likely to win. You seem to have split rational and irrational into two categories, and I think this is just a methodological mistake. To understand and compare the two, you need to put both on the same scale, and then show how they have different measures on that scale.
Also, now that I look at more of your responses, it seems that you have your own highly developed theory, with your own highly developed language, and you’re speaking that language to us. We don’t speak your language. If you’re going to try to talk to people in a new language, you need to start simple, like “this is a ball”, so that we have some meaningful context from which to understand “I hit the ball.”
Quickly thereafter, you have to demonstrate, and not just assert, some value to your language to motivate any readers you have to continue learning your language.
Agree. The Pirahã could not use my model because abstract concepts are banned in their culture. I read in New Scientist that Westerners tried to teach them numbers so that they wouldn’t be cheated in trade so much, but upon gaining some insight into what a number is, they refused to think that way. The analytic Metaphysics of Quality (my theory) would say that the Pirahã do not use transcendental language. They somehow know what it is and avoid it, despite not having a name for it in their language. That language has only a few words.
The point is not to have everyone grok this model, but to use it to explain reality. The differences between the concepts of “abstract” and “concrete” have been difficult for philosophers to sort out, but in this case Pirahã behavior seems to be adequately explicable using the concepts of “natural quality” and “transcendental quality” in the analytic Metaphysics of Quality.
Do you mean by “irrationality” something like a biased way of thinking whose existence can be objectively determined? I don’t mean that by irrationality. I mean things whose existence has no rational justification, such as stream of consciousness. Things like dreams. If, in a dream, you open your (working) wristwatch, find out it contains coins instead of clockwork, and behave as if that were normal, there is no rational justification for you doing so—at least none that you know of while dreaming.
You’re perfectly right. I’d like to go for the dialogue option, but obviously, if it’s too exhausting for you because my point of view is too remote, nobody will participate. That’s all I’m offering right now, though—dialogue. Maybe something else later, maybe not. I’ve had some fun already despite losing a lot of “karma”.
The problem with simple examples is that, for example, I’d have to start a discussion on what is “useful”. It seems to me the question is almost the same as “What is Quality?” The Metaphysics of Quality insists that Quality is undefinable. Although I’ve noticed some on LW have liked Pirsig’s book Zen and the Art of Motorcycle Maintenance, it seems this would already cause a debate in its own right. I’d prefer not to get stuck on that debate and risk missing the chance of saying what I actually wanted to say.
If that discussion, however, is necessary, then I’d like to point out that irrational behavior (that is, a somewhat uncritical habit of doing the first thing that pops into my mind) has been very useful for me. It has improved my efficiency in doing things I could rationally justify, despite not actually performing the justification except rarely. If I am behaving that way—without keeping any justifications in my mind—I would say I am operating in the subjective or mystical continuum. When I do produce the justification, I do it in the objective or normative continuum by having either one of those emerge from the earlier subjective or mystical continuum via strong emergence. But I am not being rational before I have done this, in spite of ending up with results that later appear rationally good.
EDIT: Moved this post here upon finding out that I can reply to this comment. This 10 minute lag is pretty inconvenient.
If neither side accepts the other side’s language as meaningful, why do you believe they would accept the new language?
Somewhat related: http://xkcd.com/927/
That’s a very good point. Gonna give you +1 on that. The language, or type system, I am offering has the merit that no such type system has been devised before. I’ll stick to this claim unless proven wrong.
Academic philosophy has its good sides. Rescher’s “vagrant predicates” are an impressive and fairly recent invention. I also like confirmation holism. But as far as I know, nobody has tried to do an ontology with the following features:
Is analytically defined
Explains both strong and weak emergence
Precision of conceptual differentiation can be expanded arbitrarily (in this case by splitting continua into a greater number of levels)
Includes its own incompleteness as a non-well-formed set (Dynamic Quality)
Uses an assumption of symmetry to figure out the contents and structure of irrational ontological categories which are inherently unable to account for their structure, with no apparent problems
Once you grasp the scope of this theory I don’t think you’ll find a simpler theory to include all that meaningfully—but please do tell me if you do. I still think my theory is relatively simple when compared to quantum mechanics, except that it has a broad scope.
In any case, the point is that on a closer look it appears that my theory has no viable competition, hence, it is the first standard and not the 15th. No other ontology attempts to cover this broad a scope into a formal model.
Those are the labels used to describe the issue by the participants. But taking an outside view, the issue is inconsistent principles between the two sides. The fact that true religious believers reject the need for beliefs to pay rent in anticipated experience won’t be solved by new vocabulary.
A bit late to this, but I think I figured out what the basic problem here is: Robert Pirsig is an archer, while LW (and folk like Judea Pearl, Gary Drescher and Marcus Hutter) are building hot-air balloons. And we’re talking about doing a Moon shot, building an artificial general intelligence, here.
Archers think that if they get their bowyery really good and train to shoot really, really well, they might eventually land an arrow on the Moon. Maybe they’ll need to build some kind of ballista type thing that needs five people to draw, but archery is awesome at skewering all sorts of things, so it should definitely be the way to go.
Hot-air balloonists on the other hand are pretty sure bows and arrows aren’t the way to go, despite balloons being a pretty recent invention while archery has been practiced for millennia and has a very distinguished pedigree of masters. Balloons seem to get you higher up than you can get things to go with any sort of throwing device, even one of those fancy newfangled trebuchet things. Sure, nobody has managed to land a balloon on the Moon either, despite decades of trying, so obviously we’re still missing something important that nobody really has a good idea about.
But it does look like figuring out how stuff like balloons work and trying to think of something new along similar lines, instead of developing a really good archery style is the way to go if you want to actually land something on the Moon at some point.
Would you find a space rocket to resemble either a balloon or an arrow, but not both?
I didn’t imply something Pirsig wrote would, in and of itself, have much to do with artificial intelligence.
LessWrong is like a sieve that only collects stuff that looks like what I need but, on a closer look, isn’t. You won’t come until the table is already set. Fine.
The point is that the people who build it will resemble balloon-builders, not archers. People who are obsessed with getting machines to do things, not people who are obsessed with human performance.
My work is a type theory for AI for conceptualizing the input it receives via its artificial senses. If it weren’t, I would have never come here.
The conceptualization faculty is accompanied by a formula for making moral evaluations, which is the basis of advanced decision making. Whatever the AI can conceptualize, it can also project as a vector on a Cartesian plane. The direction and magnitude of that vector are the data used in this decision making.
The actual decision making algorithm may begin by making random decisions and filtering good decisions from bad with the mathematical model I developed. Based on this filtering, the AI would begin to develop a self-modifying heuristic algorithm for making good decisions and, in general, for behaving in a good manner. What the AI would perceive as good behavior would of course, to some extent, depend on the environment in which the AI is placed.
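As a rough illustration of that filtering stage, here is a sketch in Python; the evaluation function is a toy placeholder of my own, not the formula from the theory:

```python
import math
import random

def moral_vector(action: float) -> tuple[float, float]:
    """Placeholder moral evaluation projecting an action onto the plane.
    In the actual proposal this would come from the entity formula given
    further below, not from this toy trigonometric stand-in."""
    return (math.cos(action), math.sin(action))

def filter_decisions(candidates: list[float], n_keep: int = 10) -> list[float]:
    """Keep the random decisions whose moral vectors point 'up'
    (positive Y, i.e. rational) with the greatest magnitude."""
    good = [a for a in candidates if moral_vector(a)[1] > 0]
    return sorted(good, key=lambda a: moral_vector(a)[1], reverse=True)[:n_keep]

# Stage 1: random decisions. The kept ones would seed the
# self-modifying heuristic for later, non-random choices.
random_actions = [random.uniform(-math.pi, math.pi) for _ in range(100)]
print(filter_decisions(random_actions))
```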
If you had an AI making random actions and changing its behavior according to heuristic rules, it could learn things in a way similar to how a baby learns. If you’re not interested in that, I don’t know what you’re interested in.
I didn’t come here to talk about some philosophy. I know you’re not interested in that. I’ve done the math, but not the algorithm, because I’m not much of a coder. If you don’t want to code a program that implements my mathematical model, that’s no reason to give me −54 karma.
I really don’t understand why you don’t want a mathematical model of moral decision making, even for discussion. “Moral” is not a philosophical concept here. It is just the thing that makes some decisions better than others. I didn’t have the formula when I came here in October. Now I have it. Maybe later I will have something more. And all you can do, with the exception of Risto, is to give me −1. Can you recommend me some transhumanist community?
How do you expect an AI to be rational, if you yourselves don’t want to be metarational? Do you want some “pocket calculator” AI?
Too bad you don’t like philosophical concepts. I thought you knew computer science is oozing over into philosophy, which has all but died on its feet as far as academia is concerned.
One thing’s for sure: you don’t know whack about karma. The AI could actually differentiate karma, in the proper sense of the word, from “reputation”. You keep playing with your lego blocks until you grow up.
It would have been really neat to do this on LessWrong. It would have made for a good story. It would have also been practical. Academia isn’t interested in this—there is no academic discipline for studying AI theory at this level of abstraction. I don’t even have any AI expertise, and I didn’t intend to develop a mathematical model for AI in the first place. That’s just what I got when I worked on this for long enough.
I don’t like stereotypical LessWrongians—I think they are boring and narrow-minded. I think we could have had something to do together despite the fact that our personalities don’t make it easy for us to be friends. Almost anyone with AI expertise is competent enough to help me get started with this. You are not likely to get a better deal to get famous by doing so little work. But some deals of course seem too good to be true. So call me the “snake oil man” and go play with your legos.
I just gave my girlfriend an orgasm. Come on, give me another −1.
I suppose you got the joke above (Right? You did, right?) but you were simply too down not to fall for it. What’s the use of acquiring all that theoretical information, if it doesn’t make you happy? Spending your days hanging around on some LessWrong with polyamorous dom king Eliezer Yudkowsky as your idol. You wish you could be like him, right? You wish the cool guys would be on the losing side, like he is?
Why do you give me all the minus? Just asking.
In any case, this “hot-air balloonist vs. archer” (POP!) comparison seems like some sort of ad hominem-type fallacy, and that’s why I reacted with an ad hominem attack about legos and stuff. First of all, ad hominem is a fallacy, and does nothing to undermine my case. It does, however, undermine the notion that you are being rational.
Secondly, if my person is that interesting, I’d say I resemble the mathematician C. S. Peirce more than Ramakrishna. It seems to me mathematics is not necessarily considered completely acceptable by the notion of rationality you are advocating, as pure mathematics is only concerned with rules regarding what you’d call “maps”, not rules regarding what you’d call “territory”. That’s a weird problem, though.
I didn’t intend it as much of an ad hominem, after all both groups in the comparison are so far quite unprepared for the undertaking they’re trying to do. Just trying to find ways to try to describe the cultural mismatch that seems to be going on here.
I understand that math is starting to have some stuff dealing with how to make good maps from a territory. Only that’s inside difficult and technical material like Jaynes’ Probability Theory or Pearl’s Causality, instead of somebody just making a nice new logical calculus with an operator for doing induction. There are already some actual philosophically interesting results, like an inductive learner needing to have innate biases to be able to learn anything.
That’s a good result. However, the necessity of innate biases undermines the notion of rationality, unless we have a system for differentiating the rational cognitive faculty from the innately biased cognitive faculty. I am proposing that this differentiation faculty be rational, hence “Metarationality”.
In the Cartesian coordinate system I devised, object-level entities are projected as vectors. Vectors with a positive Y coordinate are rational. The only defined operation so far is addition: vectors can be added to each other. In this metasystem we are able to combine object-level entities (events, objects, “things”) by adding them to each other as vectors. This system can be used to examine individual object-level entities within the context other entities create by virtue of their existence. Because the coordinate system assigns a moral value to each entity it can express, it can be used for decision making. Obviously, it values morally good decisions over morally bad ones.
Every entity in my system is an ordered pair of the form

$${}^{x}_{y}p = \left({}^{x}_{y}\&p,\; {}^{x}_{y}{*}p\right)$$

Here x and y are propositional variables whose truth values can be −1 (false) or 1 (true). x denotes whether the entity is tangible and y whether it is placed within a rational epistemology. p is the entity. &p is the conceptual part of the entity (a philosopher would call that an “intension”). *p is the sensory part of the entity, i.e. what sensory input is considered to be the referent of the entity’s conceptual part. A philosopher would call *p an extension. a, b and c are numerical values, which denote the value of the entity itself, of its intension, and of its extension, respectively; in the formula below they appear as subscripts, with b and c written as n and m. The right side of the following formula (right of the equivalence operator) tells how b and c are used to calculate a. The left side of the formula tells how any entity is converted to the vector a. The vector conversion allows both innate cognitive bias and object-level rationality to influence decision making within the same metasystem.

$$\vec{a} \;\Leftrightarrow\; {}^{x}_{y}p_{\frac{\min(m,n)}{\max(m,n)}(m+n)} = \left({}^{x}_{y}\&p_{n},\; {}^{x}_{y}{*}p_{m}\right)$$

If someone says that it’s just a hypothesis that this model works, I agree! But I’m eager to test it. However, this would require some teamwork.
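A minimal Python sketch of that computation; the names are mine, and the vector conversion follows my assumption that x and y supply the signs of the coordinates:

```python
from dataclasses import dataclass

@dataclass
class Entity:
    """An entity ^x_y p = (&p, *p) with valued parts."""
    x: int    # tangible: 1 (true) or -1 (false)
    y: int    # within a rational epistemology: 1 or -1
    n: float  # b: value of the intension &p (assumed positive here)
    m: float  # c: value of the extension *p (assumed positive here)

    @property
    def a(self) -> float:
        """a = min(m, n) / max(m, n) * (m + n), per the formula above."""
        return min(self.m, self.n) / max(self.m, self.n) * (self.m + self.n)

    def vector(self) -> tuple[float, float]:
        """Assumed conversion to a plane vector: signs from (x, y),
        magnitude from a, so a positive y-coordinate marks the rational."""
        return (self.x * self.a, self.y * self.a)

# Addition, the one defined operation, combines two entities' vectors.
e1 = Entity(x=1, y=1, n=2.0, m=3.0)
e2 = Entity(x=-1, y=1, n=1.0, m=4.0)
combined = tuple(u + w for u, w in zip(e1.vector(), e2.vector()))
print(e1.vector(), e2.vector(), combined)
```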
If you compactify the plane correctly, the exterior of a circle is homeomorphic to a disk. This follows from the Jordan-Schoenflies theorem. Defining what something is is the same as defining what it is not.
This whole conversation seems a little awkward now.
I apologize. I no longer feel a need to behave in the way I did.
Isn’t this and its associated posts an account of meta-rationality?
That post in particular is a vague overview of meta-rationality, not a systematic account of it. It doesn’t describe meta-rationality as something that qualifies as a theory. It just says there is such a thing without telling exactly what it is.
Sorry, I meant that that series of posts addresses the justification issue, if somewhat informally.
Do you mean the sequence “Map and Territory”? I don’t find it to include a comprehensive and well-defined taxonomy of ways of being rational and irrational. I was investigating whether I should present a certain theory here. Does this −4 mean you don’t want it?
Insofar as LW is interested in irrationality, it seems interested in some kind of pseudo-irrationality: reasoning mistakes whose existence is affirmed by resorting to rational argumentation. I call that pseudo-irrationality, because its existence is affirmed rationally instead of irrationally.
I am talking about the kind of irrationality whose existence can be observed, but cannot be argued for, because it is obvious. Examples of such forms of irrationality include synchronicities. An example of a synchronicity would be you talking about a bee, and a bee appearing in the room. There is no rational reason (ostensibly) why these two events would happen simultaneously, and it could rightly be deemed a coincidence. But how does it exist as a coincidence? If we notice it, it exists as something we pay attention to, but is there any way we could be more specific about this?
If we could categorize such irrationally existing things comprehensively, we would have a clearer grasp on what is the rationality that we are advocating. We would know what that rationality is not.
I didn’t vote on this article, as it happens.
This post is another one of the ones I was talking about. I wasn’t really paying attention to where in the sequences anything was (it’s been so long since I read them that they’re all blurred together in my mind).
There are certainly strong arguments against the meaningfulness of coincidence (and I think the heuristics and biases program does address some of when and why people think coincidences are meaningful).
The page says:
I do not assume that every belief must be justified, except possibly within rationality.
Do the arguments against the meaningfulness of coincidence state that coincidences do not exist?
...but I don’t want to be rational for deep philosophical reasons. My justification is that (instrumental) rationality is useful. To demonstrate that, one would have to look at outcomes for those behaving rationally and those behaving irrationally—not necessarily easy, but definitely a tractable problem.
I am not talking about a prescriptive theory that tells whether one should be rational or not. I am talking about a rational theory that produces a taxonomy of different ways of being rational or irrational without taking a stance on which way should be chosen. Such a theory already implicitly advocates rationality, so it doesn’t need to explicitly arrive at conclusions about whether one ought to be rational or not.
I can’t reply to some of the comments, because they are below the threshold. Replies to downvoted comments are apparently “discouraged” but not banned, and I’m not on LW for any other reason than this, so let’s give it a shot. I don’t suppose I am simply required to not reply to a critical post about my own work.
First of all, thanks for the replies; I no longer feel bad about the roughly −35 “karma” points I received. I could have tried to write some sort of general introduction for you, but I’ve attempted to write such introductions earlier, and I’ve found dialogue to be a better way. The book I wrote is a general introduction, but it’s 140 pages long. Furthermore, my publisher wouldn’t want me to give it away for free, and the style isn’t a very good fit for LessWrong. I’d perhaps have to write another book and publish it for free as a series of LessWrong articles.
Mitchell_Porter said:
The contents of the normative and objective continua are relatively easy for an average LW user to process. The objective continuum consists of dialectic (classical quality) about sensory input. Sensory input is categorized as in Maslow’s hierarchy of needs. I know there is some criticism of Maslow’s theory, but can we accept it as a starting point? “Lower needs” include homeostasis, eating, sex, excretion and such. “Higher needs” include reputation, respect, intimacy and such. “Deliberation” includes Maslow’s “self-actualization”, that is, problem solving, creativity, learning and such. Sense-data is not included in Maslow’s theory, but it could be assumed that humans have a need to have sensory experiences, and that this need is so easy to satisfy that it did not occur to Maslow to include it as the lowest need of his hierarchy.
The normative continuum is similarly split into a dialectic portion and a “sensory” portion. That is to say, a central thesis of the work is that there are some kind of mathematical intuitions that are not language, but that are used to operate in the domain of pure math and logic. In order to demonstrate that “mathematical intuitions” really do exist, consider the case of a synesthetic savant, who is able to evaluate numbers according to how they “feel”, and use this feeling to determine whether a number is prime. The “feeling” is sense-data, but the correlation between the feeling and primality is some other kind of non-lingual intuition.
If synesthetic primality checks exist, it follows that mathematical ability is not entirely based on language. Synesthetic primality checks do exist for some people, and not for others. However, I believe we all experience mathematical intuitions; for most, the experiences are just not as clear as they are for synesthetic savants. To deny the existence of mathematical intuition is to claim that synesthetic primality checks are impossible out of mere metaphysical skepticism, in spite of ample evidence that they do exist and produce strikingly accurate results.
Does this make sense? If so, I can continue.
Mitchell_Porter also said:
I’m aware of that. Objectivity is just one continuum in the theory.
I’m not exactly in trouble. I have a publisher and I have people to talk with. I can talk with a mathematician I know and on LilaSquad. But given that Pirsig’s legacy appears to be continental philosophy, nobody on LilaSquad can help me improve the formal approach, even though some are interested in it. I can talk about everything else with them. Likewise, the mathematician is only interested in the formal structure of the theory, and perhaps slightly in the normative continuum, but not in anything else. I wouldn’t say I have something to prove or that I need something in particular. I’m mostly just interested to find out how you will react to this.
Something to that effect. This is another reason why I like talking with people. They express things I’ve thought about with a different wording. I could never make progress just stuck in my head.
I’d say the irrational continua do not have fixed notions of truth and falsehood. If something is “true” now, there is no guarantee it will persist as a rule in the future. There are no proof methods or methods of justification. In a sense, the notions of truth and falsehood are so distorted in the irrational continua that they hardly qualify as truth or falsehood—even if the Bible, operating in the subjective continuum, would proclaim it “the truth” that Jesus is the Christ.
Mitchell asked:
As far as I know, the letter was never delivered to Pirsig. The insiders of MoQ-Discuss said their mailing list is strictly for discussing Pirsig’s thoughts, not any derivative work. The only active member of Lila Squad who I presume to have Pirsig’s e-mail address said Pirsig doesn’t understand the Metaphysics of Quality himself anymore. It seemed pointless to press the issue of having the letter delivered to him. When the book is out, I can send it to him via his publisher and hope he’ll receive it. The letter wasn’t even very good—the book is better.
I thought Pirsig might want to help me with development of the theory, but it turned out I didn’t require his help. Now I only hope he’ll enjoy reading the book.
Sorry for being cruel. It didn’t occur to me that LessWrong is “an online community for people who want to apply the discovery of biases like the conjunction fallacy, the affect heuristic, and scope insensitivity in order to fix their own thinking.” I thought this is a community for people who “apply the discovery of biases and, hence, their thinking is not broken”.
I didn’t notice “Less Wrong users aim to develop accurate predictive models of the world, and change their mind when they find evidence disconfirming those models”. I thought LessWrong users actually do that instead of aiming to do that.
I didn’t understand this is a low self-esteem support group for people who want to live up to preconceived notions of morality. I probably don’t have anything to do here. Goodbye.
The foundations of rationality, as LW knows it, are not defined with logical rigour. Are you adamant this is not a problem?
http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/ says:
I don’t think it’s very helpful to oppose a logical definition for a certain language that would allow you to do this. As it is, you currently have no logical definition. You have this:
That is not a language with a formalized type system. If you oppose a formalized type system, even if it were for the advancement of your purely practical goal, why? Wikipedia says:
What in a type system is undesirable to you? The “snake oil that cures lung cancer”—I’m pretty sure you’ve heard about that one—is a value whose type is irrational. If you may use natural language to declare that value as irrational, why do you oppose using a type system for doing the same thing?
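To make the contrast concrete, here is a minimal sketch of such a declaration (the names and the rational/irrational tags are my own illustration, not part of any existing type system):

```python
from dataclasses import dataclass
from enum import Enum

class Epistemic(Enum):
    RATIONAL = "rational"
    IRRATIONAL = "irrational"

@dataclass(frozen=True)
class TypedValue:
    """A value tagged with an epistemic type, rather than having the
    tag declared in loose natural language."""
    content: str
    kind: Epistemic

snake_oil = TypedValue("snake oil that cures lung cancer", Epistemic.IRRATIONAL)

def accept_only_rational(v: TypedValue) -> str:
    """A consumer that rejects irrationally-typed values mechanically."""
    if v.kind is not Epistemic.RATIONAL:
        raise ValueError(f"rejected: {v.content!r} is typed {v.kind.value}")
    return v.content

try:
    accept_only_rational(snake_oil)
except ValueError as err:
    print(err)  # rejected: 'snake oil that cures lung cancer' is typed irrational
```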