Looks like rubbish to me, I’m afraid. If what’s on this site interests you, I think you’ll get a lot more out of the Sequences, including the tools to see why the ideas in the site above aren’t really worth pursuing.
Yeah, I know what it looks like: metaphysical rubbish. But my dilemma is that Chris Langan is the smartest known living man, which makes it really hard for me to shrug the CTMU off as nonsense. Also, from what I skimmed, it looks like a much deeper examination of reductionism and strange loops, which are ideas I hold dear.
I’ve read and understand the sequences, though I’m not familiar enough with them to use them without a rationalist context.
More to the point, you do not immediately fail the “common ground” test.
Pragmatically, I don’t care how smart you are, but whether you can make me smarter. If you are so much smarter than I am that you can’t even be bothered, I’d be wasting my time engaging with your material.
I should note that the ability to explain things isn’t the same attribute as intelligence. I am lucky enough to have it. Other legitimately intelligent people do not.
Considering the extraordinary rarity of good explainers in this entire civilization, I’m saddened to say that talent may have something to do with it, not just practice.
I wonder what I should do. I’m smart, and I seem to be able to explain things that I know to people well. To my lament, I have the same problem as Thomas: I apparently suck at learning things so that they’re internalized and in my long-term memory.
I didn’t use the word “learn”. My point is about a smart person conveying their ideas to someone. Taboo “smart”. Distinguish the ability to reach goals from the ability to score high on mental aptitude tests. If they are goal-smart, and their goal is to convince, they will use their IQ-smarts to develop the capacity to convince.
However intelligent he is, he fails to present his ideas so as to gradually build a common ground with lay readers. “If you’re so smart, how come you ain’t convincing?”
The “intelligent design” references on his Wikipedia bio are enough to turn me away. Can you point us to a well-regarded intellectual who has taken his work seriously and recommends it? (I’ve used that sort of bridging tactic at least once: Dennett convinced me to read Julian Jaynes.)
“If you’re so smart, how come you ain’t convincing?”
“Convincing” has long been a problem for Chris Langan. Malcolm Gladwell relates a story about Langan attending a calculus course in first year undergrad. After the first lecture, he went to offer criticism of the prof’s pedagogy. The prof thought he was complaining that the material was too hard; Langan was unable to convey that he had understood the material perfectly for years, and wanted to see better teaching.
You just get to make bigger mistakes than others.
From the YouTube videos, Langan looks like a really bright fellow who has a very broken toolbox and little correction. Argh!
Yeah, I know what it looks like: metaphysical rubbish.
It is. I got as far as this paragraph of the introduction to his paper before I found a critical flaw:
Of particular interest to natural scientists is the fact that the laws of nature are a language. To some extent, nature is regular; the basic patterns or general aspects of structure in terms of which it is apprehended, whether or not they have been categorically identified, are its “laws”. The existence of these laws is given by the stability of perception.
At this point, he’s already begging the question, i.e. presupposing the existence of supernatural entities. These “laws” he’s talking about are in his head, not in the world.
In other words, he hasn’t even got done presenting what problem he’s trying to solve, and he’s already got it completely wrong, and so it’s doubtful he can get to correct conclusions from such a faulty premise.
That’s not a critical flaw. In metaphysics, you can’t take for granted that the world is not in your head. The only thing you really can do is to find an inconsistency, if you want to prove someone wrong.
Langan has no problems convincing me. His attempt at constructing a reality theory is serious and mature and I think he conducts his business about the way an ordinary person with such aims would. He’s not a literary genius like Robert Pirsig, he’s just really smart otherwise.
I’ve never heard anyone present criticism of the CTMU that would actually imply an understanding of what Langan is trying to do. The CTMU has a mistake. It’s that Langan believes (p. 49) the CTMU to satisfy the Law Without Law condition, which states: “Concisely, nothing can be taken as given when it comes to cosmogony.” (p. 8)
According to the Mind Equals Reality Principle, the CTMU is comprehensive. This principle “makes the syntax of this theory comprehensive by ensuring that nothing which can be cognitively or perceptually recognized as a part of reality is excluded for want of syntax”. (p. 15) But undefinable concepts can neither be proven to exist nor proven not to exist. This means the Mind Equals Reality Principle must be assumed as an axiom. But to do so would violate the Law Without Law condition.
The Metaphysical Autology Principle could be stated as an axiom, which would entail the nonexistence of undefinable concepts. This principle “tautologically renders this syntax closed or self-contained in the definitive, descriptive and interpretational senses”. (p. 15) But it would be arbitrary to have such an axiom, and the CTMU would again fail to fulfill Law Without Law.
If that makes the CTMU rubbish, then Russell’s Principia Mathematica is also rubbish, because it has a similar problem, which was pointed out by Gödel. EDIT: Actually, the problem is somewhat different from the one addressed by Gödel.
Langan’s paper can be found here. EDIT: Fixed link.
To clarify, I’m not the generic “skeptic” of philosophical thought experiments. I am not at all doubting the existence of the world outside my head. I am just an apparently competent metaphysician in the sense that I require a Wheeler-style reality theory to actually be a Wheeler-style reality theory with respect to not having arbitrary declarations.
There might not be many people here who are sufficiently up to speed on philosophical metaphysics to have any idea what, for example, a Wheeler-style reality theory is. My stereotypical notion is that the people at LW have been pretty much ignoring philosophy that isn’t grounded in mathematics, physics or cognitive science from Kant onwards, and won’t bother with stuff that doesn’t seem readable from this viewpoint. The tricky thing that would help would be to somehow translate the philosopher-speak into lesswronger-speak. Unfortunately this’d require some fluency in both.
It’s not like your average “competent metaphysician” would understand Langan either. He probably wouldn’t even understand Wheeler. Langan’s undoing is to have the goals of a metaphysician and the methods of a computer scientist. He is trying to construct a metaphysical theory which structurally resembles a programming language with dynamic type checking, as opposed to static typing. Now, metaphysicians do not tend to construct such theories, and computer scientists do not tend to be very familiar with metaphysics. Metaphysical theories tend to be deterministic instead of recursive, and to have a finite, preset number of states that an object can have. I find the CTMU paper a bit sketchy and missing important content, besides having the mistake. If you’re interested in the mathematical structure of a recursive metaphysical theory, here’s one: http://www.moq.fi/?p=242
Formal RP doesn’t require metaphysical background knowledge. The point is that because the theory includes a cycle of emergence, represented by the power set function, any state of the cycle can be defined in relation to other states and prior cycles, and the number of possible states is infinite. The power set function will generate a staggering amount of information in just a few cycles, though. Set R is supposed to contain sensory input and thus solve the symbol grounding problem.
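To give a feel for how fast that blow-up happens, here is a tiny Python calculation. It is just arithmetic on set sizes, not part of the theory itself:

    # |P(T)| = 2^|T| for a finite set T, so iterating the power set
    # explodes after only a few cycles of emergence.
    size = 4  # e.g. starting from R = {R1, R2, R3, R4}
    for cycle in range(1, 4):
        size = 2 ** size
        print("cycle", cycle, ":", len(str(size)), "decimal digits")
    # cycle 1 : 2 digits (16), cycle 2 : 5 digits (65536),
    # cycle 3 : 19729 digits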
Of course the symbol grounding problem is rather important, so it doesn’t really suffice to say that “set R is supposed to contain sensory input”. The metaphysical idea of RP is something to the effect of the following:
Let n be 4.
R contains everything that could be used to ground the meaning of symbols.
R1 contains sensory perceptions
R2 contains biological needs such as eating and sex, and emotions
R3 contains social needs such as friendship and respect
R4 contains mental needs such as perceptions of symmetry and beauty (the latter is sometimes reducible to the Golden ratio)
N contains relations of purely abstract symbols.
N1 contains the elementary abstract entities, such as symbols and their basic operations in a formal system
N2 contains functions of symbols
N3 contains functions of functions. In mathematics I suppose this would include topology.
N4 contains information about the limits of the system, such as completeness or consistency. This information forms the basis of what “truth” is like.
Let ℘(T) be the power set of T.
Solving the symbol grounding problem requires R and N to be connected. Let us assume that ℘(Rn) ⊆ Rn+1. R5 hasn’t been defined, though. If we don’t assume subsets of R to emerge from each other, we’ll have to construct much more complicated theories that are more difficult to understand.
This way we can assume there are two ways of connecting R and N. One is to connect them in the same order, and one in the inverse order. The former is set O and the latter is set S.
Set O includes the “realistic” theories, which assume the existence of an “objective reality”.
℘(R1) ⊆ O1 includes theories regarding sensory perceptions, such as physics.
℘(R2) ⊆ O2 includes theories regarding biological needs, such as the theory of evolution
℘(R3) ⊆ O3 includes theories regarding social affairs, such as anthropology
℘(R4) ⊆ O4 includes theories regarding rational analysis and judgement of the way in which social affairs are conducted
The relationship between O and N:
N1 ⊆ O1 means that physical entities are the elementary entities of the objective portion of the theory of reality. Likewise:
N2 ⊆ O2
N3 ⊆ O3
N4 ⊆ O4
Set S includes “solipsistic” ideas in which “the mind focuses on itself”.
℘(R4) ⊆ S1 includes ideas regarding what one believes
℘(R3) ⊆ S2 includes ideas regarding learning, that is, adoption of new beliefs from one’s surroundings. Here social matters such as prestige, credibility and persuasiveness affect which beliefs are adopted.
℘(R2) ⊆ S3 includes ideas regarding the judgement of ideas. Here, ideas are mostly judged by how they feel. I.e., if a person is revolted by the idea of creationism, they are inclined to reject it even without rational grounds, and if it makes them happy, they are inclined to adopt it.
℘(R1) ⊆ S4 includes ideas regarding the limits of the solipsistic viewpoint. Sensory perceptions of objectively existing physical entities obviously present some kind of a challenge to it.
The relationship between S and N:
N4 ⊆ S1 means that beliefs are the elementary entities of the solipsistic portion of the theory of reality. Likewise:
N3 ⊆ S2
N2 ⊆ S3
N1 ⊆ S4
That’s the metaphysical portion in a nutshell. I hope someone was interested!
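For readers who prefer code to set notation, here is a minimal Python sketch of that skeleton. The member strings are placeholders I made up, and the inclusions hold only because I build O and S that way; it shows the shape of RP rather than doing any real work:

    from itertools import combinations

    def powerset(t):
        # all subsets of a finite set t, as frozensets (a finite stand-in for P(T))
        t = list(t)
        return {frozenset(c) for k in range(len(t) + 1) for c in combinations(t, k)}

    n = 4
    R = {1: {"sensory perception"}, 2: {"biological need"},
         3: {"social need"}, 4: {"mental need"}}
    N = {1: {"symbol"}, 2: {"function of symbols"},
         3: {"function of functions"}, 4: {"limit of the system"}}

    # O connects R and N in the same order, S in the inverse order.
    O = {i: powerset(R[i]) | N[i] for i in range(1, n + 1)}
    S = {i: powerset(R[n + 1 - i]) | N[n + 1 - i] for i in range(1, n + 1)}

    # The stated inclusions now hold, e.g. P(R1) is a subset of O1, N1 of O1,
    # P(R4) of S1 and N4 of S1:
    assert powerset(R[1]) <= O[1] and N[1] <= O[1]
    assert powerset(R[4]) <= S[1] and N[4] <= S[1]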
We were talking about applying the metaphysics system to making an AI earlier in IRC, and the symbol grounding problem came up there as a basic difficulty in binding formal reasoning systems to real-time actions. It doesn’t look like this was mentioned here before.
I’m assuming I’d want to actually build an AI that needs to deal with symbol grounding, that is, it needs to usefully match some manner of declarative knowledge it represents in its internal state to the perceptions it receives from the outside world and to the actions it performs on it. Given this, I’m getting almost no notion of what useful work this theory would do for me.
Mathematical descriptions can be useful for people, but it’s not given that they do useful work for actually implementing things. I can define a self-improving friendly general artificial intelligence mathematically by defining
FAI = <S, P*> as an artificial intelligence instance, consisting of its current internal state S and the history of its perceptions up to the present P*,
a: FAI -> A* as a function that gives the list of possible actions for a given FAI instance
u: A -> Real as a function that gives the utility of each action as a real number, with higher numbers given to actions that advance the purposes of the FAI better based on its current state and perception history and
f: FAI * A -> S, P as an update function that takes an action and returns a new FAI internal state with any possible self-modifications involved in the action applied, and a new perception item that contains whatever new observations the FAI made as a direct result of its action.
And there’s quite a complete mathematical description of a friendly artificial intelligence; you could probably even write a bit of neat pseudocode using the pieces there, but that’s still not likely to land me a cushy job supervising the rapid implementation of the design at SIAI, since I don’t have anything that does actual work there. All I did was push all the complexity into the black boxes of u, a and f.
With how I decided to split up the definition, I also implied a computational approach where the system enumerates every possible action, evaluates them all and then picks a winner. This is mathematically expedient, given that in mathematics any concerns of computation time can be pretty much waved off, but it appears rather naive computationally, as it is likely that both coming up with possible actions and evaluating them can get extremely expensive in the artificial general intelligence domain.
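If anyone wants that “neat pseudocode” spelled out, a Python rendering could look like the sketch below. The names are mine, and every interesting part is deliberately left as a stub, which is exactly the point about the black boxes:

    from dataclasses import dataclass, field

    @dataclass
    class FAI:
        state: object                                    # S: current internal state
        perceptions: list = field(default_factory=list)  # P*: perception history

    def possible_actions(fai):      # a: FAI -> A*
        raise NotImplementedError   # black box: all the hard work hides here

    def utility(fai, action):       # u: A -> Real
        raise NotImplementedError   # black box

    def apply_action(fai, action):  # f: FAI * A -> (S, P)
        raise NotImplementedError   # black box

    def step(fai):
        # the naive enumerate-evaluate-pick loop implied by the definitions
        best = max(possible_actions(fai), key=lambda a: utility(fai, a))
        new_state, new_perception = apply_action(fai, best)
        return FAI(new_state, fai.perceptions + [new_perception])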
With the metaphysics thing, beyond not getting a sense of it doing any work, I’m not even seeing where the work would hide. I’m not seeing black box functions that need to do an unknowable amount of work, just sets with strange elements being connected to other sets with strange elements. What should you be able to do with this thing?
You probably have a much more grassroots-level understanding of the symbol grounding problem. I have only solved the symbol grounding problem to the extent that I have a formal understanding of its nature.
In any case, I am probably approaching AI from a point of view that is far from the symbol grounding problem. My theory does not need to be seen as a useful solution to that problem. But when a useful solution is created, I postulate it can be placed within RP. Such a solution would have to be an algorithm for creating S-type or O-type sets of members of R.
More generally, I would find RP to be useful as an extremely general framework for how AI or parts of AI can be constructed in relation to each other, especially with regard to understanding language and the notion of consciousness. This doesn’t necessarily have anything to do with some more atomistic AI projects, such as trying to make a robot vacuum cleaner find its way back to the charging dock.
At some point, philosophical questions and AI will collide. Consider the following thought experiment:
We have managed to create a brain scanner so sophisticated that it can tell whether a person is thinking of a cat or not. Someone is put into the machine, and the machine outputs that the person is not thinking of a cat. The person objects and says that he is thinking of a cat. What will the observing AI make of that inconsistency? What part of the setup is broken and results in the nonconformity of the whole?
1) The brain scanner is broken
2) The person is broken
In order to solve this problem, the AI may have to be able to conceptualize the fact that the brain scanner is a deterministic machine which simply accepts X as input and outputs Y. The scanner does not understand the information it is processing, and the act of processing information does not alter its structure. But the person is different.
RP should help with such problems because it is intended as an elegant, compact and flexible way of defining recursion while allowing the solution of the symbol grounding problem to be contained in the definition in a nontrivial way. That is, RP as a framework for AI is not something that says: “Okay, this here is RP. Just perform the function RP(sensory input) and it works, voilà.” Instead, it manages to express two different ways of solving the symbol grounding problem and to define their accuracy as a natural number n. In addition, many emergence relations in RP are logical consequences of the way RP solves the symbol grounding problem (or, if you prefer, “categorizes the parts of the actual solution to the symbol grounding problem”).
In the previous thought experiment, the AI should manage to understand that the scanner deterministically performs the operation ℘(R) ⊆ S, and does not define S in terms of anything else. The person, on the other hand, is someone whose information processing is based on RP or something similar.
But what you read from moq.fi is something we wrote just a few days ago. It is by no means complete.
One problem is that ℘(T) does not seem to define actual emergences, but only all possible emergences.
We should define functions for “generalizing” and “specifying” sets or predicates, in which generalizing would create a new set or predicate from an existing one by adding members, and specifying would do so by removing members.
We should add a discard order to sets. Sets that are used often have a high discard order, but sets that are never used end up erased from memory. This is similar to unused pathways in the brain dying out, and often-used pathways becoming stronger.
The theory does not yet have an algorithmic part, but it should have. That’s why it doesn’t yet do anything.
℘(Rn) should be defined to include a metatheoretic approach to the theory itself, facilitating modification of the theory with the yet-undefined generalizing and specifying functions.
Questions to you:
Is T → U the Cartesian product of T and U?
What is *?
I will not guarantee having discussions with me is useful for attaining a good job. ;)
We have managed to create a brain scanner so sophisticated that it can tell whether a person is thinking of a cat or not. Someone is put into the machine, and the machine outputs that the person is not thinking of a cat. The person objects and says that he is thinking of a cat. What will the observing AI make of that inconsistency? What part of the setup is broken and results in the nonconformity of the whole?
1) The brain scanner is broken
2) The person is broken
In order to solve this problem, the AI may have to be able to conceptualize the fact that the brain scanner is a deterministic machine which simply accepts X as input and outputs Y. The scanner does not understand the information it is processing, and the act of processing information does not alter its structure. But the person is different.
I don’t really understand this part.
“The scanner does not understand the information but the person does” sounds like some variant of Searle’s Chinese Room argument when presented without further qualifiers. People in AI tend to regard Searle as a confused distraction.
The intelligent agent model still deals with deterministic machines that take input and produce output, but it incorporates the possibility of changing the agent’s internal state by presenting the output function as just taking the entire input history X* as an input to the function that produces the latest output Y, so that a different history of inputs can lead to a different output on the latest input, just like it can with humans and more sophisticated machines.
I suppose the idea here is that there is some difference whether there is a human being sitting in the scanner, or, say, a toy robot with a state of two bits, where one is “I am thinking about cats” and the other is “I am broken and will lie about thinking about cats”. With the robot, we could just check the “broken” bit as well from the scan when the robot is disagreeing with the scanner, and if it is set, conclude that the robot is broken.
I’m not seeing how humans must be fundamentally different. The scanner can already do the extremely difficult task of mapping a raw brain state to the act of thinking about a cat, it should also be able to tell from the brain state whether the person has something going on in their brain that will make them deny thinking about a cat. Things being deterministic and predictable from knowing their initial state doesn’t mean they can’t have complex behavior reacting to a long history of sensory inputs accompanied by a large amount of internal processing that might correspond quite well to what we think of as reflection or understanding.
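As a throwaway sketch of the toy robot case from two paragraphs up (the function and field names are just mine):

    def diagnose(scan, claim):
        # scan: the two bits the scanner reads off the robot's state
        # claim: what the robot says about thinking about cats
        if scan["thinking_about_cats"] == claim:
            return "no inconsistency"
        if scan["broken_will_lie_about_cats"]:
            return "the robot is broken"
        return "suspect the scanner"

    print(diagnose({"thinking_about_cats": True,
                    "broken_will_lie_about_cats": True}, claim=False))
    # -> the robot is broken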
Sorry I keep skipping over your formalism stuff, but I’m still not really grasping the underlying assumptions behind this approach. (The underlying assumptions in the computer science approach are, roughly, “the physical world exists, and is made of lots of interacting, simple, Turing-computable stuff and nothing else”, “animals and humans are just clever robots made of the stuff”, “magical souls aren’t involved, not even if they wear a paper bag that says ‘conscious experience’ on their head”)
The whole philosophical theory-of-everything thing does remind me of this strange thing from a year ago, where the building blocks for the theory were made out of the nowadays more fashionable category theory rather than set theory, though.
I’ve read some of this Universal Induction article. It seems to operate from flawed premises.
If we prescribe Occam’s razor principle [3] to select the simplest theory consistent with the training examples and assume some general bias towards structured environments, one can prove that inductive learning “works”. These assumptions are an integral part of our scientific method. Whether they admit it or not, every scientist, and in fact every person, is continuously using this implicit bias towards simplicity and structure to some degree.
Suppose the brain uses algorithms. An uncontroversial supposition. From a computational point of view, the passage quoted above is like saying: “In order for a computer to not run a program, such as Indiana Jones and the Fate of Atlantis, the computer must be executing some command to the effect of “DoNotExecuteProgram(‘IndianaJonesAndTheFateOfAtlantis’)”.
That’s not how computers operate. They just don’t run the program. They don’t need a special process for not running the program. Instead, not running the program is “implicitly contained” in the state of affairs that the computer is not running it. But this notion of implicit containment makes no sense for the computer. There are infinitely many programs the computer is not running at a given moment, so it can’t process the state of affairs that it is not running any of them.
Likewise, the use of an implicit bias towards simplicity cannot be meaningfully conceptualized by humans. In order to know how this bias simplifies everything, one would have to know what information regarding “everything” is omitted by the bias. But if we knew that, the bias would not exist in the sense the author intends it to exist.
Furthermore:
This is in some way a contradiction to the well-known no-free-lunch theorems which state that, when averaged over all possible data sets, all learning algorithms perform equally well, and actually, equally poorly [11]. There are several variations of the no-free-lunch theorem for particular contexts but they all rely on the assumption that for a general learner there is no underlying bias to exploit because any observations are equally possible at any point. In other words, any arbitrarily complex environments are just as likely as simple ones, or entirely random data sets are just as likely as structured data. This assumption is misguided and seems absurd when applied to any real world situations. If every raven we have ever seen has been black, does it really seem equally plausible that there is equal chance that the next raven we see will be black, or white, or half black half white, or red etc. In life it is a necessity to make general assumptions about the world and our observation sequences and these assumptions generally perform well in practice.
The author says that there are variations of the no-free-lunch theorem for particular contexts. But he goes on to generalize that the notion of a no-free-lunch theorem means something independent of context. What could that possibly be? Also, such notions as “arbitrary complexity” or “randomness” seem intuitively meaningful, but what is their context?
The problem is, if there is no context, the solution cannot be proven to address the problem of induction. But if there is a context, it addresses the problem of induction only within that context. Then philosophers will say that the context was arbitrary, and formulate the problem again in another context where previous results will not apply.
In a way, this makes the problem of induction seem like a waste of time. But the real problem is about formalizing the notion of context in such a way that it becomes possible to identify ambiguous assumptions about context. That would be what separates scientific thought from poetry. In science, ambiguity is not desired and should therefore be identified. But philosophers tend to place little emphasis on this, and rather spend time dwelling on problems they should, in my opinion, recognize as unsolvable due to ambiguity of context.
The omitted information in this approach is information with a high Kolmogorov complexity, which is omitted in favor of information with low Kolmogorov complexity. A very rough analogy would be to describe humans as having a bias towards ideas expressible in few words of English over ideas that need many words of English to express. Using Kolmogorov complexity for sequence prediction instead of English language for ideas in the construction gets rid of the very many problems of rigor involved in the latter, but the basic idea is pretty much the same. You look into things that are briefly expressible in favor of things that must be expressed at length. The information isn’t permanently omitted, it’s just deprioritized. The algorithm doesn’t start looking at the stuff you need long sentences to describe before it has convinced itself that there are no short sentences that describe the observations it wants to explain in a satisfactory way.
One bit of context that is assumed is that the surrounding universe is somewhat amenable to being Kolmogorov-compressed. That is, there are some recurring regularities that you can begin to discover. The term “lawful universe” sometimes thrown around in LW probably refers to something similar.
Solomonoff’s universal induction would not work in a completely chaotic universe, where there are no regularities for Kolmogorov compression to latch onto. You’d also be unlikely to find any sort of native intelligent entities in such universes. I’m not sure if this means that the Solomonoff approach is philosophically untenable, but needing to have some discoverable regularities to begin with before discovering regularities with induction becomes possible doesn’t strike me as that great a requirement.
If the problem of context is about exactly where you draw the data for the sequence which you will then try to predict with Solomonoff induction, in a lawless universe you wouldn’t be able to infer things no matter which simple instrumentation you picked, while in a lawful universe you could pick all sorts of instruments, tracking the change of light over time, tracking temperature, tracking the luminosity of the Moon, for simple examples, and you’d start getting Kolmogorov-compressible data where the induction system could start figuring out repeating periods.
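As a crude illustration of that lawful/lawless difference, here is a toy Python comparison that uses an off-the-shelf compressor as a very rough stand-in for Kolmogorov complexity:

    import os, zlib

    lawful = b"day night " * 1000       # data with a repeating regularity
    lawless = os.urandom(len(lawful))   # noise of the same length

    print(len(lawful), len(zlib.compress(lawful)))    # 10000 -> a few dozen bytes
    print(len(lawless), len(zlib.compress(lawless)))  # 10000 -> roughly 10000 bytes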
The core thing “independent of context” in all this is that all the universal induction systems are reduced to basically taking a series of numbers as input, and trying to develop an efficient predictor for what the next number will be. The argument in the paper is that this construction is basically sufficient for all the interesting things an induction solution could do, and that all the various real-world cases where induction is needed can be basically reduced into such a system by describing the instrumentation which turns real-world input into a time series of numbers.
Okay. In this case, the article does seem to begin to make sense. Its connection to the problem of induction is perhaps rather thin. The idea of using low Kolmogorov complexity as justification for an inductive argument cannot be deduced as a theorem from something that’s “surely true”, whatever that might mean. And if it were taken as an axiom, philosophers would say: “That’s not an axiom. That’s the conclusion of an inductive argument you made! You are begging the question!”
However, it seems like advancements in computation theory have made people able to do at least remotely practical stuff in areas that bear resemblance to more inert philosophical ponderings. That’s good, and this article might even be used as justification for my theory RP, given that the use of Kolmogorov complexity is accepted. I was not familiar with the concept of Kolmogorov complexity despite having heard of it a few times, but my intuitive goal was to minimize the theory’s Kolmogorov complexity by removing arbitrary declarations and favoring symmetry.
I would say that there are many ways of solving the problem of induction. Whether a theory is a solution to the problem of induction depends on whether it covers the entire scope of the problem. I would say this article covers half of the scope. The rest is not covered, to my knowledge, by anyone other than Robert Pirsig and experts on Buddhism, but these writings are very difficult to approach analytically. Regrettably, I am still unable to publish the relativizability article, which is intended to succeed in the analytic approach.
In any case, even though the widely rejected “statistical relevance” and this “Kolmogorov complexity relevance” share the same flaw, if presented as an explanation of inductive justification, the approach is interesting. Perhaps, even, this paper should be titled: “A Formalization of Occam’s Razor Principle”. Because that’s what it surely seems to be. And I think it’s actually an achievement to formalize that principle—an achievement more than sufficient to justify the writing of the article.
“When artificial intelligence researchers attempted to capture everyday statements of inference using classical logic they began to realize this was a difficult if not impossible task.”
I hope nobody’s doing this anymore. It’s obviously impossible. “Everyday statements of inference”, whatever that might mean, are not exclusively statements of first-order logic, because Russell’s paradox is simple enough to be formulated by talking about barbers. The liar paradox is also expressible with simple, practical language.
Wait a second. Wikipedia already knows this stuff is a formalization of Occam’s razor. One article seems to attribute the formalization of that principle to Solomonoff, another one to Hutter. In addition, Solomonoff induction, which is essential for both, is not computable. Ugh. So Hutter and Rathmanner actually have the nerve to begin that article by talking about the problem of induction, when the goal is obviously to introduce concepts of computation theory? And they are already familiar with Occam’s razor, and aware of it having, at least probably, been formalized?
Okay then, but this doesn’t solve the problem of induction. They have not even formalized the problem of induction in a way that accounts for the logical structure of inductive inference, and leaves room for various relevance operators to take place. Nobody else has done that either, though. I should get back to this later.
The intelligent agent model still deals with deterministic machines that take input and produce output, but it incorporates the possibility of changing the agent’s internal state by presenting the output function as just taking the entire input history X* as an input to the function that produces the latest output Y, so that a different history of inputs can lead to a different output on the latest input, just like it can with humans and more sophisticated machines.
At first, I didn’t quite understand this. But I’m reading Introduction to Automata Theory, Languages and Computation. Are you using the * in the same sense here as it is used in the following UNIX-style regular expression?
‘[A-Z][a-z]*’
This expression is intended to refer to all words that begin with a capital letter and do not contain any surprising characters such as ö or -. Examples: “Jennifer”, “Washington”, “Terminator”. The * means [a-z] may have an arbitrary number of iterations.
Yeah, that’s probably where it comes from. The [A-Z] can be read as “the set of every possible English capital letter” just like X can be read as “the set of every possible perception to an agent”, and the * denotes some ordered sequence of elements from the set exactly the same way in both cases.
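In code, the point is just that the agent’s output is a function of the whole history, not only the latest perception. A minimal toy sketch (my own example, not any particular agent model):

    def agent(history):
        # history is X*: the ordered sequence of all perceptions so far
        latest = history[-1]
        if latest == "cat?" and "saw a cat" in history:
            return "yes"
        return "no"

    print(agent(["woke up", "saw a cat", "cat?"]))  # -> yes
    print(agent(["woke up", "saw a dog", "cat?"]))  # -> no, same latest input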
I don’t find the Chinese room argument related to our work; besides, it seems to vaguely suggest that what we are doing can’t be done. What I meant is that an AI should be able to:
Observe behavior
Categorize entities into deterministic machines which cannot take a metatheoretic approach to their data processing habits and alter them.
Categorize entities into agents who process information recursively and can consciously alter their own data processing or explain it to others.
Use this categorization ability to differentiate entities whose behavior can be corrected or explained by means of social interaction.
Use the differentiation ability to develop the “common sense” view that, given permission by the owner of the scanner and if deemed interesting, the robot cannot ask the brain scanner itself for consent to take it apart and fix it.
Understand that even if the robot were capable of performing incredibly precise neurosurgery, the person will understand the notion that the robot wishes to use surgery to alter his thoughts to correspond with the result of the brain scanner, and could consent to this or deny consent.
Possibly try to have a conversation with the person in order to find out why they said that they were not thinking of a cat.
Failure to understand this could make the robot naively both take machines apart and cut people’s brains open in order to experimentally verify which approach produces better results. Of course there are also other things to consider when the robot tries to figure out what to do.
I don’t consider robots and humans fundamentally different. If the AI were complex enough to understand the aforementioned things, it would also understand the notion that someone wants to take it apart and reprogram it, and could consent or object.
The scanner can already do the extremely difficult task of mapping a raw brain state to the act of thinking about a cat, it should also be able to tell from the brain state whether the person has something going on in their brain that will make them deny thinking about a cat.
The latter has, to my knowledge, never been done. Arguably, the latter task requires a different ability which the scanner may not have. The former requires acquiring a bitmap and using image recognition. It has already been done with simple images such as parallel black and white lines, but I don’t know whether bitmaps or image recognition were involved in that. If the cat is a problem, let’s simplify the image to the black and white lines.
Things being deterministic and predictable from knowing their initial state doesn’t mean they can’t have complex behavior reacting to a long history of sensory inputs accompanied by a large amount of internal processing that might correspond quite well to what we think of as reflection or understanding.
Even the simplest entities, such as irrational numbers or cellular automata, can have complex behavior. Humans, too, could be deterministic and predictable given that the one analyzing a human has enough data and computing power. But RP is about the understanding a consciousness could attain of itself. Such an understanding could not be deterministic within the viewpoint of that consciousness. That would be like trying to have a map contain itself. Every iteration of the map representing itself needs also to be included in the map, resulting in a requirement that the map should contain an infinite amount of information. Only an external observer could make a finite map, but that’s not what I had in mind when beginning this RP project. I do consider the goals of RP somehow relevant to AI, because I don’t think it’s okay for a robot to be unable to conceptualize its own thought very elaborately, if it is intended to be as human as possible, and maybe even able to write novels.
I am interested in the ability to genuinely understand the worldviews of other people. For example, the gap between scientific and religious people. In the extreme, these people think of each other in such a derogatory way that it is as if they viewed each other as having failed the Turing test. I would also like robots to understand the goals and values of religious people.
I’m still not really grasping the underlying assumptions behind this approach.
Well, that’s supposed to be a good thing, because there are supposed to be none. But saying that might not help. If you don’t know what consciousness or the experience of reality mean in my use (perhaps because you would reduce such experiences to theoretical models of physical entities and states of neural networks), you will probably not understand what I’m doing. That would suggest you cannot conceptualize idealistic ontology or you believe “mind” to refer to an empty set.
I see here the danger of rather trivial debates, such as whether I believe an AI could “experience” consciousness or reality. I don’t know what such a question would even mean. I am interested in whether it can conceptualize them in ways a human could.
(The underlying assumptions in the computer science approach are, roughly, “the physical world exists, and is made of lots of interacting, simple, Turing-computable stuff and nothing else”
The CTMU also states something to the effect of this. In that case, Langan is making a mistake, because he believes the CTMU to be a Wheeler-style reality theory, which contradicts the earlier statement. In your case, I guess it’s just an opinion, and I don’t feel a need to say you should believe otherwise. But I suppose I can present a rather cogent argument against that within a few days. The argument would be in the language of formal logic, so you should be able to understand it. Stay tuned...
, “animals and humans are just clever robots made of the stuff”, “magical souls aren’t involved, not even if they wear a paper bag that says ‘conscious experience’ on their head”)
I don’t wish to be impolite, but I consider these topics boring and obvious. Hopefully I haven’t missed anything important when making this judgement.
Your strange link is very intriguing. I very much like being given this kind of link. Thank you.
About the classification thing: Agree that it’s very important that a general AI be able to classify entities into “dumb machines” and things complex enough to be self-aware, warrant an intentional stance and require ethical consideration. Even putting aside the ethical concerns, being able to recognize complex agents with intentions and model their intentions instead of their most likely massively complex physical machinery is probably vital to any sort of meaningful ability to act in a social domain with many other complex agents (cf. Dennett’s intentional stance)
The latter has, to my knowledge, never been done. Arguably, the latter task requires a different ability which the scanner may not have. The former requires acquiring a bitmap and using image recognition. It has already been done with simple images such as parallel black and white lines, but I don’t know whether bitmaps or image recognition were involved in that. If the cat is a problem, let’s simplify the image to the black and white lines.
I understood that the existing image reconstruction experiments measure the activation of the visual cortex when the subject is actually viewing an image, which does indeed get you a straightforward mapping to a bitmap. This isn’t the same as thinking about a cat: a person could be thinking about a cat while not looking at one, and they could have a cat in their visual field while daydreaming or suffering from hysterical blindness, so that they weren’t thinking about a cat despite having a cat image correctly show up in their visual cortex scan.
I don’t actually know what the neural correlate of thinking about a cat, as opposed to having one’s visual cortex activated by looking at one, would be like, but I was assuming that interpreting it would require a much more sophisticated understanding of the brain, basically at the level of difficulty of telling whether a brain scan correlates with thinking about freedom, a theory of gravity or reciprocity. Basically something that’s entirely beyond current neuroscience and more indicative of some sort of Laplace’s-demon-like thought experiment where you can actually observe and understand the whole mechanical ensemble of the brain.
But RP is about the understanding a consciousness could attain of itself. Such an understanding could not be deterministic within the viewpoint of that consciousness. That would be like trying to have a map contain itself.
Quines are maps that contain themselves. A quining system could reflect on its entire static structure, though it would have to run some sort of emulation slower than its physical substrate to predict its future states. Hofstadter’s GEB links quines to reflection in AI.
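For instance, the standard minimal Python quine, a program whose output is exactly its own source text:

    s = 's = %r\nprint(s %% s)'
    print(s % s)

Running it prints those two lines verbatim, so the program is, in a concrete sense, a map that contains itself.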
Well, that’s supposed to be a good thing, because there are supposed to be none. But saying that might not help. If you don’t know what consciousness or the experience of reality mean in my use (perhaps because you would reduce such experiences to theoretical models of physical entities and states of neural networks), you will probably not understand what I’m doing. That would suggest you cannot conceptualize idealistic ontology or you believe “mind” to refer to an empty set.
“There aren’t any assumptions” is just a plain non-starter. There’s the natural language we’re using to present the theory and ground its concepts, and natural language basically carries a billion years of evolution leading to the three-billion-base-pair human genome loaded with accidental complexity, which in turn leads to something from ten to a hundred thousand years of human cultural evolution with even more accidental complexity. That probably gets us something in the ballpark of 100 megabytes of irreducible complexity from the human DNA that you need to build up a newborn brain, and another 100 megabytes (going by the heuristic of one bit of permanently learned knowledge per second) for the kernel of the cultural stuff a human needs to learn from their perceptions to be able to competently deal with concepts like “income tax” or “calculus”. You get both of those for free when talking with other people, and neither when trying to build an AGI-grade theory of the mind.
This is also why I spelled out the trivial basic assumptions I’m working from (and probably did a very poor job at actually conveying the whole idea complex). When you start doing set theory, I assume we’re dealing with things at the complexity of mathematical objects. Then you throw in something like “anthropology” as an element in a set, and I, still in math mode, start going, whaa, you need humans before you have anthropology, and you need the billion years of evolution leading to the accidental complexity in humans to have humans, and you need physics to have the humans live and run the societies for anthropology to study, and you need the rest of the biosphere for the humans to not just curl up and die in the featureless vacuum and, and.. and that’s a lot of math. While the actual system with the power sets looks just like uniform, featureless soup to me. Sure, there are all the labels, which make my brain do the above i-don’t-get-it dance, but the thing I’m actually looking for is the mathematical structure. And that’s just really simple, nowhere near what you’d need to model a loose cloud of hydrogen floating in empty space, not to mention something many orders of magnitude more complex like a society of human beings.
My confusion about the assumptions is basically that I get the sense that analytic philosophers seem to operate like they could just write the name of some complex human concept, like “morality”, then throw in some math notation like modal logic, quantified formulas and set memberships, and call it a day. But what I’m expecting is something that teaches me how to program a computer to do mind-stuff, and a computer won’t have the corresponding mental concept for the word “morality” like a human has, since the human has the ~200M special sauce kernel which gives them that. And I hardly ever see philosophers talking about this bit.
A theory of mind that can actually do the work needs to build up the same sort of kernel evolution and culture have set up for people. For the human ballpark estimate, you’d have to fill something like 100 000 pages with math, all setting up the basic machinery you need for the mind to get going. A very abstracted out theory of mind could no doubt cut off an order of magnitude or two out of that, but something like Maxwell’s equations on a single sheet of paper won’t do. It isn’t answering the question of how you’d tell a computer how to be a mind, and that’s the question I keep looking at this stuff with.
It isn’t answering the question of how you’d tell a computer how to be a mind, and that’s the question I keep looking at this stuff with.
There are many ways to answer that question. I have a flowchart and formulae. The opposite of that would be something to the effect of having the source code. I’m not sure why you expect me to have that. Was it something I said?
I thought I’ve given you links to my actual work, but I can’t find them. Did I forget? Hmm...
If you dislike metaphysics, only the latter is for you. I can’t paste the content, because the formatting on this website apparently does not permit HTML formulae. Wait a second, it does permit formulae, but only LaTeX. I know LaTeX, but the formulae aren’t in that format right now. I should maybe convert them.
You won’t understand the flowchart if you don’t want to discuss metaphysics. I don’t think I can prove that something could be useful to you when you don’t know what it is. You would have to know what it is and judge for yourself. If you don’t want to know, that’s okay.
I am currently not sure why you would want to discuss this thing at all, given that you do not seem quite interested in the formalisms, but you do not seem interested in metaphysics either. You seem to expect me to explain this stuff to you in terms of something that is familiar to you, yet you don’t seem very interested in having a discussion where I would actually do that. If you don’t know why you are having this discussion, maybe you would like to do something else?
There are quite probably others on LessWrong who would be interested in this, because there has been prior discussion of the CTMU. People interested in fringe theories, unfortunately, are not always the brightest of the lot, and I respect your ability to casually name-drop a bunch of things I will probably spend days thinking about.
But I don’t know why you wrote so much about billions of years, babies, human cultural evolution, 100 megabytes and such. I am troubled by the thought that you might think I’m some loony hippie who actually needs a recap on those things. I am not yet feeling very comfortable in this forum because I perceive myself as vulnerable to being misrepresented as some sort of a fool by people who don’t understand what I’m doing.
I’m not trying to change LessWrong. But if this forum has people criticizing the CTMU without having a clue of what it is, then I attain a certain feeling of entitlement. You can’t just go badmouthing people and their theories and not expect any consequences if you are mistaken. You don’t need to defend yourself either, because I’m here to tell you what recursive metaphysical theories such as the CTMU are about, or to recommend you shut up about the CTMU if you are not interested in metaphysics. I’m not here to bloat my ego by portraying other people as fools with witty rhetoric, and if you Google the CTMU, you’ll find a lot of people doing precisely that to it, and you will understand why I fear that I, too, could be treated in such a way.
I’m mostly writing this stuff trying to explain what my mindset, which I guess to be somewhat coincident with the general LW one, is like, and where it seems to run into problems with trying to understand these theories. My question about the assumptions is basically poking at something like “what’s the informal explanation of why this is a good way to approach figuring out reality”, which isn’t really an easy thing to answer. I’m mostly writing about my own viewpoint instead of addressing the metaphysical theory, since it’s easy to write about stuff I already understand, and a lot harder to to try to understand something coming from a different tradition and make meaningful comments about it. Sorry if this feels like dismissing your stuff.
The reason I went on about the complexity of the DNA and the brain is that this is stuff that wasn’t really known before the mid-20th century. Most of modern philosophy was being done when people had some idea that the process of life is essentially mechanical and not magical, but no real idea on just how complex the mechanism is. People could still get away with assuming that intelligent thought is not that formally complex around the time of Russell and Wittgenstein, until it started dawning just what a massive hairball of a mess human intelligence working in the real world is after the 1950s. Still, most philosophy seems to be following the same mode of investigation as Wittgenstein or Kant did, despite the sudden unfortunate appearance of a bookshelf full of volumes written by insane aliens between the realm of human thought and basic logic discovered by molecular biologists and cognitive scientists.
I’m not expecting people to rewrite the 100 000 pages of complexity into human mathematics, but I’m always aware that it needs to be dealt with somehow. For one thing, it’s a reason to pay more attention to empiricism than philosophy has traditionally done. As in, actually do empirical stuff, not just go “ah, yes, empiricism is indeed a thing, it goes in that slot in the theory”. You can’t understand raw DNA much, but you can poke people with sticks, see what they do, and get some clues on what’s going on with them.
For another thing, being aware of the evolutionary history of humans and the current physical constraints of human cognition and DNA can guide making an actual theory of mind from the ground up. The kludged up and sorta-working naturally evolved version might be equal to 100 000 pages of math, which is quite a lot, but also tells us that we should be able to get where we want without having to write 1 000 000 000 pages of math. A straight-up mysterian could just go, yeah, the human intelligence might be infinitely complex and you’ll never come up with the formal theory. Before we knew about DNA, we would have had a harder time coming up with a counterargument.
I keep going on about the basic science stuff, since I have the feeling that the LW style of approaching things basically starts from mid-20th century computer science and natural science, not from the philosophical tradition going back to antiquity, and there’s some sort of slight mutual incomprehension between it and modern traditional philosophy. It’s a bit like C.P. Snow’s Two Cultures thing. Many philosophers seem to be from Culture One, while LW is people from Culture Two trying to set up a philosophy of their own. Some key posts about LW’s problems with philosophy are probably Against Modal Logics and A Diseased Discipline. Also there’s the book Good and Real, which is philosophy being done by a computer scientist and which LW folk seem to find approachable.
The key ideas in the LW approach are that you’re running on top of a massive hairball of junky evolved cognitive machinery that will trip you up at any chance you get, so you’ll need to practice empirical science to figure out what’s actually going on with life, plain old thinking hard won’t help since that’ll just lead to your broken head machinery tripping you up again, and that the end result of what you’re trying to do should be a computable algorithm. Neither of these things shows up in traditional philosophy, since traditional philosophy got started before there was computer science or cognitive science or molecular biology. So LessWrongers will be confused by non-empirical attempts to get to the bottom of real-world stuff, and they will be confused if the get-to-the-bottom attempt doesn’t look like it will end up being an algorithm.
I’m not saying this approach is better. Philosophers obviously spend a long time working through their stuff, and what I am doing here is basically just picking low-hanging fruit from science that’s so recent that it hasn’t percolated into the cultural background thought yet. But we are living in interesting times when philosophers can stay mulling over the conceptual analysis, and then all of a sudden scientists will barge in and go, hey, we were doing some empirical stuff with machines, and it turns out counterfactual worlds are actually sort of real.
You don’t have to apologize, because you have been useful already. I don’t require you to go out of your way to analyze this stuff, but of course it would also be nice if we could understand each other.
The reason I went on about the complexity of the DNA and the brain is that this is stuff that wasn’t really known before the mid-20th century. Most of modern philosophy was being done when people had some idea that the process of life is essentially mechanical and not magical, but no real idea on just how complex the mechanism is. People could still get away with assuming that intelligent thought is not that formally complex around the time of Russell and Wittgenstein, until it started dawning just what a massive hairball of a mess human intelligence working in the real world is after the 1950s. Still, most philosophy seems to be following the same mode of investigation as Wittgenstein or Kant did, despite the sudden unfortunate appearance of a bookshelf full of volumes written by insane aliens between the realm of human thought and basic logic discovered by molecular biologists and cognitive scientists.
That’s a good point. The philosophical tradition of discussion I belong to was started in 1974 as a radical deviation from contemporary philosophy, which makes it pretty fresh. My personal opinion is that within decades or centuries, the largely obsolete mode of investigation you referred to will be mostly replaced by something that resembles what I and a few others are currently doing. This is because the old mode of investigation does not produce results. Despite intense scrutiny for 300 years, it has not provided an answer to such a simple philosophical problem as the problem of induction. Instead, it has corrupted the very writing style of philosophers. When one is reading philosophical publications by authors with academic prestige, every other sentence seems somehow defensive, and the writer seems to be squirming in the inconvenience caused by his intuitive understanding that what he’s doing is barren, but he doesn’t know of a better option. It’s very hard for a distinguished academic to go into the freaky realm and find out whether someone made sense despite having a very different approach from the academic one. Aloof but industrious young people, with lots of ability but little prestige, are more suitable for that.
Nowadays the relatively simple philosophical problem of induction (the proof of the Poincaré conjecture is, by comparison, extremely complex) has been portrayed as such a difficult problem that if someone devises a theoretical framework which facilitates a relatively simple solution to it, academic people are very inclined to state that they don’t understand the solution. I believe this is because they insist the solution should be something produced by several authors working together for a century, something that will make theoretical philosophy appear glamorous again. It’s not that glamorous, and I don’t think it was very glamorous to invent 0 either—whoever did that—but it was pretty important.
I’m not sure what good this ranting of mine is supposed to do, though.
I’m not expecting people to rewrite the 100 000 pages of complexity into human mathematics, but I’m always aware that it needs to be dealt with somehow. For one thing, it’s a reason to pay more attention to empiricism than philosophy has traditionally done. As in, actually do empirical stuff, not just go “ah, yes, empiricism is indeed a thing, it goes in that slot in the theory”. You can’t understand raw DNA much, but you can poke people with sticks, see what they do, and get some clues on what’s going on with them.
The Metaphysics of Quality, of which my RP is a much-altered instance, is an empiricist theory, written by someone who has taught creative writing at a university but has also worked as a technical writer. The author has a pretty good understanding of evolution, social matters, computers, stuff like that. Formal logic is the only thing in which he does not seem proficient, which maybe explains why it took so long for me to analyze his theories. :)
If you want, you can buy his first book, Zen and the Art of Motorcycle Maintenance, from Amazon for the price of a pint of beer. (Tap me on the shoulder if this is considered inappropriate advertising.) You seem to be logically rather demanding, which is good. It means I should tell you that in order to attain an understanding of the MOQ that explains a lot more of the metaphysical side of RP, you should also read his second book. Both are also available in every Finnish public library I have checked (maybe three or four libraries).
What more to say… Pirsig is extremely critical of the philosophical tradition starting from antiquity. I already know LW does not think highly of contemporary philosophy, and that’s why I thought we might have something in common in the first place. I think we belong to the same world, because I’m pretty sure I don’t belong to Culture One.
The key ideas in the LW approach are that you’re running on top of a massive hairball of junky evolved cognitive machinery that will trip you up at any chance you get
Okay, but nobody truly understands that hairball, if it’s the brain.
the end result of what you’re trying to do should be a computable algorithm.
That’s what I’m trying to do! But it is not my only goal. I’m also trying to have at least some discourse with World One, because I want to finish a thing I began. My friend is currently in the process of writing a formal definition related to that thing, and I won’t get far with the algorithm approach before he’s finished that and is available for something else. But we are actually planning that. I’m not bullshitting you or anything. We have been planning to do that for some time already. And it won’t be fancy at first, but I suppose it could get better and better the more we work on it, or the approach would maybe prove a failure, but that, again, would be an interesting result. Our approach is maybe not easily understood, though...
My friend understands philosophy pretty well, but he’s not extremely interested in it. I have this abstract model of how this algorithm thing should be done, but I can’t prove to anyone that it’s correct. Not right now. It’s just something I have developed by analyzing an unusual metaphysical theory for years. The reason my friend wants to do this, apparently, is that my enthusiasm is contagious and he enjoys maths for the sake of maths itself. But I don’t think I can convince people to do this with me on the grounds that it would be useful! Some time ago, people thought number theory was a completely useless but somehow “beautiful” form of mathematics. Now the products of number theory are used in top-secret military encryption, but the point is, nobody who originally developed number theory could have convinced anyone the theory would have such uses in the future. So, I don’t think I can have people working with me in hopes of attaining grand personal success. But I think I could meet someone who finds this kind of activity very enjoyable.
The “state basic assumptions” approach is not good in the sense that it would go all the way to explaining RP. It’s maybe a good starter, but I can’t really transform RP into something that could be understood from an O point of view. That would be like me having to express the equation x + 7 = 20 to you in the form x + y = 20. You couldn’t make any sense of that.
I really have to go now, actually I’m already late from somewhere...
A theory of mind that can actually do the work needs to build up the same sort of kernel evolution and culture have set up for people. For the human ballpark estimate, you’d have to fill something like 100 000 pages with math, all setting up the basic machinery you need for the mind to get going. A very abstracted out theory of mind could no doubt cut off an order of magnitude or two out of that, but something like Maxwell’s equations on a single sheet of paper won’t do. It isn’t answering the question of how you’d tell a computer how to be a mind, and that’s the question I keep looking at this stuff with.
You want a sweater. I give you a baby sheep, and it is the only baby sheep you have ever seen that is not completely lame or retarded. You need wool to produce the sweater, so why are you disappointed? Look, the mathematical part of the theory is something we wrote less than a week ago, and it is already better than any theory of this type I have ever heard of (three or four). The point is not that this would be excruciatingly difficult. The point is that for some reason, almost nobody is doing this. It probably has something to do with the severe stagnation in the field of philosophy. The people who could develop philosophy find the academic discipline so revolting they don’t.
I did not come to LessWrong to tell everyone I have solved the secrets of the universe, or that I am very smart. My ineptitude in math is the greatest single obstacle in my attempts to continue development. If I didn’t know exactly one person who is good at math and wants to do this kind of work with me, I might be in an insane asylum, but no more about that. I came here because this is my life… and even though I greatly value the MOQ community, everyone on those mailing lists is apparently even less proficient in maths and logic than I am. Maybe someone here thinks this is fun and wants to have a fun creative process with me.
I would like to write a few of those 100 000 pages that we need. I don’t get your point. You seem to require me to have written them before I have written them.
My confusion about the assumptions is basically that I get the sense that analytic philosophers seem to operate like they could just write the name of some complex human concept, like “morality”, then throw in some math notation like modal logic, quantified formulas and set memberships, and call it a day. But what I’m expecting is something that teaches me how to program a computer to do mind-stuff, and a computer won’t have the corresponding mental concept for the word “morality” like a human has, since the human has the ~200M special sauce kernel which gives them that. And I hardly ever see philosophers talking about this bit.
Do you expect to build the digital sauce kernel without any kind of a plan—not even a tentative one? If not, a few pages of extremely abstract formulae is all I have now, and frankly, I’m not happy about that either. I can’t teach you nearly anything you seem interested in, but I could really use some discussion with interested people. And you have already been helpful. You don’t need to consider me someone who is aggressively imposing his views on individual people. I would love to find people who are interested in these things, because there are so few of them.
I had a hard time figuring out what you mean by basic assumptions, because I’ve been doing this for such a long time that I tend to forget what kind of metaphysical assumptions are generally held by people who like science but are uninterested in metaphysics. I think I’ve now caught up with you. Here are some basic assumptions.
RP is about definable things. It is not supposed to make statements about undefinable things—not even the statement that they don’t exist, as you would seem to believe.
Humans come before anthropology in RP: the former is in O2 and the latter in O4. I didn’t know how to tell you that, because I didn’t know that this was the part of the theory you wanted to hear, rather than some other part, in order not to go whaaa. I’d need to tell you everything, but that would involve a lot of metaphysics. But the theory is not a theory of the history of the world, if “world” is something that begins with the Big Bang.
From your empirical scientific point of view, I suppose it would be correct to state that RP is a theory of how the self-conscious part of one person evolves during his lifetime.
At least in the current simple instance of RP, you don’t need to know anything about the metaphysical content to understand the math. You don’t need to go out of math mode, because there are no nonstandard metaphysical concepts among the formulae.
If you do go out of math mode and want to know what the symbols stand for, I think that’s very good. But this can only be explained to you in terms of metaphysics, because empirical science simply does not account for everything you experience. Suppose you stop by the grocery store. Where’s the empirical theory that accounts for that? Maybe some general sociological theory would. But my point is, no such empirical theory is actually implemented. You don’t acquire a scientific explanation for the things you did in the store. Still, you remember them. You experienced them. They exist in your self-conscious mind in some way that does not depend on your conception of the relationship between topology and model theory, or on your understanding of why fission of iron does not produce energy, or of how one investor could single-handedly affect whether a country joins the Euro. From your personal, what you might perhaps call “subjective”, point of view, it does not even depend on your conception of cognitive science, unless you actually apply that knowledge to it. You probably don’t do that all the time, although you do it sometimes.
I don’t subscribe to any kind of “subjectivism”, whatever that might be in this context, or idealism, in the sense that something like that would be “true” in a meaningful way. But you might agree that when trying to develop the theory underlying self-conscious phenomenal and abstract experience, you can’t begin from the Big Bang, because you weren’t there.
You could use RP to describe a world you experience in a dream, and the explanation would work as well as when you are awake. Physical theories don’t work in that world. For example, if you look at your watch in a dream, then look away, and look at it again, the watch may display a completely different time. Or the watch may function, but when you take it apart, you find that instead of clockwork, it contains something a functioning mechanical watch will not contain, such as coins.
RP is intended to relate abstract thought (O, N, S) to sensory perceptions, emotions and actions (R), but to define all relations between abstract entities to other abstract entities recursively.
One difference between RP and the empirical theories of cosmology and such that you mentioned is that the latter will not describe the ability of person X to conceptualize his own cognitive processes in a way that can actually be used, right now, to describe what, or rather how, some person is thinking with respect to abstract concepts. RP does that.
RP can be used to estimate the metaphysical outlook of other people. You seem to place most of the questions you label “metaphysical” or “philosophical” in O.
I don’t yet know if this forum tolerates much metaphysical discussion, but my theory is based on about six years of work on the Metaphysics of Quality. That is not mainstream philosophy and I don’t know how people here will perceive it. Its latest “authorized” variant, from 1991, decisively included mostly just the O patterns. Analyzing the theory was very difficult for me in general. But maybe I will confuse people if I say nothing about the metaphysical side. So I’ll think about what to say...
RP is not an instance of relativism (except in the Buddhist sense), absolutism, determinism, indeterminism, realism, antirealism or solipsism. Also, I consider all those theories to be some kind of figures of speech, because I can’t find any use for them except to illustrate a certain point in a certain discussion in a metaphorical fashion. In logical analysis, these concepts do not necessarily retain the same meaning when they are used again in another discussion. These concepts acquire definable meaning only when detached from the philosophical use and placed within a specific context.
Structurally, RP resembles what I believe computer scientists call context-free languages, or programming languages with dynamic typing. I am not yet sure of the exact definition of the former, but having written a few programs, I do understand what it means to do typing at run time. The Western mainstream philosophical tradition does not seem to include any theories that would be analogues of these computer science topics.
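If it helps, the run-time typing point looks roughly like this in Python (just an arbitrary dynamically typed language as an example, nothing specific to RP):

    x = 42
    print(type(x))     # <class 'int'> -- the type travels with the value, not the name
    x = "forty-two"
    print(type(x))     # <class 'str'> -- the same name now holds a value of another type
    # x + 1  # would fail only here, at run time; nothing checked it in advance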
I have read GEB but don’t remember much. I’ll recap what a quine is. I tend to need to discuss mathematical things with someone face to face before I understand them, which slows down progress.
The cat/line thing is not very relevant, but apparently I didn’t remember the experiment right. However, if the person and the robot could not see the lines at the same time for some reason—such as the robot needing to operate the scanner and thus not seeing inside the scanner—the robot could alter the person’s brain to produce a very strong response to parallel lines in order to verify that the screen inside the scanner, which displays the lines, does not malfunction, is not unplugged, the person is not blind, etc. There could be more efficient ways of finding such things out, but if the robot has replaceable hardware and can thus live indefinitely, it has all the time in the world...
According to the abstract, the scope of the theory you linked is a subset of RP. :D I find this hilarious because the theory was described as “ridiculously broad”. It seems to attempt to encompass all of O, and may contain interesting insight my work clearly does not contain. But RP defines a certain scope of things, and everything in this article seems to belong to O, with perhaps some N, without clearly differentiating the two. S is missing, which is rather usual in science. From the scientific point of view, it may be hard to understand what Buddhists could conceivably believe they achieve by meditation. They have practiced it for millennia, yet they did not do brain scans that would have revealed its beneficial effects, nor did they hand out questionnaires and compile the results into statistics. But they believed it is good to meditate, and were not very interested in knowing why it is good. That belongs to the realm of S.
In any case, this illustrates an essential feature of RP. It’s not so much a theory about “things”, you know, cars, flowers, finances, as a theory about what the most basic kinds of things are, or about what kinds of options for the scope of any theory or statement are intelligible. It doesn’t currently do much more, because the algorithm part is missing. It’s also not necessarily perfect or anything like that. If something apparently coherent cannot be included in the scope of RP in a way that makes sense, maybe the theory needs to be revised.
Perhaps I could give a weird link in return. This is written by someone who is currently a Professor of Analytic Philosophy at the University of Melbourne. I find the theory to mathematically outperform Langan’s in that it actually has mathematical content instead of some sort of a sketch. The writer expresses himself coherently and appears to understand in what style people expect to read that kind of text. But the theory does not recurse in interesting ways. It seems quite naive and simple to me and ignores the symbol grounding problem. It is practically an N-type theory, which only allegedly has S or O content. The writer also seems to make exaggerated interpretations of what Nagarjuna said. These exaggerated interpretations lead to the same assumptions which are the root of the contradiction in the CTMU, but The Structure of Emptiness is not described as a Wheeler-style reality theory, so in that paper the assumptions do not lead to a contradiction, although they still seem to misunderstand Nagarjuna.
By the way, I have thought about your way of asking for basic assumptions. I guess I initially confused it with you asking for some sort of axioms, but since you weren’t interested in the formalisms, I didn’t understand what you wanted. But now I have the impression that you asked me to make general statements, readily understood from the O viewpoint, about what the theory can do. I think it has been an interesting approach for me, because I didn’t use it in the MOQ community, which would have been unlikely to request it.
I’ll address the rest in a bit, but about the notation:
Questions to you:
Is T → U the Cartesian product of T and U?
What is *?
T -> U is a function from set T to set U. P* means a list of elements of set P; the difference from a set is that the elements of a list are in a specific order.
The notation as a whole was a somewhat fudged version of intelligent agent formalism. The idea is to set up a skeleton for modeling any sort of intelligent entity, based on the idea that the entity only learns things from its surroundings through a series of perceptions, which might for example be a series of matrices corresponding to the images a robot’s eye camera sees, and can only affect its surroundings by choosing an action it is capable of, such as moving a robotic arm or displaying text to a terminal.
The agent model is pretty all-encompassing, but also not that useful except as the very first starting point, since all of the difficulty is in the exact details of the function that turns the most likely massive amount of data in the perception history into a well-chosen action that efficiently furthers the goals of the AI.
Modeling AIs as the function from a history of perceptions to an action is also related to thought experiments like Ned Block’s Blockhead, where a trivial AI that passes the Turing test with flying colors is constructed by merely enumerating every possible partial conversation up to a certain length, and writing up the response a human would make at that point of that conversation.
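A toy sketch of the Blockhead construction in Python, with a made-up two-entry table standing in for the astronomically large real one:

    # Toy Blockhead: the "intelligence" is just an exhaustive lookup table from
    # conversation-so-far to the reply a human would give at that point.
    # A real Blockhead would need an entry for every possible partial conversation
    # up to some length, which is what makes it a thought experiment.
    TABLE = {
        ("Hello",): "Hi there!",
        ("Hello", "Hi there!", "How are you?"): "Fine, thanks. And you?",
    }

    def blockhead_reply(conversation):
        """Return the canned human-like reply for this exact conversation prefix."""
        return TABLE.get(tuple(conversation), "...")

    print(blockhead_reply(["Hello"]))                               # Hi there!
    print(blockhead_reply(["Hello", "Hi there!", "How are you?"]))  # Fine, thanks. And you?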
Scott Aaronson’s Why philosophers should care about computational complexity proposes to augment the usual high-level mathematical frameworks with some limits to the complexity of the black box functions, to make the framework reject cases like Blockhead, which seem to be very different from what we’d like to have when we’re looking for a computable function that implements an AI.
But my dilemma is that Chris Langan is the smartest known living man, which makes it really hard for me to shrug the CTMU off as nonsense.
You can’t rely too much on intelligence tests, especially in the super-high range. The tester himself admitted that Langan fell outside the design range of the test, so the listed score was an extrapolation. Further, IQ measurements, especially at the extremes and especially on only a single test (and as far as I could tell from the wikipedia article, he was only tested once) measure test-taking ability as much as general intelligence.
Even if he is the most intelligent man alive, intelligence does not automatically mean that you reach the right answer. All evidence points to it being rubbish.
Many smart people fool themselves in interesting ways thinking about this sort of thing. And of course, when predicting general intelligence based on IQ, remember to account for regression to the mean: if there’s such a thing as the smartest person in the world by some measure of general intelligence, it’s very unlikely it’ll be the person with the highest IQ.
A powerful computer with a bad algorithm or bad information can produce a high volume of bad results that are all internally consistent.
(IQ may not be directly analogous to computing power, but there are a lot of factors that matter more than the author’s intelligence when assessing whether a model bears out in reality.)
Google suggests you mean this CTMU.
Looks like rubbish to me, I’m afraid. If what’s on this site interests you, I think you’ll get a lot more out of the Sequences, including the tools to see why the ideas in the site above aren’t really worth pursuing.
Introduction to the CTMU
Yeah, I know what it looks like: meta-physical rubbish. But my dilemma is that Chris Langan is the smartest known living man, which makes it really hard for me to shrug the CTMU off as nonsense. Also, from what I skimmed, it looks like a much deeper examination of reductionism and strange loops, which are ideas that I hold to dearly.
I’ve read and understand the sequences, though I’m not familiar enough with them to use them without a rationalist context.
Eh, I’m smart too. Looks to me like you were right the first time and need to have greater confidence in yourself.
More to the point, you do not immediately fail the “common ground” test.
Pragmatically, I don’t care how smart you are, but whether you can make me smarter. If you are so much smarter than I as to not even bother, I’d be wasting my time engaging your material.
I should note that the ability to explain things isn’t the same attribute as intelligence. I am lucky enough to have it. Other legitimately intelligent people do not.
If your goal is to convey ideas to others, instrumental rationality seems to demand you develop that capacity.
Considering the extraordinary rarity of good explainers in this entire civilization, I’m saddened to say that talent may have something to do with it, not just practice.
I wonder what I should do. I’m smart, I seem to be able to explain things that I know to people well.. to my lament, I got the same problem as Thomas: I apparently suck at learning things so that they’re internalized and in my long term memory.
easy now
I can learn from dead people, stupid people, or by watching a tree for an hour. I don’t think I understand your point.
I didn’t use the word “learn”. My point is about a smart person conveying their ideas to someone. Taboo “smart”. Distinguish ability to reach goals, and ability to score high on mental aptitude tests. If they are goal-smart, and their goal is to convince, they will use their iq-smarts to develop the capacity to convince.
However intelligent he is, he fails to present his ideas so as to gradually build a common ground with lay readers. “If you’re so smart, how come you ain’t convincing?”
The “intelligent design” references on his Wikipedia bio are enough to turn me away. Can you point us to a well-regarded intellectual who has taken his work seriously and recommends his work? (I’ve used that sort of bridging tactic at least once, Dennett convincing me to read Julian Jaynes.)
“Convincing” has long been a problem for Chris Langan. Malcolm Gladwell relates a story about Langan attending a calculus course in first year undergrad. After the first lecture, he went to offer criticism of the prof’s pedagogy. The prof thought he was complaining that the material was too hard; Langan was unable to convey that he had understood the material perfectly for years, and wanted to see better teaching.
Being very intelligent does not imply not being very wrong.
You just get to take bigger mistakes than others. From the youtube videos Langan looks like a really bright fellow that has a very broken toolbox, and little correction. Argh!
It is. I got as far as this paragraph of the introduction to his paper before I found a critical flaw:
At this point, he’s already begging the question, i.e. presupposing the existence of supernatural entities. These “laws” he’s talking about are in his head, not in the world.
In other words, he hasn’t even got done presenting what problem he’s trying to solve, and he’s already got it completely wrong, and so it’s doubtful he can get to correct conclusions from such a faulty premise.
That’s not a critical flaw. In metaphysics, you can’t take for granted that the world is not in your head. The only thing you really can do is to find an inconsistency, if you want to prove someone wrong.
Langan has no problems convincing me. His attempt at constructing a reality theory is serious and mature and I think he conducts his business about the way an ordinary person with such aims would. He’s not a literary genius like Robert Pirsig, he’s just really smart otherwise.
I’ve never heard anyone present criticism of the CTMU that would actually imply an understanding of what Langan is trying to do. The CTMU has a mistake. It’s that Langan believes (p. 49) the CTMU satisfies the Law Without Law condition, which states: “Concisely, nothing can be taken as given when it comes to cosmogony.” (p. 8)
According to the Mind Equals Reality Principle, the CTMU is comprehensive. This principle “makes the syntax of this theory comprehensive by ensuring that nothing which can be cognitively or perceptually recognized as a part of reality is excluded for want of syntax”. (p. 15) But undefinable concepts can neither be proven to exist nor proven not to exist. This means the Mind Equals Reality Principle must be assumed as an axiom. But to do so would violate the Law Without Law condition.
The Metaphysical Autology Principle could be stated as an axiom, which would entail the nonexistence of undefinable concepts. This principle “tautologically renders this syntax closed or self-contained in the definitive, descriptive and interpretational senses”. (p. 15) But it would be arbitrary to have such an axiom, and the CTMU would again fail to fulfill Law Without Law.
If that makes the CTMU rubbish, then Russell’s Principia Mathematica is also rubbish, because it has a similar problem which was pointed out by Gödel. EDIT: Actually the problem is somewhat different than the one addressed by Gödel.
Langan’s paper can be found here EDIT: Fixed link.
To clarify, I’m not the generic “skeptic” of philosophical thought experiments. I am not at all doubting the existence of the world outside my head. I am just an apparently competent metaphysician in the sense that I require a Wheeler-style reality theory to actually be a Wheeler-style reality theory with respect to not having arbitrary declarations.
There might not be many people here who are sufficiently up to speed on philosophical metaphysics to have any idea what a Wheeler-style reality theory, for example, is. My stereotypical notion is that people at LW have pretty much been ignoring philosophy that isn’t grounded in mathematics, physics or cognitive science from Kant onwards, and won’t bother with stuff that doesn’t seem readable from that viewpoint. The tricky thing that would help would be to somehow translate the philosopher-speak into lesswronger-speak. Unfortunately this’d require some fluency in both.
It’s not like your average “competent metaphysician” would understand Langan either. He might not even understand Wheeler. Langan’s undoing is to have the goals of a metaphysician and the methods of a computer scientist. He is trying to construct a metaphysical theory which structurally resembles a programming language with dynamic type checking, as opposed to static typing. Now, metaphysicians do not tend to construct such theories, and computer scientists do not tend to be very familiar with metaphysics. Metaphysical theories tend to be deterministic instead of recursive, and to have a finite, preset number of states that an object can have. I find the CTMU paper a bit sketchy and missing important content, besides having the mistake. If you’re interested in the mathematical structure of a recursive metaphysical theory, here’s one: http://www.moq.fi/?p=242
Formal RP doesn’t require metaphysical background knowledge. The point is that because the theory includes a cycle of emergence, represented by the power set function, any state of the cycle can be defined in relation to other states and prior cycles, and the number of possible states is infinite. The power set function will generate a staggering amount of information in just a few cycles, though. Set R is supposed to contain sensory input and thus solve the symbol grounding problem.
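To give a feel for that, here is a quick Python check of how fast iterated power sets blow up (the three starting elements are placeholders, not anything from RP):

    from itertools import chain, combinations

    def power_set(s):
        """Return the power set of s as a set of frozensets."""
        items = list(s)
        return {frozenset(c) for c in chain.from_iterable(
            combinations(items, r) for r in range(len(items) + 1))}

    r = {"a", "b", "c"}
    level1 = power_set(r)        # 2**3 = 8 members
    level2 = power_set(level1)   # 2**8 = 256 members
    print(len(level1), len(level2))
    # A third application would already have 2**256 members, far too many to
    # enumerate, which is the sense in which a few cycles generate a staggering
    # amount of information.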
Of course the symbol grounding problem is rather important, so it doesn’t really suffice to say that “set R is supposed to contain sensory input”. The metaphysical idea of RP is something to the effect of the following:
Let n be 4.
R contains everything that could be used to ground the meaning of symbols.
R1 contains sensory perceptions
R2 contains biological needs such as eating and sex, and emotions
R3 contains social needs such as friendship and respect
R4 contains mental needs such as perceptions of symmetry and beauty (the latter is sometimes reducible to the Golden ratio)
N contains relations of purely abstract symbols.
N1 contains the elementary abstract entities, such as symbols and their basic operations in a formal system
N2 contains functions of symbols
N3 contains functions of functions. In mathematics I suppose this would include topology.
N4 contains information about the limits of the system, such as completeness or consistency. This information forms the basis of what “truth” is like.
Let ℘(T) be the power set of T.
The solving of the symbol grounding problem requires R and N to be connected. Let us assume that ℘(Rn) ⊆ Rn+1. R5 hasn’t been defined, though. If we don’t assume subsets of R to emerge from each other, we’ll have to construct a lot more complicated theories that are more difficult to understand.
This way we can assume there are two ways of connecting R and N. One is to connect them in the same order, and one in the inverse order. The former is set O and the latter is set S.
O set includes the “realistic” theories, which assume the existence of an “objective reality”.
℘(R1) ⊆ O1 includes theories regarding sensory perceptions, such as physics.
℘(R2) ⊆ O2 includes theories regarding biological needs, such as the theory of evolution
℘(R3) ⊆ O3 includes theories regarding social affairs, such as anthropology
℘(R4) ⊆ O4 includes theories regarding rational analysis and judgement of the way in which social affairs are conducted
The relationship between O and N:
N1 ⊆ O1 means that physical entities are the elementary entities of the objective portion of the theory of reality. Likewise:
N2 ⊆ O2
N3 ⊆ O3
N4 ⊆ O4
S set includes “solipsistic” ideas in which “mind focuses to itself”.
℘(R4) ⊆ S1 includes ideas regarding what one believes
℘(R3) ⊆ S2 includes ideas regarding learning, that is, adoption of new beliefs from one’s surroundings. Here social matters such as prestige, credibility and persuasiveness affect which beliefs are adopted.
℘(R2) ⊆ S3 includes ideas regarding judgement of ideas. Here, ideas are mostly judged by how they feel. Ie. if a person is revolted by the idea of creationism, they are inclined to reject it even without rational grounds, and if it makes them happy, they are inclined to adopt it.
℘(R1) ⊆ S4 includes ideas regarding the limits of the solipsistic viewpoint. Sensory perceptions of objectively existing physical entities obviously present some kind of a challenge to it.
The relationship between S and N:
N4 ⊆ S1 means that beliefs are the elementary entities of the solipsistic portion of the theory of reality. Likewise:
N3 ⊆ S2
N2 ⊆ S3
N1 ⊆ S4
That’s the metaphysical portion in a nutshell. I hope someone was interested!
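In case it is easier to see the structure without the metaphysical labels, here is the same wiring written out as plain data in Python. This is only my bookkeeping of the relations stated above; it does no work by itself:

    # The levels as strings and the two relations as edge lists, with n = 4.
    n = 4
    R = [f"R{i}" for i in range(1, n + 1)]   # grounding: perception, biology, social, mental
    N = [f"N{i}" for i in range(1, n + 1)]   # purely abstract levels
    O = [f"O{i}" for i in range(1, n + 1)]   # "objective" side, same order as R
    S = [f"S{i}" for i in range(1, n + 1)]   # "solipsistic" side, inverse order of R

    powerset_into = (
        [(R[i], R[i + 1]) for i in range(n - 1)]    # P(R_k) is a subset of R_(k+1)
        + [(R[i], O[i]) for i in range(n)]          # P(R_k) is a subset of O_k (same order)
        + [(R[i], S[n - 1 - i]) for i in range(n)]  # P(R_k) is a subset of S_(n+1-k) (inverse order)
    )
    subset_of = (
        [(N[i], O[i]) for i in range(n)]            # N_k is a subset of O_k
        + [(N[i], S[n - 1 - i]) for i in range(n)]  # N_k is a subset of S_(n+1-k)
    )
    print(powerset_into)
    print(subset_of)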
We were talking about applying the metaphysics system to making an AI earlier in IRC, and the symbol grounding problem came up there as a basic difficulty in binding formal reasoning systems to real-time actions. It doesn’t look like this was mentioned here before.
I’m assuming I’d want to actually build an AI that needs to deal with symbol grounding, that is, it needs to usefully match some manner of declarative knowledge it represents in its internal state to the perceptions it receives from the outside world and to the actions it performs on it. Given this, I’m getting almost no notion of what useful work this theory would do for me.
Mathematical descriptions can be useful for people, but it’s not given that they do useful work for actually implementing things. I can define a self-improving friendly general artificial intelligence mathematically by defining

FAI = <S, P*> as an artificial intelligence instance, consisting of its current internal state S and the history of its perceptions up to the present P*,

a: FAI -> A* as a function that gives the list of possible actions for a given FAI instance,

u: A -> Real as a function that gives the utility of each action as a real number, with higher numbers given to actions that advance the purposes of the FAI better based on its current state and perception history, and

f: FAI * A -> S, P as an update function that takes an action and returns a new FAI internal state with any possible self-modifications involved in the action applied, and a new perception item that contains whatever new observations the FAI made as a direct result of its action.

And there’s a quite complete mathematical description of a friendly artificial intelligence; you could probably even write a bit of neat pseudocode using the pieces there, but that’s still not likely to land me a cushy job supervising the rapid implementation of the design at SIAI, since I don’t have anything that does actual work there. All I did was push all the complexity into the black boxes of u, a and f.

I also implied a computational approach where the system enumerates every possible action, evaluates them all and then picks a winner with how I decided to split up the definition. This is mathematically expedient, given that in mathematics any concerns of computation time can be pretty much waved off, but appears rather naive computationally, as it is likely that both coming up with possible actions and evaluating them can get extremely expensive in the artificial general intelligence domain.
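For what it’s worth, that “bit of neat pseudocode” might look roughly like this in Python; the names are my own stand-ins, and all of the real work still hides inside the a, u and f arguments:

    from typing import Callable, List, Tuple, TypeVar

    State, Percept, Action = TypeVar("State"), TypeVar("Percept"), TypeVar("Action")

    def run_agent(state: State,
                  history: List[Percept],
                  a: Callable[[State, List[Percept]], List[Action]],
                  u: Callable[[Action], float],
                  f: Callable[[State, List[Percept], Action], Tuple[State, Percept]],
                  steps: int) -> State:
        """Naively enumerate candidate actions, pick the highest-utility one, apply it."""
        for _ in range(steps):
            candidates = a(state, history)        # possible actions for this instance
            best = max(candidates, key=u)         # exhaustive evaluation, then argmax
            state, percept = f(state, history, best)  # self-modification plus a new observation
            history.append(percept)
        return state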
With the metaphysics thing, beyond not getting a sense of it doing any work, I’m not even seeing where the work would hide. I’m not seeing black box functions that need to do an unknowable amount of work, just sets with strange elements being connected to other sets with strange elements. What should you be able to do with this thing?
You probably have a much more grassroots-level understanding of the symbol grounding problem. I have only solved the symbol grounding problem to the extent that I have a formal understanding of its nature.
In any case, I am probably approaching AI from a point of view that is far from the symbol grounding problem. My theory does not need to be seen as a useful solution to that problem. But when a useful solution is created, I postulate it can be placed within RP. Such a solution would have to be an algorithm for creating S-type or O-type sets of members of R.
More generally, I would find RP useful as an extremely general framework of how AI, or parts of AI, can be constructed in relation to each other, especially with regard to understanding language and the notion of consciousness. This doesn’t necessarily have anything to do with some more atomistic AI projects, such as trying to make a robot vacuum cleaner find its way back to the charging dock.
At some point, philosophical questions and AI will collide. Suppose the following thought experiment:
We have managed to create such a sophisticated brain scanner, that it can tell whether a person is thinking of a cat or not. Someone is put into the machine, and the machine outputs that the person is not thinking of a cat. The person objects and says that he is thinking of a cat. What will the observing AI make of that inconsistency? What part of the observation is broken and results in nonconformity of the whole?
1) The brain scanner is broken
2) The person is broken
In order to solve this problem, the AI may have to be able to conceptualize the fact that the brain scanner is a deterministic machine which simply accepts X as input and outputs Y. The scanner does not understand the information it is processing, and the act of processing information does not alter its structure. But the person is different.
RP should help with such problems because it is intended as an elegant, compact and flexible way of defining recursion while allowing the solution of the symbol grounding problem to be contained in the definition in a nontrivial way. That is, RP as a framework of AI is not something that says: “Okay, this here is RP. Just perform the function RP(sensory input) and it works, voilà.” Instead, it manages to express two different ways of solving the symbol grounding problem and to define their accuracy as a natural number n. In addition, many emergence relations in RP are logical consequences of the way RP solves the symbol grounding problem (or, if you prefer, “categorizes the parts of the actual solution to the symbol grounding problem”).
In the previous thought experiment, the AI should manage to understand that the scanner deterministically performs the operation ℘(R) ⊆ S, and does not define S in terms of anything else. The person, on the other hand, is someone whose information processing is based on RP or something similar.
But what you read from moq.fi is something we wrote just a few days ago. It is by no means complete.
One problem is that ℘(T) does not seem to define actual emergences, but only all possible emergences.
We should define functions for “generalizing” and “specifying” sets or predicates, in which generalization would create a new set or predicate from an existing one by adding members, and specification would do so by removing members.
We should add a discard order to sets. Sets that are used often have a high discard order, but sets that are never used end up erased from memory. This is similar to unused pathways in the brain dying out and often-used pathways becoming stronger. (A rough sketch of what I mean follows after this list.)
The theory does not yet have an algorithmic part, but it should have. That’s why it doesn’t yet do anything.
℘(Rn) should be defined to include a metatheoretic approach to the theory itself, facilitating modification of the theory with the yet-undefined generalizing and specifying functions.
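To be a bit more concrete about the discard order, here is a very rough sketch of the kind of thing I have in mind, roughly an LRU cache over named sets (the capacity figure is arbitrary):

    from collections import OrderedDict

    class DiscardStore:
        """Named sets with a discard order: rarely used sets eventually get forgotten."""
        def __init__(self, capacity=1000):
            self.capacity = capacity
            self._store = OrderedDict()              # name -> set, least recently used first

        def put(self, name, members):
            self._store[name] = set(members)
            self._store.move_to_end(name)
            if len(self._store) > self.capacity:
                self._store.popitem(last=False)      # forget the least recently used set

        def get(self, name):
            self._store.move_to_end(name)            # every use strengthens the "pathway"
            return self._store[name]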
Questions to you:
Is T → U the Cartesian product of T and U?
What is *?
I will not guarantee having discussions with me is useful for attaining a good job. ;)
I don’t really understand this part.
“The scanner does not understand the information but the person does” sounds like some variant of Searle’s Chinese Room argument when presented without further qualifiers. People in AI tend to regard Searle as a confused distraction.
The intelligent agent model still deals with deterministic machines that take input and produce output, but it incorporates the possibility of changing the agent’s internal state by presenting the output function as just taking the entire input history X* as an input to the function that produces the latest output Y, so that a different history of inputs can lead to a different output on the latest input, just like it can with humans and more sophisticated machines.
I suppose the idea here is that there is some difference whether there is a human being sitting in the scanner, or, say, a toy robot with a state of two bits where one is I am thinking about cats and the other is I am broken and will lie about thinking about cats. With the robot, we could just check the “broken” bit as well from the scan when the robot is disagreeing with the scanner, and if it is set, conclude that the robot is broken.
I’m not seeing how humans must be fundamentally different. The scanner can already do the extremely difficult task of mapping a raw brain state to the act of thinking about a cat, it should also be able to tell from the brain state whether the person has something going on in their brain that will make them deny thinking about a cat. Things being deterministic and predictable from knowing their initial state doesn’t mean they can’t have complex behavior reacting to a long history of sensory inputs accompanied by a large amount of internal processing that might correspond quite well to what we think of as reflection or understanding.
Sorry I keep skipping over your formalism stuff, but I’m still not really grasping the underlying assumptions behind this approach. (The underlying assumptions in the computer science approach are, roughly, “the physical world exists, and is made of lots of interacting, simple, Turing-computable stuff and nothing else”, “animals and humans are just clever robots made of the stuff”, “magical souls aren’t involved, not even if they wear a paper bag that says ‘conscious experience’ on their head”.)
The whole philosophical theory of everything thing does remind me of this strange thing from a year ago, where the building blocks for the theory were made out of nowadays more fashionable category theory rather than set theory though.
I’ve read some of this Universal Induction article. It seems to operate from flawed premises.
Suppose the brain uses algorithms. An uncontroversial supposition. From a computational point of view, the former citation is like saying: “In order for a computer to not run a program, such as Indiana Jones and the Fate of Atlantis, the computer must be executing some command to the effect of “DoNotExecuteProgram(‘IndianaJonesAndTheFateOfAtlantis’)”.
That’s not how computers operate. They just don’t run the program. They don’t need a special process for not running the program. Instead, not running the program is “implicitly contained” in the state of affairs that the computer is not running it. But this notion of implicit containment makes no sense for the computer. There are infinitely many programs the computer is not running at a given moment, so it can’t process the state of affairs that it is not running any of them.
Likewise, the use of an implicit bias towards simplicity cannot be meaningfully conceptualized by humans. In order to know how this bias simplifies everything, one would have to know, what information regarding “everything” is omitted by the bias. But if we knew that, the bias would not exist in the sense the author intends it to exist.
Furthermore:
The author says that there are variations of the no free lunch theorem for particular contexts. But he goes on to generalize that the notion of no free lunch theorem means something independent of context. What could that possibly be? Also, such notions as “arbitrary complexity” or “randomness” seem intuitively meaningful, but what is their context?
The problem is, if there is no context, the solution cannot be proven to address the problem of induction. But if there is a context, it addresses the problem of induction only within that context. Then philosophers will say that the context was arbitrary, and formulate the problem again in another context where previous results will not apply.
In a way, this makes the problem of induction seem like a waste of time. But the real problem is about formalizing the notion of context in such a way, that it becomes possible to identify ambiguous assumptions about context. That would be what separates scientific thought from poetry. In science, ambiguity is not desired and should therefore be identified. But philosophers tend to place little emphasis on this, and rather spend time dwelling on problems they should, in my opinion, recognize as unsolvable due to ambiguity of context.
The omitted information in this approach is information with a high Kolmogorov complexity, which is omitted in favor of information with low Kolmogorov complexity. A very rough analogy would be to describe humans as having a bias towards ideas expressible in few words of English over ideas that need many words of English to express. Using Kolmogorov complexity for sequence prediction instead of English language for ideas gets rid of the very many problems of rigor involved in the latter, but the basic idea is pretty much the same. You look into things that are briefly expressible before things that must be expressed at length. The information isn’t permanently omitted, it’s just deprioritized. The algorithm doesn’t start looking at the stuff you need long sentences to describe before it has convinced itself that there are no short sentences that describe the observations it wants to explain in a satisfactory way.
One bit of context that is assumed is that the surrounding universe is somewhat amenable to being Kolmogorov-compressed. That is, there are some recurring regularities that you can begin to discover. The term “lawful universe” sometimes thrown around in LW probably refers to something similar.
Solomonoff’s universal induction would not work in a completely chaotic universe, where there are no regularities for Kolmogorov compression to latch on. You’d also be unlikely to find any sort of native intelligent entities in such universes. I’m not sure if this means that the Solomonoff approach is philosophically untenable, but needing to have some discoverable regularities to begin with before discovering regularities with induction becomes possible doesn’t strike me as that great a requirement.
If the problem of context is about exactly where you draw the data for the sequence which you will then try to predict with Solomonoff induction: in a lawless universe you wouldn’t be able to infer things no matter which simple instrumentation you picked, while in a lawful universe you could pick all sorts of instruments, tracking the change of light over time, tracking temperature, tracking the luminosity of the Moon, for simple examples, and you’d start getting Kolmogorov-compressible data where the induction system could start figuring out repeating periods.
The core thing “independent of context” in all this is that all the universal induction systems are reduced to basically taking a series of numbers as input, and trying to develop an efficient predictor for what the next number will be. The argument in the paper is that this construction is basically sufficient for all the interesting things an induction solution could do, and that all the various real-world cases where induction is needed can be basically reduced into such a system by describing the instrumentation which turns real-world input into a time series of numbers.
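Here’s a toy illustration of the “short descriptions first” idea; this is not Solomonoff induction proper (which is uncomputable), and the hypotheses and the way their length is measured are made up for the example:

    observed = [1, 2, 4, 8, 16]

    # Hypotheses as (description, predictor) pairs; the length of the description
    # string stands in very loosely for Kolmogorov complexity.
    hypotheses = [
        ("f(n)=2^n",            lambda n: 2 ** n),
        ("table 1,2,4,8,16,31", lambda n: [1, 2, 4, 8, 16, 31][n]),  # fits, but longer
        ("f(n)=n+1",            lambda n: n + 1),                    # doesn't fit at all
    ]

    def fits(pred, data):
        return all(pred(i) == x for i, x in enumerate(data))

    consistent = [(d, p) for d, p in hypotheses if fits(p, observed)]
    desc, pred = min(consistent, key=lambda h: len(h[0]))   # shortest description wins
    print(desc, "-> next element predicted as", pred(len(observed)))   # f(n)=2^n -> 32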
Okay. In this case, the article does seem to begin to make sense. Its connection to the problem of induction is perhaps rather thin. The idea of using low Kolmogorov complexity as justification for an inductive argument cannot be deduced as a theorem of something that’s “surely true”, whatever that might mean. And if it were taken as an axiom, philosophers would say: “That’s not an axiom. That’s the conclusion of an inductive argument you made! You are begging the question!”
However, it seems like advances in computation theory have made people able to do at least remotely practical stuff in areas that bear resemblance to the more inert philosophical ponderings. That’s good, and this article might even be used as justification for my theory RP—given that the use of Kolmogorov complexity is accepted. I was not familiar with the concept of Kolmogorov complexity despite having heard of it a few times, but my intuitive goal was to minimize the theory’s Kolmogorov complexity by removing arbitrary declarations and favoring symmetry.
I would say, that there are many ways of solving the problem of induction. Whether a theory is a solution to the problem of induction depends on whether it covers the entire scope of the problem. I would say this article covers half of the scope. The rest is not covered, to my knowledge, by anyone else than Robert Pirsig and experts of Buddhism, but these writings are very difficult to approach analytically. Regrettably, I am still unable to publish the relativizability article, which is intended to succeed in the analytic approach.
In any case, even though the widely rejected “statistical relevance” and this “Kolmogorov complexity relevance” share the same flaw, if presented as an explanation of inductive justification, the approach is interesting. Perhaps, even, this paper should be titled: “A Formalization of Occam’s Razor Principle”. Because that’s what it surely seems to be. And I think it’s actually an achievement to formalize that principle—an achievement more than sufficient to justify the writing of the article.
Commenting the article:
“When artificial intelligence researchers attempted to capture everyday statements of inference using classical logic they began to realize this was a difficult if not impossible task.”
I hope nobody’s doing this anymore. It’s obviously impossible. “Everyday statements of inference”, whatever that might mean, are not exclusively statements of first-order logic, because Russell’s paradox is simple enough to be formulated by talking about barbers. The liar paradox is also expressible with simple, practical language.
Wait a second. Wikipedia already knows this stuff is a formalization of Occam’s razor. One article seems to attribute the formalization of that principle to Solomonoff, another one to Hutter. In addition, Solomonoff induction, which is essential for both, is not computable. Ugh. So Hutter and Rathmanner actually have the nerve to begin that article by talking about the problem of induction, when the goal is obviously to introduce concepts of computation theory? And they are already familiar with Occam’s razor, and aware of it having, at least probably, been formalized?
Okay then, but this doesn’t solve the problem of induction. They have not even formalized the problem of induction in a way that accounts for the logical structure of inductive inference, and leaves room for various relevance operators to take place. Nobody else has done that either, though. I should get back to this later.
At first, I didn’t quite understand this. But I’m reading Introduction to Automata Theory, Languages and Computation. Are you using the * in the same sense here as it is used in the following UNIX-style regular expression?
‘[A-Z][a-z]*’
This expression is intended to refer to all words that begin with a capital letter and do not contain any surprising characters such as ö or -. Examples: “Jennifer”, “Washington”, “Terminator”. The * means [a-z] may have an arbitrary number of iterations.
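For example, checking my own understanding in Python:

    import re

    pattern = re.compile(r'[A-Z][a-z]*')
    words = ["Jennifer", "Washington", "Terminator", "washington", "Jörn"]
    print([w for w in words if pattern.fullmatch(w)])
    # -> ['Jennifer', 'Washington', 'Terminator']; the last two fail because of
    #    the lowercase start and the surprising character ö, respectively.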
Yeah, that’s probably where it comes from. The [A-Z] can be read as “the set of every possible English capital letter” just like X can be read as “the set of every possible perception to an agent”, and the * denotes some ordered sequence of elements from the set exactly the same way in both cases.
I don’t find the Chinese room argument related to our work—besides, it seems to possibly vaguely try to state that what we are doing can’t be done. What I meant is that AI should be able to:
Observe behavior
Categorize entities into deterministic machines which cannot take a metatheoretic approach to their data processing habits and alter them.
Categorize entities into agents who process information recursively and can consciously alter their own data processing or explain it to others.
Use this categorization ability to differentiate entities whose behavior can be corrected or explained by means of social interaction.
Use the differentiation ability to develop the “common sense” view that, given permission by the owner of the scanner and if deemed interesting, the robot could take the brain scanner apart and fix it; it could not meaningfully ask the scanner itself for consent.
Understand that even if the robot were capable of performing incredibly precise neurosurgery, the person will understand the notion that the robot wishes to use surgery to alter his thoughts to match the result of the brain scanner, and could consent to this or refuse.
Possibly try to have a conversation with the person in order to find out why they said that they were not thinking of a cat.
Failure to understand this could make the robot naively both take machines apart and cut people’s brains open in order to experimentally verify which approach produces better results. Of course there are also other things to consider when the robot tries to figure out what to do.
I don’t consider robots and humans fundamentally different. If the AI were complex enough to understand the aforementioned things, it would also understand the notion that someone wants to take it apart and reprogram it, and could consent or object.
The latter has, to my knowledge, never been done. Arguably, the latter task requires different ability which the scanner may not have. The former requires acquiring a bitmap and using image recognition. It has already been done with simple images such as parallel black and white lines, but I don’t know whether bitmaps or image recognition were involved in that. If the cat is a problem, let’s simplify the image to the black and white lines.
Even the simplest entities, such as irrational numbers or cellular automata, can have complex behavior. Humans, too, could be deterministic and predictable, given that the one analyzing a human has enough data and computing power. But RP is about the understanding a consciousness could attain of itself. Such an understanding could not be deterministic within the viewpoint of that consciousness. That would be like trying to have a map contain itself: every iteration of the map representing itself needs also to be included in the map, so the map would have to contain an infinite amount of information. Only an external observer could make a finite map, but that’s not what I had in mind when beginning this RP project. I do consider the goals of RP somehow relevant to AI, because I don’t suppose it’s okay for a robot to be unable to conceptualize its own thought very elaborately, if it is intended to be as human as possible, and maybe even able to write novels.
I am interested in the ability to genuinely understand the worldviews of other people. For example, the gap between scientific and religious people. In the extreme, these people think of each other in such a derogatory way, that it would be as if they would view each other as having failed the Turing test. I would like robots to understand also the goals and values of religious people.
Well, that’s supposed to be a good thing, because there are supposed to be none. But saying that might not help. If you don’t know what consciousness or the experience of reality mean in my use (perhaps because you would reduce such experiences to theoretical models of physical entities and states of neural networks), you will probably not understand what I’m doing. That would suggest you cannot conceptualize idealistic ontology or you believe “mind” to refer to an empty set.
I see here the danger of rather trivial debates, such as whether I believe an AI could “experience” consciousness or reality. I don’t know what such a question would even mean. I am interested in whether it can conceptualize them in ways a human could.
The CTMU also states something to the effect of this. In that case, Langan is making a mistake, because he believes the CTMU to be a Wheeler-style reality theory, which contradicts the earlier statement. In your case, I guess it’s just an opinion, and I don’t feel a need to say you should believe otherwise. But I suppose I can present a rather cogent argument against that within a few days. The argument would be in the language of formal logic, so you should be able to understand it. Stay tuned...
I don't wish to be impolite, but I consider these topics boring and obvious. Hopefully I haven't missed anything important in making this judgement.
Your strange link is very intriguing. I very much like being given links of this kind. Thank you.
About the classification thing: I agree that it's very important that a general AI be able to classify entities into "dumb machines" and things complex enough to be self-aware, warrant an intentional stance, and require ethical consideration. Even putting aside the ethical concerns, being able to recognize complex agents with intentions, and to model their intentions rather than their most likely massively complex physical machinery, is probably vital to any sort of meaningful ability to act in a social domain with many other complex agents (cf. Dennett's intentional stance).
I understood that the existing image reconstruction experiments measure the activation of the visual cortex while the subject is actually viewing an image, which does indeed get you a straightforward mapping to a bitmap. This isn't the same as thinking about a cat: a person could be thinking about a cat while not looking at one, and they could have a cat in their visual field while daydreaming or suffering from hysterical blindness, so that they weren't thinking about a cat despite a cat image correctly showing up in their visual cortex scan.
I don't actually know what the neural correlate of thinking about a cat, as opposed to having one's visual cortex activated by looking at one, would be like, but I was assuming that interpreting it would require a much more sophisticated understanding of the brain, basically at the level of difficulty of telling whether a brain scan correlates with thinking about freedom, a theory of gravity, or reciprocity. Basically something that's entirely beyond current neuroscience and more indicative of some sort of Laplace's-demon-like thought experiment where you can actually observe and understand the whole mechanical ensemble of the brain.
Quines are maps that contain themselves. A quining system could reflect on its entire static structure, though it would have to run some sort of emulation, slower than its physical substrate, to predict its own future states. Hofstadter's GEB links quines to reflection in AI.
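For concreteness, here is one standard Python quine, a program whose output is exactly its own source text. This is only an illustration of the general idea, not anything specific to GEB or to RP:

```python
# A minimal quine: running this program prints its own source code verbatim.
s = 's = %r\nprint(s %% s)'
print(s % s)
```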
"There aren't any assumptions" is just a plain non-starter. There's the natural language we're using to present the theory and ground its concepts, and natural language carries an enormous load of accidental complexity: a billion years of evolution leading to the three-billion-base-pair human genome, followed by ten to a hundred thousand years of human cultural evolution with even more accidental complexity on top. That probably gets us something in the ballpark of 100 megabytes of irreducible complexity from the human DNA that you need to build up a newborn brain, and another 100 megabytes or so (going by the heuristic of one bit of permanently learned knowledge per second) for the kernel of the cultural stuff a human needs to learn from their perceptions to be able to competently deal with concepts like "income tax" or "calculus". You get both of those for free when talking with other people, and neither when trying to build an AGI-grade theory of the mind.
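A back-of-the-envelope check of those ballparks, assuming the figures stated above (the one-bit-per-second heuristic and a ~25-year span of concentrated cultural learning are the rough assumptions from the comment, not established measurements):

```python
# Rough arithmetic behind the ~100 MB ballparks mentioned above.
base_pairs = 3e9                           # human genome: ~3 billion base pairs
raw_genome_mb = base_pairs * 2 / 8 / 1e6   # 2 bits per base pair (A/C/G/T)
print(raw_genome_mb)                       # ~750 MB raw; "~100 MB irreducible" assumes heavy redundancy

seconds_per_year = 365 * 24 * 3600
learning_years = 25                        # assumed years of concentrated cultural learning
cultural_mb = learning_years * seconds_per_year / 8 / 1e6   # 1 bit retained per second
print(cultural_mb)                         # roughly 100 MB for the cultural kernel
```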
This is also why I spelled out the trivial basic assumptions I'm working from (and probably did a very poor job of actually conveying the whole idea complex). When you start doing set theory, I assume we're dealing with things at the complexity of mathematical objects. Then you throw in something like "anthropology" as an element in a set, and I, still in math mode, start going: whaa, you need humans before you have anthropology, and you need the billion years of evolution leading to the accidental complexity in humans to have humans, and you need physics to have the humans live and run the societies for anthropology to study, and you need the rest of the biosphere so the humans don't just curl up and die in a featureless vacuum, and, and... and that's a lot of math. Meanwhile the actual system with the power sets looks like uniform, featureless soup to me. Sure, there are all the labels, which make my brain do the above I-don't-get-it dance, but the thing I'm actually looking for is the mathematical structure. And that's just really simple, nowhere near what you'd need to model a loose cloud of hydrogen floating in empty space, let alone something many orders of magnitude more complex like a society of human beings.
My confusion about the assumptions is basically that I get the sense that analytic philosophers seem to operate as if they could just write the name of some complex human concept, like "morality", throw in some math notation like modal logic, quantified formulas and set memberships, and call it a day. But what I'm expecting is something that teaches me how to program a computer to do mind-stuff, and a computer won't have the corresponding mental concept for the word "morality" that a human has, since the human has the ~200 MB special-sauce kernel which gives them that. And I hardly ever see philosophers talking about this bit.
A theory of mind that can actually do the work needs to build up the same sort of kernel evolution and culture have set up for people. For the human ballpark estimate, you’d have to fill something like 100 000 pages with math, all setting up the basic machinery you need for the mind to get going. A very abstracted out theory of mind could no doubt cut off an order of magnitude or two out of that, but something like Maxwell’s equations on a single sheet of paper won’t do. It isn’t answering the question of how you’d tell a computer how to be a mind, and that’s the question I keep looking at this stuff with.
There are many ways to answer that question. I have a flowchart and formulae. The opposite of that would be something to the effect of having the source code. I’m not sure why you expect me to have that. Was it something I said?
I thought I’ve given you links to my actual work, but I can’t find them. Did I forget? Hmm...
The Metaphysical Origin of RP
Set Theoretic Explanation of the Main Recursion Loop
If you dislike metaphysics, only the latter is for you. I can't paste the content, because the formatting on this website apparently does not permit HTML formulae. Wait a second, it does permit formulae, but only LaTeX. I know LaTeX, but the formulae aren't in that format right now. I should maybe convert them.
You won’t understand the flowchart if you don’t want to discuss metaphysics. I don’t think I can prove that something, of which you don’t know what it is, could be useful to you. You would have to know what it is and judge for yourself. If you don’t want to know, it’s ok.
I am currently not sure why you would want to discuss this thing at all, given that you do not seem particularly interested in the formalisms, but you do not seem interested in metaphysics either. You seem to expect me to explain this stuff to you in terms of something that is familiar to you, yet you don't seem very interested in having a discussion where I would actually do that. If you don't know why you are having this discussion, maybe you would like to do something else?
There are quite probably others on LessWrong who would be interested in this, because there has been prior discussion of the CTMU. People interested in fringe theories, unfortunately, are not always the brightest of the lot, and I respect your ability to casually namedrop a bunch of things I will probably spend days thinking about.
But I don’t know why you wrote so much about billions of years, babies, human cultural evolution, 100 megabytes and such. I am troubled by the thought that you might think I’m some loony hippie who actually needs a recap on those things. I am not yet feeling very comfortable in this forum because I perceive myself as vulnerable to being misrepresented as some sort of a fool by people who don’t understand what I’m doing.
I'm not trying to change LessWrong. But if this forum has people criticizing the CTMU without having a clue what it is, then I attain a certain feeling of entitlement. You can't just go badmouthing people and their theories and not expect any consequences if you are mistaken. You don't need to defend yourself either, because I'm here to tell you what recursive metaphysical theories such as the CTMU are about, or to recommend you shut up about the CTMU if you are not interested in metaphysics. I'm not here to bloat my ego by portraying other people as fools with witty rhetoric, and if you Google the CTMU, you'll find a lot of people doing precisely that to the CTMU, and you will understand why I fear that I, too, could be treated in such a way.
I'm mostly writing this stuff trying to explain what my mindset, which I guess to be somewhat coincident with the general LW one, is like, and where it seems to run into problems when trying to understand these theories. My question about the assumptions is basically poking at something like "what's the informal explanation of why this is a good way to approach figuring out reality", which isn't really an easy thing to answer. I'm mostly writing about my own viewpoint instead of addressing the metaphysical theory, since it's easy to write about stuff I already understand, and a lot harder to try to understand something coming from a different tradition and make meaningful comments about it. Sorry if this feels like dismissing your stuff.
The reason I went on about the complexity of the DNA and the brain is that this is stuff that wasn't really known before the mid-20th century. Most of modern philosophy was done when people had some idea that the process of life is essentially mechanical and not magical, but no real idea of just how complex the mechanism is. People could still get away with assuming that intelligent thought is not that formally complex around the time of Russell and Wittgenstein; only after the 1950s did it start dawning just what a massive hairball of a mess human intelligence working in the real world is. Still, most philosophy seems to follow the same mode of investigation as Wittgenstein or Kant did, despite the sudden unfortunate appearance, courtesy of molecular biologists and cognitive scientists, of a bookshelf full of volumes written by insane aliens sitting between the realm of human thought and basic logic.
I’m not expecting people to rewrite the 100 000 pages of complexity into human mathematics, but I’m always aware that it needs to be dealt with somehow. For one thing, it’s a reason to pay more attention to empiricism than philosophy has traditionally done. As in, actually do empirical stuff, not just go “ah, yes, empiricism is indeed a thing, it goes in that slot in the theory”. You can’t understand raw DNA much, but you can poke people with sticks, see what they do, and get some clues on what’s going on with them.
For another thing, being aware of the evolutionary history of humans and the current physical constraints of human cognition and DNA can guide making an actual theory of mind from the ground up. The kludged up and sorta-working naturally evolved version might be equal to 100 000 pages of math, which is quite a lot, but also tells us that we should be able to get where we want without having to write 1 000 000 000 pages of math. A straight-up mysterian could just go, yeah, the human intelligence might be infinitely complex and you’ll never come up with the formal theory. Before we knew about DNA, we would have had a harder time coming up with a counterargument.
I keep going on about the basic science stuff, since I have the feeling that the LW style of approaching things basically starts from mid-20th century computer science and natural science, not from the philosophical tradition going back to antiquity, and there’s some sort of slight mutual incomprehension between it and modern traditional philosophy. It’s a bit like C.P. Snow’s Two Cultures thing. Many philosophers seem to be from Culture One, while LW is people from Culture Two trying to set up a philosophy of their own. Some key posts about LW’s problems with philosophy are probably Against Modal Logics and A Diseased Discipline. Also there’s the book Good and Real, which is philosophy being done by a computer scientist and which LW folk seem to find approachable.
The key ideas in the LW approach are that you're running on top of a massive hairball of junky evolved cognitive machinery that will trip you up at every chance it gets, so you need to practice empirical science to figure out what's actually going on with life (plain old thinking hard won't help, since that will just lead to your broken head machinery tripping you up again), and that the end result of what you're trying to do should be a computable algorithm. Neither of these things shows up in traditional philosophy, since traditional philosophy got started before there was computer science or cognitive science or molecular biology. So LessWrongers will be confused by non-empirical attempts to get to the bottom of real-world stuff, and they will be confused if the get-to-the-bottom attempt doesn't look like it will end up being an algorithm.
I'm not saying this approach is better. Philosophers obviously spend a long time working through their stuff, and what I am doing here is basically just picking low-hanging fruit from science so recent that it hasn't percolated into the cultural background thought yet. But we are living in interesting times when philosophers can stay mulling over conceptual analysis, and then all of a sudden scientists will barge in and go: hey, we were doing some empirical stuff with machines, and it turns out counterfactual worlds are actually sort of real.
You don’t have to apologize, because you have been useful already. I don’t require you to go out of your way to analyze this stuff, but of course it would also be nice if we could understand each other.
That's a good point. The philosophical tradition of discussion I belong to was started in 1974 as a radical deviation from contemporary philosophy, which makes it pretty fresh. My personal opinion is that within decades or centuries, the largely obsolete mode of investigation you referred to will be mostly replaced by something that resembles what I and a few others are currently doing. This is because the old mode of investigation does not produce results. Despite intense scrutiny for 300 years, it has not provided an answer to such a simple philosophical problem as the problem of induction. Instead, it has corrupted the very writing style of philosophers. When one reads philosophical publications by authors with academic prestige, every other sentence seems somehow defensive, and the writer seems to be squirming in the discomfort caused by his intuitive understanding that what he's doing is barren, while not knowing of a better option. It's very hard for a distinguished academic to go into the freaky realm and find out whether someone made sense despite having a very different approach from the academic one. Aloof but industrious young people, with lots of ability but little prestige, are more suitable for that.
Nowadays the relatively simple philosophical problem of induction (the proof of the Poincaré conjecture is, by comparison, extremely complex) has been portrayed as such a difficult problem that if someone devises a theoretical framework which facilitates a relatively simple solution to it, academic people are very inclined to state that they don't understand the solution. I believe this is because they insist the solution should be something produced by several authors working together for a century, something that will make theoretical philosophy appear glamorous again. It's not that glamorous, and I don't think it was very glamorous to invent zero either, whoever did that, but it was pretty important.
I’m not sure what good this ranting of mine is supposed to do, though.
The metaphysics of quality, of which my RP is a much-altered instance, is an empiricist theory, written by someone who has taught creative writing in Uni, but who has also worked writing technical documents. The author has a pretty good understanding of evolution, social matters, computers, stuff like that. Formal logic is the only thing in which he does not seem proficient, which maybe explains why it took so long for me to analyze his theories. :)
If you want, you can buy his first book, Zen and the Art of Motorcycle Maintenance, from Amazon at the price of a pint of beer. (Tap me on the shoulder if this is considered inappropriate advertising.) You seem to be logically rather demanding, which is good. It means I should tell you that in order to attain an understanding of the MOQ that explains a lot more of the metaphysical side of RP, you should also read his second book. They are also available in every Finnish public library I have checked (maybe three or four libraries).
What more to say… Pirsig is extremely critical of the philosophical tradition starting from antiquity. I already know LW does not think highly of contemporary philosophy, and that’s why I thought we might have something in common in the first place. I think we belong to the same world, because I’m pretty sure I don’t belong to Culture One.
Okay, but nobody truly understands that hairball, if it’s the brain.
That's what I'm trying to do! But it is not my only goal. I'm also trying to have at least some discourse with Culture One, because I want to finish a thing I began. My friend is currently in the process of writing a formal definition related to that thing, and I won't get far with the algorithm approach before he's finished with that and is available for something else. But we are actually planning to do it. I'm not bullshitting you or anything; we have been planning this for some time already. It won't be fancy at first, but I suppose it could get better and better the more we work on it, or the approach would maybe prove a failure, which, again, would be an interesting result. Our approach is maybe not easily understood, though...
My friend understands philosophy pretty well, but he's not extremely interested in it. I have this abstract model of how this algorithm thing should be done, but I can't prove to anyone that it's correct, not right now. It's just something I have developed by analyzing an unusual metaphysical theory for years. The reason my friend wants to do this is apparently that my enthusiasm is contagious and he enjoys maths for the sake of maths itself. But I don't think I can convince people to do this with me on the grounds that it would be useful! Some time ago, people thought number theory was a completely useless but somehow "beautiful" form of mathematics. Now the products of number theory are used in top-secret military encryption, but the point is, nobody who originally developed number theory could have convinced anyone the theory would have such uses in the future. So, I don't think I can have people working with me in hopes of attaining grand personal success. But I think I could meet someone who finds this kind of activity very enjoyable.
The "state basic assumptions" approach is not good in the sense that it would go all the way to explaining RP. It's maybe a good starter, but I can't really transform RP into something that could be understood from an O point of view. That would be like my needing to express the equation x + 7 = 20 to you only in terms of x + y = 20. You couldn't make any sense of that.
I really have to go now, actually I’m already late from somewhere...
I commented on Against Modal Logics.
You want a sweater. I give you a baby sheep, and it is the only baby sheep you have ever seen that is not completely lame or retarded. You need wool to produce the sweater, so why are you disappointed? Look, the mathematical part of the theory is something we wrote less than a week ago, and it is already better than any theory of this type I have ever heard of (three or four). The point is not that this would be excruciatingly difficult. The point is that for some reason, almost nobody is doing this. It probably has something to do with the severe stagnation in the field of philosophy. The people who could develop philosophy find the academic discipline so revolting they don’t.
I did not come to LessWrong to tell everyone I have solved the secrets of the universe, or that I am very smart. My ineptitude in math is the greatest single obstacle in my attempts to continue development. If I didn't know exactly one person who is good at math and wants to do this kind of work with me, I might be in an insane asylum, but no more about that. I came here because this is my life... and even though I greatly value the MOQ community, everyone on those mailing lists is apparently even less proficient in maths and logic than I am. Maybe someone here thinks this is fun and wants to have a fun creative process with me.
I would like to write a few of those 100 000 pages that we need. I don’t get your point. You seem to require me to have written them before I have written them.
Do you expect to build the digital sauce kernel without any kind of a plan, not even a tentative one? If not, a few pages of extremely abstract formulae is all I have now, and frankly, I'm not happy about that either. I can't teach you nearly anything you seem interested in, but I could really use some discussion with interested people. And you have already been helpful. You don't need to consider me someone who is aggressively imposing his views on individual people. I would love to find people who are interested in these things, because there are so few of them.
I had a hard time figuring out what you mean by basic assumptions, because I've been doing this for such a long time that I tend to forget what kind of metaphysical assumptions are generally held by people who like science but are uninterested in metaphysics. I think I've now caught up with you. Here are some basic assumptions.
RP is about definable things. It is not supposed to make statements about undefinable things, not even that they don't exist, as you would seem to believe.
Humans come before anthropology in RP; the former is in O2 and the latter in O4. I didn't know how to tell you that, because I didn't know that this was the part of the theory you wanted to hear in order not to go whaaa. I'd need to tell you everything, but that would involve a lot of metaphysics. In any case, the theory is not a theory of the history of the world, if "world" is something that begins with the Big Bang.
From your empirical scientific point of view, I suppose it would be correct to state that RP is a theory of how the self-conscious part of one person evolves during his lifetime.
At least in the current simple instance of RP, you don't need to know anything about the metaphysical content to understand the math. You don't need to go out of math mode, because there are no nonstandard metaphysical concepts among the formulae.
If you do go out of math mode and want to know what the symbols stand for, I think that's very good. But this can only be explained to you in terms of metaphysics, because empirical science simply does not account for everything you experience. Suppose you stop by the grocery store. Where's the empirical theory that accounts for that? Maybe some general sociological theory would. But my point is, no such empirical theory is actually implemented: you don't acquire a scientific explanation for the things you did in the store. Still you remember them. You experienced them. They exist in your self-conscious mind in some way that is not dependent on your conceptions of the relationship between topology and model theory, or on your understanding of why fission of iron does not produce energy, or of how one investor could single-handedly significantly affect whether a country joins the Euro. From your personal, what you might perhaps call "subjective", point of view, it does not even depend on your conception of cognitive science, unless you actually apply that knowledge to it, which you probably don't do all the time, although you do it sometimes.
I don’t subscribe to any kind of “subjectivism”, whatever that might be in this context, or idealism, in the sense that something like that would be “true” in a meaningful way. But you might agree that when trying to develop the theory underlying self-conscious phenomenal and abstract experience, you can’t begin from the Big Bang, because you weren’t there.
You could use RP to describe a world you experience in a dream, and the explanation would work as well as when you are awake. Physical theories don’t work in that world. For example, if you look at your watch in a dream, then look away, and look at it again, the watch may display a completely different time. Or the watch may function, but when you take it apart, you find that instead of clockwork, it contains something a functioning mechanical watch will not contain, such as coins.
RP is intended to relate abstract thought (O, N, S) to sensory perceptions, emotions and actions (R), but to define all relations between abstract entities and other abstract entities recursively.
One difference between RP and the empirical theories of cosmology and such that you mentioned is that the latter will not describe the ability of person X to conceptualize his own cognitive processes in a way that can actually be used right now to describe what, or rather how, some person is thinking with respect to abstract concepts. RP does that.
RP can be used to estimate the metaphysical outlook of other people. You seem to place most of the questions you label "metaphysical" or "philosophical" in O.
I don't yet know whether this forum tolerates much metaphysical discussion, but my theory is based on about six years of work on the Metaphysics of Quality. That is not mainstream philosophy and I don't know how people here will perceive it. I have altered the MOQ a lot; its latest "authorized" variant, from 1991, decisively included mostly just the O patterns. Analyzing the theory was very difficult for me in general. But maybe I will confuse people if I say nothing about the metaphysical side. So I'll think about what to say...
RP is not an instance of relativism (except in the Buddhist sense), absolutism, determinism, indeterminism, realism, antirealism or solipsism. Also, I consider all those theories to be a kind of figure of speech, because I can't find any use for them except to illustrate a certain point in a certain discussion in a metaphorical fashion. In logical analysis, these concepts do not necessarily retain the same meaning when they are used again in another discussion. They acquire definable meaning only when detached from the philosophical use and placed within a specific context.
Structurally, RP resembles what I believe computer scientists call context-free languages, or programming languages with dynamic typing. I am not yet sure of the exact definition of the former, but having written a few programs, I do understand what it means to do typing at run time. The Western mainstream philosophical tradition does not seem to include any theories that would be analogues of these computer science topics.
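A minimal sketch of the two computer-science notions mentioned, assuming Python; this only illustrates the terms themselves, and the structural analogy to RP is a claim of the comment above, not something shown by the code:

```python
# Dynamic ("run-time") typing: the same name may hold values of different
# types, and type errors only surface when an operation is attempted.
x = 42
print(type(x))            # <class 'int'>
x = ["now", "a", "list"]
print(type(x))            # <class 'list'>

# A one-rule context-free grammar, S -> '(' S ')' | '', with a recursive recognizer.
def in_language(s: str) -> bool:
    """True iff s is generated by S -> '(' S ')' | ''."""
    if s == "":
        return True
    return s.startswith("(") and s.endswith(")") and in_language(s[1:-1])

print(in_language("((()))"))  # True
print(in_language("(()"))     # False
```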
I have read GEB but don’t remember much. I’ll recap what a quine is. I tend to need to discuss mathematical things with someone face to face before I understand them, which slows down progress.
The cat/line thing is not very relevant, but apparently I didn't remember the experiment right. However, if the person and the robot could not see the lines at the same time for some reason, such as the robot needing to operate the scanner and thus not seeing inside it, the robot could alter the person's brain to produce a very strong response to parallel lines in order to verify that the screen inside the scanner which displays the lines is not malfunctioning or unplugged, that the person is not blind, and so on. There could be more efficient ways of finding such things out, but if the robot has replaceable hardware and can thus live indefinitely, it has all the time in the world...
According to the abstract, the scope of the theory you linked is a subset of RP. :D I find this hilarious, because the theory was described as "ridiculously broad". It seems to attempt to encompass all of O, and may contain interesting insight my work clearly does not contain. But RP defines a certain scope of things, and everything in this article seems to belong to O, with perhaps some N, without clearly differentiating the two. S is missing, which is rather usual in science. From the scientific point of view, it may be hard to understand what Buddhists could conceivably believe they achieve by meditation. They have practiced it for millennia, yet they did not do brain scans that would have revealed its beneficial effects, and they did not conduct questionnaires and compile the results into statistics either. But they believed it is good to meditate, and were not very interested in knowing why it is good. That belongs to the realm of S.
In any case, this illustrates an essential feature of RP. It's not so much a theory about "things", you know, cars, flowers, finances, as a theory about what the most basic kinds of things are, or about what options for the scope of any theory or statement are intelligible. It doesn't currently do much more, because the algorithm part is missing. It's also not necessarily perfect or anything like that. If something apparently coherent cannot be included in the scope of RP in a way that makes sense, maybe the theory needs to be revised.
Perhaps I could give a weird link in return. This is written by someone who is currently a Professor of Analytic Philosophy at the University of Melbourne. I find the theory to mathematically outperform Langan's in that it actually has mathematical content instead of some sort of sketch. The writer expresses himself coherently and appears to understand in what style people expect to read that kind of text. But the theory does not recurse in interesting ways. It seems quite naive and simple to me and ignores the symbol grounding problem. It is practically an N-type theory, which only allegedly has S or O content. The writer also seems to make exaggerated interpretations of what Nagarjuna said. These exaggerated interpretations lead to making the same assumptions which are the root of the contradiction in the CTMU, but The Structure of Emptiness is not described as a Wheeler-style reality theory, so in that paper the assumptions do not lead to a contradiction, although they still seem to misread Nagarjuna.
By the way, I have thought about your way of asking for basic assumptions. I guess I initially confused it with asking for some sort of axioms, but since you weren't interested in the formalisms, I didn't understand what you wanted. Now I have the impression that you asked me to make general statements, readily understood from the O viewpoint, about what the theory can do, and I think it has been an interesting approach for me, because I didn't use it in the MOQ community, which would have been unlikely to request it.
I’ll address the rest in a bit, but about the notation:
T -> U

is a function from set T to set U.

P*

means a list of elements in set P, where the difference from a set is that the elements of a list are in a specific order.

The notation as a whole was a somewhat fudged version of the intelligent agent formalism. The idea is to set up a skeleton for modeling any sort of intelligent entity, based on the idea that the entity only learns things from its surroundings through a series of perceptions, which might for example be a series of matrices corresponding to the images a robot's eye camera sees, and can only affect its surroundings by choosing an action it is capable of, such as moving a robotic arm or displaying text to a terminal.
The agent model is pretty all-encompassing, but also not that useful except as the very first starting point, since all of the difficulty is in the exact details of the function that turns the likely massive amount of data in the perception history into a well-chosen action that efficiently furthers the goals of the AI.
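A minimal Python sketch of that skeleton, assuming perceptions and actions are opaque objects; the names `policy` and `environment_step` are introduced here only for illustration and are not part of the formalism. All the real difficulty hides inside `policy`, the function from perception history to action:

```python
from typing import Any, Callable, List

Perception = Any   # e.g. a camera frame as a matrix
Action = Any       # e.g. a motor command, or text printed to a terminal

def run_agent(policy: Callable[[List[Perception]], Action],
              environment_step: Callable[[Action], Perception],
              steps: int) -> None:
    """Perceive-act loop: the agent sees only its own perception history (P*)."""
    history: List[Perception] = []
    action: Action = None                   # nothing done before the first perception
    for _ in range(steps):
        percept = environment_step(action)  # environment responds to the last action
        history.append(percept)
        action = policy(history)            # the P* -> action function does all the work
```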
Modeling AIs as the function from a history of perceptions to an action is also related to thought experiments like Ned Block’s Blockhead, where a trivial AI that passes the Turing test with flying colors is constructed by merely enumerating every possible partial conversation up to a certain length, and writing up the response a human would make at that point of that conversation.
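A toy rendering of the Blockhead idea, assuming Python; the table here is of course laughably incomplete, whereas the thought experiment assumes an entry for every possible conversation prefix up to some bound, which is where the astronomical size comes from:

```python
# Blockhead as a data structure: a lookup table from conversation-so-far to reply.
BLOCKHEAD_TABLE = {
    (): "Hello.",
    ("Hello.",): "How are you?",
    ("Hello.", "Fine, thanks."): "Glad to hear it.",
    # ...the thought experiment fills in an entry for *every* possible history...
}

def blockhead_reply(history):
    """Look up the canned reply for this exact conversation history."""
    return BLOCKHEAD_TABLE.get(tuple(history), "I have nothing to say to that.")

print(blockhead_reply(["Hello."]))  # -> "How are you?"

# Why it stays a thought experiment: with 10,000 possible sentences and
# 20-turn conversations, the table would need 10_000 ** 20 = 1e80 entries.
print(10_000 ** 20)
```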
Scott Aaronson’s Why philosophers should care about computational complexity proposes to augment the usual high-level mathematical frameworks with some limits to the complexity of the black box functions, to make the framework reject cases like Blockhead, which seem to be very different from what we’d like to have when we’re looking for a computable function that implements an AI.
You can't rely too much on intelligence tests, especially in the super-high range. The tester himself admitted that Langan fell outside the design range of the test, so the listed score was an extrapolation. Further, IQ measurements, especially at the extremes and especially on only a single test (and as far as I could tell from the Wikipedia article, he was only tested once), measure test-taking ability as much as general intelligence.
Even if he is the most intelligent man alive, intelligence does not automatically mean that you reach the right answer. All evidence points to it being rubbish.
Many smart people fool themselves in interesting ways thinking about this sort of thing. And of course, when predicting general intelligence from IQ, remember to account for regression to the mean: if there's such a thing as the smartest person in the world by some measure of general intelligence, it's very unlikely to be the person with the highest measured IQ.
A powerful computer with a bad algorithm or bad information can produce a high volume of bad results that are all internally consistent.
(IQ may not be directly analogous to computing power, but there are a lot of factors that matter more than the author’s intelligence when assessing whether a model bears out in reality.)