I wrote a bunch of comments on this work while discussing it with Risto_Saarelma, but I thought I should rather post them here. I came here to discuss certain theories that sit on the border between philosophy and something that could be useful for the construction of AI. I’ve developed my own such theory based on many years of work on an unusual metaphysical system called the Metaphysics of Quality, which is largely ignored in the academy and deviates from the tradition. It’s not very “old” stuff; the tradition of discussion around it only began in 1974. So that’s my background.
The kind of work that I try to do is not about language. It is about reducing mentalistic models to purely causal models, about opening up black boxes to find complicated algorithms inside, about dissolving mysteries—in a word, about cognitive science.
What would I answer to the question of whether my work is about language? I’d say it’s about both language and algorithms, but it’s not some Chomsky-style stuff. It does account for the symbol grounding problem in a way that is not typically expected of a theory of language. But the point is, and I think this is important: even the mentalistic models do not currently exist in a coherent form. So how are people going to reduce something undefined to purely causal models? That doesn’t sound very possible, so I’d say the goals of RP are relevant.
But this kind of reductionism is hard work.
I would imagine mainstream philosophy to be hard work, too. This work, unfortunately, would, to a great extent, consist of making correct references to highly illegible works.
Modern philosophy doesn’t enforce reductionism, or even strive for it.
Well… I wouldn’t say RP enforces reductionism, or that it doesn’t enforce reductionism. It rather ruins RP if you develop a metatheory in which theories are classified as either reductionist or nonreductionist. You can do that (it’s not a logical contradiction), but the point of RP is to be a theory such that, even though we could construct such metatheoretic approaches to it, we don’t want to, because doing so is not only useless but also complicates things for no apparent benefit. Unless, of course, we are not interested in AI but are trying to devise some very grand philosophy, and I’m not sure what that could be used for. My intention is that things like “reductionism” are placed within RP, instead of RP being placed into a box labeled “reductionism”.
RP is supposed to define things recursively. That is not, to my knowledge, impossible, so I’m not sure why the definition would necessarily have to be reductive in some sense. LISP, to my knowledge, is not reductive. But I’m not sure what Eliezer means by “reductive”. It seems like yet another philosophical concept. I’d better check whether it’s defined somewhere on LW...
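To make the recursive-versus-reductive distinction concrete, here is a minimal sketch in Python (my own illustration, not anything taken from RP or from Eliezer): a Lisp-style evaluator in which the meaning of a compound expression is defined in terms of evaluating expressions, i.e. in terms of the very notion being defined, rather than by translating everything into some supposedly more basic vocabulary.

```python
# A minimal sketch, purely illustrative: a definition that is recursive
# without being obviously "reductive". The evaluator for expressions is
# itself defined in terms of evaluating expressions, in the Lisp-style
# manner alluded to above.
import operator

def evaluate(expr, env):
    """Evaluate a tiny Lisp-like expression: a number, a symbol, or [op, arg, ...]."""
    if isinstance(expr, (int, float)):   # base case: literals denote themselves
        return expr
    if isinstance(expr, str):            # base case: symbols are looked up in env
        return env[expr]
    op, *args = expr                     # compound case: defined via evaluate itself
    return env[op](*(evaluate(a, env) for a in args))

if __name__ == "__main__":
    env = {"x": 3, "+": operator.add, "*": operator.mul}
    # (+ x (* 2 5)) evaluates to 13
    print(evaluate(["+", "x", ["*", 2, 5]], env))
```

The base cases are still there, but the compound case is not reduced away into anything lower-level; it simply appeals to evaluate again.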
And then they publish it and say, “Look at how precisely I have defined my language!”
I’m not a fetishist. Not in this matter, at least. I want to define things formally because the structure of the theory is very hard to understand otherwise. The formal definitions make it easier to notice things I would not have noticed otherwise. That’s why I want to understand the formal definitions myself, even though other people sometimes practically write them for me.
Consider the popular philosophical notion of “possible worlds”. Have you ever seen a possible world?
I think that’s pretty cogent criticism. I’ve found the same kinds of things troublesome.
Philosophers keep telling me that I should look at philosophy. I have, every now and then. But the main reason I look at philosophy is when I find it desirable to explain things to philosophers.
I understand how Eliezer feels. I guess I don’t even tell people they need to look at philosophy for its own sake; how should I know what someone else wants to do for its own sake? But it’s not so simple with RP, because it could actually work for something. Good philosophy is simply hard to find, and if I hadn’t studied the MOQ, I might very well now be laughing at Langan’s CTMU along with many others, because I wouldn’t understand what it is he is, a bit awkwardly, trying to express.
I’d like to illustrate the stagnation of academic philosophy with the following thought experiment. Let’s suppose someone has solved the problem of induction. What is the solution like?
Ten pages?
A hundred pages?
A thousand pages?
Does it contain no formulae, or only a few?
Does it contain a lot of formulae?
I’ve read enough academic publications to believe there is no work the academic community would, generally speaking, regard as a solution to the problem of induction. I simply don’t believe many scholars think there really can be such a thing. They are interested in “refining” the debate somehow. They don’t treat it as a matter that needs to be solved because it actually means something.
This example might not ring a bell for someone completely unfamiliar with academic philosophy, but I think it does illustrate how the field is flawed.
As for my own answer to the thought experiment: I’ll go with 61 pages and quite a few formulae.