I got “reductionism” wrong, actually. I thought the author was using some nonstandard definition of reductionism, something to the effect of not having unnecessary declarations in a theory. I didn’t consider that the author could mean exactly what he says, no bells and whistles, because I didn’t expect reductionism to be taken that seriously here. But that just means I misjudged. Of course, I may not even belong on this site. I am looking for people who might offer useful ideas for theoretical work that could help in constructing AI, and I’m trying to check whether my approach is considered intelligible here.
“Realism” is the belief that there is an external world, usually thought to consist of quarks, leptons, forces and such. It is typically treated as a belief or doctrine that is somehow true, rather than just an assumption an AI or a human makes because it needs to. Depending on who labels themselves a realist and what mood they are in, this can entail that everybody who is not a realist is considered mistaken.
An example of a problem whose solution does not need to involve realism is: “John is a small kid who seems to emulate his big brother almost all the time. Why is he doing this?” Possible answers would be: “He thinks his brother is cool” or “He wants to annoy his brother” or “He doesn’t emulate his brother, they are just very similar”. Of course you could just brain-scan John. But if you really knew John, that’s not what you would do, unless brain scanners were about as common and inexpensive as laptops, and had much better functionality than they currently do.
In the John problem, there’s no need to construct the assumptions of a physical world, because the problem would be intelligible even if you met John in a dream. You can’t take a physical brain scanner with you into a dream, so you can’t brain-scan John. But you can analyze John’s behavior by the same criteria you would use had you met him while awake.
I’m not trying to impose any views on you; I’m basically just trying to find out whether anyone is interested in this kind of stuff. The point is that I’m trying to construct a framework theory for AI that is not grounded on anything other than sensory (or emotional, etc.) perception; all the abstract parts are defined recursively. Structurally, the theory is intended to resemble a programming language with dynamic typing, as opposed to static typing. The theory would be pretty much both philosophy and AI.
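To illustrate just the typing analogy (a minimal sketch, not RP itself; the describe function below is a hypothetical example): in a dynamically typed language, what kind of thing a value is gets discovered while the program runs rather than being declared up front, and one recursive definition can handle whatever structure shows up.

```python
# Hypothetical illustration of the dynamic-typing analogy, not RP's actual machinery.
def describe(x):
    # The "type" of x is discovered at run time; nothing about x was declared in advance.
    if isinstance(x, list):
        return [describe(item) for item in x]  # recurse into structured values
    return type(x).__name__

print(describe(42))                 # 'int'
print(describe(["a", 3, [1.5]]))    # ['str', 'int', ['float']]
```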
The problem I see now is this. My theory, RP, is founded on the notion that important parts of thinking are based on metaphysical emergence. The main recursion loop of the theory, in its current form, will not create any information if only reduction is allowed. I would allow both, but if the people on LW are reductionists, I would suppose the logical consequence is that they believe my theory cannot work. And that’s why I’m a bit troubled by the notion that you might accept reductionism as some sort of axiom, because you don’t want to have a long philosophical conversation and would prefer to settle on something that currently seems reasonable. So should I expect you to be unwilling to consider other options? It would be strange if I had to take my project elsewhere, because that would amount to you rejecting an AI theory on the grounds that it contradicts your philosophical assumptions. Yet my common-sense expectation would be that you’d find AI more important than philosophy.
The point is that I’m trying to construct a framework theory for AI that is not grounded on anything other than sensory (or emotional, etc.) perception; all the abstract parts are defined recursively. Structurally, the theory is intended to resemble a programming language with dynamic typing, as opposed to static typing. [...] The main recursion loop of the theory, in its current form, will not create any information if only reduction is allowed.
You seem to be overthinking this. Reductionism is “merely” a really useful cognition technique, because calculating everything at the finest possible level is hopelessly inefficient. Perhaps a simple practical example is needed:
An AI that can use reductionism can say “Oh, that collection of pixels within my current view is a dog, and this collection is a man, and the other collection is a leash”, and go on to match against (and develop on its own) patterns about objects at the coarser-than-pixel size of dogs, men, and leashes. Without reductionism, it would be forced to do the pattern matching for everything, even for complex concepts like “Man walking a dog”, directly at the pixel level, which is not impossible but is certainly a lot slower to run and harder to update.
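A minimal sketch of that idea, with hypothetical names and a stubbed-out detector rather than any real vision library: once the pixel grid has been summarized into labeled objects, the “man walking a dog” pattern is matched against three labels instead of hundreds of thousands of pixels.

```python
from typing import List

# Stub standing in for whatever pixel-level recognizer the AI would use
# (a hypothetical placeholder, not a real vision API).
def detect_objects(pixels: List[List[int]]) -> List[str]:
    """Reduce a pixel grid to a short list of object labels."""
    ...  # the expensive pixel-level pattern matching would happen once, here
    return ["man", "dog", "leash"]

def man_walking_a_dog(objects: List[str]) -> bool:
    # The high-level pattern is checked against a handful of labels,
    # not against every pixel in the frame.
    return {"man", "dog", "leash"} <= set(objects)

frame = [[0] * 640 for _ in range(480)]          # placeholder "image"
print(man_walking_a_dog(detect_objects(frame)))  # True
```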
If you’ve ever refactored a common element of your code out into its own module, or even if you’ve used a library or a high-level language, you are also using reductionism. The non-reductionistic alternative would be something like writing every program from scratch, in machine code.
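The same point in code form (a toy illustration under my own assumptions, not anyone’s actual codebase): the repeated greeting logic gets factored into one function that the rest of the program reuses, instead of being rewritten at every call site.

```python
# Before: the same formatting logic repeated inline everywhere it is needed.
print("Hello, " + "Alice".strip().title() + "!")
print("Hello, " + "  bob ".strip().title() + "!")

# After: the common element lives in one place and is reused.
def greet(name: str) -> str:
    return "Hello, " + name.strip().title() + "!"

print(greet("Alice"))
print(greet("  bob "))
```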
Okay. That sounds very good. And it would seem to be in accordance with this statement:
Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.
If reductionism does not entail that I must construct the notion of a territory and include it in my conceptualizations at all times, it’s not a problem. I now understand even better why I was confused by this. This kind of reductionism is not reductive physicalism. It’s hardly a philosophical statement at all, which is good. I would say that “the notion of higher levels being out there in the territory” is meaningless, but expressing disbelief in that notion is apparently intended to convey approximately the same meaning.
RP doesn’t yet actually include reduction; that’s about next on the to-do list. Currently it includes an emergence loop that is based on the power set function. The function produces a staggering amount of information in just a few cycles. It seems to me that this is because, instead of accounting for the emergence relations the mind actually performs, it accounts for all defined emergence relations the mind could perform. So the theory is clearly still under construction, and it doesn’t yet have any kind of algorithmic part. I’m not much of a coder, so I need to work with someone who is. I already know one mathematician who likes to do this stuff with me. He’s not interested in the metaphysical part of the theory, and even said he doesn’t want to know too much about it. :) I’m not guaranteeing RP can be used for anything at all, but it’s interesting.
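To make that growth concrete (a toy sketch; the three placeholder elements below are not RP’s actual primitives): iterating the power set takes a 3-element collection to 8, then to 256, and the next cycle would already contain 2^256 elements, which is why an unconstrained emergence loop drowns in combinations after only a few cycles.

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, returned as a set of frozensets."""
    items = list(s)
    return {frozenset(c) for c in
            chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))}

level = {"a", "b", "c"}            # three placeholder elements, not RP's real primitives
sizes = [len(level)]
for _ in range(2):                 # two cycles is already near the practical limit
    level = powerset(level)
    sizes.append(len(level))
sizes.append(2 ** sizes[-1])       # size of the next cycle, computed but not materialized
print(sizes)                       # [3, 8, 256, 115792089237316...936]
```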