But this is just the brain trying to efficiently compress an object that it cannot remotely begin to model on a fundamental level. The airplane is too large. Even a hydrogen atom would be too large. Quark-to-quark interactions are insanely intractable. You can’t handle the truth.
Can you handle the truth, then? I don’t understand the notion of truth you are using. In everyday language, when a person states something as “true”, it doesn’t usually need to be grounded in logic in order to work for a practical purpose. But you are making extremely abstract statements here. They just don’t mean anything unless you define truth and solve the symbol grounding problem. You have criticized philosophy in other threads, yet here you are making dubious arguments. The arguments are dubious because they are not clearly mere rhetoric, and not clearly philosophy. If someone presses you to explain what they mean, you could say you’re not interested in philosophy, so philosophical counterarguments are irrelevant to you. But you can’t be uninterested in philosophy if you make philosophical claims like that and actually consider them important.
I don’t like contemporary philosophy either, but I suspect you are running into trouble with these things, and I wonder if you are open to a solution? If not, fine.
But the way physics really works, as far as we can tell, is that there is only the most basic level—the elementary particle fields and fundamental forces. You can’t handle the raw truth, but reality can handle it without the slightest simplification. (I wish I knew where Reality got its computing power.)
But you haven’t defined reality. As long as you haven’t done so, “reality” will be a metaphorical, vague concept that frequently changes its meaning in use. This means that if you call something “reality” in one discussion, logical analysis would probably reveal that you are not using the word with the same meaning in another discussion.
You can adopt a fixed, precise definition of reality, but it will be arbitrary. Then people will start having completely pointless debates with you, and, to make matters worse, you will perceive these debates as people trying to undermine the justification for what you are doing. That’s a problem caused by your not realizing that you didn’t have to justify your activities or your approach in the first place. You didn’t need to make these philosophical claims, and I don’t suppose you would have done so had you not felt threatened by something, such as religion or mysticism or people imposing their views on you.
This, as I see it, is the thesis of reductionism. Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.
If you categorize yourself as a reductionist, why don’t you go all the way? You can’t be both a reductionist and a realist. That is, you can’t believe in reductionism and in the existence of a territory at the same time. You have to drop one of them. But which one?
Drop the one you need to drop. I’m serious. You don’t need this metaphysical nonsense to justify something you are doing. Neither reductionism nor realism is “true” in any meaningful way. You are not doing anything wrong if you are a reductionist for 15 minutes, then switch to realism (i.e. the belief in a “territory”) for ten seconds, then switch back to reductionism and then maybe to something else. And that is also the way you really live your life. I mean, think about your mind. I suppose it’s somewhat similar to mine. You don’t think about this metaphysical nonsense when you’re actually doing something practical. So you are not a metaphysician when you’re riding a bike and enjoying the wind or something.
It’s just a conception you have of yourself: you have defined yourself as an advocate of “reductionism and realism”. That conception is true only when you are, in fact, being one of those; it’s not true when you’re neither of them. But you are operating within your own mind. Suppose someone tells you that you’re not a “reductionist and a realist” while you are, for example, in intense pain for some reason and very unlikely to be thinking about philosophy. Even then, you could remind yourself of your own conception of yourself, that is, as a “reductionist and a realist”, and argue that the person who said otherwise was wrong. But why would you want to do so? The only reasons I can see are naive, egoistic, or defensive ones, such as:
You are afraid that the person who said you’re not a “reductionist or realist” will waste your time by presenting stupid arguments about what you may or may not, or should or should not, do.
You believe your image of yourself as a “reductionist and realist” is somehow “true”. But you are able to decide at will whether that image is true. It is true when you are thinking in a certain way, and false when you are not thinking that way. So the statement conveys no useful information, except perhaps about something you would like to be. But that is no longer philosophy.
You have some sort of need to never get caught uttering something that’s not true. But in philosophy, it’s a really bad idea to want to make true statements all the time. Metaphysical theories in and of themselves are neither true nor false. Instead, they are used to define truth and falsehood. They can be contradictory or silly or arbitrary, but they can’t be true or false.
If you declare that you regard one state of mind or one theory, such as realism or reductionism, as some sort of ultimate truth, you are simply putting yourself into a prison of words, for no reason except that you apparently perceive some sort of safety in that prison. But it’s not safe. It exposes you to philosophical criticism you were previously invulnerable to, because before you entered that prison, you weren’t even playing that game.
If you actually care about philosophy, great. But I haven’t yet gotten that impression. It seems like philosophy is an unpleasant chore to you. You want to use philosophy to obtain justification, a sense of entitlement, or something like that, and then throw it away because you think you’re already finished with it: you’ve obtained a framework theory that already suits your needs, and you can now focus on the needs. But you’re not a true reductionist, in the sense in which you defined reductionism, unless you also scrap the belief in the territory. I don’t care what you choose as long as you’re fine with it, but I don’t want you to contradict yourself.
There is no way to express the existence of the “territory” as a meaningfully true statement. Or if there is, I haven’t heard of it. It is a completely arbitrary declaration you use to create a framework for the rest of the things you do. You can’t construct a “metatheory of reality” that is about the territory you suppose to exist, and then have that same territory prove the metatheory right. The territory may contain empirical evidence that the metatheory is okay, but no algorithm can use that evidence to produce a proof of the metatheory, because:
From the “territory’s” point of view, the metatheory is undefined.
But the notion of gathering empirical evidence is meaningless if the metatheory, according to which the “territory” exists, is undefined.
Therefore, you have to define it if you want to use it for something, and just accept the fact that you can’t prove it to be somehow true, much less use its alleged truth to prove something else false. You can believe what you want, but you can’t make an AI that uses the “territory” to construct a metatheory of the territory if, to the AI, it is somehow true that the territory is all there is. The AI can’t even construct a metatheory of “map and territory” if it’s programmed to hold as somehow true that map and territory are the only things that exist. This entails that the AI cannot conceptualize its own metaphysical beliefs even as well as you can. It could not talk about them at all. To do so, it would have to be able to construct arbitrary metatheories on its own. This can only be done if the AI holds no metaphysical belief as infallible, that is, if the AI is a reductionist in your meaning of the word.
I’ve seen some interest in AI on LW. If you would really like to one day construct a very human-like AI, you will have problems if you cannot program an AI that can also conceptualize the structure of its own cognitive processes in terms that do not include realism. Because humans are not realists all the time. Their minds have many features, and the metaphysical assumption of realism is usually only constructed when it is needed to perform some task. So if you want to have that assumption around at all times, you’ll just end up adding unnecessary extra baggage to the AI, which will probably also make the code very difficult to comprehend. You don’t want to lug the assumption around all the time just because it’s supposed to be true in some way nobody can define.
You could just as well have a reductionist theory that only constructs realism (i.e. the declaration that an external world exists) under certain conditions. Now, philosophy doesn’t usually include such theories, because the discipline is rather outdated, but there’s no inherent reason why it can’t be done. Realism is neither true nor false in any meaningful and universal way. You are free to declare that it exists if you are going to use that declaration for something. But if you just say it, as if it meant something in and of itself, you are not saying anything meaningful. I hope you found my rant interesting.
I don’t follow why you claim that reductionism and realism are incompatible. I think this may be because I’m very confused when I try to figure out, from context, what you mean by “realism”, and I strongly suspect that that’s because you don’t have a definition of that word which can be used in tests for updating predictions, which is the sort of thing LWers look for in a useful definition.
Basically, I’m inclined to agree with you when you say:
Realism is neither true nor false in any meaningful and universal way. You are free to declare that it exists if you are going to use that declaration for something. But if you just say it, as if it meant something in and of itself, you are not saying anything meaningful.
This is a really good reason in my experience for not getting into long discussions about “But what is reality, really?”
I don’t understand the notion of truth you are using.
A belief is true when it corresponds to reality. Or equivalently, “X” is true iff X.
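A minimal formal gloss on that last sentence, for reference (this is just the standard Tarskian rendering of “‘X’ is true iff X”, added here for clarity, not anything from the original exchange):
\[ \mathrm{True}(\ulcorner X \urcorner) \;\leftrightarrow\; X \]
For instance, “snow is white” is true if and only if snow is white.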
But you haven’t defined reality.
In the map/territory distinction, reality is the territory. Less figuratively, reality is the thing that generates experimental results. From The Simple Truth:
I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies ‘belief’, and the latter thingy ‘reality’.
Because humans are not realists all the time. Their minds have many features, and the metaphysical assumption of realism is usually only constructed when it is needed to perform some task.
Actually, this may be a good point for me to try to figure out what you mean by “realism”, because here you seem to have connected that word to some but not all strategies of problem-solving. Can you give me some specific examples of problems which the mind tends to use realism in solving, and problems where it doesn’t?
I got “reductionism” wrong, actually. I thought the author was using some nonstandard definition of reductionism, something to the effect of not having unnecessary declarations in a theory. I did not take into account that the author could actually be what he says he is, no bells and whistles, because I didn’t expect reductionism to be taken seriously here. But that just means I misjudged. Of course, I am not necessarily even supposed to be on this site. I am looking for people who might offer useful ideas for theoretical work relevant to constructing AI, and I’m trying to check whether my approach is deemed intelligible here.
“Realism” is the belief that there is an external world, usually thought to consist of quarks, leptons, forces, and such. It is typically thought of as a belief or doctrine that is somehow true, rather than just an assumption an AI or a human makes because it needs to. Depending on who labels themselves a realist, and on what mood they are in, this can entail that everybody who is not a realist is considered mistaken.
An example of a problem whose solution does not need to involve realism is: “John is a small kid who seems to emulate his big brother almost all the time. Why is he doing this?” Possible answers would be: “He thinks his brother is cool”, or “He wants to annoy his brother”, or “He doesn’t emulate his brother, they are just very similar”. Of course, you could just brain-scan John. But if you really knew John, that’s not what you would do, unless brain scanners were about as common and inexpensive as laptops and had much better functionality than they currently do.
In the John problem, there’s no need to construct the assumption of a physical world, because the problem would be intelligible even if you met John in a dream. You can’t take a physical brain scanner with you into a dream, so you can’t brain-scan John. But you can analyze John’s behavior by the same criteria you would use had you met him while awake.
I’m not trying to impose any views on you; I’m basically just trying to find out whether anyone is interested in this kind of stuff. The point is that I’m trying to construct a framework theory for AI that is not grounded in anything other than sensory (or emotional, etc.) perception; all the abstract parts are defined recursively. Structurally, the theory is intended to resemble a programming language with dynamic typing, as opposed to static typing. The theory would be pretty much both philosophy and AI.
The problem I see now is this. My theory, RP, is founded on the notion that important parts of thinking are based on metaphysical emergence. The main recursion loop of the theory, in its current form, will not create any information if only reduction is allowed. I would allow both, but if the people on LW are reductionists, I would suppose the logical consequence is that they believe my theory cannot work. And that’s why I’m a bit troubled by the notion that you might accept reductionism as some sort of axiom, because you don’t want to have a long philosophical conversation and would prefer to settle on something that currently seems reasonable. So should I expect you not to want to consider other options? It would be strange if I had to take my project elsewhere, because that would amount to you rejecting an AI theory on the grounds that it contradicts your philosophical assumptions. Yet my common-sense expectation would be that you’d find AI more important than philosophy.
The point is that I’m trying to construct a framework theory for AI that is not grounded in anything other than sensory (or emotional, etc.) perception; all the abstract parts are defined recursively. Structurally, the theory is intended to resemble a programming language with dynamic typing, as opposed to static typing. [...] The main recursion loop of the theory, in its current form, will not create any information if only reduction is allowed.
You seem to be overthinking this. Reductionism is “merely” a really useful cognitive technique, because calculating everything at the finest possible level is hopelessly inefficient. Perhaps a simple practical example is needed:
An AI that can use reductionism can say “Oh, that collection of pixels within my current view is a dog, and this collection is a man, and the other collection is a leash”, and go on to match against (and develop on its own) patterns about objects at the coarser-than-pixel size of dogs, men, and leashes. Without reductionism, it would be forced to do the pattern matching for everything, even for complex concepts like “Man walking a dog”, directly at the pixel level, which is not impossible but is certainly a lot slower to run and harder to update.
If you’ve ever refactored a common element out of your code into its own module, or even if you’ve just used a library or a high-level language, you are also using reductionism. The non-reductionistic alternative would be something like writing every program from scratch, in machine code.
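To make that concrete, here is a minimal toy sketch in Python (the shape labels, matching rules, and scene below are invented purely for illustration; this is not anyone’s actual vision code). The point is that the high-level pattern “man walking a dog” is checked against three object labels instead of against raw pixels:

```python
# Toy sketch: group pixel blobs into labelled objects first, then match
# the high-level pattern "man walking a dog" over labels, not pixels.

def label_regions(pixel_regions):
    """Map each pixel blob to a coarse object label using made-up rules."""
    labels = []
    for region in pixel_regions:
        if region["shape"] == "four_legged" and region["size"] == "medium":
            labels.append("dog")
        elif region["shape"] == "upright" and region["size"] == "large":
            labels.append("man")
        elif region["shape"] == "thin_line":
            labels.append("leash")
        else:
            labels.append("unknown")
    return labels

def matches_man_walking_dog(labels):
    """High-level pattern defined over object labels, not pixels."""
    return {"man", "dog", "leash"} <= set(labels)

# A pretend scene, as it might come out of some lower-level vision module.
scene = [
    {"shape": "upright", "size": "large"},
    {"shape": "four_legged", "size": "medium"},
    {"shape": "thin_line", "size": "small"},
]
print(matches_man_walking_dog(label_regions(scene)))  # True
```

The pattern check runs over a three-element list rather than a pixel grid, which is what makes it faster to run and easier to update, as described above.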
Okay. That sounds very good. And it would seem to be in accordance with this statement:
Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.
If reductionism does not entail that I must construct the notion of a territory and include it in my conceptualizations at all times, it’s not a problem. I now understand even better why I was confused by this. This kind of reductionism is not reductive physicalism. It’s hardly a philosophical statement at all, which is good. I would say that “the notion of higher levels being out there in the territory” is meaningless, but expressing disbelief in that notion is apparently intended to convey approximately the same meaning.
RP doesn’t yet actually include reduction; that’s about next on the to-do list. Currently it includes an emergence loop based on the power set function. The function produces a staggering amount of information in just a few cycles. It seems to me that this is because, instead of accounting for the emergence relations the mind actually performs, it accounts for all defined emergence relations the mind could perform. So the theory is clearly still under construction, and it doesn’t yet have any kind of algorithmic part. I’m not much of a coder, so I need to work with someone who is. I already know one mathematician who likes to do this stuff with me. He’s not interested in the metaphysical part of the theory, and even said he doesn’t want to know too much about it. :) I’m not guaranteeing RP can be used for anything at all, but it’s interesting.
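The power-set blow-up is easy to show concretely. Here is a small, purely illustrative Python sketch (the two starting “percepts” are invented placeholders, and this is not RP’s actual loop): each cycle replaces a collection of n elements with all 2**n subsets, so every definable grouping gets represented, not just the groupings a mind would actually form.

```python
# Illustration of why iterating the power set explodes: a set of n
# elements has 2**n subsets, and each cycle feeds the result back in.
from itertools import chain, combinations

def power_set(s):
    """Return the power set of s as a set of frozensets."""
    items = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))}

level = {"a", "b"}  # two placeholder "percepts"
for cycle in range(1, 4):
    level = power_set(level)
    print(f"cycle {cycle}: {len(level)} elements")
# cycle 1: 4 elements
# cycle 2: 16 elements
# cycle 3: 65536 elements
# A fourth cycle would already need 2**65536 elements.
```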