A bit late to this, but I think I figured out what the basic problem here is: Robert Pirsig is an archer, while LW (and folk like Judea Pearl, Gary Drescher and Marcus Hutter) are building hot-air balloons. And we’re talking about doing a Moon shot, building an artificial general intelligence, here.
Archers think that if they get their bowyery really good and train to shoot really, really well, they might eventually land an arrow on the Moon. Maybe they’ll need to build some kind of ballista type thing that needs five people to draw, but archery is awesome at skewering all sorts of things, so it should definitely be the way to go.
Hot-air balloonists on the other hand are pretty sure bows and arrows aren’t the way to go, despite balloons being a pretty recent invention while archery has been practiced for millennia and has a very distinguished pedigree of masters. Balloons seem to get you higher up than you can get things to go with any sort of throwing device, even one of those fancy newfangled trebuchet things. Sure, nobody has managed to land a balloon on the Moon either, despite decades of trying, so obviously we’re still missing something important that nobody really has a good idea about.
But it does look like figuring out how stuff like balloons work, and trying to think of something new along similar lines instead of developing a really good archery style, is the way to go if you want to actually land something on the Moon at some point.
Would you find a space rocket to resemble either a balloon or an arrow, but not both?
I didn’t imply something Pirsig wrote would, in and of itself, have much to do with artificial intelligence.
LessWrong is like a sieve that only collects stuff that looks like something I need, but on closer inspection isn’t. You won’t come until the table is already set. Fine.
The point is that the people who build it will resemble balloon-builders, not archers. People who are obsessed with getting machines to do things, not people who are obsessed with human performance.
My work is a type theory for an AI to use in conceptualizing the input it receives via its artificial senses. If it weren’t, I would never have come here.
The conceptualization faculty is accompanied by a formula for making moral evaluations, which is the basis of advanced decision making. Whatever the AI can conceptualize, it can also project as a vector on a Cartesian plane. The direction and magnitude of that vector are the data used in this decision making.
The actual decision-making algorithm may begin by making random decisions and filtering good decisions from bad with the mathematical model I developed. Based on this filtering, the AI would begin to develop a self-modifying heuristic algorithm for making good decisions and, in general, for behaving in a good manner. What the AI would perceive as good behavior would of course, to some extent, depend on the environment in which the AI is placed.
If you had an AI making random actions and changing its behavior according to heuristic rules, it could learn things much the way a baby learns them. If you’re not interested in that, I don’t know what you’re interested in.
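As a rough illustration of the trial-and-error loop described above, here is a minimal sketch in Python. It is not the author’s algorithm: `moral_value` is a hypothetical stand-in for the vector-based evaluation, and the `preference` table is just one crude way a self-modifying heuristic could accumulate from filtered experience.

```python
import random

def moral_value(state, action):
    """Hypothetical stand-in for the vector-based moral evaluation:
    the real model would project the (state, action) pair as a vector
    and score it by direction and magnitude. Here it is just noise."""
    return random.uniform(-1.0, 1.0)

def learn(actions, steps=1000):
    """Trial-and-error learner: act at random, keep what scores well,
    and let the accumulated scores bias future choices."""
    preference = {a: 0.0 for a in actions}   # the evolving heuristic
    state = None                             # environment state, omitted here
    for _ in range(steps):
        # Explore at random some of the time; otherwise follow the heuristic.
        if random.random() < 0.2 or all(v <= 0 for v in preference.values()):
            action = random.choice(actions)
        else:
            action = max(preference, key=preference.get)
        score = moral_value(state, action)
        if score > 0:                        # filter good decisions from bad
            preference[action] += score      # reinforce what scored well
    return preference

print(learn(["help", "wait", "explore"]))
```

The substance would lie in replacing `moral_value` with the actual model; the loop itself only shows the shape of the proposal.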
I didn’t come here to talk about some philosophy. I know you’re not interested in that. I’ve done the math, but not the algorithm, because I’m not much of a coder. If you don’t want to code a program that implements my mathematical model, that’s no reason to give me −54 karma.
I really don’t understand why you don’t want a mathematical model of moral decision making, even for discussion. “Moral” is not a philosophical concept here. It is just the thing that makes some decisions better than others. I didn’t have the formula when I came here in October. Now I have it. Maybe later I will have something more. And all you can do, with the exception of Risto, is to give me −1. Can you recommend me some transhumanist community?
How do you expect an AI to be rational, if you yourselves don’t want to be metarational? Do you want some “pocket calculator” AI?
Too bad you don’t like philosophical concepts. I thought you knew that computer science is spilling over into philosophy, which has all but died on its feet, at least as far as academia is concerned.
One thing’s for sure: you don’t know jack about karma. The AI could actually differentiate karma, in the proper sense of the word, from “reputation”. You keep playing with your Lego blocks until you grow up.
It would have been really neat to do this on LessWrong. It would have made for a good story. It would have also been practical. Academia isn’t interested in this; there is no academic discipline for studying AI theory at this level of abstraction. I don’t even have any AI expertise, and I didn’t intend to develop a mathematical model for AI in the first place. That’s just what I ended up with after working on this long enough.
I don’t like stereotypical LessWrongians; I think they are boring and narrow-minded. I think we could have had something to do together, despite the fact that our personalities don’t make it easy for us to be friends. Almost anyone with AI expertise is competent enough to help me get started with this. You’re not likely to get a better deal for becoming famous by doing so little work. But some deals, of course, seem too good to be true. So call me the “snake oil man” and go play with your Legos.
I just gave my girlfriend an orgasm. Come on, give me another −1.
I suppose you got the joke above (Right? You did, right?) but you were simply too down not to fall for it. What’s the use of acquiring all that theoretical information, if it doesn’t make you happy? Spending your days hanging around on some LessWrong with polyamorous dom king Eliezer Yudkowsky as your idol. You wish you could be like him, right? You wish the cool guys would be on the losing side, like he is?
Why do you give me all these downvotes? Just asking.
In any case, this “hot-air balloonist vs. archer” (POP!) comparison seems like some sort of ad hominem-type fallacy, and that’s why I reacted with an ad hominem attack about Legos and stuff. First of all, ad hominem is a fallacy, and does nothing to undermine my case. It does, however, undermine the notion that you are being rational.
Secondly, if my person is that interesting, I’d say I resemble the mathematician C. S. Peirce more than Ramakrishna. It seems to me that mathematics is not necessarily considered completely acceptable by the notion of rationality you are advocating, as pure mathematics is only concerned with rules regarding what you’d call “maps”, not rules regarding what you’d call “territory”. That’s a weird problem, though.
I didn’t intend it as much of an ad hominem; after all, both groups in the comparison are so far quite unprepared for the undertaking they’re attempting. I was just trying to find a way to describe the cultural mismatch that seems to be going on here.
I understand that math is starting to have some stuff dealing with how to make good maps from a territory. Only that’s inside the difficult and technical stuff like Jaynes’ Probability Theory or Pearl’s Causality, instead of somebody just making a nice new logical calculus with an operator for doing induction. There are already some actual philosophically interesting results, like an inductive learner needing to have innate biases to be able to learn anything.
That’s a good result. However, the necessity of innate biases undermines the notion of rationality, unless we have a system for differentiating the rational cognitive faculty from the innately biased cognitive faculty. I am proposing that this differentiation faculty be rational, hence “Metarationality”.
In the Cartesian coordinate system I devised, object-level entities are projected as vectors. Vectors with a positive Y coordinate are rational. The only defined operation so far is addition: vectors can be added to each other. In this metasystem we are able to combine object-level entities (events, objects, “things”) by adding them to each other as vectors. This system can be used to examine individual object-level entities within the context other entities create by virtue of their existence. Because the coordinate system assigns a moral value to each entity it can express, it can be used for decision making. Obviously, it values morally good decisions over morally bad ones.
Every entity in my system is an ordered pair of the form

$${}^{x}_{y}p_{a} = \left({}^{x}_{y}\&p_{b},\; {}^{x}_{y}{*}p_{c}\right).$$

Here x and y are propositional variables whose truth values can be −1 (false) or 1 (true): x denotes whether the entity is tangible, and y whether it is placed within a rational epistemology. p is the entity. &p is the conceptual part of the entity (a philosopher would call that an “intension”). *p is the sensory part of the entity, i.e. what sensory input is considered to be the referent of the entity’s conceptual part; a philosopher would call *p an extension. a, b and c are numerical values, which denote the value of the entity itself, of its intension, and of its extension, respectively.

The right side of the following formula (to the right of the equivalence operator) tells how b and c are used to calculate a. The left side tells how any entity is converted to the vector a. The vector conversion allows both innate cognitive bias and object-level rationality to influence decision making within the same metasystem.
$$\vec{a} \;\Leftrightarrow\; {}^{x}_{y}p_{\frac{\min(b,c)}{\max(b,c)}(b+c)} = \left({}^{x}_{y}\&p_{b},\; {}^{x}_{y}{*}p_{c}\right)$$

If someone says it’s just a hypothesis that this model works, I agree! But I’m eager to test it. However, this would require some teamwork.
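For anyone who wants to tinker with it, here is a minimal sketch of these definitions in Python, offered as an illustration rather than as the author’s implementation. The value formula follows the reconstruction above (with b and c assumed positive); the projection of an entity onto the plane as (x·a, y·a) is an assumption, chosen only so that y = 1 yields a positive Y coordinate, i.e. a “rational” vector, and so that combining entities is ordinary component-wise addition.

```python
from dataclasses import dataclass

@dataclass
class Entity:
    """An entity of the form  x_y p_a = (x_y &p_b, x_y *p_c)."""
    x: int      # +1 if the entity is tangible, -1 if not
    y: int      # +1 if placed within a rational epistemology, -1 if not
    b: float    # value of the intension (&p, the conceptual part)
    c: float    # value of the extension (*p, the sensory part)

    @property
    def a(self) -> float:
        """Value of the entity itself: a = min(b, c) / max(b, c) * (b + c).
        Assumes b and c are positive; the text does not say how to handle
        zero or negative component values."""
        return min(self.b, self.c) / max(self.b, self.c) * (self.b + self.c)

    def as_vector(self):
        """Assumed projection onto the Cartesian plane: (x * a, y * a).
        With this choice, y = +1 gives a positive Y coordinate, matching
        'vectors with a positive Y coordinate are rational'."""
        return (self.x * self.a, self.y * self.a)

def add(v, w):
    """The one defined operation: component-wise vector addition."""
    return (v[0] + w[0], v[1] + w[1])

def is_rational(v):
    """A vector counts as rational when its Y coordinate is positive."""
    return v[1] > 0

# Example: combine two entities and check whether the result counts as rational.
e1 = Entity(x=1, y=1, b=2.0, c=3.0)    # tangible, rational epistemology
e2 = Entity(x=1, y=-1, b=1.0, c=4.0)   # tangible, non-rational epistemology
combined = add(e1.as_vector(), e2.as_vector())
print(combined, is_rational(combined))
```

With these definitions, the decision-making sketch earlier in the thread could use the direction and magnitude of `as_vector()` in place of its random placeholder `moral_value`.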