Would you say a space rocket resembles either a balloon or an arrow, but not both?
The point is that the people who build it will resemble balloon-builders, not archers. People who are obsessed with getting machines to do things, not people who are obsessed with human performance.
My work is a type theory for AI: a way for it to conceptualize the input it receives through its artificial senses. If it weren’t, I would never have come here.
The conceptualization faculty is accompanied by a formula for making moral evaluations, which is the basis of advanced decision making. Whatever the AI can conceptualize, it can also project as a vector on a Cartesian plane. The direction and magnitude of that vector are the data used in this decision making.
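Just to illustrate the shape of the idea, here is a rough Python sketch. The projection itself and the rule that the upper half-plane counts as “good” are placeholders of mine, not the actual formula:

```python
import math

def evaluate(concept_vector):
    """Score a concept projected as a vector on a Cartesian plane.

    Placeholder rule: the magnitude says how strongly the concept
    registers; the direction says whether it points toward "good"
    (upper half-plane) or "bad" (lower half-plane).
    """
    x, y = concept_vector
    magnitude = math.hypot(x, y)   # length of the vector
    direction = math.atan2(y, x)   # angle of the vector, in radians
    return magnitude * math.sin(direction)

print(evaluate((1.0, 2.0)))   # positive: evaluated as good
print(evaluate((1.0, -2.0)))  # negative: evaluated as bad
```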
The actual decision-making algorithm may begin by making random decisions and filtering the good ones from the bad with the mathematical model I developed. Based on this filtering, the AI would begin to develop a self-modifying heuristic algorithm for making good decisions and, in general, for behaving in a good manner. What the AI would perceive as good behavior would, of course, depend to some extent on the environment in which the AI is placed.
If you had an AI taking random actions and changing its behavior according to heuristic rules, it could learn things much the way a baby learns things. If you’re not interested in that, I don’t know what you are interested in.
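To make the loop concrete, here is a toy Python sketch. The evaluation function is a stub standing in for my mathematical model, and the preference-update rule is just one plausible way to grow a self-modifying heuristic:

```python
import random

def learn(evaluate, actions, rounds=1000):
    """Act mostly at random at first, score each action with the
    (stubbed) moral-evaluation model, and accumulate a simple
    self-modifying heuristic: a running preference per action."""
    preference = {a: 0.0 for a in actions}
    for _ in range(rounds):
        # Keep some random exploration even after the heuristic forms.
        if random.random() < 0.1:
            action = random.choice(actions)
        else:
            action = max(preference, key=preference.get)
        score = evaluate(action)                 # filter good from bad
        preference[action] += 0.1 * (score - preference[action])
    return preference

# A stand-in environment: what counts as "good" depends on these scores.
scores = {"share": 1.0, "hoard": -1.0, "wait": 0.1}
print(learn(scores.get, list(scores)))
```

With the stand-in scores above, the preference for “share” climbs toward 1.0 while “hoard” sinks below zero, which is exactly the filtering, and the environment-dependence, described above.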
I didn’t come here to talk about some philosophy. I know you’re not interested in that. I’ve done the math, but not the algorithm, because I’m not much of a coder. If you don’t want to code a program that implements my mathematical model, that’s no reason to give me −54 karma.
I really don’t understand why you don’t want a mathematical model of moral decision making, even for discussion. “Moral” is not a philosophical concept here. It is just the thing that makes some decisions better than others. I didn’t have the formula when I came here in October. Now I have it. Maybe later I will have something more. And all you can do, with the exception of Risto, is give me −1. Can you recommend a transhumanist community?
How do you expect an AI to be rational, if you yourselves don’t want to be metarational? Do you want some “pocket calculator” AI?
Too bad you don’t like philosophical concepts. I thought you knew computer science is oozing with philosophy, which has all but died on its feet as far as academia is concerned.
One thing’s for sure: you don’t know jack about karma. The AI could actually differentiate karma, in the proper sense of the word, from “reputation”. You keep playing with your Lego blocks until you grow up.
It would have been really neat to do this on LessWrong. It would have made for a good story. It would have also been practical. Academia isn’t interested in this; there is no academic discipline for studying AI theory at this level of abstraction. I don’t even have any AI expertise, and I didn’t intend to develop a mathematical model for AI in the first place. That’s just what I got after working on this for long enough.
I don’t like stereotypical LessWrongians; I think they are boring and narrow-minded. I think we could have had something to do together despite the fact that our personalities don’t make it easy for us to be friends. Almost anyone with AI expertise is competent enough to help me get started with this. You’re not likely to get a better deal: a chance to get famous for doing so little work. But some deals, of course, seem too good to be true. So call me the “snake oil man” and go play with your Legos.
I suppose you got the joke above (Right? You did, right?) but you were simply too down not to fall for it. What’s the use of acquiring all that theoretical information if it doesn’t make you happy? Spending your days hanging around on LessWrong with polyamorous dom king Eliezer Yudkowsky as your idol. You wish you could be like him, right? You wish the cool guys were on the losing side, like he is?
I just gave my girlfriend an orgasm. Come on, give me another −1.
Why do you give me all these minuses? Just asking.