I really don’t understand why you don’t want a mathematical model of moral decision-making, even for discussion. “Moral” is not a philosophical concept here; it is just the thing that makes some decisions better than others. I didn’t have the formula when I came here in October. Now I have it. Maybe later I will have something more. And all you can do, with the exception of Risto, is give me −1. Can you recommend some transhumanist community to me?
How do you expect an AI to be rational, if you yourselves don’t want to be metarational? Do you want some “pocket calculator” AI?
Too bad you don’t like philosophical concepts. I thought you knew that computer science is spilling over into philosophy, which has all but died on its feet as far as academia is concerned.
One thing’s for sure: you don’t know jack about karma. The AI could actually differentiate karma, in the proper sense of the word, from “reputation”. Keep playing with your Lego blocks until you grow up.
It would have been really neat to do this on LessWrong. It would have made for a good story. It would also have been practical. Academia isn’t interested in this; there is no academic discipline for studying AI theory at this level of abstraction. I don’t even have any AI expertise, and I didn’t intend to develop a mathematical model for AI in the first place. That’s just what I ended up with after working on this long enough.
I don’t like stereotypical LessWrongians; I think they are boring and narrow-minded. Still, I think we could have had something to do together, despite the fact that our personalities don’t make it easy for us to be friends. Almost anyone with AI expertise is competent enough to help me get started with this. You are not likely to get a better deal for becoming famous while doing so little work. But some deals, of course, seem too good to be true. So call me the “snake oil man” and go play with your Legos.
I just gave my girlfriend an orgasm. Come on, give me another −1.
I suppose you got the joke above (right? you did, right?), but you were simply too down not to fall for it. What’s the use of acquiring all that theoretical information if it doesn’t make you happy? You spend your days hanging around on LessWrong with polyamorous dom king Eliezer Yudkowsky as your idol. You wish you could be like him, right? You wish the cool guys were on the losing side, like he is?
Why do you give me all these minuses? Just asking.