You are right in thinking that I have not studied the field in the depth that may be necessary. I have always judged it by the woeful stuff that makes it across into other places where the subject often comes up, but it’s possible that I’ve misjudged the worth of some of it by being misled by misrepresentations of it. I will therefore look up the things in your list that I haven’t already checked and see what they have to offer. What this site really needs, though, is its own set of articles on them, all properly debugged and aimed squarely at AGI system developers.
“I have always judged it by the woeful stuff that makes it across into other places where the subject often comes up…”
Well, that hardly seems a reliable approach…
I should, perhaps, clarify my point. My list of terms wasn’t intended to be some sort of exhaustive set of prerequisite topics, but only a sampling of some representative (and particularly salient) concepts. If, indeed, you have not looked into moral philosophy at all… then, quite frankly, it will not suffice to simply “look up” a handful of terms. (Don’t take this to mean that you shouldn’t look up the concepts I listed! But do avoid Wikipedia; the Stanford Encyclopedia of Philosophy is a far better source for this sort of thing.) You really ought to delve into the field at some length…
“What this site really needs, though, is its own set of articles on them, all properly debugged and aimed squarely at AGI system developers.”
Perhaps, perhaps not. It would be a mistake to suppose that everyone who has studied the matter until now, and everyone who has attempted to systematize it, has been stupid, incompetent, etc. Systematic surveys of moral philosophy, even good ones, are not difficult to find.
“Well, that hardly seems a reliable approach…”
It’s being confirmed right here—I’m finding the same range of faulty stuff on every page I read, although it’s possible that it is less wrong than most. There is room for hope that I have found the most rational place on the Net for this kind of discussion, but there are a lot of errors that need to be corrected, and it’s such a big task that it will probably have to wait for AGI to drive that process.
“…the Stanford Encyclopedia of Philosophy is a far better source for this sort of thing. You really ought to delve into the field at some length…”
Thanks—it saves a lot of time to start with the better sources of information, but it’s hard to know when you’ve found them.
“It would be a mistake to suppose that everyone who has studied the matter until now, and everyone who has attempted to systematize it, has been stupid, incompetent, etc.”
Certainly—there are bound to be some who do it a lot better than the rest, but they’re hidden deep in the noise.
“Systematic surveys of moral philosophy, even good ones, are not difficult to find.”
I have only found fault-ridden stuff so far, but hope springs eternal.
“It’s being confirmed right here—I’m finding the same range of faulty stuff on every page I read, although it’s possible that it is less wrong than most.”
Could you be less vague? How is the philosophy here faulty? Is there a pattern? If you have valid criticism then this community is probably in the top 5% for accepting it, but just saying “you’re all wrong” isn’t actually useful.
“…it’s such a big task that it will probably have to wait for AGI to drive that process.”
If AGI has been built, then LW’s task is over. Either we have succeeded, and we will be in a world beyond our ability to predict, but almost certainly one in which we will not need to edit LW to better explain reductionism; or we have failed, and we are no more—there is nobody to read LW. This is putting the rocket before the horse.
Just look at the reactions to my post “Mere Addition Paradox Resolved”. The community here is simply incapable of recognising a correct argument when it’s staring them in the face. Someone should have brought in Yudkowsky to take a look and to pronounce judgement upon it because it’s a significant advance. What we see instead is people down-voting it in order to protect their incorrect beliefs, and they’re doing that because they aren’t allowing themselves to be steered by reason, but by their emotional attachment to their existing beliefs. There hasn’t been a single person who’s dared to contradict the mob by commenting to say that I’m right, although I know that there are some of them who do accept it because I’ve been watching the points go up and down. But look at the score awarded to the person who commented to say that resources aren’t involved—what does that tell you about the general level of competence here? But then, the mistake made in that “paradox” is typical of the sloppy thinking that riddles this whole field. What I’ve learned from this site is that if you don’t have a huge negative score next to your name, you’re not doing it right.
AGI needs to read through all the arguments of philosophy in order to find out what people believe and what they’re most interested in investigating. It will then make its own pronouncements on all those issues, and it will also inform each person about their performance so that they know who won which arguments, how much they broke the rules of reason, etc. All of that needs to be done, and it will be. The idea that AGI won’t bother to read through this stuff and analyse it is way off—AGI will need to study how people think and the places in which they fail.
“The community here is simply incapable of recognising a correct argument when it’s staring them in the face. Someone should have brought in Yudkowsky to take a look and to pronounce judgement upon it because it’s a significant advance. What we see instead is people down-voting it in order to protect their incorrect beliefs, and they’re doing that because they aren’t allowing themselves to be steered by reason, but by their emotional attachment to their existing beliefs.”
Perhaps the reason that we disagree with you is not that we’re emotionally biased, irrational, mobbish, etc. Maybe we simply disagree. People can legitimately disagree without one of them being Bad People.
“There hasn’t been a single person who’s dared to contradict the mob by commenting to say that I’m right, although I know that there are some of them who do accept it because I’ve been watching the points go up and down.”
Really. You know that LW is an oppressive mob with a few people who don’t dare to contradict the dogma for fear of [something]… because you observed a number go up and down a few times. May I recommend that you get acquainted with Bayes’ Formula? Because I rather doubt that people only ever see votes go up and down in fora with oppressive dogmatic irrational mobs, and Bayes explains how this is easily inverted to show that votes going up and down a few times is rather weak evidence, if any, for LW being Awful in the ways you described.
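To put numbers on that (a minimal sketch; the probabilities below are invented purely for illustration): in odds form, Bayes’ rule says

$$
\frac{P(\text{mob} \mid \text{votes fluctuate})}{P(\text{no mob} \mid \text{votes fluctuate})}
= \frac{P(\text{votes fluctuate} \mid \text{mob})}{P(\text{votes fluctuate} \mid \text{no mob})}
\times \frac{P(\text{mob})}{P(\text{no mob})}
$$

Votes fluctuate on practically every forum, mob or no mob, so the likelihood ratio is close to 1 (say 0.95/0.90 ≈ 1.06), and the posterior odds end up barely different from the prior odds. That is exactly what it means for the observation to be weak evidence.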
“But look at the score awarded to the person who commented to say that resources aren’t involved—what does that tell you about the general level of competence here? But then, the mistake made in that ‘paradox’ is typical of the sloppy thinking that riddles this whole field.”
It tells me that you missed the point. Parfit’s paradox is not about pragmatic decision making; it is about flaws in the utility function.
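For concreteness, here is a minimal sketch of that standard reading (the population sizes and welfare levels are invented for illustration; they are not Parfit’s figures):

```python
# Two schematic worlds from the Mere Addition Paradox:
# each world is (number of people, welfare per person).
# The numbers are invented purely for this illustration.
world_a = (10, 100)  # A: small population, very high welfare
world_z = (1000, 2)  # Z: huge population, lives barely worth living

def total_utility(world):
    """Total utilitarianism: sum of welfare over the whole population."""
    people, welfare = world
    return people * welfare

def average_utility(world):
    """Average utilitarianism: mean welfare per person (uniform here, so just the level)."""
    _people, welfare = world
    return welfare

print(total_utility(world_a), total_utility(world_z))      # 1000 vs 2000 -> the total view prefers Z
print(average_utility(world_a), average_utility(world_z))  # 100 vs 2     -> the average view prefers A
```

A total-utility function is driven, step by step, to prefer the “repugnant” world Z; an average-utility function avoids that but has well-known problems of its own. Either way, the pressure falls on the utility function itself, not on anyone’s bookkeeping of resources.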
“What I’ve learned from this site is that if you don’t have a huge negative score next to your name, you’re not doing it right.”
“Truth forever on the scaffold, Wrong forever on the throne,” eh? And fractally so?
“AGI needs to read through all the arguments of philosophy in order to find out what people believe and what they’re most interested in investigating. It will then make its own pronouncements on all those issues, and it will also inform each person about their performance so that they know who won which arguments, how much they broke the rules of reason, etc. All of that needs to be done, and it will be. The idea that AGI won’t bother to read through this stuff and analyse it is way off—AGI will need to study how people think and the places in which they fail.”
You have indeed found A Reason that supports your belief in the AGI-God, but I think you’ve failed to think it through. Why should the AGI need to tell us how we did in order to analyze our thought processes? And how come the optimal study method is specifically the one which allows you to be shown Right All Along? Specificity only brings Burdensome Details.
“Perhaps the reason that we disagree with you is not that we’re emotionally biased, irrational, mobbish, etc. Maybe we simply disagree. People can legitimately disagree without one of them being Bad People.”
It’s obvious what’s going on when you look at the high positive scores being given to really poor comments.
“It tells me that you missed the point. Parfit’s paradox is not about pragmatic decision making; it is about flaws in the utility function.”
A false paradox tells you nothing about flaws in the utility function—it simply tells you that people who apply it in a slapdash manner get the wrong answers out of it and that the fault lies with them.
“You have indeed found A Reason that supports your belief in the AGI-God, but I think you’ve failed to think it through. Why should the AGI need to tell us how we did in order to analyze our thought processes? And how come the optimal study method is specifically the one which allows you to be shown Right All Along? Specificity only brings Burdensome Details.”
AGI won’t be programmed to find me right all the time, but to identify which arguments are right. And for the sake of those who are wrong, they need to be told that they were wrong so that they understand that they are bad at reasoning and not the great thinkers they imagine themselves to be.