The community here is simply incapable of recognising a correct argument when it’s staring them in the face. Someone should have brought in Yudkowsky to take a look and pronounce judgement upon it, because it’s a significant advance. What we see instead is people down-voting it to protect their incorrect beliefs, and they’re doing that because they aren’t allowing themselves to be steered by reason, but by their emotional attachment to their existing beliefs.
Perhaps the reason that we disagree with you is not that we’re emotionally biased, irrational, mobbish, etc. Maybe we simply disagree. People can legitimately disagree without one of them being Bad People.
There hasn’t been a single person who’s dared to contradict the mob by commenting to say that I’m right, although I know that some of them do accept it, because I’ve been watching the points go up and down.
Really. You know that LW is an oppressive mob with a few people who don’t dare to contradict the dogma for fear of [something]… because you observed a number go up and down a few times. May I recommend that you get acquainted with Bayes’ Formula? Because I rather doubt that people only ever see votes go up and down in fora with oppressive dogmatic irrational mobs, and Bayes explains how this is easily inverted to show that votes going up and down a few times is rather weak evidence, if any, for LW being Awful in the ways you described.
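To spell that out with purely illustrative numbers (the probabilities here are assumptions, not measurements): let $E$ stand for “the scores went up and down a few times”. Bayes in odds form gives

\[
\frac{P(\text{mob}\mid E)}{P(\lnot\text{mob}\mid E)} \;=\; \frac{P(E\mid \text{mob})}{P(E\mid \lnot\text{mob})} \cdot \frac{P(\text{mob})}{P(\lnot\text{mob})}.
\]

Scores fluctuate on practically every forum, so something like $P(E\mid\text{mob}) \approx 0.9$ against $P(E\mid\lnot\text{mob}) \approx 0.8$ is already generous to you, and a likelihood ratio of $0.9/0.8 \approx 1.1$ barely budges the prior odds at all.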
But look at the score awarded to the person who commented to say that resources aren’t involved—what does that tell you about the general level of competence here? But then, the mistake made in that “paradox” is typical of the sloppy thinking that riddles this whole field.
It tells me that you missed the point. Parfit’s paradox is not about pragmatic decision-making; it is about flaws in the utility function.
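For concreteness, and assuming the paradox meant here is Parfit’s mere addition paradox scored by a simple total-utilitarian utility function (a reading on my part, not something stated in this thread): with

\[
U(\text{world}) = \sum_i u_i, \qquad U(A) = 10 \times 100 = 1000, \qquad U(Z) = 10{,}000 \times 1 = 10{,}000 > U(A),
\]

total utility ranks $Z$, a vast population of lives barely worth living, above $A$, a small and very happy one. No resource constraint appears anywhere in that comparison; the pressure falls entirely on $U$ itself.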
What I’ve learned from this site is that if you don’t have a huge negative score next to your name, you’re not doing it right.
“Truth forever on the scaffold, Wrong forever on the throne,” eh? And fractally so?
AGI needs to read through all the arguments of philosophy in order to find out what people believe and what they’re most interested in investigating. It will then make its own pronouncements on all those issues, and it will also inform each person about their performance so that they know who won which arguments, how much they broke the rules of reason, and so on. All of that needs to be done, and it will be. The idea that AGI won’t bother to read through this stuff and analyse it is way off; AGI will need to study how people think and the places in which they fail.
You have indeed found A Reason that supports your belief in the AGI-God, but I think you’ve failed to think it through. Why should the AGI need to tell us how we did in order to analyze our thought processes? And how come the optimal study method is specifically the one which allows you to be shown Right All Along? Specificity only brings Burdensome Details.
“Perhaps the reason that we disagree with you is not that we’re emotionally biased, irrational, mobbish, etc. Maybe we simply disagree. People can legitimately disagree without one of them being Bad People.”
It’s obvious what’s going on when you look at the high positive scores being given to really poor comments.
“It tells me that you missed the point. Parfit’s paradox is not about pragmatic decision-making; it is about flaws in the utility function.”
A false paradox tells you nothing about flaws in the utility function—it simply tells you that people who apply it in a slapdash manner get the wrong answers out of it and that the fault lies with them.
“You have indeed found A Reason that supports your belief in the AGI-God, but I think you’ve failed to think it through. Why should the AGI need to tell us how we did in order to analyze our thought processes? And how come the optimal study method is specifically the one which allows you to be shown Right All Along? Specificity only brings Burdensome Details.”
AGI won’t be programmed to find me right all the time, but to identify which arguments are right. And for the sake of those who are wrong, they need to be told that they were wrong, so that they understand how poor they are at reasoning and that they are not the great thinkers they imagine themselves to be.