The opposite of most sorts of stupid is still stupid. Particularly the opposite of most things that are functional enough to proliferate themselves successfully.
Don’t have a leader
If you meant “Have more than one leader” you’d be on to something. That isn’t what you meant though.
Don’t have a gospel
There is a difference between the connotations you are going with for ‘gospel’ and what amounts to a textbook that most people haven’t read anyway.
Don’t have a dogma
I sometimes wish people would submit to references to rudimentary rational, logical, decision theoretic or scientific concepts as if they were dogma. That is far from what I observe.
Don’t have quasi-religious “meetups”
Socialize in person with rudimentary organisation? Oh the horror!
Don’t have quasi-religious rituals (!)
Actually, I don’t disagree at all on this one. Or at least I’d prefer that anyone who was into that kind of thing did it without it being affiliated with lesswrong in any way except partial membership overlap.
Don’t have an eschatology
Are you complaining about (or shaming with labels) the observation that an economist and an AI researcher attempted to use their respective expertise to make predictions about the future?
Don’t have a God.
Don’t. Working on it...
WELCOME CRITICISM AND DISSENT
Most upvoted post. Welcome competent, sane or useful criticism. Don’t give nonsense a free pass just because it is ‘dissent’.
If you meant “Have more than one leader” you’d be on to something. That isn’t what you meant though.
How do you know? Multiple leaders at least dilute the problem.
There is a difference between the connotations you are going with for ‘gospel’ and what amounts to a textbook that most people haven’t read anyway.
I’ve read it. There’s some time I’ll never get back.
I sometimes wish people would submit to references to rudimentary rational, logical, decision theoretic or scientific concepts as if they were dogma.
Not what I meant. Those can be studied anywhere. “MWI is the correct interpretation of QM” is an example of dogma.
Socialize in person with rudimentary organisation?
Other rationalists manage without it.
Are you complaining about (or shaming with labels) the observation that an economist and an AI researcher attempted to use their respective expertise to make predictions about the future?
No, I am referring to the mind-killing aspects of the mythos: it fools people into thinking they are Saving the World. This sense of self-importance is yet another mind-killer. Instead of examining ideas dispassionately, as they should, they develop a mentality of “No, don’t take my important world-saving role away from me! I cannot tolerate any criticism of these ideas, because then I will go back to being an ordinary person”.
Don’t give nonsense a free pass just because it is ‘dissent’.
Is this nonsense?
It contains five misspellings in a single paragraph (“utimately”, “canot”, “statees”, “hvae”, “ontoogical”), which might themselves be enough for a downvote, regardless of content.
As for the is-ought problem, if we accept that “ought” is just a matter of calculations in our brain returning an output (and reject that it’s a matter of e.g. our brain receiving supernatural instruction from some non-physical soul), then the “ought” is describable in terms of the world-that-is, because every algorithm in our brain is describable in terms of the world-that-is.
It’s not a matter of “cramming” an entire world-state into your brain—any approximation that your brain is making, including any self-identified deficiency in the ability to make a moral evaluation in any particular situation, is also encoded in your brain—your current brain, not some hypothetical superbrain.
As for the is-ought problem, if we accept that “ought” is just a matter of calculations in our brain returning an output
But we shouldn’t accept that, because we can miscalculate an “ought” or anything else. The is-ought problem is the problem of correctly inferring an ought from a tractable amount of “is’s”.
(and reject that it’s a matter of e.g. our brain receiving supernatural instruction from some non-physical soul), then the “ought” is describable in terms of the world-that-is, because every algorithm in our brain is describable in terms of the world-that-is.
Perhaps it might be one day, given sufficiently advanced brain scanning, but we don’t have that now, so we still have an is-ought gap.
It’s not a matter of “cramming” an entire world-state into your brain—any approximation that your brain is making, including any self-identified deficiency in the ability to make a moral evaluation in any particular situation, is also encoded in your brain—your current brain, not some hypothetical superbrain.
The is-ought problem is epistemic. Being told that I have an epistemically inaccessible black box in my head that calculates oughts still doesn’t lead to a situation where oughts can be consciously understood as correct entailments of is’s.
because we can miscalculate an “ought” or anything else.
One way to miscalculate an “ought” is the same way that we can miscalculate an “is”—e.g. lack of information, erroneous knowledge, false understanding of how to weigh data, etc.
And also, because people aren’t perfectly self-aware, we can mistake mere habits or strongly-held preferences for the outputs of our moral algorithm—the same way that, e.g., a synaesthete might perceive the number 8 to be colored blue, even though there’s no “blue” light frequency striking the optical nerve. But that sort of thing doesn’t seem like a very deep philosophical problem to me.
We can correct miscalculations where we have a conscious epistemic grasp of how the calculation should work. If morality is a neural black box, we have no such grasp. Such a neural black box cannot be used to plug the is-ought gap, because it does not distinguish correct calculations from miscalculations.