Sure, I consider the “(for humans)” parenthetical to be doing a lot of work. If we interpret “good” as meaning what (some agent-like system) would want if it knew more, thought faster, &c., it’s going to be true that some humans are quantitatively less good with respect to the “good” (extrapolated volition) of everyone else. But I expect most people to wildly overestimate the quantitative extent to which this matters; selfishness is a much more powerful force in the world than outright evil. So, you should read me as claiming that “smart” is improvable by a much higher percentage than “good” is.
I consider the “(for humans)” parenthetical to be doing a lot of work.
That is a lot of work, and it probably should be made explicit that you mean “for typical smartness and goodness of my current peer group”. Unless you really mean to deny that “goodness” can be zero or negative; if it can be, goodness becomes far more important than smartness.
should be made explicit that you mean “for typical smartness and goodness of my current peer group”.
It turns out that my current peer group does not have a magical monopoly on goodness! It even turns out that being two or three standard deviations from the mean in intelligence does not give us a magical monopoly on goodness! I didn’t notice this until very recently!
I think it’s a really valuable (if expensive and painful) exercise to spend a lot of time reading the literature of some ideology that you despise, really trying to learn from their models and see the Bayes-structure that they’re pointing at, even if you don’t like them and don’t want to join their group. When the model clicks (this may take a few years), you might learn something! (You still don’t have to join the hated outgroup when this happens—you are in fact free to continue to hate them—but the experience may change you enough that you don’t fit in with your ingroup anymore.)
it’s a really valuable (if expensive and painful) exercise … you might learn something
Those two thoughts don’t seem to match well. If you think it’s valuable enough to be worth the expense and the pain, presumably you have a better description of the potential payoff than just “learn something”?
The payoff is the shock of, “Wait! A lot of the customs and ideas that my ingroup thinks are obviously inherently good, aren’t actually what I want now that I understand more about the world, and I predict that my ingroup friends would substantially agree if they knew what I knew, but I can’t just tell them, because from their perspective it probably just looks like I suddenly went crazy!”
I know, that’s still vague. The reason I’m being vague is that the details are going to depend on your ingroup, and on which hated outgroup’s body of knowledge you choose to study. Sorry about this.
So if my attitude is already this, can I skip the pain and the expense? :-)