He could instead mean something closer to “AI risk seems to be an important target for charitable dollars, but the SIAI’s lack of careful control and moderation of its own fora, even given the potential PR risk, makes me question whether it is competent or organized enough to substantially help deal with AI risk.”
That is indeed my concern. If CFAR can’t avoid a Jerry Sandusky/Joe Paterno type scenario (which I think it is reasonably probable it is capable of, given that one of its founders wrote HPMOR), then it is literally a horrendous joke and I should be allocating my contributions somewhere more productive.
This confuses me. First of all, the probability of such a scenario is tiny (how many universities have the same complete lack of safeguards and transparency, and how many of those have had an international scandal?). Second, the difference between writing HPMOR and being associated with one of the most prominent universities in the US seems pretty large. A small point that does back up your concerns somewhat: it may be worth noting that SI did have a serious embezzlement problem early on. But the difference between “has an unmoderated IRC forum where people say hateful stuff” and a massive cover-up of a decade-long pedophilia scandal seems pretty clear. Finally, the inability to deal with an unlikely scandal, even if one did have evidence of that inability, isn’t a reason to think they are incompetent in other ways.
Frankly, as an outside observer, it seems that your reaction is more likely connected to the simple fact that these were pretty disgusting statements, the kind that can easily trigger a large emotional reaction. But this website is devoted to rationality, and its name is Less Wrong. Increasing the world’s total existential risk because a certain person, who isn’t even an SI higher-up or anything similar, said some hateful things is not a rational move.
But the difference between “has an unmoderated IRC forum where people say hateful stuff” [...]
LessWrong does not have an unmoderated IRC forum. There is an IRC channel called #lesswrong on Freenode which is mostly populated by people who read LessWrong, but it has no official LW backing or involvement. SIAI/FHI/CFAR, or whoever is in charge of LW, should ask the #lesswrong mod to close it and take ##lesswrong if they want it; that is how Freenode’s rules treat unofficial IRC channels (the single-# namespace is reserved for channels officially affiliated with the project they are named after, while ## marks unofficial ones).
Anything that seems like support for Dallas or Ritalin/Rational_Brony was unintentional.
As I said before, appealing to an online forum’s crowd from an associated chat channel, whether official or not, is invariably a bad idea because of the difference in expectations of privacy. It harms the forum (though usually not the channel), and so it is often a bannable offence on forums that support banning users. Anyone bringing the same issue up in an unrelated thread, as Dallas did, ought to be banned for trolling.
Unless this can be construed as blackmail, in which case, it is.
There was an attempt by someone to change the forum policies (about censorship, that time) by threatening to do something terrible if the policies weren’t changed. EY and company said “we don’t give in to blackmail,” the policies were not changed, and the person possibly carried through on their threat. It’s worth bringing up only to discourage future attempts at blackmail.
Rather, I meant to say: I expect LW posters to largely agree that it can be correct to select an option which has lower expected utility according to a naive calculation, so as to prevent such situations from arising in the first place (in that it is correct to have a decision function that selects such options, and that if you don’t actually select such options then you don’t have that decision function). It seems possibly reasonable to construe an organization having access to high utility but opposing specific human-rights issues as creating such a situation (I make no comment on whether or not this is actually the case in our world).
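To make the kind of calculation I have in mind concrete, here is a toy sketch in Python. All of the probabilities and costs are made-up illustrative assumptions, not claims about any real organization or threat:

# Toy model: a would-be blackmailer issues threats more often against a
# target whose decision function is known to concede to threats.
# Every number below is an illustrative assumption.

p_threat_if_known_conceder = 0.50   # assumed threat rate against someone known to cave
p_threat_if_known_resister = 0.05   # assumed threat rate against someone known never to cave
cost_of_conceding = 10              # assumed cost of giving in to a single threat
cost_of_carried_out_threat = 30     # assumed cost if a threat is actually carried out

# Naive calculation, made only after a threat has already arrived:
# conceding (cost 10) looks better than resisting (cost 30).
naive_choice_cost = min(cost_of_conceding, cost_of_carried_out_threat)

# Policy-level calculation, made before any threat exists, where the
# threat rate depends on which decision function you visibly run:
expected_cost_of_conceding_policy = p_threat_if_known_conceder * cost_of_conceding
expected_cost_of_resisting_policy = p_threat_if_known_resister * cost_of_carried_out_threat

print(naive_choice_cost)                  # 10  -> conceding looks locally better
print(expected_cost_of_conceding_policy)  # 5.0
print(expected_cost_of_resisting_policy)  # 1.5 -> the never-concede policy wins overall

Under these assumed numbers, the option that looks worse case by case (resisting) is exactly the one the better decision function selects, which is all the parenthetical above was meant to say.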
A list of outcomes possible in the future (in order of my preference):
(1) We create AI which corresponds to my values.
(2) Life on Earth persists under my value set.
(3) Life on Earth is totally exterminated.
(4) Life on Earth persists under its current value set.
(5) We create an AI which does not correspond to my values.
If LW is not trying to eradicate the scourge of transphobia, then clearly SIAI has moved from 1 to 5, and I should be trying to dismantle it, rather than fund it.
So to be clear, you are claiming that the destruction of all life on Earth is a better alternative than life continuing with the common current values?
(5) We create an AI which does not correspond to my values.
So part of the whole point of attempts at things like CEV is that they will (ideally) not use any individual’s fixed values, but rather will try to use what everyone’s values would be if they were smarter and knew more.
If LW is not trying to eradicate the scourge of transphobia, then clearly SIAI has moved from 1 to 5, and I should be trying to dismantle it, rather than fund it.
If your value set is so focused on the complete destruction of the world rather than letting any deviation from your values be implemented, then I suspect that LW and SI were already trying to accomplish something you’d regard as 5. Moreover, it seems that you are confused about priorities: LW isn’t an organization devoted to dealing with LGBTQE issues. You might as well complain that LW isn’t trying to eradicate malaria. The goal of LW is to improve rationality, and the goal of SI is to construct safe general AI. If one or both of those happens to solve other problems, or results in a value shift that makes things better for trans individuals, then that will be a consequence, but it doesn’t make it their job to do so.
Frankly, any value system which says “I’d rather have all life destroyed than have everyone live under a value system slightly different from my own” seems more like something out of the worst sort of utopian fanaticism than anything else. One of the major ways human society has improved over time and become more peaceful is that we’ve learned we don’t have to frame everything as an existential struggle. Sometimes it does actually make sense to compromise, or at least to wait to resolve things. We live in an era of truly awesome weaponry, and it is only this willingness to place the survival of humanity over disagreements in values that has seen us to this day. It is thanks to the moderation of Reagan, Nixon, Carter, Khrushchev, Brezhnev, Andropov and others that we are around to have this discussion instead of trying desperately to survive in the crumbled, radioactive ruins of human civilization.
If CFAR can’t avoid a Jerry Sandusky/Joe Paterno type scenario
So, I agree that any organization that works with minors should be held to high standards (and CFAR does run a camp for high schoolers). I don’t think the forum policy gives much evidence about the likelihood of children being victimized by employees, though.
which I think it is reasonably probable it is capable of, given that one of its founders wrote HPMOR
It’s not clear to me how skill at writing HPMOR is related to skill at avoiding PR gaffes. Have you looked at EY’s OkCupid page? There are a lot of things there that don’t look like they were written with public relations in mind.