Yes, I understand that this is not religion and all positions will have to be argued and defended in due time. I am merely declaring my position. I do find it really fascinating that, in the first stages of drafting this new map, we begin by drawing lines in the sand…
It’s more that the counterargument against your position was covered, at great length, and then covered some more, on OB by Yudkowsky, the person most of us are here because we respect.
If you’re going to take a stand for something that most people here have already read very persuasive arguments against, I don’t think it’s unreasonable to expect more than just a position statement (and an emotionally-loaded one, at that).
I meant no disrespect. (Eliezer has 661 posts on OB.) I do appreciate your direction/correction. I didn’t mean to take a stand against it.
(Sigh.) I have no positions, no beliefs, prior to what I might learn from Eliezer.
So the idea is that a unique, complex thing may not necessarily have an appreciation for another unique complexity? Unless appreciating unique complexity has a mathematical basis.
brynema, “disrespect” isn’t at all the right axis for understanding why your last couple of comments weren’t helpful. (I’m not attacking you here; LW is an unusual place, and understanding how to usefully contribute takes time. You’ve been doing well.) The trouble with your last two comments is mostly:
Comments on LW should aspire to rationality. As part of this aspiration, we basically shouldn’t have “positions” on issues we haven’t thought much about; the beliefs we share here should be evidence-based best-guesses about the future, not clothes to decorate ourselves with.
Many places encourage people to make up and share “beliefs”, because any person’s beliefs are as good as any other’s and it’s good to express oneself, or something like that. Those norms are not useful toward arriving at truth, at least not compared to what we usually manage on LW. Not even if people follow their made-up “beliefs” with evidence created to support their conclusions; nor even if evidence or intuitions play some role in the initial forming of beliefs.
This is particularly true in cases where the subjects are difficult technical problems that some in the community have specialized in and thought carefully about; declaring positions there is kind of like approaching a physicist, without knowledge of physics, and announcing your “position” on how atoms hold together. (Though less so, since AI is less well-grounded than physics.)
AI risks are a particularly difficult subject about which to have useful conversation, mostly because there is little data to help keep conversation from veering off into nonsense-land. So it makes sense, in discussing AI risks and other slippery topics, to have lower tolerance for folks making up positions.
Also, yes, these particular positions have been discussed and have proven unworkable in pretty exhaustive detail.
As to the object-level issue concerning possible minds, I wrote an answer in the welcome thread, on the theory that, if we want to talk about AI or other prerequisite-requiring topics on LW, we should probably get in the habit of taking “already discussed to death” questions to the welcome thread, where they won’t clutter mainline discussion. Please don’t be offended by this, though; I value your presence here, and you had no real way of knowing this had already been discussed.
I’ve spent some time working through my emotional responses and intellectual defenses to the posts above. I would like to make some observations:
(1) I’m disappointed that, even as rationalists, while you were able to recognize that I had committed some transgression, you were not able to identify it precisely, and power (authority, shaming) was used instead of truth to punish me broadly.
(2) My mistake was not in asserting something false. This happens all the time here and people usually respond more rationally.
(3) My transgression was using the emotionally loaded word “love”. (So SoulessAutomaton actually came close.) The word in this context is taboo for a good reason; I will try to explain, though perhaps I will fail: while I believe in love, I should not put the belief in those terms, because invoking the word is dark-arts manipulation; the whole point of rationality is to find a better vocabulary for explaining truth.
(4) We can look at this example as a case study to evaluate which responses were rational and which weren’t. SoulessAutomaton’s and Anna Salamon’s responses were well-intentioned but escalated the emotional cost of the argument for me (broadly, SoulessAutomaton accused me of being subversive/disrespectful, and Anna Salamon made the character attack that I’m not rational). Both tempered the ‘punishments’ they meted out with useful suggestions. Vladimir Nesov’s comment was, I think, quite rational: he asserted I probably needed to learn more about the subject and he provided some links. (The specific links are enormously helpful for navigating this huge maze.) One criticism would be that he was overly charitable in his assessment of “affective rhetoric”. While my rhetoric was indeed affective by some measure external to LW, the point is, I know, that affective rhetoric for its own sake is not appropriate here. I suspect Vladimir Nesov was just trying to signal respect for me as an individual before criticizing my position, which is generally a good practice.
(2) My mistake was not in asserting something false.
It was. What you asserted is, depending on interpretation, either ill-formed or false. A counterexample to your claim: a Paperclip AI won’t, in any meaningful sense, love humanity.
(3) My transgression was using the emotionally loaded word “love”.
The use of an emotionally loaded word is inappropriate, except when it isn’t. In this case, your attribution of emotion was false, and so the affective aura accompanying the statement was inappropriate. I hypothesized that emotional thinking was one of the sources of your belief in the truth of the statement you made, so calling your words “affective rhetoric” was meant to communicate this diagnosis (by analogy with “empty rhetoric”). I actually arrived at that phrase by editing an earlier “affective silliness”, which referred directly to the fact that you were making a mistake; I changed it to be less offensive.
Vladimir Nesov’s comment was, I think, quite rational: he asserted I probably needed to learn more about the subject and he provided some links
The ‘probably’ was more of a weasel word, reflecting the fact that I’m not sure whether you actually want to spend time learning all that stuff, rather than any special uncertainty about whether the answer to your question is found there.
(1) I’m disappointed that, even as rationalists, while you were able to recognize that I had committed some transgression, you were not able to identify it precisely, and power (authority, shaming) was used instead of truth to punish me broadly.
The problem is that the inferential distance is too great, so it’s easier to refer the newcomer to the archive, where the answer to what went wrong can be learned systematically, than to try to explain the problems on her own terms.
I read “affective rhetoric” as “effective rhetoric”. (Oops.) Yes, “affective rhetoric” is a much more appropriate comment than “effective rhetoric” would have been. Since it seems like a good place for a neophyte to begin, I will address your comment about the paperclip AI in the welcome thread where Anna Salamon replied.
Anna Salamon replied on the Welcome thread, starting with:
This is in response to a comment of brynema’s elsewhere; if we want LW discussions to thrive even in cases where the discussions require non-trivial prerequisites, my guess is that we should get in the habit of taking “already discussed exhaustively” questions to the welcome thread.
If we want to talk usefully about AI as a community, we should probably make a wiki page that summarizes or links to the main points. And then we should have a policy in certain threads: “don’t comment here unless you’ve read the links off of wiki page such-and-such”.
brynema’s right that we want newcomers in LW, and that newcomers can’t be expected to know all of what’s been discussed. But it is also true that we’ll never get discussions off the ground if we have to start all over again every time someone new enters.
You assume that an AI will necessarily value beauty as you conceive it. This is unlikely.