I find it unfortunate that none of the SIAI research associates have engaged very deeply in this debate, even LessWrong regulars like Nesov and cousin_it. This is part of the reason why I was reluctant to accept (and ultimately declined) when SI invited me to become a research associate: I would feel less free to speak up both in support of SI and in criticism of it.
I don’t think this is SI’s fault, but perhaps there are things it could do to lessen this downside of the research associate program. For example, it could explicitly encourage research associates to publicly criticize SI and to disagree with its official positions, and make it clear that no associate will be blamed if someone mistakes their statements for official SI positions or sees them as reflecting badly on SI in general. I also write this comment because just being consciously aware of this bias (in favor of staying silent) may help to counteract it.
I don’t usually engage in potentially protracted debates these days. A very short summary of my disagreement with the object-level argument of Holden’s post: (1) I don’t see how the idea of a powerful Tool AI can be usefully different from that of an Oracle AI, and it seems like the connotations of “Tool AI” that distinguish it from “Oracle AI” follow from an implicit sense of it not having too much optimization power, so it might be impossible for a Tool AI to both be powerful and have the characteristics suggested in the post; (1a) the description of Tool AI denies it goals/intentionality and the like, but I don’t see what those words mean apart from optimization power, and so I don’t know how to use them to characterize Tool AI; (2) the potential danger of having a powerful Tool/Oracle AI around is such that aiming at their development doesn’t seem like a good idea; (3) I don’t see how a Tool/Oracle AI could be sufficiently helpful to break the philosophical part of the FAI problem, since we don’t even know which questions to ask.
Since Holden stated that he’s probably not going to (interactively) engage with the comments on this post, and writing this up in a self-contained way is a lot of work, I’m going to leave this task to the people who usually write up SingInst outreach papers.
A Tool/Oracle AI may transfer power to the people who manage and control the device, and they can easily become unfriendly, yes.
And I would drop the “Tool AI” label; “Oracle AI” is enough.
edit: removed text in Russian because it was read by recipient (the private message system here shows replies to public and private messages together, making private messages very easy to miss).
[This thread presents a good opportunity to exercise the (tentatively suggested) norm of indiscriminately downvoting all comments in pointless conversations, irrespective of individual quality or helpfulness of the comments.]
Although, please be aware that the pointlessness of the conversation may not initially have been so transparent to those who cannot read Russian.
Google Translate’s translation:
Why would you even got in touch with these stupid dropout? These “artificial intelligence” has been working on in the imagination of animism, respectively, if he wants to predict what course wants to be the correct predictions were.
The real work on the mathematics in a computer, gave him 100 rooms, he’ll spit out a few formulas that describe the accuracy with varying sequence, and he is absolutely on the drum, they coincide with your new numbers or not, unless specifically addressed grounding symbols, and specifically to do so was not on the drum.
In my opinion you’d better stay away from this group of dropouts. They climbed to the retarded arguments to GiveWell, Holden wrote a bad review on the subject, and this is just the beginning—will be even more angry feedback from the experts. Communicate with them as a biochemist scientist to communicate with fools who are against vaccines (It is clear that the vaccine can improve and increase their safety, and it is clear that the morons who are against this vaccine does not help).
It mistranslates quite a few of the words. The meaning is literally “incompletely educated”; one can be a dropout without being incompletely educated, and one may complete a course yet still be incompletely educated. The ‘communicate’ is closer to ‘relate to’. Basically, what I am saying is that I don’t understand why he chooses to associate with the incompetent, undereducated fools of SI, and defend them; it is about as sensible as for a biochemist to associate with some anti-vaccination idiots.
Actually, I’m most curious about the middle paragraph (with the “100 rooms” and the “drum”). Google seems to have totally mangled that one. What is the actual meaning?
Replied in private. The point is that a number-sequence predictor, for instance (‘number’ somehow got translated as ‘room’), which produces some formula that fits the sequence, isn’t going to care (the ‘drum’ is a Russian idiom for not caring) about whether you match the formula up against your new numbers.
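(For what it’s worth, a minimal, purely illustrative sketch of the point behind the mangled paragraph, assuming Python with NumPy: a sequence predictor of this sort just finds a formula that fits the numbers it was given; nothing in the procedure represents caring whether the formula matches any further numbers, unless symbol grounding is built in deliberately.)

```python
# Purely illustrative sketch (not anyone's actual system): a "tool-like"
# sequence predictor that fits a formula to the 100 numbers it was given.
import numpy as np

def fit_sequence(values, degree=3):
    """Least-squares polynomial fit to a numeric sequence."""
    xs = np.arange(len(values))
    coeffs = np.polyfit(xs, values, degree)  # minimizes fit error, nothing more
    return np.poly1d(coeffs)

sequence = [n * n + 1 for n in range(100)]  # the "100 numbers" handed to the predictor
formula = fit_sequence(sequence, degree=2)
print(formula(100))  # a prediction for the next element (~10001)

# There is no term anywhere above for "wanting" the prediction to come true;
# whether anyone checks it against new numbers is entirely external to the fit.
```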
The connotations were clear from the machine translated form. In this context, your behavior was unproductive, uncivil and passive-aggressive.
Whatever; translate your message to Russian and then back to English.
Anyhow, it is the case that SI is an organisation led by two undereducated, incompetent, overly narcissistic individuals who speak with undue confidence about things they do not understand, and who basically do nothing but generate bullshit. He is best off not associating with this sort of thing. You think Holden’s response is bad? Wait until you run into someone even less polite than me. You’ll hear the same thing I am saying, from someone in a position of authority.
“want X” = how “having the goal X” feels from the inside. Animism is in your imagination.
Not sure about the others, but as for me, at some point this spring I realized that talking about saving the world makes me really upset and I’m better off avoiding the whole topic.
Would it upset you to talk about why talking about saving the world makes you upset?
It would appear that cousin_it believes we’re screwed. It’s tempting to argue that this would, overall, be an argument against the effectiveness of the SI program. However, that’s probably not true, because we could be 99% screwed and the remaining 1% could depend on SI; this would be a depressing fact, yet it would still justify supporting SI.
(Personally, I agree with the poster about the problems with SI, but I’m just laying it out. Responding to wei_dai rather than cousin_it because I don’t want to upset the latter unnecessarily.)
Or we could be 99.9% screwed, with the remaining 0.1% caused by donating to SI and its discouraging some avenue to survival.
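(A purely illustrative bit of arithmetic, with invented numbers, contrasting this scenario with the grandparent’s: what matters for the donation decision is the sign of its marginal effect on survival probability, not how screwed we are overall.)

```python
# Invented numbers, for illustration only.

# Grandparent's scenario: 99% screwed regardless, and the remaining 1%
# chance of survival depends on SI being supported.
p_survive_donate_A, p_survive_abstain_A = 0.01, 0.00
effect_A = p_survive_donate_A - p_survive_abstain_A   # +0.01: donating helps

# This comment's scenario: 99.9% screwed, and donating removes the last
# 0.1% chance of survival by discouraging some other avenue.
p_survive_donate_B, p_survive_abstain_B = 0.000, 0.001
effect_B = p_survive_donate_B - p_survive_abstain_B   # -0.001: donating hurts

print(effect_A, effect_B)
```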
Actually, the way I see it, the starkest symptom of SI being diseased is the certainty placed in intuitions even though there is no mechanism for those intuitions to be based on some subconscious but valid reasoning, plus an abundance of biases affecting the intuitions. There’s nothing rational about summarizing a list of biases and then proclaiming that they don’t apply to you now and that you can use your intuitions.
Yes.
It’s because talking about the singularity and the end of the world in near mode for a long time makes you alieve that it’s going to happen, in the same way that it actually happening would make you alieve it, whereas talking about it once, believing it, and then never thinking about it explicitly again wouldn’t.
Probably not wise to categorically tell someone the reasons behind their feelings when you’re underinformed, and probably not kind to ruminate on the subject when you can expect it to be unpleasant.
I have personally felt the same feelings and I think I have pinned down the reason. I welcome alternative theories, in the spirit of rational debate rather than polite silence.
That you may have discovered the reason that you felt this way does not mean that you have discovered the reason another specific person felt a similar way. In fact, they may not even be aware of the causes of their feelings.
Sure. That’s why I said: “I welcome alternative theories” (including theories about there being multiple different reasons which may apply to different extents to different people). Do you have one?
Missed the point. Do you understand that you shouldn’t have been confident you knew why cousin_it felt a particular way? Beyond that, personally I’m not all that interested in theorizing about the reasons, but if you really want to know you could just ask.
Sorry, I wasn’t implying very strong confidence. I would give a probability of, say, 65% that my reason is the principal cause of cousin_it’s feelings.
Neither wise nor epistemically sound practice.
It is perfectly acceptable to make a reply to a publicly made comment that was itself freely volunteered. If the subject of there being subjects which are unpleasant to discuss is itself terribly unpleasant to discuss, then it is cousin_it’s prerogative not to bring up the subject on a forum where analysis of the subject is both relevant and potentially useful for others.
I disagree that it is in general unacceptable to post information that you would not like to discuss beyond a certain point.
Without further clarification one could reasonably assume that cousin_it was okay with discussing the subject at one remove, as you suggest, but as it happens, several days before the great-grandparent, cousin_it explicitly stated that it would be upsetting to discuss this topic.
I would not make (and haven’t made) the claim as you have stated it.
When that is the case—and if I happened to see it before making a contribution—I would refrain from making any direct reply to the user or from discussing him as an example when talking about the subject (all else being equal). I would still discuss the subject itself using the same criteria for posting that I always use. Mind you, I would probably already have refrained from directly discussing the user due to the aforementioned epistemic absurdity and presumptuousness.
What you claimed was that “It is perfectly acceptable to make a reply to a publicly made comment that was itself freely volunteered”, and that if someone didn’t want to discuss something then they shouldn’t have brought it up. In context, however, this was a reply to me saying it was probably unkind to belabor a subject to someone who’d expressed that they find the subject upsetting, which you now seem to be saying you agree with. So what are you taking issue with? I certainly didn’t mean to imply that if someone finds a subject uncomfortable to discuss, personally, then that means that others should stop discussing it at all, but this point isn’t raised in your great-grandparent comment, and I hope my meaning was clear from the context.
ETA: I have not voted on your comments here.
I have not voted here either. As of now the conversation is all at “0” which is how I would prefer it.
Just wanted to clarify, as at the time your posts had both been downvoted.
So I assumed. As a matter of pure curiosity, if my comments were still downvoted I would have had to downvote yours despite your disclaimer. Not out of reciprocation, but because the wedrifid comments being lower than the CuSithBell comments would be an error state and I would have no way to correct the wedrifid votes.
That isn’t actually true.
Correct. It is instead something that people should usually say is true, because a belief or practical assumption that defection is impossible is a better signal to send than that they could easily defect if they wanted to but choose not to.
It does so happen that I am incredibly talented when it comes to automation and have created web bots far more advanced than what would be required to prevent anything I would consider an ‘error state’ in voting patterns, essentially undetectably. It just so happens that I couldn’t really be bothered doing so in the case of LessWrong and have something of an aversion to doing so anyway.
I mean, I’ve already got 20k votes in this game without cheating and without even trying to (by, for example, writing posts).
Even if we agree to pretend that defection is impossible, you can also correct the wedrifid votes in a socially endorsed way by calling the attention of your allies to the exchange.
If there are viewers of the post who are sufficiently similar to you, they will correct the wedrifid votes. A strategy to ensure error states get corrected is to be sufficiently similar to more post-viewers than your interlocutor.
(I corrected the conversation’s votes.)
That is a strategy to get votes. If it so happened that wedrifid was particularly different from the people here, then modifying himself to be more similar to the norm would result in more votes but also more error states, because every comment of the modified wedrifid that the original wedrifid would have objected to that gets upvoted would constitute an ‘error state’ from the perspective of the wedrifid making the choice of whether to self-modify. I.e., Gandhi doesn’t take the murder pill.
Just to be clear, I would not label all instances of wedrifid being downvoted or having fewer votes than the other person in a conversation as ‘error states’, just that in this specific conversation it would be a bad thing if that were the case. Obviously I expect this to be uncontroversial, at least as the default assumption from my perspective.
I corrected the conversation’s votes too. Someone downvoted the parent!
Ah, that was the false assumption I made. Cheers!
To be sure, most would be. But I’m sure in all the comments I’ve made over the years there is at least one that I would downvote in hindsight! ;)
Why moreso than your interlocutor? That assumes you’re conversing with people who desire error states (from your perspective).
I think he means that if the interlocutor votes but you do not, then you must get one more vote on average from the observers than the interlocutor does.
That seems true. I.e., it assumes a downvote from the interlocutor when their downvote would constitute an error state. Without that assumption the ‘moreso’ is required only by way of creating an error margin.
My conception of error states was a little more general—the advice and assumptions wouldn’t apply to, say, a conversation which both participants find valuable, but in which one or both are downvoted by observers.
Such conversations happen rather often, and I usually find it sufficient reason to discontinue the otherwise useful conversation. The information gained about public perception from the feedback of observers completely changes what can be said and modifies how any given statement will be interpreted. Too annoying to deal with, and a tad offensive. Not necessarily the fault of the interlocutor, but the attitudes of the interlocutor’s supporters still necessitate abandoning free conversation or information exchange with them and instead treating the situation as one of social politics.
Well, whatever floats your boat. I wasn’t trying to avoid downvotes, just ill-will.
So I take it you don’t find your issue resolved, but you don’t think it’ll be fruitful to pursue the matter? If that’s the case, sorry to give you that impression.
I didn’t consider it to be an issue that particularly needed to be resolved. It was a five-second, fire-and-forget perspective on your assertion of social norms, in partial agreement and partial disagreement. The degree of difference is sufficiently minor that if your original injunction had either included the link or used somewhat less general wording, I would not even have thought it was worth an initial reply.
Sure, sometimes I am known to analyse such nuances in depth, but for some reason this one just didn’t catch my interest.
All right, that’s cool then. Cheerio!