Lately I don’t usually engage in potentially protracted debates. A very short summary of my disagreement with the object-level argument in Holden’s post: (1) I don’t see how the idea of a powerful Tool AI can usefully differ from that of an Oracle AI; the connotations of “Tool AI” that distinguish it from “Oracle AI” seem to follow from an implicit sense of it not having too much optimization power, so it may be impossible for a Tool AI to be both powerful and to have the characteristics suggested in the post. (1a) The description of Tool AI denies it goals/intentionality and the like, but I don’t see what those words mean apart from optimization power, so I don’t know how to use them to characterize a Tool AI. (2) The potential danger of having a powerful Tool/Oracle AI around is such that aiming for its development doesn’t seem like a good idea. (3) I don’t see how a Tool/Oracle AI could be helpful enough to crack the philosophical part of the FAI problem, since we don’t even know which questions to ask.
Since Holden stated that he’s probably not going to engage (interactively) with the comments on this post, and writing this up in a self-contained way is a lot of work, I’m going to leave that task to the people who usually write up SingInst outreach papers.
edit: removed text in Russian because it was read by recipient (the private message system here shows replies to public and private messages together, making private messages very easy to miss).
[This thread presents a good opportunity to exercise the (tentatively suggested) norm of indiscriminately downvoting all comments in pointless conversations, irrespective of individual quality or helpfulness of the comments.]
Why would you even got in touch with these stupid dropout? These “artificial intelligence” has been working on in the imagination of animism, respectively, if he wants to predict what course wants to be the correct predictions were.
The real work on the mathematics in a computer, gave him 100 rooms, he’ll spit out a few formulas that describe the accuracy with varying sequence, and he is absolutely on the drum, they coincide with your new numbers or not, unless specifically addressed grounding symbols, and specifically to do so was not on the drum.
In my opinion you’d better stay away from this group of dropouts. They climbed to the retarded arguments to GiveWell, Holden wrote a bad review on the subject, and this is just the beginning—will be even more angry feedback from the experts. Communicate with them as a biochemist scientist to communicate with fools who are against vaccines (It is clear that the vaccine can improve and increase their safety, and it is clear that the morons who are against this vaccine does not help).
It mistranslates quite a few words. The literal meaning is closer to “incompletely educated”: one can be a dropout without being incompletely educated, and one can complete a course and still be incompletely educated. “Communicate” is closer to “relate to.” Basically, what I am saying is that I don’t understand why he chooses to associate with the incompetent, undereducated fools of SI and defend them; it is about as sensible as a biochemist associating with anti-vaccination idiots.
Actually, I’m most curious about the middle paragraph (with the “100 rooms” and the “drum”). Google seems to have totally mangled that one. What is the actual meaning?
Replied in private. The point is that a number-sequence predictor, for instance (somehow “number” got translated as “room”), which comes up with some formula that fits the sequence, isn’t going to care (the “drum” part is an idiom for indifference) whether you can match the formula up to your new numbers.
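The point about the indifferent sequence predictor can be made concrete with a toy sketch (all names and numbers are my own illustration, not from the thread): a curve-fitter that is scored only on matching the numbers it was given, with nothing in it that cares how its formula extrapolates to numbers it was never shown.

```python
import numpy as np

# Toy "sequence predictor": find a formula (here, a polynomial)
# that fits the given numbers as closely as possible.
def fit_formula(xs, sequence, degree):
    coeffs = np.polyfit(xs, sequence, degree)
    return np.poly1d(coeffs)

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 100)  # positions of the 100 given numbers
sequence = np.sin(2 * np.pi * xs) + rng.normal(0.0, 0.1, size=100)

# A flexible formula fits the 100 given numbers well...
formula = fit_formula(xs, sequence, degree=9)
fit_error = float(np.mean((formula(xs) - sequence) ** 2))

# ...but the fitter only ever optimized "match these 100 numbers".
# Nothing in it cares whether the formula matches the continuation
# of the underlying rule, and typically it won't.
new_xs = np.linspace(1.1, 1.5, 10)
true_new = np.sin(2 * np.pi * new_xs)
extrapolation_error = float(np.mean((formula(new_xs) - true_new) ** 2))

print(fit_error, extrapolation_error)
```

The fit error stays near the noise level while the extrapolation error blows up; the mismatch is invisible to the fitter because matching unseen numbers was never part of what it optimized.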
Whatever; translate your message to Russian and then back to English.
Anyhow, it is the case that SI is an organisation led by two undereducated, incompetent, overly narcissistic individuals who speak with undue confidence about things they do not understand and basically do nothing but generate bullshit. He is best off not associating with this sort of thing. You think Holden’s response is bad? Wait until you run into someone even less polite than me. You’ll hear the same thing I am saying, from someone in a position of authority.
A Tool/Oracle AI may transfer power to the people who manage and control the device. They can easily become unfriendly, yes.
And I would cut out the “Tool AI” term; “Oracle AI” is enough.
Although, please be aware that the pointlessness of the conversation may not initially have been so transparent to those who cannot read Russian.
The connotations were clear from the machine translated form. In this context, your behavior was unproductive, uncivil and passive-aggressive.
“want X” = how “having the goal X” feels from the inside. Animism is in your imagination.
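The “wanting as optimization” point admits a minimal sketch (names and the objective are hypothetical, my own illustration): an agent whose “goal” is exhausted by the objective it maximizes, with no further ingredient of wanting to point to.

```python
# "Having the goal X" modeled as nothing over and above optimization:
# the agent below "wants" to maximize X only in the sense that it
# selects whichever available action scores highest on X.

def make_agent(X):
    """Return an agent whose 'goal' is the objective function X."""
    def act(actions):
        return max(actions, key=X)
    return act

# Hypothetical objective: prefer numbers close to 42.
X = lambda a: -abs(a - 42)
agent = make_agent(X)

chosen = agent([10, 40, 42, 100])
print(chosen)  # → 42, the action that best satisfies X
```

On this reading, attributing a “goal” to the system adds nothing beyond describing its optimization; any felt sense of wanting is a fact about the observer’s (or the system’s own) model, not an extra mechanism.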