I should stop posting; I was only meaning to message some people in private.
I understand that you may not reply, given this statement, but …
Are you sure you’re actually disagreeing with Yudkowsky et al.? I agree that it’s plausible that many systems, including the weather, are chaotic in such a way that no agent can precisely predict them, but I don’t think this disproves the “Foom thesis” (that a self-improving AI is likely to quickly overpower humanity, and therefore that such an AI’s goals should be designed very carefully). Even if some problems (like predicting the weather) are intractable for all possible agents, all the Foom thesis requires is that some subset of relevant problems be tractable for AIs but not for humans.
I agree that insights from computational complexity theory are relevant: if solving a particular problem of size n provably requires a number of operations that is exponential in n, then clearly just throwing more computing power at the problem won’t help solve much larger problem instances. But (competent) Foom-theorists surely don’t disagree with this.
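To make that concrete, here is a toy calculation (my own illustrative numbers, not anything from the comment): if a problem’s cost grows like 2^n, multiplying the available compute by 1000 only raises the largest feasible instance size by about log2(1000) ≈ 10.

```python
def max_feasible_n(ops_budget, cost=lambda n: 2 ** n):
    """Largest n whose exponential cost still fits within the operations budget."""
    n = 0
    while cost(n + 1) <= ops_budget:
        n += 1
    return n

budget = 10 ** 15                      # arbitrary toy budget of operations
print(max_feasible_n(budget))          # 49
print(max_feasible_n(1000 * budget))   # 59 -- 1000x the compute buys only ~10 more units of n
```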
As to the claim that Yudkowsky et al. are merely doing theology, I agree that there are some similarities between the idea of a God and the idea of a very powerful artificial intelligence, but I don’t think this observation is very relevant to the issue at hand. “Idea X shares some features with the popular Idea Y, but Idea Y is clearly false, therefore the proponents of Idea X are probably mistaken” is not a compelling argument. (I’m aware that this paraphrasing of the “Belief in powerful AI is like religion” argument takes an uncharitable tone, but it doesn’t seem like an inaccurate paraphrase, either.) [EDIT: I shouldn’t have written the previous two sentences the way I did; see Eugine Nier’s criticism in the child comment and my reply in the grandchild.]
> As to the claim that Yudkowsky et al. are merely doing theology, I agree that there are some similarities between the idea of a God and the idea of a very powerful artificial intelligence, but I don’t think this observation is very relevant to the issue at hand. “Idea X shares some features with the popular Idea Y, but Idea Y is clearly false, therefore the proponents of Idea X are probably mistaken” is not a compelling argument. (I’m aware that this paraphrasing of the “Belief in powerful AI is like religion” argument takes an uncharitable tone, but it doesn’t seem like an inaccurate paraphrase, either.)
The correct phrasing of that argument is:
1. Idea Y is popular and false.
2. Therefore, humans have a bias that makes them overestimate ideas like Y.
3. Idea X shares many features with Idea Y.
4. Therefore, proponents of Idea X are probably suffering from the bias above.
It’s even worse than that. I am using theology more as an empirical example of what you get when these specific features are part of the thought process. Ultimately, what matters is which features are shared. If the shared feature were ‘wearing the same type of hat’, that wouldn’t mean a lot; but if the shared feature is the lack of any attempt to reason in the least sloppy manner available (for example, by treating the computational complexity questions with actual math), then that is the shared cause, not just pattern matching.
Ultimately, what an intelligence would do under the rule that you can just postulate it to be smart enough to do anything is entirely irrelevant to anything. I do see an implicit disagreement with that in doing this sort of thinking.
I accept the correction. I should also take this occasion as a reminder to think twice the next time I’m inclined to claim that I’m paraphrasing something fairly and yet in such a way that it still sounds silly; I’m much better than I used to be at resisting the atavistic temptation (conscious or not) to use such rhetorical ploys, but I still do it sometimes.
My response to the revised argument is, of course, that the mental state of proponents of an Idea X is distinct from the actual truth or falsity of Idea X. (As the local slogan goes, “Reversed Stupidity Is Not Intelligence.”) There certainly are people who believe in the Singularity for much the same reason many people are attracted to religion, but I maintain (as I said in the grandparent) that this isn’t very relevant to the object-level issue: the fact that most of the proponents of Idea X are biased in this-and-such a manner doesn’t tell us very much about Idea X, because we expect there to be biased proponents in favor of any idea, true or false.
I agree that this kind of outside view argument doesn’t provide absolute certainty. However, it does provide evidence that part of your reasons for believing X are irrational reasons that you’re rationalizing. Reduce your probability estimate of X accordingly.
Note that the formulation presented here is one I came up with on my own while searching for the Bayes-structure behind arguments based on the outside view.
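For what it’s worth, here is a minimal sketch (with made-up numbers, not anything asserted in the thread) of the odds-form update this comment points at: if the observed religion-like pattern among proponents is somewhat more likely when X is false than when X is true, the probability estimate should drop by exactly that likelihood ratio.

```python
def posterior_prob(prior_prob, p_evidence_if_true, p_evidence_if_false):
    """Odds-form Bayes update: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * (p_evidence_if_true / p_evidence_if_false)
    return posterior_odds / (1.0 + posterior_odds)

# Made-up numbers: start at P(X) = 0.5, and suppose the religion-like pattern
# is 1.5x as likely among proponents of false ideas as among proponents of true ones.
print(posterior_prob(0.5, p_evidence_if_true=0.4, p_evidence_if_false=0.6))  # 0.4
```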
> I should also take this occasion as a reminder to think twice the next time I’m inclined to claim that I’m paraphrasing something fairly and yet in such a way that it still sounds silly;
I wasn’t talking about Idea X itself; I was talking about the process of thinking about Idea X. We were discussing how smart EY is, and I used this specific type of thinking about X as a counterexample to the claim that the sanity waterline is being raised in any way.
One can think about plumbing wrongly, e.g. by imagining that pipes grow as part of a pipe plant that must be ripe or else the pipes will burst, even though pipes and valves and so on exist and can be thought about correctly, and plumbing is not an invalid idea. It doesn’t matter to the argument I’m making whether AIs would foom (whether pipes would burst at N bars). It only matters that the reasons for the belief aren’t valid, and aren’t even close to being valid (especially for the post-foom state).
edit: Maybe the issue is that people in the West don’t seem to get enough proofs in their math homework early enough. You get bad grades for bad proofs, regardless of whether the things you proved were true or false! Some years of school make you internalize that well enough. Now, the people who didn’t internalize this are very annoying to argue with. They keep asking you to prove the opposite; they do vague reasoning that’s wrong everywhere and ask you to pinpoint a specific error; they ask you to tell them the better way to reason if you don’t like how they reasoned about it (imagine this for Fermat’s Last Theorem a couple of decades ago, or now for P!=NP); they make every excuse they can think of to disregard what you say on the basis of some fallacy.
edit2: Or rather, they disregard the critique as ‘not good enough’, akin to disregarding a critique of a flawed mathematical proof because the critique doesn’t itself prove the theorem true or false. Anyway, I just realized that if I think Eliezer is a quite successful sociopath who’s scamming people for money, that results in higher expected utility for me from reading his writings (more curiosity) than if I think he is a self-deluded person and the profitability of the belief is an accident.
> edit: Maybe the issue is that people in the West don’t seem to get enough proofs in their math homework early enough.
From personal experience, we were introduced to those in our 10th year (it might have been 9th?), so I would have been 15 or 16 when I first encountered formal proofs. The idea is fairly intuitive to me, but I also have a decent respect for people who seem to routinely produce correct answers via faulty reasoning.
So you consider those answers correct?
I assume you’re referring to that?
A correct ANSWER is different from a correct METHOD. I treat an answer as correct if I can verify it.
Problem: X^2 = 9. Solution: X = 3.
It doesn’t matter how they arrived at “X = 3”; it’s still correct, and I can verify that (3^2 = 9, yep!).
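A throwaway Python illustration of the answer-versus-method point (the function and numbers are mine, not from the thread): verification is just plugging the claimed value back in, regardless of how it was produced.

```python
def is_solution(x, target=9):
    """Check a claimed solution to X^2 = target by plugging it back in."""
    return x * x == target

# However the value was found (guessing, a lucky heuristic, or algebra),
# the verification step is the same:
print(is_solution(3))   # True
print(is_solution(-3))  # True (the other root, equally verifiable)
print(is_solution(4))   # False
```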
It’s not about whether they disagree; it’s about whether they actually did it themselves, which is what would make them competent. Re: Nier, writing a reply.