Mecho-Gecko disagrees...
If you have been updating your worldview in light of evidence streaming in from neuroscience and biology, you’d have heard …
You realize we can build new bodies with technology? Or maybe you don’t...
And in the analogy to thinking machines, is that more like our current brains, or more like the kind of brains we will be building and calling artificial intelligence?
Remind me again; these new bodies are going to run better on some surfaces? In the analogy, these artificial brains are going to think differently?
You’re funny. First you make up an analogy you think is false to say I’m wrong. Then you say geckos are fundamentally superior to technology, while linking to a technology. Now you’re saying I’m wrong because the analogy is true. Do you think at any point in this you were wrong?
(Do note that I linked to mecho-gecko as an example of a technology that can run on a surface that we, even using that technology, would not be able to run on. The actual gecko is irrelevant; I just couldn’t find a clip that didn’t include the comparison.)
No, I don’t. I am aware that you also think you have not been wrong at any point during this either, which has caused me to re-evaluate my own estimation of my correctness.
Having re-evaluated, I still believe I have been right all along.
To expand further on the analogy: the human brain is not a universal thinker, any more than the human leg is a universal runner. The brain thinks, and the leg runs, but they both do so in ways that are limited in some aspects, underperform in some domains, and suffer from quirks and idiosyncrasies. To say that the kind of thinking a human brain does is the only kind of thinking, and that AIs won’t think any differently, is isomorphic to saying that the kind of running a human leg does is the same kind of running that a gecko’s leg does.
Do you have an argument that our brains do not have universality WRT intelligence?
Do you understand what the theory I’m advocating is and says? Do you know why it says it?
This constitutes a pretty good argument against our brains having universal intelligence.
I thought I understood what you meant by “universal intelligence”—that any idea that is conceivable could be conceived by a human mind—but I am open to the possibility you are referring to a technical term of some sort. If you’d care to enlighten me?
Previously you refused to ask. Why did you change your mind?
Do you know what the arguments that human minds are universal are? I asked this in my previous comment. You didn’t engage with it. Do you not consider it important to know that?
I was unable to find any relevant argument at the link. It did beg the question several times (which is OK if it was written for a different purpose). Quote the passage you thought was an argument.
I re-read our conversation looking for possible hidden disputes of definition. It’s one of the argument resolution tools LessWrong has taught me.
I don’t claim familiarity with all of them. If you’d care to enlighten me?
The strongest part would be this:
If we focus on the bounded subspace of mind design space which contains all those minds whose makeup can be specified in a trillion bits or less, then every universal generalization that you make has two to the trillionth power chances to be falsified.
Conversely, every existential generalization—“there exists at least one mind such that X”—has two to the trillionth power chances to be true.
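A toy illustration of the counting argument in that passage, added here for concreteness (my construction, not part of the quote): treat each n-bit string as a mind design, so there are 2^n candidates; a universal generalization is exposed to every one of them, while an existential one needs only a single witness.

```python
from itertools import product

# Toy version of the quoted counting argument (editorial illustration).
# Each n-bit string stands in for a "mind design"; the quote's trillion
# bits becomes n = 16 so the space is small enough to enumerate.
n = 16

# Universal generalization: "every mind design has at least one 1-bit".
# Any single counterexample among the 2**n designs falsifies it.
universal = all(sum(d) > 0 for d in product([0, 1], repeat=n))

# Existential generalization: "there exists an all-zero mind design".
# Any single witness among the 2**n designs makes it true.
existential = any(sum(d) == 0 for d in product([0, 1], repeat=n))

print(universal, existential)  # False True: the all-zeros design decides both
```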
Why did you think you could tell me what link would refute my position, if you didn’t know what arguments my position consisted of?
BTW you have the concept of universality correct.
Well, I think you do. Except that the part from the link you quoted is talking about a different kind of universality (of generalizations, not of minds). How is that supposed to be relevant?
edit: Thinking about it more, I think he’s got a background assumption that most minds in the abstract mind design space are not universal and that they come on a continuum of functionality. Or possibly not that but something else? I do not accept this unargued assumption, and I note that’s not what the computer design space looks like.
Because my link wasn’t a refutation. It was a statement of a correct position, with which any kind of universality of minds position is incompatible.
It is easily relevant. Anything we wish to say about universal ideas is a universal generalisation about every mind in mindspace. If you wish to say that all ideas are concepts, for example, that is equivalent to saying that all minds in mindspace are capable of containing concepts.
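One way to cash out that quantifier shift, in notation of my own invention rather than anything from the thread: a claim about all ideas becomes a claim quantified over every mind in mindspace and every idea it can contain.

```latex
% Hypothetical notation: \mathcal{M} = mindspace, I(m) = ideas mind m can contain.
% "All ideas are concepts," read as a universal generalization over mindspace:
\forall m \in \mathcal{M}\;\; \forall x \in I(m)\colon \mathrm{Concept}(x)
```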
Why did you say
This constitutes a pretty good argument against our brains having universal intelligence.
If you meant
This constitutes a pretty good statement of the correct position, with no argument against your position.
Do you understand the difference between an argument which engages with someone’s position, and simply a statement which ignores them?
I’ve run into this kind of issue with several people here. In my view, the way of thinking where you build up your position without worrying about criticisms of it, and without worrying about criticizing other positions, is anti-critical. Do you think it’s good? Why is it good? Doesn’t it go wrong whenever there is a criticism you don’t know about, or another way of thinking which is better that you don’t know about? Doesn’t it tend to not seek those things out since you think your position is correct and that’s that?
Yes. In practical terms of coming to the most correct worldview, there isn’t much difference. I suspect your Popper fetish has misled you into thinking that arguments and refutations of positions are what matters—what matters is truth, maps-to-reality-ness, correctness. That is, if I have a correct thing, and your thing is incompatible with my thing, due to the nature of reality, your thing is wrong. I don’t need to show you that it’s wrong, or how it’s wrong—the mere existence of my correct thing does more than enough.
I noticed; hence this particular exchange.
We need to insert a few very important things into this description: the way of thinking where you build up your position to match reality as closely as possible without worrying about criticisms of it, and especially without worrying about criticizing other positions, is anti-critical, and pro-truth.
I do think this new, updated description is good. It’s good because reversed stupidity isn’t intelligence. It’s good because it’s a much better search pattern in the space of all possible ideas than rejecting all falsified ideas. If you have a formal scheme built of I’s and U’s, then building strings from the rules is a better way to get correct strings than generating random strings, or strings that seem like they should be right, and sticking with them until someone proves they’re not.
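To make that search-pattern point concrete, here is a minimal sketch, assuming the I’s and U’s refer to Hofstadter’s MIU formal system (my reading of the reference, not something the comment states). Strings built from the rules are guaranteed theorems; randomly generated strings are mere conjectures that wait around until someone refutes them.

```python
import random

# Minimal sketch, assuming the "I's and U's" above mean Hofstadter's MIU
# system (an assumption, not stated in the thread). Axiom: MI.

def miu_successors(s):
    """All strings derivable from s by one application of the MIU rules."""
    out = set()
    if s.endswith("I"):                    # Rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):                  # Rule 2: Mx -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):            # Rule 3: any III may become U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):            # Rule 4: any UU may be dropped
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def derive(axiom="MI", depth=6):
    """Build strings from the rules: everything returned is a theorem."""
    frontier, theorems = {axiom}, {axiom}
    for _ in range(depth):
        frontier = set().union(*map(miu_successors, frontier)) - theorems
        theorems |= frontier
    return theorems

def random_conjectures(n=10, length=5, seed=0):
    """Generate strings that merely look plausible; nothing vouches for them."""
    rng = random.Random(seed)
    return {"M" + "".join(rng.choice("IU") for _ in range(length))
            for _ in range(n)}

theorems = derive()
print("built from the rules (all theorems):", sorted(theorems, key=len)[:6])
print("random, underived:", sorted(random_conjectures() - theorems))
```

The contrast is the point: the derivation never produces a wrong string, while the random generator leaves you holding strings whose status you only learn when a refutation arrives.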
That is, Popperian philosophy is all about the social rules of belief: you are allowed to believe whatever you like, until it’s falsified, criticized, or refuted. It’s rude, impolite, gauche to continue believing something that’s falsified. As long as you don’t believe anything that’s wrong. And so on.
Here at LessWrong, we have a better truth-seeking method. The Bayesian perspective is a better paradigm. You can’t just beg the question and say it’s not a better paradigm because it lacks criticism or refutation; these are elements of your paradigm that are unnecessary to the Bayesian view.
And if you doubt this: I can show you that the Bayesian perspective is better than the Popperian perspective at coming to the truth. Say there were two scientific theories, both attempting to explain some aspect of the world. Both of these theories are well-developed; both make predictions that, while couched in very different terminology, make us expect mostly the same events to happen. They differ radically in their description of the underlying structure of the phenomenon, but these cash out to more or less the same events. Now, one of these theories is a little older, a little more supported by scientists, a little clunkier, a little less parsimonious. The other is newer, less supported, but simpler. Neither of these theories has had criticisms beyond simple appeals to incredulity directed at it. Neither of these theories has had any real refutations put forward. An event is observed, which provides strong evidence for the newer theory, but doesn’t contradict anything in the older theory.
I put it to you that Popperians would be almost unanimously supporting the first theory—they would have learned of it first, and seen no reason to change—no refutation, etc. Bayesians would be almost unanimously supporting the second theory, because it more strongly predicted this event.
And the Bayesians would be right.
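For concreteness, a minimal sketch of the update being described, with invented numbers (none of them come from the discussion): the old theory starts with the higher prior, the observed event contradicts neither theory, but the new theory assigned it the higher likelihood.

```python
# Hedged illustration of the two-theory update described above.
# Priors and likelihoods are invented for the sketch.

def posterior_old(prior_old, likelihood_old, likelihood_new):
    """Posterior of the old theory after the event, treating the two
    theories as the only live hypotheses (so priors sum to 1)."""
    prior_new = 1.0 - prior_old
    evidence = prior_old * likelihood_old + prior_new * likelihood_new
    return prior_old * likelihood_old / evidence

# The old theory is favored going in (more scientists support it), but the
# event was much more strongly predicted by the new theory.
p = posterior_old(prior_old=0.7, likelihood_old=0.1, likelihood_new=0.9)
print(f"P(old | event) = {p:.2f}")  # 0.21: the update now favors the new theory
```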
Upvoted for being merit-worthily well-expressed, despite my desire to see less of this discussion thread in general.
I can’t take too much credit. The entire second half is mostly just what Eliezer was saying in the sequences around Quantum Physics. Well, sure, I can take credit for expressing it well, I guess.
(nods) Yes, the latter is what I was considering meritorious.
I mention it not because it’s a huge deal—it isn’t, and ordinarily I would have just quietly upvoted it—but given that I really don’t want more of the thread that comment is in, I felt obligated to clarify what my upvote meant.
As someone who actually knows many Popperians, and as one myself, I can tell you they would not be. The second theory sounds way better, as you describe it.
But what if you’re making a mistake? Don’t we need criticism just in case your way of building up the truth has a mistake?
I see that you do like one kind of criticism: ad hominems.
Logically, yes. But do you have a correct thing? What if you don’t? That’s why you need criticism. Because you’re fallible, and your methods are fallible too, and your choice of methods fallible yet again.
As a Popperian far more familiar with the Popperian community than you, let me tell you:
this is wrong. This is not what Popperians think, it’s not what Popper wrote, it’s not what Popperians do.
Where are you getting this nonsense? Now that I’ve told you it’s not what we’re about, will you reconsider and try to learn our actual views before you reject Popper?
Can you tell me what process they would use to move over to the new theory? Do keep in mind that everyone started on the first theory—the second theory didn’t even exist around the time the first theory picked up momentum.
You come up with a criticism of the old theory, and an explanation of what the new theory is and how it does better (e.g. by solving the problem that was wrong with the old theory). And people are free the whole time to criticize either theory, and suggest new ones, as they see fit. If they see something wrong with the old one, but not the new one, they will change their minds.
But there is no criticism of the old theory! At least, no criticism that isn’t easily dismantled by proponents of the old theory. There is no problem that is wrong with the old theory!
This is not some thought experiment, either. This situation is actually happening, right now, with the Copenhagen and Many Worlds interpretations of quantum physics. Copenhagen has the clumsy ‘decoherence’, Many Worlds has the elegant, well, many worlds. The event that supports Many Worlds strongly but also supports Copenhagen weakly is the double-slit experiment.
Bad example. Decoherence is a phenomenon that exists in any interpretation of quantum mechanics, and is heavily used in MWI as a tool to explain when branches effectively no longer interact.
I think he meant wave-function collapse.
But the Copenhagen interpretation has no defense. It doesn’t even make sense.
Decoherence is a major concept in MWI. Maybe if you learned the arguments on both sides the situation would be clearer to you.
I think you’ve basically given up on the possibility of argument reaching a conclusion, without even learning the views of both sides first. There are conclusive arguments to be found—on this topic and many others—and plenty of unanswered and unanswerable criticisms of Copenhagen.
Conclusive doesn’t mean infallible, but it does mean that it actually resolves the issue and doesn’t allow for:
easily dismantled by proponents of the old theory
The original statement was:
Now, one of these theories is a little older, a little more supported by scientists, a little clunkier, a little less parsimonious.
Clunkier is a criticism.