It isn’t 100% guaranteed that if I jump off a tall building I will then die.
Indeed. You’re the one who told me that writers sometimes don’t finish books… They aren’t 100% guaranteed to. I know that. Why did you say that?
Imagine, for example, that our AI finds a proof that P=NP, that this proof gives an O(n^2) algorithm for solving your favorite NP-complete problem, and that the constant in the O is really small.
Umm. Imagine a human does the same thing. What’s your point? My/Deutsch’s point is AGIs have no special advantage over non-artificial intelligences at finding a proof like that in the first place.
We’re pathetic sacks of meat that can’t even multiply four or five digit numbers in our heads.
That’s not even close to true. First of all, I could do that if I trained a bit. Many people could. Second, many people can memorize long sequences of the digits of pi with some training. And many other things. Ever read about Renshaw and how he trained people to see faster and more accurately?
The point about jumping off a building was due to a miscommunication with you. See my remark here; I then misinterpreted your reply. Illusion of transparency is annoying. The section concerning that is now very confused and irrelevant. The relevant point I was trying to make regarding the writer is that even when knowledge areas are highly restricted, making predictions about what will happen is really difficult.
And yes, I’ve read your essays, and nothing there is at all precise enough to be helpful. Maybe taboo knowledge and make your point without it?
Umm. Imagine a human does the same thing. What’s your point? My/Deutsch’s point is AGIs have no special advantage over non-artificial intelligences at finding a proof like that in the first place.
There are a lot of differences. Humans won’t in general have an easy time modifying their structure. Moreover, human values fall in a very small cluster in mindspace. Humans aren’t, for example, paperclip maximizers or pi-digit calculators. There are two twin dangers: an AGI has advantages in improving itself, and an AGI is unlikely to share our values. Those are both bad.
That’s not even close to true. First of all, I could do that if I trained a bit. Many people could. Second, many people can memorize long sequences of the digits of pi with some training.
Sure. Humans can do that if they train a lot. A simple computer can do that with much less effort, so an AGI with a digital substrate, however similar to a human it is otherwise, won’t need to spend days training to be able to multiply 5 digit numbers quickly. And if you prefer a slightly more extreme example, a computer can factor a random 15 digit number in seconds with minimal optimization. A human can’t. And no amount of training will allow you to do so. Computers can do a lot of tasks we can’t. At present, we can do a lot of tasks that they can’t. A computer that can do both sets of tasks better than we can is the basic threat model.
There are a lot of differences. Humans won’t in general have an easy time modifying their structure.
But it doesn’t matter because it’s universal (we are universal thinkers, we can create any ideas that any thinking things can). The implementation details of universal things are not super important because the repertoire remains the same.
And if you prefer a slightly more extreme example, a computer can factor a random 15 digit number in seconds with minimal optimization.
Not by the method of thinking. A human can factor it using a calculator. An AI could also factor it using a calculator program. An AI doing it the way humans do—by thinking—won’t be as fast!
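As an aside, the factoring claim quoted above is easy to check. A minimal sketch in Python, using nothing cleverer than trial division; the particular 15-digit number is arbitrary, chosen only for illustration:

```python
# Trial division suffices for any 15-digit n: no trial divisor ever exceeds
# sqrt(n) < 32 million, so the loop below runs at most ~32 million times.
def factor(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(factor(731_457_893_210_117))  # an arbitrary 15-digit number
```

On ordinary hardware this finishes in seconds even in the worst case (a 15-digit semiprime), which is the contrast being drawn with human mental arithmetic.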
Maybe taboo knowledge and make your point without it?
But it’s the central concept of epistemology. If you want to understand me or Popper you need to learn to understand it. Many points depend on it.
And yes, I’ve read your essays, and nothing there is at all precise enough to be helpful.
Would you like to know the details of those essays? If you want to discuss them I can elaborate on any issue (or if I can’t, I will be surprised and learn something, at least about my ignorance). If you want to discuss them, can we go somewhere else (that has other people who will know the answers to your questions too)? Tell me and I’ll PM you the place if you’re interested (I don’t want too many random people to come, currently).
BTW no matter what you write people always complain. They always have questions or misconceptions that aren’t the ones you addressed. No writing is perfect. Even if you were to write all of Popper’s books, you’d still get complaints...
(we are universal thinkers, we can create any ideas that any thinking things can)
This seems about as likely as saying “We are universal runners, we can run on any surface that any running thing can”. If you’ve been keeping up, you’d have heard that the brain is a lump of biological tissue, and as such is subject to limitations imposed by its substrate.
I don’t think you should say

We are universal runners, we can run on any surface that any running thing can

in place of

we are universal thinkers, we can create any ideas that any thinking things can

And btw we can run on any surface that any running thing can, with the aid of technology. What’s the problem?
And instead of
If you’ve been keeping up
You should say what it really means:
I’m better than you, so I don’t need to argue, condescension suffices
If you actually want an explanation of the ideas, apologize and ask nicely. If you just want to flame me for contradicting your worldview, then go away.

Mecho-Gecko disagrees...

You should say what it really means:

If you have been updating your worldview in light of evidence streaming in from neuroscience and biology, you’d have heard …

You realize we can build new bodies with technology? Or maybe you don’t...
And in the analogy to thinking machines, is that more like our current brains, or more like the kind of brains we will be building and calling artificial intelligence?
Remind me again; these new bodies are going to run better on some surfaces? In the analogy, these artificial brains are going to think differently?
You’re funny. First you make up an analogy you think is false to say I’m wrong. Then you say geckos are fundamentally superior to technology, while linking to a technology. Now you’re saying I’m wrong because the analogy is true. Do you think at any point in this you were wrong?
(Do note that I linked to mecho-gecko as an example of a technology that can run on a surface that we, even using that technology, would not be able to run on. The actual gecko is irrelevant, I just couldn’t find a clip that didn’t include the comparison.)
No, I don’t. I am aware that you also think you have not been wrong at any point during this either, which has caused me to re-evaluate my own estimation of my correctness.
Having re-evaluated, I still believe I have been right all along.
To expand further on the analogy: the human brain is not a universal thinker, any more than the human leg is a universal runner. The brain thinks, and the leg runs, but they both do so in ways that are limited in some aspects, underperform in some domains, and suffer from quirks and idiosyncrasies. To say that the kind of thinking a human brain does is the only kind of thinking, and that AIs won’t do any different, is isomorphic to saying that the kind of running a human leg does is the same kind of running that a gecko’s leg does.
Do you have an argument that our brains do not have universality WRT intelligence?

Do you understand what the theory I’m advocating is and says? Do you know why it says it?

This constitutes a pretty good argument against our brains having universal intelligence.
I thought I understood what you meant by “universal intelligence”—that any idea that is conceivable, could be conceived by a human mind—but I am open to the possibility you are referring to a technical term of some sort. If you’d care to enlighten me?
Previously you refused to ask. Why did you change your mind?
Do you know what the arguments that human minds are universal are? I asked this in my previous comment. You didn’t engage with it. Do you not consider it important to know that?
I was unable to find any relevant argument at the link. It did beg the question several times (which is OK if it was written for a different purpose). Quote the passage you thought was an argument.
Previously you refused to ask. Why did you change your mind?
I re-read our conversation looking for possible hidden disputes of definition. It’s one of the argument resolution tools LessWrong has taught me.
Do you know what the arguments that human minds are universal are?
I don’t claim familiarity with all of them. If you’d care to enlighten me?
I was unable to find any relevant argument at the link.
The strongest part would be this:
If we focus on the bounded subspace of mind design space which contains all those minds whose makeup can be specified in a trillion bits or less, then every universal generalization that you make has two to the trillionth power chances to be falsified.
Conversely, every existential generalization—“there exists at least one mind such that X”—has two to the trillionth power chances to be true.
Why did you think you could tell me what link would refute my position, if you didn’t know what arguments my position consisted of?
BTW you have the concept of universal correct.
Well, I think you do. Except that the part from the link you quoted is talking about a different kind of universality (of generalizations, not of minds). How is that supposed to be relevant?
edit: Thinking about it more, I think he’s relying on a background assumption that most minds in the abstract mind design space are not universal and that they come on a continuum of functionality. Or possibly not that but something else? I do not accept this unargued assumption, and I note that’s not what the computer design space looks like.
Because my link wasn’t a refutation. It was a statement of a correct position, with which any kind of universality of minds position is incompatible.
It is easily relevant. Anything we wish to say about universal ideas is a universal generalisation about every mind in mindspace. If you wish to say that all ideas are concepts, for example, that is equivalent to saying that all minds in mindspace are capable of containing concepts.
Why did you say

This constitutes a pretty good argument against our brains having universal intelligence.
If you meant
This constitutes a pretty good statement of the correct position, with no argument against your position.
Do you understand the difference between an argument which engages with someone’s position, and simply a statement which ignores them?
I’ve run into this kind of issue with several people here. In my view, the way of thinking where you build up your position without worrying about criticisms of it, and without worrying about criticizing other positions, is anti-critical. Do you think it’s good? Why is it good? Doesn’t it go wrong whenever there is a criticism you don’t know about, or another way of thinking which is better that you don’t know about? Doesn’t it tend to not seek those things out since you think your position is correct and that’s that?
Do you understand the difference between an argument which engages with someone’s position, and simply a statement which ignores them?
Yes. In practical terms of coming to the most correct worldview, there isn’t much difference. I suspect your Popper fetish has misled you into thinking that arguments and refutations of positions are what matters—what matters is truth, maps-to-reality-ness, correctness. That is, if I have a correct thing, and your thing is incompatible with my thing, due to the nature of reality, your thing is wrong. I don’t need to show you that it’s wrong, or how it’s wrong—the mere existence of my correct thing does more than enough.
I’ve run into this kind of issue with several people here.
I noticed; hence why I caused this particular exchange.
In my view, the way of thinking where you build up your position without worrying about criticisms of it, and without worrying about criticizing other positions, is anti-critical.
We need to insert a few very important things into this description: the way of thinking where you build up your position to match reality as closely as possible without worrying about criticisms of it, and especially without worrying about criticizing other positions, is anti-critical, and pro-truth.
Do you think it’s good? Why is it good? Doesn’t it go wrong whenever there is a criticism you don’t know about, or another way of thinking which is better that you don’t know about? Doesn’t it tend to not seek those things out since you think your position is correct and that’s that?
I do think this new, updated description is good. It’s good because reversed stupidity isn’t intelligence. It’s good because it’s a much better search pattern in the space of all possible ideas than rejecting all falsified ideas. If you have a formal scheme built of Is and Us, then building strings from the rules is a better way to get correct strings than generating random strings, or strings that seem like they should be right, and sticking with them until someone proves they’re not.
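The “formal scheme built of Is and Us” seems to gesture at a toy string-rewriting system in the style of Hofstadter’s MIU puzzle. A minimal sketch of the contrast being drawn, with an invented axiom and production rules (nothing below is specified anywhere in the discussion):

```python
import random

# A toy string-rewriting system over the alphabet {I, U}.
# Axiom: "I".  Invented rules, purely illustrative:
#   rule 1: x -> xU   (append a U)
#   rule 2: x -> xx   (double the string)

def derive(steps):
    """Build a string by applying randomly chosen rules; the result is a theorem by construction."""
    s = "I"
    for _ in range(steps):
        s = s + "U" if random.random() < 0.5 else s + s
    return s

def random_string(length):
    """Generate a string with no regard for the rules; usually not a theorem."""
    return "".join(random.choice("IU") for _ in range(length))

print(derive(5))          # always derivable from the axiom
print(random_string(8))   # usually not derivable
```

Strings built by derive are correct by construction, while strings from random_string mostly are not; that is the sense in which building from the rules beats generate-and-discard in this sketch.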
That is, Popperian philosophy is all about the social rules of belief: you are allowed to believe whatever you like, until it’s falsified, criticized, or refuted. It’s rude, impolite, gauche to continue believing something that’s falsified. As long as you don’t believe anything that’s wrong. And so on.
Here at LessWrong, we have a better truth-seeking method. The Bayesian perspective is a better paradigm. You can’t just beg the question and say it’s not a better paradigm because it lacks criticism or refutation; these are elements of your paradigm that are unnecessary to the Bayesian view.
And if you doubt this: I can show you that the Bayesian perspective is better than the Popperian perspective at coming to the truth. Say there were two scientific theories, both attempting to explain some aspect of the world. Both of these theories are well-developed; both make predictions that, while couched in very different terminology, make us expect mostly the same events to happen. They differ radically in their description of the underlying structure of the phenomenon, but these cash out to more or less the same events. Now, one of these theories is a little older, a little more supported by scientists, a little clunkier, a little less parsimonious. The other is newer, less supported, but simpler. Neither of these theories has had criticisms beyond simple appeals to incredulity directed at it. Neither of these theories has had any real refutations put forward. An event is observed, which provides strong evidence for the newer theory, but doesn’t contradict anything in the older theory.
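For concreteness, a small numeric sketch of the Bayesian update in this scenario; the prior and the likelihoods are invented purely for illustration:

```python
# Two rival theories; all numbers below are made up for the example.
prior_old, prior_new = 0.7, 0.3      # older theory starts out better supported
p_event_given_old = 0.1              # old theory only weakly expected the observed event
p_event_given_new = 0.9              # new theory strongly predicted it

evidence = prior_old * p_event_given_old + prior_new * p_event_given_new
post_old = prior_old * p_event_given_old / evidence
post_new = prior_new * p_event_given_new / evidence

print(f"P(old theory | event) = {post_old:.2f}")  # ~0.21
print(f"P(new theory | event) = {post_new:.2f}")  # ~0.79
```

The observation shifts the posterior toward the newer theory even though nothing was refuted.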
I put it to you that Popperians would be almost unanimously supporting the first theory—they would have learned of it first, and seen no reason to change—no refutation, etc. Bayesians would be almost unanimously supporting the second theory, because it more strongly predicted this event.

And the Bayesians would be right.

Upvoted for being merit-worthily well-expressed, despite my desire to see less of this discussion thread in general.
I can’t take too much credit. The entire second half is mostly just what Eliezer was saying in the sequences around Quantum Physics. Well, sure, I can take credit for expressing it well, I guess.
(nods) Yes, the latter is what I was considering meritorious.
I mention it not because it’s a huge deal—it isn’t, and ordinarily I would have just quietly upvoted it—but given that I really don’t want more of the thread that comment is in, I felt obligated to clarify what my upvote meant.
I put it to you that Popperians would be almost unanimously supporting the first theory
As someone who actually knows many Popperians, and as one myself, I can tell you they would not be. The second theory sounds way better, as you describe it.
the way of thinking where you build up your position to match reality as closely as possible without worrying about criticisms of it, and especially without worrying about criticizing other positions, is anti-critical, and pro-truth.
But what if you’re making a mistake? Don’t we need criticism just in case your way of building up the truth has a mistake?
Popper fetish
I see that you do like one kind of criticism: ad-hominems.
if I have a correct thing, and your thing is incompatible with my thing, due to the nature of reality, your thing is wrong.
Logically, yes. But do you have a correct thing? What if you don’t? That’s why you need criticism. Because you’re fallible, and your methods are fallible too, and your choice of methods fallible yet again.
That is, Popperian philosophy is all about the social rules of belief: you are allowed to believe whatever you like, until it’s falsified, criticized, or refuted. It’s rude, impolite, gauche to continue believing something that’s falsified.
As a Popperian far more familiar with the Popperian community than you, let me tell you:
this is wrong. This is not what Popperians think, it’s not what Popper wrote, it’s not what Popperians do.
Where are you getting this nonsense? Now that I’ve told you it’s not what we’re about, will you reconsider and try to learn our actual views before you reject Popper?
As someone who actually knows many Popperians, and as one myself, I can tell you they would not be. The second theory sounds way better, as you describe it.
Can you tell me what process they would use to move over to the new theory? Do keep in mind that everyone started on the first theory—the second theory didn’t even exist around the time the first theory picked up momentum.
You come up with a criticism of the old theory, and an explanation of what the new theory is and how it does better (e.g. by solving the problem that was wrong with the old theory). And people are free the whole time to criticize either theory, and suggest new ones, as they see fit. If they see something wrong with the old one, but not the new one, they will change their minds.
But there is no criticism of the old theory! At least, no criticism that isn’t easily dismantled by proponents of the old theory. There is no problem that is wrong with the old theory!
This is not some thought experiment, either. This situation is actually happening, right now, with the Copenhagen and Many Worlds interpretations of quantum physics. Copenhagen has the clumsy ‘decoherence’, Many Worlds has the elegant, well, many worlds. The event that supports Many Worlds strongly but also supports Copenhagen weakly is the double-slit experiment.
Bad example. Decoherence is a phenomenon that exists in any interpretation of quantum mechanics, and is heavily used in MWI as a tool to explain when branches effectively no longer interact.

I think he meant wave-function collapse.
But the Copenhagen interpretation has no defense. It doesn’t even make sense.
Decoherence is a major concept in MWI. Maybe if you learned the arguments on both sides the situation would be clearer to you.
I think you’ve basically given up on the possibility of argument reaching a conclusion, without even learning the views of both sides first. There are conclusive arguments to be found—on this topic and many others—and plenty of unanswered and unanswerable criticisms of Copenhagen.
Conclusive doesn’t mean infallible, but it does mean that it actually resolves the issue and doesn’t allow for:
easily dismantled by proponents of the old theory
The original statement was:
Now, one of these theories is a little older, a little more supported by scientists, a little clunkier, a little less parsimonious.

Clunkier is a criticism.
See the essay on knowledge: http://fallibleideas.com/
Or read Deutsch’s books.