It shows you can know general facts about a system that creates new knowledge, despite not knowing all the specific facts/bits of knowledge that it will create. We can know Kasparov will beat us despite not knowing exactly what moves he’ll play; we can know that an AGI will destroy/save/whatever us despite not knowing exactly how.
Chess playing programs don’t create new knowledge.
So the argument is wrong unless I fix it (human chess players do create new knowledge).
Small amounts of new knowledge in very limited areas are predictable. Like writers can predict they will finish writing a book (even if they haven’t worked out 100% of the plot yet) in advance.
This doesn’t have much to do with large scale prediction that depends on new types of knowledge, does it?
Whether you call it new knowledge or not is not relevant. Nor are new types of knowledge generally what is relevant (aside from the not-at-all-small issue that “type” isn’t a well-defined notion in this context).
Like writers can predict they will finish writing a book (even if they haven’t worked out 100% of the plot yet) in advance.
Actually, writers sometimes start a book and find part way through that they don’t want to finish, or the book might even change genres in the process of writing. If you prefer another example, I can predict that Brandon Sanderson’s next Mistborn book will be awesome. I can predict that it will sell well, and get good reviews. I can even predict a fair number of plot points just based on stuff Sanderson has done before and various comments he has made. But, at the same time, I can’t write a novel nearly as well as he does, and if he and I were to have a novel writing contest, he will beat me. I don’t know how he will beat me, but he will.
Similarly, a sufficiently smart AI has the same problem. If it decides that human existence is non-optimal for it to carry out its goals, then it will try to find ways to eliminate us. It doesn’t matter if all the ways it comes up with of doing so are in a fairly limited set of domains. If it is really good at chemistry it might make nasty nanotech to reduce organic life into constituent atoms. If it is really good at math it might break all our cryptography, and then hack into our missiles and trigger a nuclear war (this one is obvious enough that there are multiple movies about it). If it is really good at social psychology it might manipulate us over a few years into just handing over control to it.
Just as I don’t know how Kasparov will beat me but I know he will, I don’t know how a sufficiently intelligent AI will beat me, but I know it will. There may be issues with how intelligent it needs to be, and whether or not an AGI is likely to undergo fast, substantial, recursive self-improvement to get that intelligent is an issue of much discussion on LW (Eliezer considers it likely; some other people, such as myself, consider it unlikely), but the basic point about sufficient intelligence seems clear.
Whether you call it new knowledge or not is not relevant.
Considering that Deutsch was talking about new knowledge, and I use the same terminology as him, it is relevant.
Actually, writers sometimes start a book and find part way through that they don’t want to finish,
I know that? And if I played Kasparov I might win. It’s not a 100% guaranteed prediction.
@Sanderson: you understand what kind of thing he’s doing pretty well. writers are a well known phenomenon. the less you know what processes he uses to write, what tradition he’s following—in general what’s going on—the less you can make any kind of useful predictions.
If it decides that human existence is non-optimal for it to carry out its goals
why would it?
Deutsch doesn’t think AGIs will do fast recursive self-improvement. They can’t, because the first ones will already be universal and there’s nothing much left to improve besides their knowledge (not their design, besides making it faster). Improving knowledge with intelligence is the same process for AGI and humans. It won’t magically get super fast.
And if I played Kasparov I might win. It’s not a 100% guaranteed prediction.
The fallacy of gray? Between zero chance of winning a lottery, and epsilon chance, there is an order-of-epsilon difference. If you doubt this, let epsilon equal one over googolplex.
No, the fallacy of you not paying attention to the context of statements, and their purpose.
I said authors predict they will finish books.
Someone told me that those predictions are not 100% accurate.
I said, basically: so what? And I pointed out that his same “argument” works just as well (that is, not at all) in other cases.
So the other guy committed the “fallacy of gray”, not me. And you didn’t read carefully.
There is a not-order-of-epsilon difference between an order-of-epsilon difference and a plausible difference. You winning against Kasparov vs. a writer finding part way through a book that they don’t want to finish.
You’ve assumed i’m a chess beginner. You did the same thing when you assumed i never beat any halfway decent chess program. I’m actually a strong player and don’t have a 0.0001% chance against kasparov. i have friends who are GMs who i can play decent games with.
also, didn’t i specify a writer can make such a prediction before being 100% done? e.g. at 99.9%. or, perhaps 90%. it depends. but i didn’t just say when part way done. you don’t read carefully enough.
here it was
Like writers can predict they will finish writing a book (even if they haven’t worked out 100% of the plot yet) in advance.
That you are not an order-of-Kasparov chess player is the right prior, even if in fact it so turns out that you happen to be Kasparov himself. These people are rare, and you’ve previously given no indication to me that you’re one of them. But again, LCPW.
It’s not correct to assume a statement i make is wrong, based on your prior about how much I know about chess. I used my own knowledge of how much i know about chess when making statements. you should respect that knowledge instead of ignoring it and assuming i’m making basic LCPW mistakes (btw Popper made that same point too, in a different way. of course i know it.). or at least question my statement instead of assuming i’m wrong about how much i know about chess. you’re basically assuming i’m an idiot who makes sloppy statements. if you really think that, you shouldn’t even be talking to me.
btw i’ve noticed you didn’t acknowledge your other mistakes or apologize. is that because you refuse to change your mind, or what?
you should respect that knowledge instead of ignoring it and assuming i’m making basic LCPW mistakes
It is easily observable in this thread that you are making LCPW mistakes. You haven’t solved the game of chess, therefore the Least Convenient Possible World contains an AI powerful enough to explore the entire game tree of chess, solve the game, and beat you every time.
You could make a program like that. So what? No one gave an argument why the possibility of making such a program actually contradicts Deutsch. Such a program wouldn’t be creating knowledge as it played (in Deutsch’s terminology), it’d be doing some pretty trivial math (the hard part being the memory and speed for dealing with all the data), so it can’t be an example of the unpredictability of knowledge creation in Deutsch’s sense.
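To be concrete about what such a program would be doing, here is a minimal sketch of exhaustive game-tree solving, with tic-tac-toe standing in for chess (whose tree is far too large to enumerate this way); the procedure of trying every line of play and backing the results up is the same:

```python
# Exhaustive solving of a toy game (tic-tac-toe) by negamax over the full game tree.
# This is the "pretty trivial math" being discussed: no learning, just bookkeeping.

from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def solve(board, player):
    """Game value for `player` to move, with perfect play: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if '.' not in board:
        return 0  # board full, no winner: draw
    opponent = 'O' if player == 'X' else 'X'
    # Try every legal move; the opponent's best reply determines our value.
    return max(-solve(board[:i] + player + board[i+1:], opponent)
               for i, cell in enumerate(board) if cell == '.')

if __name__ == '__main__':
    print(solve('.' * 9, 'X'))  # prints 0: tic-tac-toe is a draw under perfect play
```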
My initial point was merely that a statement was false. I think that’s important. We should try to correct our mistakes, starting with the ones we see first, and then after correcting them we might find more.
If that is true (and you don’t just mean that it only generated the knowledge when it solved the game initially, and is merely looking up that knowledge during the game), then I don’t care much about whatever it is that Deutsch calls knowledge.
It was not false. You were just confused about the referent of “chess AI”.
It so happens that I acknowledged this mistake.
I saw. That’s no reason not to do the same with others. It doesn’t change that you imagined a convenient world where i’m bad at chess in order to dispute the specific details of an argument i made which had a substantive point that could still be made using other details. It doesn’t change that you misread my position in the stuff about authors. And so on.
It doesn’t change that you imagined a convenient world where i’m bad at chess in order to dispute the specific details of an argument i made which had a substantive point that could still be made using other details.
On an absolute scale, you are bad at chess.
I think you missed my point about the books. I may not have made it very clear, and so I apologize. The point was that even in an area which you consider to be a small type of knowledge, the actual results can be extremely unpredictable.
Then define the term.
So what? How is that at all relevant? It isn’t 100% guaranteed that if I jump off a tall building I will then die. That doesn’t mean I’m going to try. You can’t use the fact that something isn’t definite as an argument to ignore the issue wholesale.
Deutsch doesn’t think AGIs will do fast recursive self-improvement. They can’t, because the first ones will already be universal and there’s nothing much left to improve besides their knowledge (not their design, besides making it faster).
Ok. So I’m someone who finds extreme recursive self-improvement to be unlikely, and I find this to be a really unhelpful argument. Improvements in speed matter. A lot. Imagine, for example, that our AI finds a proof that P=NP, and that this proof gives an O(n^2) algorithm for solving your favorite NP-complete problem, and that the constant in the O is really small. That means that the AI will do pretty much everything faster, and the more computing power it gets the more disparity there will be between it and the entities that don’t have access to this algorithm. It wants to engineer a new virus? Oh what luck, protein folding is under many models NP-complete. The AI decides to improve its memory design? Well, that involves graph coloring and the traveling salesman, also NP-complete problems. The AI decides that it really wants access to all the world’s servers and to add them to its computational power? Well, most of those have remote access capability that is based on cryptographic problems which are much weaker than NP-complete. So, um, yeah. It got those too.
Now, this scenario seems potentially far-fetched. After all, most experts consider it to be unlikely that P=NP, and consider it to be extremely unlikely that there’s any sort of fast algorithm for NP-complete problems. So let’s just assume instead that the AI tries to make itself a lot faster. Well, let’s see what our AI can do. It could give itself some nice quantum computing hardware and then use Shor’s algorithm to break factoring in polynomial time, and then the AI can just take over lots of computers and have fun that way.
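To give a rough sense of why a fast algorithm for an NP-complete problem would matter so much, here is a back-of-the-envelope comparison; the machine speed is an arbitrary illustrative assumption, and the 2^n column stands in for the brute-force search we are otherwise stuck with:

```python
# Rough scaling comparison: exponential brute force vs. the hypothetical O(n^2)
# algorithm from the scenario above. The ops-per-second figure is an assumption
# chosen only to make the numbers concrete, not a real benchmark.

OPS_PER_SECOND = 10**9  # assumed machine speed, purely illustrative

def seconds(ops):
    return ops / OPS_PER_SECOND

for n in (20, 40, 60, 80, 100):
    brute_force = seconds(2**n)   # exhaustive search over 2^n candidates
    polynomial = seconds(n**2)    # the hypothetical fast algorithm
    print(f"n={n:3d}  brute force ~{brute_force:.3g} s   hypothetical O(n^2) ~{polynomial:.3g} s")
```

At n=60 the brute-force column is already around a billion seconds (decades), while the quadratic column stays far below a microsecond, which is the disparity being described.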
Improving knowledge with intelligence is the same process for AGI and humans. It won’t magically get super fast
This is not at all obvious. Humans can’t easily self-modify our hardware. We have no conscious access to most of our computational capability, and our computational capability is very weak. We’re pathetic sacks of meat that can’t even multiply four or five digit numbers in our heads. We also can’t save states and swap out cognitive modules. An AGI can potentially do all of that.
Don’t underestimate the dangers of a recursively self-improving entity or the value of speed.
See the essay on knowledge: http://fallibleideas.com/
Or read Deutsch’s books.
It isn’t 100% guaranteed that if I jump off a tall building I will then die.
Indeed. You’re the one who told me that writers sometimes don’t finish books… They aren’t 100% guaranteed to. I know that. Why did you say that?
Imagine, for example, that our AI finds a proof that P=NP, and that this proof gives an O(n^2) algorithm for solving your favorite NP-complete problem, and that the constant in the O is really small.
Umm. Imagine a human does the same thing. What’s your point? My/Deutsch’s point is AGIs have no special advantage over non-artificial intelligences at finding a proof like that in the first place.
We’re pathetic sacks of meat that can’t even multiply four or five digit numbers in our heads.
That’s not even close to true. First of all, I could do that if I trained a bit. Many people could. Second, many people can memorize long sequences of the digits of pi with some training. And many other things. Ever read about Renshaw and how he trained people to see faster and more accurately?
The point about jumping off a building was due to a miscommunication with you. See my remark here; I then misinterpreted your reply. Illusion of transparency is annoying. The section concerning that is now very confused and irrelevant. The relevant point I was trying to make regarding the writer is that even when knowledge areas are highly restricted, making predictions about what will happen is really difficult.
And yes, I’ve read your essays, and nothing there is at all precise enough to be helpful. Maybe taboo knowledge and make your point without it?
Umm. Imagine a human does the same thing. What’s your point? My/Deutsch’s point is AGIs have no special advantage over non-artificial intelligences at finding a proof like that in the first place.
There are a lot of differences. Humans won’t in general have an easy time modifying their structure. Moreover, human values fall in a very small cluster in mindspace. Humans aren’t, for example, paperclip maximizers or pi digit calculators. There are two twin dangers: an AGI has advantages in improving itself, and an AGI is unlikely to share our values. Those are both bad.
That’s not even close to true. First of all, I could do that if I trained a bit. Many people could. Second, many people can memorize long sequences of the digits of pi with some training.
Sure. Humans can do that if they train a lot. A simple computer can do that with much less effort, so an AGI which uses a digital base at all similar to a human won’t need to spend days training to be able to multiply 5 digit numbers quickly. And if you prefer a slightly more extreme example, a computer can factor a random 15 digit number in seconds with minimal optimization. A human can’t. And no amount of training will allow you to do so. Computers can do a lot of tasks we can’t. At present, we can do a lot of tasks that they can’t. A computer that can do both sets of tasks better than we can is the basic threat model.
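For what it’s worth, the factoring claim is easy to check; the sketch below uses plain trial division (no clever number theory), and the specific 15-digit number is just an arbitrary example:

```python
# Factoring a 15-digit number by plain trial division. Even in the worst case
# (a 15-digit prime) this finishes in a few seconds on ordinary hardware,
# which is the "minimal optimization" point being made above.

def factor(n):
    """Return the prime factorization of n as a list of primes (trial division)."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1 if d == 2 else 2  # after 2, only try odd divisors
    if n > 1:
        factors.append(n)
    return factors

if __name__ == '__main__':
    print(factor(123456789012347))  # an arbitrary 15-digit example
```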
There are a lot of differences. Humans won’t in general have an easy time modifying their structure.
But it doesn’t matter because it’s universal (we are universal thinkers, we can create any ideas that any thinking things can). The implementation details of universal things are not super important because the repertoire remains the same.
And if you prefer a slightly more extreme example, a computer can factor a random 15 digit number in seconds with minimal optimization.
Not by the method of thinking. A human can factor it using a calculator. An AI could also factor it using a calculator program. An AI doing it the way humans do—by thinking—won’t be as fast!
Maybe taboo knowledge and make your point without it?
But it’s the central concept of epistemology. If you want to understand me or Popper you need to learn to understand it. Many points depend on it.
And yes, I’ve read your essays, and nothing there is at all precise enough to be helpful.
Would you like to know the details of those essays? If you want to discuss them I can elaborate on any issue (or if I can’t, I will be surprised and learn something, at least about my ignorance). If you want to discuss them, can we go somewhere else (that has other people who will know the answers to your questions too)? Tell me and I’ll PM you the place if you’re interested (I don’t want too many random people to come, currently).
BTW no matter what you write people always complain. They always have questions or misconceptions that aren’t the ones you addressed. No writing is perfect. Even if you were to write all of Popper’s books, you’d still get complaints...
(we are universal thinkers, we can create any ideas that any thinking things can)
This seems about as likely as saying “We are universal runners, we can run on any surface that any running thing can”. If you’ve been keeping up, you’d have heard that the brain is a lump of biological tissue, and as such is subject to limitations imposed by its substrate.
And btw we can run on any surface, that any running thing can, with the aid of technology. What’s the problem?
And instead of
If you’ve been keeping up
You should say what it really means:
I’m better than you, so I don’t need to argue, condescension suffices
If you actually want an explanation of the ideas, apologize and ask nicely. If you just want to flame me for contradicting your worldview, then go away.
Mecho-Gecko disagrees...
If you have been updating your worldview in light of evidence streaming in from neuroscience and biology, you’d have heard …
You realize we can build new bodies with technology? Or maybe you don’t...
And in the analogy to thinking machines, is that more like our current brains, or more like the kind of brains we will be building and calling artificial intelligence?
Remind me again; these new bodies are going to run better on some surfaces? In the analogy, these artificial brains are going to think differently?
You’re funny. First you make up an analogy you think is false to say I’m wrong. Then you say geckos are fundamentally superior to technology, while linking to a technology. Now you’re saying I’m wrong because the analogy is true. Do you think at any point in this you were wrong?
(Do note that I linked to mecho-gecko as an example of a technology that can run on a surface that we, even using that technology, would not be able to run on. The actual gecko is irrelevant, I just couldn’t find a clip that didn’t include the comparison.)
No, I don’t. I am aware that you also think you have not been wrong at any point during this either, which has caused me to re-evaluate my own estimation of my correctness.
Having re-evaluated, I still believe I have been right all along.
To expand further on the analogy: the human brain is not a universal thinker, any more than the human leg is a universal runner. The brain thinks, and the leg runs, but they both do so in ways that are limited in some aspects, underperform in some domains, and suffer from quirks and idiosyncrasies. To say that the kind of thinking that a human brain does is the only kind of thinking, and AIs won’t do any different, is isomorphic to saying that the kind of running a human leg does is the same kind of running that a gecko’s leg does.
Do you have an argument that our brains do not have universality WRT intelligence?
Do you understand what the theory I’m advocating is and says? Do you know why it says it?
This constitutes a pretty good argument against our brains having universal intelligence.
I thought I understood what you meant by “universal intelligence”—that any idea that is conceivable, could be conceived by a human mind—but I am open to the possibility you are referring to a technical term of some sort. If you’d care to enlighten me?
Previously you refused to ask. Why did you change your mind?
Do you know what the arguments that human minds are universal are? I asked this in my previous comment. You didn’t engage with it. Do you not consider it important to know that?
I was unable to find any relevant argument at the link. It did beg the question several times (which is OK if it was written for a different purpose). Quote the passage you thought was an argument.
Previously you refused to ask. Why did you change your mind?
I re-read our conversation looking for possible hidden disputes of definition. It’s one of the argument resolution tools LessWrong has taught me.
Do you know what the arguments that human minds are universal are?
I don’t claim familiarity with all of them. If you’d care to enlighten me?
I was unable to find any relevant argument at the link.
The strongest part would be this:
If we focus on the bounded subspace of mind design space which contains all those minds whose makeup can be specified in a trillion bits or less, then every universal generalization that you make has two to the trillionth power chances to be falsified.
Conversely, every existential generalization—“there exists at least one mind such that X”—has two to the trillionth power chances to be true.
Why did you think you could tell me what link would refute my position, if you didn’t know what arguments my position consisted of?
BTW you have the concept of universal correct.
Well, I think you do. Except that the part from the link you quoted is talking about a different kind of universality (of generalizations, not of minds). How is that supposed to be relevant?
edit: Thinking about it more, I think he’s got a background assumption where he assumes that most minds in the abstract mind design space are not universal and that they come on a continuum of functionality. Or possibly not that but something else? I do not accept this unargued assumption and I note that’s not what the computer design space looks like.
Because my link wasn’t a refutation. It was a statement of a correct position, with which any kind of universality of minds position is incompatible.
It is easily relevant. Anything we wish to say about universal ideas is a universal generalisation about every mind in mindspace. If you wish to say that all ideas are concepts, for example, that is equivalent to saying that all minds in mindspace are capable of containing concepts.
Why did you say
This constitutes a pretty good argument against our brains having universal intelligence.
If you meant
This constitutes a pretty good statement of the correct position, with no argument against your position.
Do you understand the difference between an argument which engages with someone’s position, and simply a statement which ignores them?
I’ve run into this kind of issue with several people here. In my view, the way of thinking where you build up your position without worrying about criticisms of it, and without worrying about criticizing other positions, is anti-critical. Do you think it’s good? Why is it good? Doesn’t it go wrong whenever there is a criticism you don’t know about, or another way of thinking which is better that you don’t know about? Doesn’t it tend to not seek those things out since you think your position is correct and that’s that?
Do you understand the difference between an argument which engages with someone’s position, and simply a statement which ignores them?
Yes. In practical terms of coming to the most correct worldview, there isn’t much difference. I suspect your Popper fetish has misled you into thinking that arguments and refutations of positions are what matters—what matters is truth, maps-to-reality-ness, correctness. That is, if I have a correct thing, and your thing is incompatible with my thing, due to the nature of reality, your thing is wrong. I don’t need to show you that it’s wrong, or how it’s wrong—the mere existence of my correct thing does more than enough.
I’ve run into this kind of issue with several people here.
I noticed; hence why I caused this particular exchange.
In my view, the way of thinking where you build up your position without worrying about criticisms of it, and without worrying about criticizing other positions, is anti-critical.
We need to insert a few very important things into this description: the way of thinking where you build up your position to match reality as closely as possible without worrying about criticisms of it, and especially without worrying about criticizing other positions, is anti-critical, and pro-truth.
Do you think it’s good? Why is it good? Doesn’t it go wrong whenever there is a criticism you don’t know about, or another way of thinking which is better that you don’t know about? Doesn’t it tend to not seek those things out since you think your position is correct and that’s that?
I do think this new, updated description is good. It’s good because reversed stupidity isn’t intelligence. It’s good because it’s a much better search pattern in the space of all possible ideas than rejecting all falsified ideas. If you have a formal scheme built of Is and Us, then building strings from the rules is a better way to get correct strings than generating random strings, or strings that seem like they should be right, and sticking with them until someone proves they’re not.
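The “formal scheme built of Is and Us” reads like Hofstadter’s MIU system (the axiom “MI” plus four rewrite rules); assuming that is the intended reference, this sketch shows what “building strings from the rules” looks like, and why it beats generating random strings of M, I, and U, almost none of which are theorems:

```python
# Generating MIU theorems by applying the rewrite rules, breadth-first from "MI".
# Every string produced this way is a theorem of the system; a random M/I/U string
# almost certainly is not.

def successors(s):
    """All strings derivable from s by one application of an MIU rule."""
    out = set()
    if s.endswith('I'):                 # Rule I:   xI  -> xIU
        out.add(s + 'U')
    if s.startswith('M'):               # Rule II:  Mx  -> Mxx
        out.add('M' + s[1:] * 2)
    for i in range(len(s) - 2):         # Rule III: III -> U
        if s[i:i+3] == 'III':
            out.add(s[:i] + 'U' + s[i+3:])
    for i in range(len(s) - 1):         # Rule IV:  UU  -> (deleted)
        if s[i:i+2] == 'UU':
            out.add(s[:i] + s[i+2:])
    return out

def theorems(max_length=8):
    """Breadth-first generation of MIU theorems up to a length cap."""
    found, frontier = {'MI'}, {'MI'}
    while frontier:
        frontier = {t for s in frontier for t in successors(s)
                    if len(t) <= max_length} - found
        found |= frontier
    return sorted(found, key=len)

if __name__ == '__main__':
    print(theorems(6))
```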
That is, Popperian philosophy is all about the social rules of belief: you are allowed to believe whatever you like, until it’s falsified, criticized, or refuted. It’s rude, impolite, gauche to continue believing something that’s falsified. As long as you don’t believe anything that’s wrong. And so on.
Here at LessWrong, we have a better truth-seeking method. The Bayesian perspective is a better paradigm. You can’t just beg the question and say it’s not a better paradigm because it lacks criticism or refutation; these are elements of your paradigm that are unnecessary to the Bayesian view.
And if you doubt this: I can show you that the Bayesian perspective is better than the Popperian perspective at coming to the truth. Say there were two scientific theories, both attempting to explain some aspect of the world. Both of these theories are well-developed; both make predictions that, while couched in very different terminology, make us expect mostly the same events to happen. They differ radically in their description of the underlying structure of the phenomenon, but these cash out to more or less the same events. Now, one of these theories is a little older, a little more supported by scientists, a little clunkier, a little less parsimonious. The other is newer, less supported, but simpler. Neither of these theories has had criticisms beyond simple appeals to incredulity directed at them. Neither of these theories has had any real refutations put forward. An event is observed, which provides strong evidence for the newer theory, but doesn’t contradict anything in the older theory.
I put it to you that Popperians would be almost unanimously supporting the first theory—they would have learned of it first, and seen no reason to change—no refutation, etc. Bayesians would be almost unanimously supporting the second theory, because it more strongly predicted this event.
And the Bayesians would be right.
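To make the comparison concrete, here is a minimal worked update with made-up priors and likelihoods (illustrative assumptions, not numbers taken from the scenario above):

```python
# Bayes' rule applied to the two-theory scenario: the event is strongly predicted
# by the newer theory and merely permitted by the older one. All numbers are
# illustrative assumptions.

prior_old, prior_new = 0.6, 0.4        # the older theory starts out better regarded
p_event_given_old = 0.3                # older theory: event allowed but not expected
p_event_given_new = 0.9                # newer theory: event strongly predicted

evidence = prior_old * p_event_given_old + prior_new * p_event_given_new
posterior_old = prior_old * p_event_given_old / evidence
posterior_new = prior_new * p_event_given_new / evidence

print(f"P(old | event) = {posterior_old:.2f}")   # ~0.33
print(f"P(new | event) = {posterior_new:.2f}")   # ~0.67
```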
Upvoted for being merit-worthily well-expressed, despite my desire to see less of this discussion thread in general.
I can’t take too much credit. The entire second half is mostly just what Eliezer was saying in the sequences around Quantum Physics. Well, sure, I can take credit for expressing it well, I guess.
(nods) Yes, the latter is what I was considering meritorious.
I mention it not because it’s a huge deal—it isn’t, and ordinarily I would have just quietly upvoted it—but given that I really don’t want more of the thread that comment is in, I felt obligated to clarify what my upvote meant.
I put it to you that Popperians would be almost unanimously supporting the first theory
As someone who actually knows many Popperians, and as one myself, I can tell you they would not be. The second theory sounds way better, as you describe it.
the way of thinking where you build up your position to match reality as closely as possible without worrying about criticisms of it, and especially without worrying about criticizing other positions, is anti-critical, and pro-truth.
But what if you’re making a mistake? Don’t we need criticism just in case your way of building up the truth has a mistake?
Popper fetish
I see that you do like one kind of criticism: ad-hominems.
if I have a correct thing, and your thing is incompatible with my thing, due to the nature of reality, your thing is wrong.
Logically, yes. But do you have a correct thing? What if you don’t. That’s why you need criticism. Because you’re fallible, and your methods are fallible too, and your choice of methods fallible yet again.
That is, Popperian philosophy is all about the social rules of belief: you are allowed to believe whatever you like, until it’s falsified, criticized, or refuted. It’s rude, impolite, gauche to continue believing something that’s falsified.
As a Popperian far more familiar with the Popperian community than you, let me tell you:
this is wrong. This is not what Popperians think, it’s not what Popper wrote, it’s not what Popperians do.
Where are you getting this nonsense? Now that I’ve told you it’s not what we’re about, will you reconsider and try to learn our actual views before you reject Popper?
As someone who actually knows many Popperians, and as one myself, I can tell you they would not be. The second theory sounds way better, as you describe it.
Can you tell me what process they would use to move over to the new theory? Do keep in mind that everyone started on the first theory—the second theory didn’t even exist around the time the first theory picked up momentum.
You come up with a criticism of the old theory, and an explanation of what the new theory is and how it does better (e.g. by solving the problem that was wrong with the old theory). And people are free the whole time to criticize either theory, and suggest new ones, as they see fit. If they see something wrong with the old one, but not the new one, they will change their minds.
But there is no criticism of the old theory! At least, no criticism that isn’t easily dismantled by proponents of the old theory. There is no problem that is wrong with the old theory!
This is not some thought experiment, either. This situation is actually happening, right now, with the Copenhagen and Many Worlds interpretations of quantum physics. Copenhagen has the clumsy ‘decoherence’, Many Worlds has the elegant, well, many worlds. The event that supports Many Worlds strongly but also supports Copenhagen weakly is the double-slit experiment.
Bad example. Decoherence is a phenomenon that exists in any interpretation of quantum mechanics, and is heavily used in MWI as a tool to explain when branches effectively no longer interact.
I think he meant wave-form collapse.
But the Copenhagen interpretation has no defense. It doesn’t even make sense.
Decoherence is a major concept in MWI. Maybe if you learned the arguments on both sides the situation would be clearer to you.
I think you’ve basically given up on the possibility of arguing reaching a conclusion, without even learning the views of both sides first. There are conclusive arguments to be found—on this topic and many others—and plenty of unanswered and unanswerable criticisms of Copenhagen.
Conclusive doesn’t mean infallible, but it does mean that it actually resolves the issue and doesn’t allow for:
easily dismantled by proponents of the old theory
The original statement was:
Now, one of these theories is a little older, a little more supported by scientists, a little clunkier, a little less parsimonious.
Clunkier is a criticism.