I’m afraid just about everything here is wrong.

at some point we need something fundamentally non-algorithmic
No. Our brains are already implementing lots of algorithms. So far as we know, anything human beings come up with—however creative—is in some sense the product of algorithms. I suppose you could go further back—evolution, biochemistry, fundamental physics—but (1) it’s hard to see how those could actually be relevant here and (2) as it happens, so far as we know those are all ultimately algorithmic too.
we will always have to rely on some non-algorithmic intelligence to find more intelligent solutions.
No (not even if you were right about ultimately needing something fundamentally non-algorithmic). Suppose you have some initial magic non-algorithmic step where the Finger of God implants intelligence into something (a computer, a human being, whatever). After that, that intelligent thing can design more intelligent things which design more intelligent things, etc. The alleged requirement to avoid an infinite regress is satisfied by that initial Finger-of-God step, even if everything after that is algorithmic. There’s no reason to think that continued non-algorithmic stuff is called for.
we have no reason to suppose we can’t find another more powerful one.
That might be true. It might even be true—though I don’t think you’ve given coherent reasons to think so—that there’ll always be a possible Next Big Thing that can’t be found algorithmically. So what? A superintelligent AI isn’t any less useful, or any less dangerous, merely because a magical new-AI-creating process might be able to create an even more superintelligent AI.
No algorithm can determine the simple axioms of the natural numbers from anything weaker.
It is not clear that this means anything. You certainly have given no reasons to believe it.
There is simply no way to derive the axioms from anything that doesn’t already include it.
I think you are confusing derivations within some formal system such as Peano arithmetic (where, indeed, the only way to get the axioms is to begin with them, or with some other axioms that imply them) and (a quite different sort of) derivations outside that formal system, such as whatever Peano did to arrive at his axioms. I know of no reason to believe that the latter is fundamentally non-algorithmic, though for sure we don’t know what algorithms would be best.
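One way to make this concrete: rule *discovery*, as opposed to rule following, can itself be run as a program. Here is a deliberately tiny sketch (my own illustration, with a made-up candidate space, not anything from the exchange) in which a brute-force search recovers the successor rule from observed examples; "successor" is nowhere built in as an axiom.

```python
# Toy sketch: finding a rule algorithmically. A brute-force search over a
# tiny expression space recovers the successor rule from observations.
from itertools import product

observations = [(0, 1), (1, 2), (2, 3), (5, 6)]  # observed pairs (n, f(n))

# Candidate rules: expressions in n with one operator and a small constant.
candidates = [(f"n {op} {k}", op, k) for op, k in product("+-*", range(4))]

def apply_rule(op, k, n):
    return {"+": n + k, "-": n - k, "*": n * k}[op]

# Keep the first candidate consistent with every observation.
found = next(
    name
    for name, op, k in candidates
    if all(apply_rule(op, k, n) == m for n, m in observations)
)
print(found)  # → n + 1
```

Nothing here is meant as a serious model of how Peano worked, only as an existence proof that "determine a rule" is the kind of task an algorithm can perform.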
general intelligence necessarily has to transcend rules
I know of no reason to believe this, and it seems to me that if it seems true it’s because what you imagine when you think about following rules is very simple rule-following, the sort of thing that might be done by a computer program at most a few pages in length running on a rather slow computer. In particular …
since at the very least the rules can’t be determined by rules
Whyever not? They have to be different rules, that’s all.
Instead, we should expect a singularity that happens due to emergent intelligence.
“Emergence” is not magic.
not just one particular kind of intelligence like formal reasoning used by computers
Well, that might well be correct, in the sense that good paths to AI might well involve plenty of things that aren’t best thought of as “formal reasoning”. (Though, if they run on conventional computers, they will be equivalent in some sense to monstrously complicated systems of formal reasoning.)
You didn’t really respond to my argument. You just said, “It’s all algorithmic, basta.” The problem is that there is no algorithmic way to determine any algorithm, since if you try to find an algorithm for the algorithm you only have a bigger problem of determining that algorithm.
The universe can’t run solely on algorithms, unless you invoke “God did it! He created the first algorithm” or “The first algorithm just appeared randomly out of nowhere”. I think this statement is ridiculous, but there is no refutation for dogma. If the universe were so absurd, I could just as well be a Christian fundamentalist or randomly do nonsensical things (since it’s all random either way).
No algorithm can determine the simple axioms of the natural numbers from anything weaker.
It is not clear that this means anything. You certainly have given no reasons to believe it.
What? The axioms of the natural numbers can’t be determined because they are axioms. If that’s not true, derive “0 is a natural number” and “1 is the successor of 0” without any notion of numbers.
It means that there is no way that an AI could invent the natural numbers. Hence there are important inventions that AIs can’t make—in principle.
There is simply no way to derive the axioms from anything that doesn’t already include it.
I think you are confusing derivations within some formal system such as Peano arithmetic (where, indeed, the only way to get the axioms is to begin with them, or with some other axioms that imply them) and (a quite different sort of) derivations outside that formal system, such as whatever Peano did to arrive at his axioms. I know of no reason to believe that the latter is fundamentally non-algorithmic, though for sure we don’t know what algorithms would be best.
Instead of asserting that, just try to derive the simplest axioms of arithmetic from something that’s not more complex (which of course can’t always work to arrive at the axioms, since we have only a limited number of complex systems). It doesn’t work. The axioms of arithmetic are irreducibly simple—too simple to be derived.
I know of no reason to believe this, and it seems to me that if it seems true it’s because what you imagine when you think about following rules is very simple rule-following, the sort of thing that might be done by a computer program at most a few pages in length running on a rather slow computer. In particular …
Not at all! It doesn’t matter how complex the rules are. You can’t go beyond the axioms of the rules, because that is what makes the rules rules. Yet still it is easily possible to invent new axioms. This is essential for intelligence, yet an AI can’t do it, since it only works by its axioms. It can do it on a meta-level, for sure, but that’s not enough, since in this case the new axioms are just derived from the old ones. Well, or it uses user input, but in this case the program isn’t a self-contained intelligence anymore.
since at the very least the rules can’t be determined by rules
Whyever not? They have to be different rules, that’s all.
And how are these rules determined? Either you have an infinite chain of rules, which itself can’t be derived from any rule, or you start by picking out a rule without any rule.
Instead, we should expect a singularity that happens due to emergent intelligence.
“Emergence” is not magic.
Really? I think it is, though not of course in any anthropomorphic sense. What else could describe, for example, the emergence of the patterns out of cellular automata rules? It seems to me nature is inherently magical. We just have to be careful not to project our superstitious ideas of magic onto nature.
Even materialists have to rely on magic at the most critical points. Look at the anthropic principle. Or at the question “Where do the laws of nature come from?”. Either we deny that the question is meaningful or important, or we have to admit it is fundamentally mysterious and magical.
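For reference, the cellular-automaton example mentioned above is easy to exhibit in full (my own illustration): the "emergent" complexity is produced by nothing but one fixed, fully algorithmic local rule. Rule 30 is a standard example of a simple rule yielding complex-looking global patterns.

```python
# Elementary cellular automaton, Rule 30: every cell consults only its two
# neighbours and one fixed 8-entry lookup table, yet the global pattern is
# famously complex-looking. Nothing beyond the rule is ever consulted.
RULE = 30  # the 8-bit lookup table, encoded as an integer

def step(cells):
    """Apply the elementary CA rule once, with wrap-around neighbours."""
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31
row[15] = 1  # start from a single live cell
for _ in range(12):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Whether one wants to call the resulting pattern "magical" is exactly the point under dispute; the computation itself is entirely rule-bound.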
No, I didn’t say “it’s all algorithmic, basta”; I said “so far as we know, it’s all algorithmic”. Of course it’s possible that we’ll somehow discover that actually our minds run on magic fairies and unicorns or something, but so far as I can tell all the available evidence is consistent with everything being basically algorithmic. You’re the one claiming to know that that isn’t so; I invite you to explain how you know.
I haven’t claimed that the axioms of arithmetic are derived from something simpler. I have suggested that for all we know, the process by which we found those axioms was basically algorithmic, though doubtless very complicated. (I’m not claiming that that algorithmic process is why the axioms are right. If you’re really arguing not about the processes by which discoveries are made but about why arithmetic is the way it is, then we need to have a different discussion.)
it is easily possible to invent new axioms. This is essential for intelligence, yet an AI can’t do it, since it only works by its axioms.
I’m afraid this is very, very wrong. Perhaps the following analogy will help: suppose I said “It is easily possible to contemplate arbitrarily large numbers, even ones bigger than 2^32 or 2^64. This is essential for intelligence, yet an AI can’t do it, since it only works with 32-bit or 64-bit arithmetic.” That would be crazy, right? Because an AI (or anything else) implemented on a hardware substrate that can only do a very limited set of operations can still do higher-level things if it’s programmed to. A computer can do arbitrary-precision arithmetic by doing lots of 32-bit arithmetic, if the latter is organized in the right way. Similarly, it can cook up new axioms and rules by following fixed rules satisfying fixed axioms, if the latter are organized in the right way.
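The analogy can be made concrete with a minimal sketch (my illustration): arbitrary-precision addition built only from operations on fixed 32-bit pieces. The "substrate" never stores anything bigger than one limb, yet the composite procedure handles numbers of any size.

```python
# Arbitrary-precision addition from fixed-width pieces: numbers are
# little-endian lists of 32-bit "limbs", and each step is limb-sized.
BASE = 2 ** 32

def add_bignum(a, b):
    """Add two numbers given as little-endian lists of 32-bit limbs."""
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
        out.append(s % BASE)   # each stored limb stays below 2**32
        carry = s // BASE      # the carry-out a 32-bit adder would produce
    if carry:
        out.append(carry)
    return out

def to_int(limbs):  # checking helper only
    return sum(d * BASE ** i for i, d in enumerate(limbs))

x = [0xFFFFFFFF, 0xFFFFFFFF]  # the limbs of 2**64 - 1
print(to_int(add_bignum(x, [1])) == 2 ** 64)  # → True
```

This is essentially how real bignum libraries work: higher-level capability from a lower-level substrate, given the right organization.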
And how are these rules determined?
Depends how far back the chain of causation you want to go. There’ll be some rules programmed into the computer by human beings. Those were determined by whatever complicated algorithms human brains execute. Those were determined by whatever complicated algorithms human cultures and biological evolution execute. Those were determined by … etc. As you go further back, you get algorithms with less direct connection to intelligence (ours, or a computer’s, or whatever). Ultimately, you end up with whatever the basic laws of nature are, and no one knows those for sure. (But, again, so far as anyone knows they’re algorithmic in nature.)
So: no infinite chain, probably (though it’s not clear to me that there’s anything actually impossible about that); you start with whatever the laws of nature are, and so far as anyone knows they just are what they are. (I suppose you could try to work that up into some kind of first-cause argument for the existence of God, but I should warn you that it isn’t likely to work well.)
Really? I think it [emergence] is [magic] … It seems to me nature is inherently magical.
Oh. Either you’re using the word “magical” in a nonstandard way that I don’t currently understand, or at least one of us is so terribly wrong about the nature of the universe that further discussion seems unlikely to be helpful.
The universe can’t run solely on algorithms, unless you invoke “God did it! He created the first algorithm” or “The first algorithm just appeared randomly out of nowhere”. I think this statement is ridiculous, but there is no refutation for dogma. If the universe were so absurd, I could just as well be a Christian fundamentalist or randomly do nonsensical things (since it’s all random either way).
As a general rule, arguments which rely on having exhausted the hypothesis space are weak. You can’t say, “It can’t be algorithms, because algorithms don’t solve the problem of the first cause.” Well, so what? Neither do the straw men you suggest. Neither, indeed, do “emergence” or “magic”, which aren’t explanations at all. It’s one of those hard problems—it doesn’t just trouble positions you disagree with.
The axioms of the natural numbers can’t be determined because they are axioms. If that’s not true, derive “0 is a natural number” and “1 is the successor of 0” without any notion of numbers.
Again, they cannot be derived within the formal system where they are axioms. They can be determined in a different system which uses distinct axioms or derivation rules. This is, more or less, how you could interpret the parent comment.
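A concrete instance of "determined in a different system" (my own sketch of the standard construction): in set theory the Peano axioms come out as theorems, not primitives. Von Neumann's construction takes 0 := {} and succ(n) := n ∪ {n}, with no prior notion of "number" anywhere.

```python
# Von Neumann naturals modelled with frozensets: "0 is a natural number" and
# "1 is the successor of 0" are derived from set operations alone.
def zero():
    return frozenset()

def succ(n):
    return n | {n}  # n ∪ {n}

one = succ(zero())
two = succ(one)

print(one == frozenset({zero()}))       # → True   (1 = {0})
print(two == frozenset({zero(), one}))  # → True   (2 = {0, 1})
print(len(two))                         # → 2      (n has exactly n elements)
```

The point is not that sets are "simpler" than numbers in any absolute sense, only that axioms of one system can be theorems of another.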
The axioms of arithmetic are irreducibly simple—too simple to be derived.
Your argument seems to be:
Humans have derived arithmetic.
Arithmetic can’t be algorithmically derived from a simpler system.
Therefore, humans are not algorithmic.
It seems that you are equivocating in your demands. Your original assertion is that an algorithm can’t derive (in this case, meaning invent) formal arithmetic, but the quoted argument supports another claim, namely that the formalisation of arithmetic is the most austere possible. But this claim is not (at least not obviously) relevant to the original question of whether intelligence is algorithmic or not. Humans haven’t derived formal arithmetic from a simpler formal system. Removing the equivocation, the argument is a clear non-sequitur:
Humans have invented arithmetic.
Arithmetic can’t be simplified.
Therefore, humans are not algorithmic.
What else could describe, for example, the emergence of the patterns out of cellular automata rules? It seems to me nature is inherently magical.
What do you mean by magical? Saying “emergence is magical” doesn’t look like a description.
I think this statement is ridiculous
I would suggest being more careful with such statements; it comes across as confrontational.