For example, a musician friend of mine who attended my PhD defense commented on what she said was a surreal experience: I was talking in English, and most of the words she knew, but most of what I said was meaningless to her.
That's because you weren't really speaking English; you were speaking the English words for math terms related to physics. The people who spoke the relevant math you were alluding to could follow; those who didn't could not, because they didn't have concrete mathematical ideas to tie the words to. It's not just a matter of jargon, it's an actual language barrier. I think you'd find that, with a jargon cheat sheet, you could follow many non-mathematical PhD defenses just fine.
The same thing happens in music, which is its own language (after years of playing, I find I can “listen” to a song by reading sheet music).
Is your argument, essentially, that you think a machine intelligence can create a mathematics humans cannot understand, even in principle?
“Mathematics” may be the wrong word for it. I totally think that a transhuman intelligence can create concepts and ideas which a mere human cannot understand even when they are patiently explained. I am quite surprised that other people here don't find it an obvious default.
My impression was that the question was not whether it would have those concepts, since as you say that's obvious, but whether they would necessarily be referenced by the utility function.
Sure, but I find “can't understand” sort of fuzzy as a concept. I.e., I wouldn't say I ‘understand’ compactification and Calabi–Yau manifolds the same way I understand sheet music (or the same way I understand the word green), but I do understand them all in some way.
It seems unlikely to me that there exist concepts that can’t be at least broadly conveyed via some combination of those. My intuition is that existing human languages cover, with their descriptive power, the full range of explainable things.
For example: it seems unlikely there exists a law of physics that cannot be expressed as an equation. It seems equally unlikely there exists an equation I would be totally incapable of working with. Even if I'll never have the insight that led someone to write it down, if you give it to me, I can use it to do things.
Human languages can encode anything, but a human can't understand most things valid in human languages; most notably, extremely long things, and numbers specified with a lot of digits that actually matter. Just because you can count in binary on your hands does not mean you can comprehend the code of an operating system expressed in that format.
Humans seem “concept-complete” in much the same way your desktop PC seems Turing complete. Except it's much more easily broken, because the human brain has absurdly shitty memory.
numbers specified with a lot of digits that actually matter
That's why we have paper; I can write it down. “Understanding” and “remembering” seem somewhat orthogonal here. I can't recite Moby Dick from memory, but I understood the book. If you give me a 20-digit number 123… and I can't hold it but retain “a number slightly larger than 1.23 * 10^19,” that doesn't mean I can't understand you.
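The “retain only the leading digits and the magnitude” move is easy to make concrete; a minimal sketch in Python (the specific 20-digit number is made up for illustration):

```python
# A 20-digit number is too long to hold in memory verbatim, but its
# order of magnitude and leading digits are easy to retain.
n = 12345678901234567890  # hypothetical 20-digit number

# Keep only three significant figures plus the exponent.
approx = format(n, ".2e")

print(approx)       # 1.23e+19
print(len(str(n)))  # 20
```

Note that the faithful summary of a 20-digit number beginning 123… is about 1.23 × 10^19, since a 20-digit number lies between 10^19 and 10^20.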
Just because you can count in binary on your hands does not mean you can comprehend the code of an operating system expressed in that format.
Print it out for me, and give me enough time, and I will be able to understand it, especially if you give me some context.
Yes, you can encode things in a way that makes them harder for humans to understand; no one would argue that. The question is: are there concepts that are simply impossible to explain to a human? I point out that while I can't remember a 20-digit number, I can derive pretty much all of classical physics, so certainly humans can hold quite complex ideas in their heads, even if they aren't optimized for storing long numbers.
You can construct a system consisting of a planet's worth of paper and pencils and an immortal version of yourself (or a vast dynasty of successors) that can understand it, if nothing else because it's Turing complete and can simulate the AGI. This is not the same as you understanding it while still remaining fully human. Even if you did somehow integrate the paper-system sufficiently, that'd be just as big a change as uploading and intelligence-augmenting the normal way.
The approximation thing is why I specified digits mattering. It won't help one bit when talking about something like Gödel numbering.
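Why approximation fails for such encodings can be made concrete with a toy stand-in for Gödel numbering. This sketch uses a bytes-to-integer encoding rather than the classical prime-power scheme, but it has the same every-digit-matters property:

```python
# Toy stand-in for Gödel numbering: encode a statement as one integer.
# (Real Gödel numbering uses prime-power encodings; bytes-to-int is a
# simpler scheme in which, likewise, every digit matters.)

def encode(statement: str) -> int:
    return int.from_bytes(statement.encode("utf-8"), "big")

def decode(n: int) -> str:
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode(
        "utf-8", errors="replace"
    )

n = encode("1+1=2")
print(n)          # one big integer standing for the statement
print(decode(n))  # "1+1=2" -- the exact digits recover the statement

# Round to three significant figures, as one would when "approximating":
rounded = round(n, -(len(str(n)) - 3))
print(decode(rounded))  # garbage -- the encoded statement is destroyed
```

Retaining “roughly 2.11 × 10^11” preserves the number's magnitude but none of its meaning, which is exactly the sense in which the digits matter.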
The approximation thing is why I specified digits mattering.
I understand; my point was simply that “understanding” and “holding in your head at one time” are not at all the same thing. “There are numbers you can't remember if I tell them to you” is not at all the same claim as “there are ideas I can't explain to you.”
Neither of your cases is unexplainable: give me the source code in a high-level language instead of binary, and I can understand it. If you give me the binary code and the instruction set, I can convert it to assembly and then to a higher-level language, via disassembly.
Of course, I can deliberately obfuscate an idea and make it harder to understand, either by encryption or by presenting it in the most obtuse possible form; that is not the same as an idea that fundamentally cannot be explained.
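The binary-to-readable-form route has a small-scale analogue in Python's own standard library: the `dis` disassembler turns opaque bytecode back into a listing a human can follow. A sketch (the function `add_one` is a made-up example):

```python
import dis

# A compiled function is just bytecode -- opaque bytes -- until a
# disassembler renders it in a form a human can read.
def add_one(x):
    return x + 1

print(add_one.__code__.co_code)  # raw bytecode: hard to follow
dis.dis(add_one)                 # disassembly: readable opcode listing
```

The underlying content is identical in both printouts; only the encoding differs, which is the point being argued above.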
“There are numbers you can’t remember if I tell them to you” is not at all the same claim that “there are ideas I can’t explain to you.”
But they might be related. Perhaps there are interesting and useful concepts that would take, say, 100,000 pages of English text to write down, such that each page cannot be understood without holding most of the rest of the text in working memory, and such that no useful, shorter, higher-level version of the concept exists.
Humans can only think about things that can be taken one small piece at a time, because our working memories are pretty small. It’s plausible to me that there are atomic ideas that are simply too big to fit in a human’s working memory, and which do need to be held in your head at one time in order to be understood.
It seems unlikely to me that there exist concepts that can’t be at least broadly conveyed via some combination of those. My intuition is that existing human languages cover, with their descriptive power, the full range of explainable things.
My intuition is the exact opposite.
it seems unlikely there exists a law of physics that cannot be expressed as an equation
I can totally imagine that some models are not reducible to equations, but that’s not the point, really.
Even if I'll never have the insight that led someone to write it down, if you give it to me, I can use it to do things.
Unless this “use” requires more brainpower than you have… You might still be able to work with some simplified version, but you’d have to have transhuman intelligence to “do things” with the full equation.
But that seems incredibly nebulous. What is the exact failure mode?