I expect the taboo/explanation to look like a list of 10^20 clips of incomprehensible n-dimensional multimedia, each 1000 hours long and each with a real number attached representing the amount of [untranslatable 92] it has, with a Jupiter brain being required to actually find any pattern.
I’m talking about the simplest possible in-principle expression in a human language being that long and complex.
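To make the shape of that concrete, here is a minimal sketch (my own illustration, in Python, with hypothetical names like Clip and score) of the kind of object being described: an astronomically long list of opaque clips, each tagged with a real number for its amount of [untranslatable 92].

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Clip:
        # One incomprehensible n-dimensional multimedia clip (hypothetical).
        payload: bytes          # opaque n-dimensional data
        duration_hours: float   # e.g. 1000.0

    @dataclass
    class TabooEntry:
        clip: Clip
        score: float  # the real number: amount of [untranslatable 92]

    # The "explanation" is then just a list of ~10**20 such entries, with no
    # pattern relating score to content that a human could hope to find.
    Explanation = List[TabooEntry]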
Ah, I see. Even if that were a possibility, I’m not sure it would be such a problem. I’m happy to allow the AGI to spend a few centuries manipulating our culture, our literature, our public discourse, etc., in the name of making its goals clear to us. Our understanding something doesn’t depend on our being able to understand a single complex expression of it, or to produce one. It’s not as if we all understood our own goals from day one either, and I’m not sure we totally understand them now. Terminal goals are basically pretty hard to understand, but I don’t see why we should expect the (terminal) goals of a super-intelligence to be any harder than our own.
I expect it to be false in at least some of the cases being talked about, because it’s not 3 but 100 levels, and each level makes the explanation 1000 times longer, because complex explanations and examples are needed for almost every “word”.
It may be that there’s a lot of inferential and semantic ground to cover. But again: that’s a practical problem. My point has been to show that we shouldn’t expect a problem of in-principle untranslatability. I’m happy to admit there might be serious practical problems in translation. The question now is whether we should default to thinking ‘An AGI is going to solve those problems handily, given the resources it has for doing so’, or ‘An AGI’s thought is going to be so much more complex and sophisticated that it will be unable to solve the practical problem of communication’. I admit I don’t have good ideas about how to come down on the issue; I was just trying to respond to Shim’s point about untranslatable meta-languages.
For my part, I don’t see any reason to expect the AGI’s terminal goals to be any more complex than ours, or any harder to communicate, so I see the practical problem as relatively trivial. Instrumental goals, forget about it. But terminal goals aren’t the sorts of things that seem to admit of very much complexity.
That the AI can have a simple goal is obvious; I never argued against that. The AI’s goal might be “maximize the amount of paperclips”, which is explained in that many words. I don’t expect the AI as a whole to have anything directly analogous to instrumental goals at the highest level either, so that’s a non-issue. I thought we were talking about the AI’s decision theory.
On manipulating culture for centuries and solving it as a practical problem: or it could just install an implant, or guide evolution to increase intelligence until we were smart enough. The implicit constraint of “translate” is that it’s to an already existing specific human, and they have to still be human at the end of the process. Not “could something that was once human come to understand it”.
I thought we were talking about the AI’s decision theory.
No, Shiminux and I were talking about (I think) terminal goals: that is, we were talking about whether or not we could come to understand what an AGI was after, assuming it wanted us to know. We started talking about a specific part of this problem, namely translating concepts novel to the AGI’s outlook into our own language.
I suppose my intuition, like yours, is that the AGI decision theory would be a much more serious problem, and not one subject to my linguistic argument. Since I expect we also agree that it’s the decision theory that’s really the core of the safety issue, my claim about terminal goals is not meant to undercut the concern for AGI safety. I agree that we could be radically ignorant about how safe an AGI is, even given a fairly clear understanding of its terminal goals.
The implicit constraint of “translate” is that it’s to an already existing specific human, and they have to still be human at the end of the process.
I’d actually like to remain indifferent to the question of how intelligent the end-user of the translation has to be. My concern was really just whether or not there are in principle any languages that are mutually untranslatable. I tried to argue that there may be, but they wouldn’t be mutually recognizable as languages anyway, and that if they are so recognizable, then they are at least partly inter-translatable, and that any two languages that are partly inter-translatable are in fact wholly inter-translatable. But this is a point about the nature of languages, not degrees of intelligence.
So on one of the questions we actually agreed the whole time, and the other came down to the semantics of “language” and “translate”. Oh well, discussion over.
Ha! Well, I did argue that all languages (recognizable as such) were in principle inter-translatable for what could only be described as metaphysical reasons. I’d be surprised if you couldn’t find holes in an argument that ambitious and that unempirical. But it may be that some of the motivation is lost.
Human languages? Alien languages? Machine languages?
I don’t think those distinctions really mean very much. Languages don’t come in types in any significant sense.
Yes they do, e.g. the Chomsky hierarchy, the agglutinative/synthetic/analytic distinction, etc.
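To illustrate the Chomsky-hierarchy point with a standard textbook example (my sketch, not part of the exchange above): the language of balanced parentheses is context-free but not regular, so no finite-state recogniser can handle arbitrarily deep nesting, while a simple counter can.

    def is_balanced(s: str) -> bool:
        # Recognise the context-free language of balanced parentheses.
        # No regular (finite-state) recogniser can do this for unbounded
        # nesting depth, which is one precise sense in which languages
        # come in formally different types.
        depth = 0
        for ch in s:
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
            if depth < 0:
                return False
        return depth == 0

    print(is_balanced("(()())"))  # True
    print(is_balanced(")("))      # False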
Also, we recognise maths as a language, but we have no idea how to translate, as opposed to recode, English into it.
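For what it’s worth, the translate/recode distinction can be made concrete with a textbook example (my illustration, not from the original thread): rendering “Every human is mortal” in first-order logic is a translation, while rewriting the same English string as ASCII code points is merely a recoding.

    % Translation: the sentence's content expressed in the formal language
    \forall x \, \bigl( \mathrm{Human}(x) \rightarrow \mathrm{Mortal}(x) \bigr)

    % Recoding: the same English string in different symbols (ASCII values);
    % it is still English, not mathematics
    (69,\ 118,\ 101,\ 114,\ 121,\ 32,\ 104,\ 117,\ 109,\ 97,\ 110,\ \dots)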