For my part, I don’t see any reason to expect the AGI’s terminal goals to be any more complex than ours, or any harder to communicate, so I see the practical problem as relatively trivial. Instrumental goals, on the other hand, forget about it. But terminal goals aren’t the sorts of things that seem to admit of very much complexity.
That the AI can have a simple goal is obvious; I never argued against that. The AI’s goal might be “maximize the amount of paperclips”, which can be explained in that many words. I don’t expect the AI as a whole to have anything directly analogous to instrumental goals at the highest level either, so that’s a non-issue. I thought we were talking about the AI’s decision theory.
On manipulating culture for centuries and solving it as a practical problem: or it could just install an implant, or guide evolution to increase intelligence until we were smart enough. The implicit constraint of “translate” is that it’s to an already existing specific human, and they have to still be human at the end of the process. Not “could something that was once human come to understand it”.
I thought we were talking about the AI’s decision theory.
No, Shiminux and I were talking about (I think) terminal goals: that is, we were talking about whether or not we could come to understand what an AGI was after, assuming it wanted us to know. We started talking about a specific part of this problem, namely translating concepts novel to the AGI’s outlook into our own language.
I suppose my intuition, like yours, is that the AGI’s decision theory would be a much more serious problem, and not one subject to my linguistic argument. Since I expect we also agree that it’s the decision theory that’s really the core of the safety issue, my claim about terminal goals is not meant to undercut the concern for AGI safety. I agree that we could be radically ignorant about how safe an AGI is, even given a fairly clear understanding of its terminal goals.
The implicit constraint of “translate” is that it’s to an already existing specific human, and they have to still be human at the end of the process.
I’d actually like to remain indifferent to the question of how intelligent the end-user of the translation has to be. My concern was really just whether or not there are in principle any languages that are mutually untranslatable. I tried to argue that there may be, but they wouldn’t be mutually recognizable as languages anyway, and that if they are so recognizable, then they are at least partly inter-translatable, and that any two languages that are partly inter-translatable are in fact wholly inter-translatable. But this is a point about the nature of languages, not degrees of intelligence.
So one of the questions we actually agreed on the whole time, and the other was just about the semantics of “language” and “translate”. Oh well, discussion over.
Ha! Well, I did argue that all languages (recognizable as such) were in principle inter-translatable for what could only be described as metaphysical reasons. I’d be surprised if you couldn’t find holes in an argument that ambitious and that unempirical. But it may be that some of the motivation is lost.
Human languages? Alien languages? Machine languages?
I don’t think those distinctions really mean very much. Languages don’t come in types in any significant sense.
Yes they do, e.g. the Chomsky hierarchy, or the agglutinative/synthetic/analytic distinction (see the sketch below).
Also, we recognise maths as a language, but have no idea how to translate, as opposed to re-code, English into it.
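A minimal sketch of the Chomsky hierarchy point, with Python and two toy languages chosen purely for illustration: strings of the form a*b* form a regular language, so a plain regular expression recognises them, whereas balanced parentheses form a context-free language that no finite automaton can recognise; a single counter (a restricted pushdown stack) suffices.

```python
import re

# Regular language: strings of the form a^n b^m.
# A finite automaton (here, a regular expression) is enough to recognise it.
regular = re.compile(r"^a*b*$")

def balanced(s: str) -> bool:
    """Context-free language: balanced parentheses.
    No finite automaton can recognise it; a single counter,
    i.e. a restricted pushdown stack, is sufficient."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:      # a closing bracket with nothing open
                return False
        else:                  # any other symbol is outside the language
            return False
    return depth == 0

print(bool(regular.match("aaabb")))  # True  -- regular level of the hierarchy
print(balanced("(()())"))            # True  -- context-free level
print(balanced("(()"))               # False
```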