Yes, the FAI problem is an overfitting problem: you have a vast space of parameters (all possible algorithms, if you consider the general case).
No matter how well you specify your values, your description can be hacked if it is expressed in a language that leaves room for different interpretations of the same expression. Even programming languages probably aren’t strict enough, since they sometimes have “implementation-specific” or “undefined behavior” constructs. So when you say “high-level”, you should mean “high-level like Haskell”, not “high-level like natural languages”.
And then good luck defining your values, e.g. love, in a programming language.
Has there been any research into the development of languages specifically for this purpose? I would still consider even the highest-level programming languages not high-level enough.
Maybe I am wrong; it may turn out either that the highest-level programming languages are already too high-level for the purpose, or that a language close to natural languages is impossible to create without superintelligence. Either result would falsify my assumptions. However, is there significant research, or at least debate, regarding these issues?
EDIT: I’m not here to offer a readily usable solution of any kind. My goal was just to better define some concepts and present some issues, which might already have been presented, but maybe not in this shape, form, or context.
People certainly have motivation to create programming languages as high-level as possible, as using such languages reduces development costs. So there are languages which are more or less directly optimized for high-levelness, like Python.
On the other hand, programming language design is limited by the fact that programs written in these languages must actually run on real computers. Also, most effort in creating programming languages goes into imperative ones, as they are usually more convenient for practical programming.
But still, programming languages seem to be humanity’s best effort at creating sufficiently rigorous languages. There are other approaches, like logical constructed languages (e.g. Loglan), but I think they are still too imprecise for FAI purposes.
I agree; however, I still think that because programming languages were developed and optimized for a different purpose than defining utility functions for AIs, there might be other languages better suited for the job.
If this line of thinking proves to be a dead end, I would remove some parts of the article and focus on the “changing values” aspect, as this is an issue I don’t remember seeing in the FAI debate.
The Urbit system uses an extremely simple virtual machine as its core in order to remove semantic ambiguity, but the aim there is more about security.
http://doc.urbit.org/
Declarative languages, maybe? Like Prolog.