Ouch, that text sounds painful; it’s probably about as old as LOGI.
Well, not quite that old, but yes, not very recent. The Internet Archive says the page was created at the end of 2009, but it was probably not done by EY himself. The earliest reference Google gives is from 2007...
So you’re saying the party line is now single-level, formal-system-style architectures? But does it even make sense to try to define FAI-meaningful concepts in such an architecture? Isn’t it like trying to define ‘love’, ‘freedom’, and ‘justice’ in terms of atoms?
I remember EY saying somewhere (can’t find where now) that the AIXI design was very commendable, in the sense that here, finally, is a full AGI design that can be clearly shown to kill you :)
Here is a 2003 reference to the original SL4 wiki post, which is still online but for some reason not indexed by Google.
I only know what the decision theory folks are doing, don’t know about the SingInst party line.
Formally defining “love” may be easier than you think. For example, Paul Christiano’s blog has some posts about using “pointers” to our world: take a long bitstring, like the text of Finnegans Wake, and tell the AI to influence whatever algorithm was most likely to produce that string under the universal prior. I have also played with the idea of using UDT to increase the measure of specified bitstrings. Such ideas don’t require knowing the correct physics down to the level of atoms, and I can easily imagine that we may find a formal way of pointing the AI at any human-recognizable idea without going through atoms.
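To make the “pointer” idea a bit more concrete, here is a rough sketch of how I picture it (my own paraphrase, not necessarily Paul’s exact formulation): fix a universal machine $U$, write $\ell(p)$ for the length of program $p$ in bits, and let $s$ be the target string (say, the text of Finnegans Wake). Then “the algorithm most likely to produce $s$ under the universal prior” is just

$$p^{*} \;=\; \arg\max_{p \,:\, U(p) = s} 2^{-\ell(p)},$$

i.e. essentially the shortest program that outputs $s$ (or, more carefully, the whole posterior $\Pr(p \mid s) \propto 2^{-\ell(p)}\,\mathbf{1}[U(p) = s]$), and the AI is told to influence whatever that program is computing. The string picks out our world implicitly, without anyone having to write down physics.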
Thanks for the reference! I skimmed over the blog, and wow! The amount of seriously considered weirdness is staggering :) (like: acausal counterfactual takeover by a simulating UFAI!). It is of huge entertainment value, of course, but… most of it appears to be conditioned on blatantly impossible premises, so it’s hard to take the concerns seriously. Maybe it’s a lack of imagination on my part...
Regarding the proposed solution of defining complex concepts via low-level inputs: as far as I understood the idea, you do not remove the multi-leveledness, you just let it be inferred internally by the AI and refuse to look at how it is done. Besides, it does not appear to solve the problem. Metaphorically speaking, we are not really interested in getting the precise text (Finnegans Wake) down to its last typo, but in a probability measure over all possible texts, one concentrated on texts that are “sufficiently similar”. In fact, it is this similarity that we are most interested in defining, and it is extremely intricate and non-trivial (it may include, for example, translations into other languages).
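To put the worry in the same notation: we would presumably want to weight programs not by whether they reproduce the exact string $s_0$, but by something like

$$w(p) \;\propto\; 2^{-\ell(p)} \, K(U(p), s_0),$$

where $K$ is some similarity measure robust to typos, paraphrases, translations and so on ($K$ is just my placeholder, of course). Defining $K$ seems to be exactly the intricate part, and I do not see why it should be much easier than defining the original concept.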
Your comment reminded me of a post I’ve long wanted to write. The idea is that examining assumptions is unproductive. The only way to make intellectual progress, either individually or as a group, is to stop arguing about assumptions and instead explore their implications wherever they might lead. The #1 reason why I took so long to understand Newcomb’s Problem or Counterfactual Mugging was my insistence on denying the assumptions behind these problems. Instead I should have said to myself, okay, is this direction of inquiry interesting when taken on its own terms?
Many assumptions seemed divorced from real life at first, e.g. people dismissed the study of electromagnetism as an impractical toy, and considered number theory hopelessly abstract until cryptography arrived. People seem unable to judge the usefulness of assumptions before exploring their implications in detail, but they absolutely love arguing about assumptions instead of getting anything done.
There, thanks for encouraging me to write the first draft :-)
Absolutely, I agree of course. If a line of inquiry is interesting in itself and progress is being made, why not pursue it? My question was only about its direct relevance to FAI. Or, rather, whether the arguments I made to myself about its non-relevance can be easily refuted.
And, you know, questioning assumptions can sometimes be useful too. From a false assumption, anything follows :)
In any case, I’m glad to be of service, however small. Your posts are generally excellent.
Interesting. Do you see any current arguments over assumptions that we should stop having (on LW or elsewhere)?
Hmm, looks like I sometimes attack people for starting from (what I consider) wrong assumptions. Maybe I should rethink my position on those issues.