I didn’t mean to say no attack existed, only that I don’t have one ready. I can program okay and have spent enough time reading about AGI to see how the field is floundering.
I’ve grown out of seeing FAI as an AI problem, at least at the conceptual stage, where very important parts are still missing, such as what exactly we are trying to do. If you see it as a math problem, the particular excuse (a crackpot-ridden AGI field, a stagnating AI field, and a machine learning field with no impending promise of crossing over into AGI) ceases to apply, just as the failed, overconfident predictions of past AI researchers are not evidence that AI won’t be developed in two hundred years.
How is FAI a math problem? I never got that either.
We don’t yet have for FAI anything like what AIXI is for the AGI problem: a mathematical formulation of what a solution is supposed to be. As a working problem statement, I’m thinking about how to define “preference” for a given program (as a formal term), where the program represents an agent that imperfectly implements that preference; a human upload could be such a program. This “preference” needs to define criteria for decision-making about the real world, whose physics is unknown, from within a (temporary) computer environment with known semantics, in the same sense that a human could learn what could and should be done in the real world while remaining inside a computer simulation, having no prior knowledge of the physical laws but an I/O channel for interacting with the outside.
I’m gradually writing up the ideas behind this direction of research on my blog. It’s still vague, but there is some hope that it can put people into a more constructive state of mind about how to approach FAI.
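A minimal sketch of the contrast, to make it concrete: the first display below is Hutter’s AIXI equation, the kind of formal object we do have for AGI; the second line is only hypothetical notation for the object the comment above says is missing (a map from a program p, such as an upload, to the preference U_p it imperfectly implements), not an existing definition.

% AIXI (Hutter): pick the action that maximizes expected total reward up to
% horizon m, weighting each environment program q on the universal machine U
% by the Solomonoff-style prior 2^{-l(q)}.
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \bigl[ r_k + \cdots + r_m \bigr]
       \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

% Hypothetical shape of the missing FAI analogue: a formal map from a program p
% (e.g. a human upload) to the preference U_p it imperfectly implements, which
% must then drive decisions in an environment whose physics is not known in
% advance. The notation here is illustrative only.
\mathrm{Preference} \colon p \longmapsto U_p, \qquad
a^{*} := \arg\max_{a} \mathbb{E}_{\mu\ \text{unknown}}\bigl[ U_p \mid a \bigr]

The point of the comparison is that the first display, whatever its practical shortcomings, is a precise problem statement, while nothing of comparable precision yet exists for the second.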
Thanks (and upvoted) for the link to your blog posts about preference. They are some of the best pieces of writing I’ve seen on the topic. Why not post them (or the rest of the sequence) on Less Wrong? I’m pretty sure you’ll get a bigger audience and more feedback that way.
Thanks. I’ll probably post a link when I finish the current sequence; by my current plan, there are 5-7 posts to go. As it stands, I think this material is off-topic for Less Wrong and shouldn’t be posted here directly or in detail. If we had a transhumanist/singularitarian subreddit, it would be more appropriate.
What you are saying in the last sentence is that you estimate it is unlikely an attack will exist for some time, which is a much stronger statement than “only that I don’t have one ready”, and is in fact a probabilistic statement that no attack exists (“I didn’t mean to say no attack existed”). This estimate feeds into the judgment that the marginal value of investing in a search for such an attack is very low at this time.
That seems to diminish the relevance of Hamming’s quote, since the problems he names are all ones where we have good reason to believe an attack doesn’t exist.