“What is missing for the SIAI to actually start working on friendly AI?”
I think that question is answered by Yudkowsky in his interview with Baez:
“I probably need to take at least a year to study up on math, and then—though it may be an idealistic dream—I intend to plunge into the decision theory of self-modifying decision systems and never look back. (And finish the decision theory and implement it and run the AI, at which point, if all goes well, we Win.)”
Yudkowsky’s position, widely known, is that it is unsafe to do otherwise. I imagine that is why they are not funding researchers to work on extending MOSES (or any other AGI work, for that matter), but that’s just speculation on my part.
To learn more about the work people are doing to build AGI, check out the conference series on AGI at http://agi-conf.org/, organized by Ben Goertzel, advisor to SIAI (formerly Director of Research). Videos of most of the talks and tutorials are available for free, along with PDFs of the conference papers.
The biggest problem in designing FAI is that nobody knows how to build AI. If you don’t know how to build an AI, it’s hard to figure out how to make it friendly. It’s like thinking about how to make a computer play chess well before anybody knows how to make a computer.
In the meantime, there’s lots of pre-FAI work to be done. There are many unsolved problems in metaethics, decision theory, anthropics, cosmology, and other subjects that seem to be highly relevant to later FAI development. I’m currently working (with others) toward defining those problems so that they can be engaged by the wider academic community.
Even if we presume to know how to build an AI, figuring out the Friendly part still seems a long way off. Some AI-building plans and/or architectures (e.g. evolutionary methods) are also totally useless Friendliness-wise, even though they may lead to a general AI.
What we actually need is knowledge of how to build a very specific type of AI, and unfortunately it appears that the A(G)I (sub)field, with its “anything that works” attitude, isn’t going to provide one.
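To make the point about evolutionary methods concrete, here is a minimal sketch of a generic evolutionary search loop (my own illustration, not MOSES or anything from SIAI; the TARGET bit string and fitness function are hypothetical stand-ins for “behaves well on the tests we run”). The loop only ever sees a fitness score, so nothing in it constrains or exposes what the evolved system is actually optimizing for, which is the sense in which such architectures give you no handle on Friendliness:

```python
# Minimal sketch of evolutionary search: selection pressure references only
# external behavior (a fitness score), never the evolved artifact's "goals".
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical stand-in for "scores well on our tests"

def fitness(genome):
    """Score behavior only; says nothing about *why* the genome scores well."""
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mutate(genome, rate=0.1):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=100):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 5]                 # keep the top 20%
        population = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(best, fitness(best))   # high fitness, zero insight into internal goals
```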
Correct!
(You don’t make an AI friendly. You make a Friendly AI. Making an AI friendly is like making a text file good reading.)
Yes, I know. ‘Making an AI friendly’ is just a manner of speaking, like talking about humans having utility functions.
I assumed you knew, which is why it was a parenthetical, mainly clarifying for the benefit of others. My disagreement was with the method of presentation.
Okay.