“What is missing for the SIAI to actually start working on friendly AI?”
The biggest problem in designing FAI is that nobody knows how to build AI. If you don’t know how to build an AI, it’s hard to figure out how to make it friendly. It’s like thinking about how to make a computer play chess well before anybody knows how to make a computer.
In the meantime, there’s lots of pre-FAI work to be done. There are many unsolved problems in metaethics, decision theory, anthropics, cosmology, and other subjects that seem to be highly relevant to later FAI development. I’m currently working (with others) toward defining those problems so that they can be engaged by the wider academic community.
Even if we presume to know how to build an AI, figuring out the Friendly part still seems to be a long way off. Some AI-building plans and/or architectures (e.g. evolutionary methods) are also totally useless Friendliness-wise, even though they may lead to a general AI.
What we actually need is knowledge of how to build a very specific type of AI, and unfortunately, it appears that the A(G)I (sub)field, with its “anything that works” attitude, isn’t going to provide it.
Correct!
(You don’t make an AI friendly. You make a Friendly AI. Making an AI friendly is like making a text file good reading.)
Yes, I know. ‘Making an AI friendly’ is just a manner of speaking, like talking about humans having utility functions.
I assumed you knew, which is why it was a parenthetical, mainly clarifying for the benefit of others. It was a disagreement with the method of presentation.
Okay.