Which are the useful areas of AI study?
I’ve been stuck on a peculiar question lately: which areas of AI study are actually useful? What got me thinking is the opinion occasionally stated (or implied) by Eliezer here that doing general AI research may well have negative utility, since it indirectly raises the chance of unfriendly AI being developed. I’ve been chewing on the implications of this for quite a while, as accepting these arguments would require quite a change in my behavior.
I’m about to start my CompSci PhD studies soon, and had initially planned to focus on unsupervised domain-specific knowledge extraction from the internet, since my research background is mostly in narrow AI problems in computational linguistics: machine learning, concept formation, and semantics extraction. Over the last year, however, my expectations about the singularity and the existential risks of unfriendly AI have led me to believe that focusing my efforts on Friendly AI concepts would be a more valuable choice, as a few years of study in the area would increase the chance of my making some positive contribution later on.
What is your opinion?
Does studying and doing research in general AI topics carry positive or negative utility? Which research topics would be useful to Friendly AI, yet are narrow and shallow enough for a single individual or tiny team to make measurable progress on over the few years of preparing a PhD thesis? Are there specific research areas that are better avoided until more progress has been made on Friendliness research?