What are the main objections to the likelihood of the Singularity occurring?
What actions might people take to stop a Singularity from occurring?
How will competition among businesses and governments impact the amount of care taken with respect to friendly AI?
What’s the difference between AI and AGI?
What organizations are working on the friendly AI problem?
How will expectations of friendly/unfriendly AI impact the amount of resources devoted to AI? For example, if financial markets expect utopia to arrive in the next decade, savings rates will fall, which will lower tech research.
What are AGI researchers’ estimates for when/if AGI will happen, and the likelihood of it being “good” if it does occur?
Why do most computer engineers dismiss the possibility of AGI?
What is the track record of AGI predictions?
Who has donated significant sums to friendly AI research?
How much money is being spent on AGI/friendly AGI research?
As an academicish person, I suggest a few questions that bothered me at first: Why aren’t more artificial intelligence research groups at universities working on FAI? Why doesn’t the Singularity Institute publish all of its literature reviews and other work?
Singularity/AI is a reasonable concept (if you agree with a number of assumptions and extrapolations SI/EY are fond of), but an ultimately untestable one (until it is too late, anyway), so government funding would be hard to come by. Add to this an expected time frame of at least a few decades, and good luck getting a grant application approved for this research.
“Are there refereed publications on FAI in mainstream academic AI research venues? Why not?”
Thanks.