Do you think the AI research community or LW is caricatured in any way that is harmful to AI research?
I don’t know whether the overall sign is positive or negative, but I’d guess there are at least some caricatures that do harm.
Are there any specific issues around AI that concern you the most?
The alignment problem (or are you asking what concerns us the most within that scope?)
Yes, what issue concerns you most within the scope of AI alignment? (Edited the original question for clarity, thanks.)
If someone said they didn’t believe AI can have any positive impact on humanity, what’s your go-to positive impact/piece of research to share?
I don’t have one. Depends where they’re coming from with that belief.
How did your interest in AI begin?
I don’t know if I became interested in LessWrong or machine learning first—one of those.
Do you think there is enough general awareness around AI research and safety?
If not, what do you think would help bring AI safety into public and political discourse?
That’s assuming most people here want this; I don’t think that’s the case.
Or to a lesser extreme: hamper AI funding?
I don’t know if by “hamper” you mean reduce, but it seems to me like there are conflicting views/models here about whether that would be good or bad.
Why do you think most people here would not want greater public awareness around the topic of AI safety? (Removed the assumption from the original question.)
What do you personally think the likelihood of AGI is?
That is, that humans eventually create AGI, right?
Indeed! (Edited the original question to specify this.)
I think so.
This seems like something that would be better done as a Google Form. That would make it easier for people to match questions with answers (especially on mobile), and it can be less stressful to answer questions when the answers are going to be kept private.
Those are great points! Google Form added.
FLI has a page on AI myths: https://futureoflife.org/background/aimyths/
See “What are fiction stories related to AI alignment?”. Not all of them qualify, but some do. I think these two are very good: The Intelligence Explosion and NeXt.
FLI has articles and a podcast: https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/
80,000 Hours has some articles and episodes on this: https://80000hours.org/podcast/
The AI Revolution: The Road to Superintelligence by WaitButWhy
Superintelligence by Nick Bostrom
Human Compatible by Stuart Russell
For more, see my list of lists here: https://www.facebook.com/groups/aisafetyopen/posts/263224891047211/