That explains a lot, thanks for clarifying the misunderstanding. I for one wasn’t specifically referring to LW, but I was wondering whether in the coming decades people involved with AGI (AI researchers and others) should be as outspoken about the dangers and capabilities of self-improving AGI as we currently are here on LW. I think I made clear why I wouldn’t count on having the support of the global public, even if we did communicate our cause openly and in detail—so if (as I would predict) public outreach won’t cut it and may even have severe adverse effects, I’d personally favor keeping a low profile.
In case it helps other readers: the upside/downside is pro/con openly promoting your AGI+FAI work and its importance (vs. working in secret).
No. I was talking about discussing the topic, not a specific project. The post discusses LW, and we don’t have any specific project on our hands.