The potential benefit is that more of the people who could be working on random-goals AGI, or on FAI, or contributing to the support/funding of such projects, would consider the idea, and so AGI risk gets reduced and FAI progress gets a boost. The potential downside is what, exactly, and at what marginal cost? I don’t think this tradeoff works the way you suggest.
I’m not sure I’m following… do you honestly think that the cost of openly working on self-improving AGI, and openly making statements along the lines of “we need to get this AI exactly right, or else we’ll probably kill every man, woman and child on this planet,” will be marginal in, say, 30 years, once the majority of people no longer view AGI as the product of a loony imagination but as an actual possibility, given the advances in robotics and narrow AI all around them? Don’t you think open development of AGI would draw massive media attention once the public is surrounded by, and accustomed to, all kinds of robots and narrow AIs?
Why this optimism about how reasonably people will react to our notion of self-improving AGI? Am I somehow missing something profound in my model of reality? I still expect people to be crazy, religious, and irrational in 30 years, and the easiest way of dealing with that would simply be not to arouse their attention. Now, while most people perceive us as hopeless sci-fi nerds (at best) and AGI still seems at least 500 years away in their minds, of course I’m all for being open and drawing in people and funding. But do you expect such an open approach to keep working (without interference from the public or politics) until the very completion of a godlike AGI? I severely doubt that, and I find it surprising that this is somehow perceived as a wildly marginal concern. As if it’s not even worth thinking about… why is that?
In case it helps other readers: the upside/downside in question is the case for and against openly promoting your AGI+FAI work and its importance (vs. working in secret).
No. I was talking about discussing the topic, not a specific project. The post discusses LW, and we don’t have any specific project on our hands.
That explains a lot, thanks for clarifying the misunderstanding. I, for one, wasn’t specifically referring to LW; I was wondering whether, in the coming decades, people involved with AGI (AI researchers and others) should be as outspoken about the dangers and capabilities of self-improving AGI as we currently are here on LW. I think I made clear why I wouldn’t count on having the support of the global public, even if we did communicate our cause openly and in detail. So if (as I would predict) public outreach won’t cut it, and may even have severe adverse effects, I’d personally favor keeping a low profile.