If some FAI project is already right about everything and is fully funded, secrecy is helpful because it reduces outside interference.
If it’s not, then secrecy is bad: it loses all sorts of cool community resources, from bug finding to funding to people to bounce ideas off of (see JoshuaZ’s longer post).
So the problem is one of balancing the cost of lost resources if they’re wrong against the chance of outside interference if they’re right. I guess I’m more hopeful than you about the low costs of openness (edit: not democracy, just non-secrecy). The people most likely to object to building an AI even when they’re wrong to object are the least likely to understand it, after all :P