How effective do you consider chain letters to be at stopping NSA spying? Do you think they will be more effective at stopping the NSA from developing the AIs that analyse that data?
NSA spying isn’t a chain letter topic that is likely to succeed, no. A strong AI chain letter that makes itself sound like it’s just against NSA spying doesn’t seem like an effective approach. The point of a chain letter about strong AI is that all such projects are a danger. If people come to the conclusion that the NSA is likely to develop an AI while being aware of the danger of uFAI, then they would write letters or seek to start a movement to ensure that any AI built by the NSA, or by any government organization for that matter, is made friendly to the best of its developers’ abilities. The NSA doesn’t need to be mentioned in the uFAI chain mail for any NSA AI project to be forced to comply with friendliness principles.
If you want to do something, you can earn to give and donate the money to MIRI.
You don’t get points for pressuring people to address arguments. That doesn’t prevent a uFAI from killing you.
It does if the people pressured into addressing those arguments come to learn of and accept the danger of unfriendliness in the process.
We probably don’t have to solve the friendliness problem in the next 5 years.
Five years may be the time it takes for the chain mail to popularize the issue to the point where the pressure is on to ensure friendliness, whether we solve the problem decades from then or not. What is your estimate for when uFAI will be created if MIRI’s warning isn’t properly heeded?
> If people come to the conclusion that the NSA is likely to develop an AI while being aware of the danger of uFAI, then they would write letters or seek to start a movement to ensure that any AI built by the NSA, or by any government organization for that matter, is made friendly to the best of its developers’ abilities.
I think your idea of a democracy in which letter writing is the way to create political change just doesn’t accurately describe the world we are living in.
> Five years may be the time it takes for the chain mail to popularize the issue to the point where the pressure is on to ensure friendliness, whether we solve the problem decades from then or not. What is your estimate for when uFAI will be created if MIRI’s warning isn’t properly heeded?
If I remember right, the median LessWrong prediction is that the singularity happens after 2100. It might happen sooner.
I think 30 years is a valid time frame for FAI strategy.
That timeframe is long enough to invest in rationality movement building.
That is not a valid path if MIRI is willfully ignoring valid solutions.
Not taking the time to respond in detail to every suggestion can be a valid strategy, especially for a post that gets voted down to −3. People voted it down, so it’s not ignored.
If MIRI failed to respond to a highly upvoted solution on LessWrong, then I would agree that would be cause for concern.