Really, it’s not. Tons of people discuss politics without getting their briefs in a knot about it. It’s only people who consider themselves highly intelligent that get mind-killed by it. The tendency to dismiss your opponent out of hand as unintelligent isn’t that common otherwise. People, by and large, are willing to seriously debate political issues. “Politics is the mind-killer” is a result of some pretty severe selection bias.
Even ignoring that, you’ve only stated that we should do our best to ensure it does not become a hot political issue. Widespread attention to the idea is still useful; if we can’t get the concept to penetrate the parts of academia where AI is likely to be developed, we’re not yet mitigating the threat. A thousand angry letters demanding that this research stop at once, or that it address the issue of friendliness, aren’t easy to ignore, no matter how bad you think the arguments for uFAI risk are.
You’re not the only one expressing hesitation at the idea of widespread acceptance of uFAI risk, but unless you can really provide arguments for exactly what negative effects it is very likely to have, some of us are about ready to start a chain mail of our own volition. Your hesitation is understandable, but we need to do something to mitigate the risk here, or the risk just remains unmitigated and all we did was talk about it. People researching AI who’ve argued with Yudkowsky before and failed to be convinced might begrudge the widespread attention his argument has gained, but if that attention pressures them to address his arguments properly, then it has legitimately helped.
Really, it’s not. Tons of people discuss politics without getting their briefs in a knot about it. It’s only people who consider themselves highly intelligent that get mind-killed by it. The tendency to dismiss your opponent out of hand as unintelligent isn’t that common otherwise.
There’s a reason why it’s general advice not to talk about religion, sex, and politics. It’s not because the average person does well at discussing politics.
Dismissing your opponent out of hand as unintelligent isn’t the only failure mode of political mind-kill. I don’t even think it’s the most important one.
You’re not the only one expressing hesitation at the idea of widespread acceptance of uFAI risk, but unless you can really provide arguments for exactly what negative effects it is very likely to have, some of us are about ready to start a chain mail of our own volition.
How effective do you consider chain letters to be at stopping NSA spying? Do you think they will be more effective at stopping the NSA from developing the AIs that analyse that data?
You’re not the only one expressing hesitation at the idea of widespread acceptance of uFAI risk, but unless you can really provide arguments for exactly what negative effects it is very likely to have, some of us are about ready to start a chain mail of our own volition.
Take two important environmental challenges and look at Obama’s first term. One is limiting CO2 emissions. The second is limiting mercury pollution.
The EPA under Obama was very effective at limiting mercury pollution but not at limiting CO2 emissions.
CO2 emissions are a very politically charged issue with a lot of mind-kill on both sides, while mercury pollution isn’t. The people who pushed for mercury regulation won, and not because they wrote a lot of letters.
Your hesitation is understandable, but we need to do something to mitigate the risk here, or the risk just remains unmitigated and all we did was talk about it.
If you want to do something, you can earn to give and donate the money to MIRI.
People researching AI who’ve argued with Yudkowsky before and failed to be convinced might begrudge the widespread attention his argument has gained, but if that attention pressures them to address his arguments properly, then it has legitimately helped.
You don’t get points for pressuring people to address arguments. That doesn’t prevent a UFAI from killing you.
UFAI is an important problem, but we probably don’t have to solve it in the next 5 years. We do have some time to do things right.
How effective do you consider chain letters to be at stopping NSA spying? Do you think they will be more effective at stopping the NSA from developing the AIs that analyse that data?
NSA spying isn’t a chain letter topic that is likely to succeed, no. A strong AI chain letter that makes itself sound like it’s just against NSA spying doesn’t seem like an effective approach either. The point of a chain letter about strong AI is that all such projects are a danger. If people come to the conclusion that the NSA is likely to develop an AI while being aware of the danger of uFAI, then they would write letters or seek to start a movement to ensure that any AI the NSA, or any government organization for that matter, develops addresses friendliness to the best of their abilities. The NSA doesn’t need to be mentioned in the uFAI chain mail for any NSA AI projects to be forced to comply with friendliness principles.
If you want to do something, you can earn to give and donate the money to MIRI.
You don’t get points for pressuring people to address arguments. That doesn’t prevent a UFAI from killing you.
It does if the people addressing those arguments come to accept the danger of unfriendliness in being pressured to do so.
We probably don’t have to solve it in the next 5 years.
Five years may be the time it takes for the chain mail to effectively popularize the issue to the point where the pressure is on to ensure friendliness, whether we solve it decades from then or not. What is your estimate for when uFAI will be created if MIRI’s warning isn’t properly heeded?
If people come to the conclusion that the NSA is likely to develop an AI while being aware of the danger of uFAI, then they would write letters or seek to start a movement to ensure that any AI the NSA, or any government organization for that matter, develops addresses friendliness to the best of their abilities.
I think your idea of a democracy in which letter writing is the way to create political change just doesn’t accurately describe the world we are living in.
Five years may be the time it takes for the chain mail to effectively popularize the issue to the point where the pressure is on to ensure friendliness, whether we solve it decades from then or not. What is your estimate for when uFAI will be created if MIRI’s warning isn’t properly heeded?
If I remember right, the median LessWrong prediction is that the singularity happens after 2100. It might happen sooner.
I think 30 years is a valid time frame for FAI strategy.
That timeframe is long enough to invest in rationality movement building.
That is not a valid path if MIRI is willfully ignoring valid solutions.
Not taking the time to respond in detail to every suggestion can be a valid strategy, especially for a post that gets voted down to −3. People voted it down, so it’s not ignored.
If MIRI wouldn’t respond to a highly upvoted solution on LessWrong, then I would agree that would be cause for concern.