Letting plants grow their own pesticides to kill the things that eat them sounds to me like a bad strategy if you want healthy food.
Is there reason to believe someone in the field of genetic engineering would make such a mistake? Shouldn’t someone in the field be more aware of that and other potential dangers, despite the GE FUD they’ve no doubt encountered outside of academia? It seems like the FUD should just be motivating them to understand the risks even more—if for no other reason than simply to correct people’s misconceptions on the issue.
Your reasoning for why the “bad” publicity would have severe (or any notable) repercussions isn’t apparent.
If you have a lot of people making bad arguments for why UFAI is a danger, smart MIT people might just say: hey, those people are wrong; I'm smart enough to program an AGI that does what I want.
This just doesn’t seem very realistic when you consider all the variables.
Is there reason to believe someone in the field of genetic engineering would make such a mistake?
Because those people do engineer plants to produce pesticides. The Bt potato was the first; it was approved by the FDA in 1995.
The commercial incentives that exist encourage the development of such products. A customer in a store doesn't see whether a potato is engineered to have more vitamins. He doesn't see whether it's engineered to produce pesticides.
He buys a potato. It's cheaper to grow potatoes that produce their own pesticides than it is to grow potatoes that don't.
In the case of potatoes it might be harmless. We don't eat the greens of the potato anyway, so why does it matter if the greens contain additional poison? But you can slip up. Biology is complicated. You could have changed something that causes the poison to be produced in the edible parts as well.
It seems like the FUD should just be motivating them to understand the risks even more
It’s not a question of motivation. Politics is the mind-killer. If a topic gets political, people on all sides of the debate get stupid.
This just doesn’t seem very realistic when you consider all the variables.
According to Eliezer it takes strong math skills to see how an AGI can take over its own utility function and why it is therefore dangerous. Eliezer made the point that it’s very difficult to explain to people who are invested in their AGI design that it’s dangerous, because that part needs complicated math.
It’s easy to say in the abstract that some AGI might become UFAI, but it’s hard to do that assessment for any individual proposal.
Really, it’s not. Tons of people discuss politics without getting their briefs in a knot about it. It’s only people who consider themselves highly intelligent that get mind-killed by it. The tendency to dismiss your opponent out of hand as unintelligent isn’t that common elsewhere. People, by and large, are willing to seriously debate political issues. “Politics is the mind-killer” is a result of some pretty severe selection bias.
Even ignoring that, you’ve only stated that we should do our best to ensure it does not become a hot political issue. Widespread attention to the idea is still useful; if we can’t get the concept to penetrate the academic circles where AI is likely to be developed, we’re not yet mitigating the threat. A thousand angry letters demanding that this research stop at once, or that it address the issue of friendliness, aren’t easy to ignore, no matter how bad you think the arguments for uFAI are.
You’re not the only one expressing hesitation at the idea of widespread acceptance of uFAI risk, but unless you can really provide arguments for exactly what negative effects it is very likely to have, some of us are about ready to start a chain mail of our own volition. Your hesitation is understandable, but we need to do something to mitigate the risk here, or the risk just remains unmitigated and all we did was talk about it. People researching AI who’ve argued with Yudkowsky before and failed to be convinced might begrudge that Yudkowsky’s argument has gained widespread attention, but if it pressures them to properly address Yudkowsky’s arguments, then it has legitimately helped.
Really, it’s not. Tons of people discuss politics without getting their briefs in a knot about it. It’s only people who consider themselves highly intelligent that get mind-killed by it. The tendency to dismiss your opponent out of hand as unintelligent isn’t that common elsewhere.
There’s a reason why it’s general advice not to talk about religion, sex, and politics. It’s not because the average person does well at discussing politics.
Dismissing your opponent out of hand as unintelligent isn’t the only failure mode of political mindkill. I don’t even think it’s the most important one.
You’re not the only one expressing hesitation at the idea of widespread acceptance of uFAI risk, but unless you can really provide arguments for exactly what negative effects it is very likely to have, some of us are about ready to start a chain mail of our own volition.
How effective do you consider chain letters to be at stopping NSA spying? Do you think they will be more effective at stopping the NSA from developing the AIs that analyse that data?
You’re not the only one expressing hesitation at the idea of widespread acceptance of uFAI risk, but unless you can really provide arguments for exactly what negative effects it is very likely to have, some of us are about ready to start a chain mail of our own volition.
Take two important environmental challenges and look at the first presidency of Obama. One is limiting CO2 emissions. The second is limiting mercury pollution.
The EPA under Obama was very effective at limiting mercury pollution but not at limiting CO2 emissions.
CO2 emissions are a very politically charged issue with a lot of mindkill on both sides, while mercury pollution isn’t. The people who pushed mercury pollution regulation won, and not because they wrote a lot of letters.
Your hesitation is understandable, but we need to do something to mitigate the risk here, or the risk just remains unmitigated and all we did was talk about it.
If you want to do something, you can earn to give and donate the money to MIRI.
People researching AI who’ve argued with Yudkowsky before and failed to be convinced might begrudge that Yudkowsky’s argument has gained widespread attention, but if it pressures them to properly address Yudkowsky’s arguments, then it has legitimately helped.
You don’t get points for pressuring people to address arguments. That doesn’t prevent an UFAI from killing you.
UFAI is an important problem, but we probably don’t have to solve it in the next 5 years. We do have some time to do things right.
How effective do you consider chain letters to be at stopping NSA spying? Do you think they will be more effective at stopping the NSA from developing the AIs that analyse that data?
NSA spying isn’t a chain letter topic that is likely to succeed, no. A strong AI chain letter that makes itself sound like it’s just against NSA spying doesn’t seem like an effective approach. The point of a chain letter about strong AI is that all such projects are a danger. If people come to the conclusion that the NSA is likely to develop an AI while being aware of the danger of uFAI, then they would write letters or seek to start a movement to ensure that any AI the NSA, or any government organization for that matter, develops would incorporate friendliness to the best of their abilities. The NSA doesn’t need to be mentioned in the uFAI chain mail in order for any NSA AI projects to be forced to comply with friendliness principles.
If you want to do something, you can earn to give and donate the money to MIRI.
You don’t get points for pressuring people to address arguments. That doesn’t prevent an UFAI from killing you.
It does if the people addressing those arguments learn/accept the danger of unfriendliness in being pressured to do so.
We probably don’t have to solve it in the next 5 years.
Five years may be the time it takes for the chain mail to effectively popularize the issue to the point where the pressure is on to ensure friendliness, whether we solve it decades from then or not. What is your estimate for when uFAI will be created if MIRI’s warning isn’t properly heeded?
If people come to the conclusion that the NSA is likely to develop an AI while being aware of the danger of uFAI, then they would write letters or seek to start a movement to ensure that any AI the NSA, or any government organization for that matter, develops would incorporate friendliness to the best of their abilities.
I think your idea of a democracy in which letter writing is the way to create political change just doesn’t accurately describe the world we are living in.
Five years may be the time it takes for the chain mail to effectively popularize the issue to the point where the pressure is on to ensure friendliness, whether we solve it decades from then or not. What is your estimate for when uFAI will be created if MIRI’s warning isn’t properly heeded?
If I remember right, the median LessWrong prediction is that the singularity happens after 2100. It might happen sooner.
I think 30 years is a valid time frame for FAI strategy.
That timeframe is long enough to invest in rationality movement building.
That is not a valid path if MIRI is willfully ignoring valid solutions.
Not taking the time to respond in detail to every suggestion can be a valid strategy, especially for a post that gets voted down to −3. People voted it down, so it’s not ignored.
If MIRI didn’t respond to a highly upvoted solution on LessWrong, then I would agree that’s a cause for concern.