Is “bad publicity” worse than “good publicity” here? If strong AI became a hot political topic, it would raise awareness considerably. The fiction surrounding strong AI should bias the population towards understanding it as a legitimate threat. Each political party in turn will have their own agenda, trying to attach whatever connotations they want to the issue, but if the public at large started really worrying about uFAI, that’s kind of the goal here.
Politically, people who fear AI might go after companies like Google.
but if the public at large started really worrying about uFAI, that’s kind of the goal here.
I don’t think that the public at large is the target audience. The important thing is that the people who could potentially build an AGI understand that they are not smart enough to contain the AGI.
If you have a lot of people making bad arguments for why UFAI is a danger, smart MIT people might just say, “Hey, those people are wrong; I’m smart enough to program an AGI that does what I want.”
I mean, take a topic like genetic engineering. There are valid dangers involved in genetic engineering. On the other hand, the people who think that all genetically modified food is poisonous are wrong. As a result, a lot of self-professed skeptics and atheists see it as their duty to defend genetic engineering.
Right, but what damage is really being done to GE? Does all the FUD stop the people who go into the science from understanding the dangers? If uFAI is popularized, academia will pretty much be forced to seriously address the issue. Ideally, this is something we’ll only need to do once; after it’s known and taken seriously, the people who work on AI will be under intense pressure to ensure they’re avoiding the dangers here.
Google probably already has an AI (and AI-risk) team internally that they’ve simply had no reason to publicize. If uFAI becomes a widespread worry, you can bet they’d make it known that they were taking their own precautions.
Right, but what damage is really being done to GE? Does all the FUD stop the people who go into the science from understanding the dangers?
Letting plants grow their own pesticides for killing off things that eat the plants sounds to me like a bad strategy if you want healthy food.
It makes things much easier for the farmer, but to me it doesn’t sound like a road that we should go on.
I wouldn’t want to buy such food in the supermarket, but I have no problem with buying genetically modified food that adds extra vitamins.
Then there are various issues with introducing new species. Issues about monocultures.
Bioweapons.
after it’s known and taken seriously, the people who work on AI will be under intense pressure to ensure they’re avoiding the dangers here.
The whole work is dangerous. Safety is really hard.
Letting plants grow their own pesticides for killing off things that eat the plants sounds to me like a bad strategy if you want healthy food. It makes things much easier for the farmer, but to me it doesn’t sound like a road that we should go on.
This is more or less the opposite of what we actually use genetic engineering of crops for. Production of pesticides isn’t something that plants were incapable of until we started tinkering with their genes; it’s something they’ve been doing for hundreds of millions of years. Plants in nature have to deal with tradeoffs between producing their own natural pesticides and using their biological resources for other things, such as more rapid growth, greater drought resistance, etc. In general, genetically engineered plants actually have less innate pest resistance, which farmers then compensate for by spraying pesticides onto them, because it allows them to trade off that natural pesticide production for faster growth.
In general, genetically engineered plants actually have less innate pest resistance, which farmers then compensate for by spraying pesticides onto them, because it allows them to trade off that natural pesticide production for faster growth.
ChristianKl may be thinking of Bt corn (maize) and, for instance, the Starlink corn recall. Bt corn certainly does express a pesticide, namely Bacillus thuringiensis toxin.
Letting plants grow their own pesticides for killing off things that eat the plants sounds to me like a bad strategy if you want healthy food.
Somewhat tangentially: does it sound like a better or a worse strategy than not letting plants do this, and growing the plants in an environment where external pesticides are regularly applied to them?
(This really is a question about GMOs, not some kind of oblique analogical question about AIs.)
“AIs” → “experts being informed in their field of study”
ETA: Was this not actually apparent?
As a matter of evolutionary biology, plants have been doing this for many millions of years and are pretty good at making poisons.
Letting plants grow their own pesticides for killing off things that eat the plants sounds to me like a bad strategy if you want healthy food.
Is there reason to believe someone in the field of genetic engineering would make such a mistake? Shouldn’t someone in the field be more aware of that and other potential dangers, despite the GE FUD they’ve no doubt encountered outside of academia? It seems like the FUD should just be motivating them to understand the risks even more—if for no other reason than simply to correct people’s misconceptions on the issue.
Your reasoning for why the “bad” publicity would have severe (or any notable) repercussions isn’t apparent.
If you have a lot of people making bad arguments for why UFAI is a danger, smart MIT people might just say, “Hey, those people are wrong; I’m smart enough to program an AGI that does what I want.”
This just doesn’t seem very realistic when you consider all the variables.
Is there reason to believe someone in the field of genetic engineering would make such a mistake?
Because those people do engineer plants to produce pesticides? Bt Potato was the first, approved by the FDA in 1995.
The commercial incentives that exist encourage the development of such products. A customer in a store doesn’t see whether a potato is engineered to have more vitamins. He doesn’t see whether it’s engineered to produce pesticides.
He buys a potato. It’s cheaper to grow potatoes that produce their own pesticides than it is to grow potatoes that don’t.
In the case of potatoes it might be harmless. We don’t eat the green parts of the potato anyway, so why worry if the greens contain additional poison? But you can slip up. Biology is complicated. You could have changed something that also causes the poison to be produced in the edible parts.
It seems like the FUD should just be motivating them to understand the risks even more
It’s not a question of motivation. Politics is the mindkiller. If a topic gets political, people on all sides of the debate get stupid.
This just doesn’t seem very realistic when you consider all the variables.
According to Eliezer, it takes strong math skills to see how an AGI can take over its own utility function and is therefore dangerous. Eliezer made the point that it’s very difficult to explain to people who are invested in their AGI design that it’s dangerous, because that part needs complicated math.
It’s easy to say in the abstract that some AGI might become UFAI, but it’s hard to do the assessment for any individual proposal.
Really, it’s not. Tons of people discuss politics without getting their briefs in a knot about it. It’s only people that consider themselves highly intelligent that get mind-killed by it. The tendency to dismiss your opponent out-of-hand as unintelligent isn’t that common elseways. People, by and large, are willing to seriously debate political issues. “Politics is the mind-killer” is a result of some pretty severe selection bias.
Even ignoring that, you’ve only stated that we should do our best to ensure it does not become a hot political issue. Widespread attention to the idea is still useful; if we can’t get the concept to penetrate the parts of academia where AI is likely to be developed, we’re not yet mitigating the threat. A thousand angry letters demanding that this research “stop at once” or “address the issue of friendliness” aren’t something that is easy to ignore—no matter how bad you think the arguments for uFAI are.
You’re not the only one expressing hesitation at the idea of widespread acceptance of uFAI risk, but unless you can really provide arguments for exactly what negative effects it is very likely to have, some of us are about ready to start a chain mail of our own volition. Your hesitation is understandable, but we need to do something to mitigate the risk here, or the risk just remains unmitigated and all we did was talk about it. People researching AI who’ve argued with Yudkowsky before and failed to be convinced might begrudge that Yudkowsky’s argument has gained widespread attention, but if it pressures them to properly address Yudkowsky’s arguments, then it has legitimately helped.
Really, it’s not. Tons of people discuss politics without getting their briefs in a knot about it. It’s only people that consider themselves highly intelligent that get mind-killed by it. The tendency to dismiss your opponent out-of-hand as unintelligent isn’t that common elseways.
There’s a reason why it’s general advice to not talk about religion, sex and politics. It’s not because the average person does well in discussing politics.
Dismissing your opponent out of hand as unintelligent isn’t the only failure mode of political mindkill. I don’t even think it’s the most important one.
You’re not the only one expressing hesitation at the idea of widespread acceptance of uFAI risk, but unless you can really provide arguments for exactly what negative effects it is very likely to have, some of us are about ready to start a chain mail of our own volition.
How effective do you consider chain letters to be at stopping NSA spying? Do you think they will be more effective at stopping the NSA from developing the AIs that analyse that data?
You’re not the only one expressing hesitation at the idea of widespread acceptance of uFAI risk, but unless you can really provide arguments for exactly what negative effects it is very likely to have, some of us are about ready to start a chain mail of our own volition.
Take two important environmental challenges and look at the first presidency of Obama. One is limiting CO2 emissions. The second is limiting mercury pollution.
The EPA under Obama was very effective at limiting mercury pollution but not at limiting CO2 emissions.
CO2 emissions are a very politically charged issue with a lot of mindkill on both sides, while mercury pollution isn’t. The people who pushed mercury pollution regulation won, not because they wrote a lot of letters.
Your hesitation is understandable, but we need to do something to mitigate the risk here, or the risk just remains unmitigated and all we did was talk about it.
If you want to do something, you can earn to give and give money to MIRI.
People researching AI who’ve argued with Yudkowsky before and failed to be convinced might begrudge that Yudkowsky’s argument has gained widespread attention, but if it pressures them to properly address Yudkowsky’s arguments, then it has legitimately helped.
You don’t get points for pressuring people to address arguments. That doesn’t prevent an UFAI from killing you.
UFAI is an important problem but we probably don’t have to solve it in the next 5 years. We do have some time to do things right.
How effective do you consider chain letters to be at stopping NSA spying? Do you think they will be more effective at stopping the NSA from developing the AIs that analyse that data?
NSA spying isn’t a chain letter topic that is likely to succeed, no. A strong AI chain letter that makes itself sound like it’s just against NSA spying doesn’t seem like an effective approach. The intent of a chain letter about strong AI is that all such projects are a danger. If people come to the conclusion that the NSA is likely to develop an AI while being aware of the danger of uFAI, then they would write letters or seek to start a movement to ensure that any AI the NSA—or any government organization, for that matter—develops would be made friendly to the best of their abilities. The NSA doesn’t need to be mentioned in the uFAI chain mail in order for any NSA AI projects to be forced to comply with friendliness principles.
If you want to do something, you can earn to give and give money to MIRI.
You don’t get points for pressuring people to address arguments. That doesn’t prevent an UFAI from killing you.
It does if the people addressing those arguments learn/accept the danger of unfriendliness in being pressured to do so.
We probably don’t have to solve it in the next 5 years.
Five years may be the time it takes for the chain mail to effectively popularize the issue to the point where the pressure is on to ensure friendliness, whether we solve it decades from then or not. What is your estimate for when uFAI will be created if MIRI’s warning isn’t properly heeded?
If people come to the conclusion that the NSA is likely to develop an AI while being aware of the danger of uFAI, then they would write letters or seek to start a movement to ensure that any AI the NSA—or any government organization, for that matter—develops would be made friendly to the best of their abilities.
I think your idea of a democracy in which letter writing is the way to create political change just doesn’t accurately describe the world in which we are living.
Five years may be the time it takes for the chain mail to effectively popularize the issue to the point where the pressure is on to ensure friendliness, whether we solve it decades from then or not. What is your estimate for when uFAI will be created if MIRI’s warning isn’t properly heeded?
If I remember right, the median LessWrong prediction is that the singularity happens after 2100. It might happen sooner.
I think 30 years is a valid time frame for FAI strategy.
That timeframe is long enough to invest in rationality movement building.
That is not a valid path if MIRI is willfully ignoring valid solutions.
Not taking the time to respond in detail to every suggestion can be a valid strategy, especially for a post that gets voted down to −3. People voted it down, so it’s not ignored.
If MIRI didn’t respond to a highly upvoted solution on LessWrong, then I would agree that’s a cause for concern.
Based on my (subjective and anecdotal, I’ll admit) personal experiences, I think it would be bad. Look at climate change.
Is there something wrong with climate change in the world today? Yes, it’s hotly debated by millions of people, a super-majority of them being entirely unqualified to even have an opinion, but is this a bad thing? Would less public awareness of the issue of climate change have been better? What differences would there be? Would organizations be investing in “green” and alternative energy if not for the publicity surrounding climate change?
It’s easy to look back after the fact and say, “The market handled it!” But the truth is that the publicity and the corresponding opinions of thousands of entrepreneurs are part of that market.
Looking at the two markets:
MIRI’s warning of uFAI is popularized.
MIRI’s warning of uFAI continues in obscurity.
The latter just seems a ton less likely to mitigate uFAI risks than the former.
The failure mode that I’m most concerned about is overreaction followed by a backlash of dismissal. If that happened, the end result would be far worse than obscurity.