Where to start depends heavily on where you are now. Would you consider yourself socially average? Which culture are you from, and what context or situation are you most immediately seeking to optimize? Is this for your occupation? Do you want more friends?
I’m assuming you meant for the comment section to be used to convince you. Not necessarily because you meant it, but because making that assumption means not willfully acting against your wishes on what would normally be a trivial issue that holds no real preference for you. Maybe it would be better to do this over private messages, maybe not. There’s a general ambient utility to just making the argument here, so there shouldn’t be any fault in doing so.
Since this is a real-world issue rather than a simple matter of crunching numbers, what you’re really asking for here isn’t merely to be convinced, but to be happy with whatever decision you make. Ten months’ worth of payment for the relief of not having to pay an entirely useless cost every month, plus whatever more immediate utility will accompany that “extra” $50/month. If $50 doesn’t buy much immediate utility for you, then a compelling argument needs to encompass in-depth discussion of trivial things. It would mean having to know precise information about what you actually value, or at the very least an accurate heuristic for how you feel about trivial decisions. As it stands, you feel the $50/month investment is worth it for a very narrow type of investment: cryonics.
This is simply restating the knowns in a particular format, but it emphasizes what the core argument needs to be here: either that the investment yields even less utility than $50/month can otherwise buy, or that there are clearly superior investments you can make at the same price.
Awareness of just how deeply confirmation bias is entrenched in the brain (despite any tactics you might suspect would uproot it) should readily show that convincing you that there are better investments to make (and therefore to stop making this particular one) is the route most likely to produce payment. Of course, this undermines the nature of the challenge: a reason to not invest at all.
In other words, all AGI researchers are already well aware of this problem and take precautions according to their best understanding?
Is there something wrong with how climate change is handled in the world today? Yes, it’s hotly debated by millions of people, a super-majority of whom are entirely unqualified to even have an opinion, but is this a bad thing? Would less public awareness of the issue of climate change have been better? What differences would there be? Would organizations be investing in “green” and alternative energy if not for the publicity surrounding climate change?
It’s easy to look back after the fact and say, “The market handled it!” But the truth is that the publicity, and the corresponding opinions of thousands of entrepreneurs, are part of that market.
Looking at the two markets:
MIRI’s warning of uFAI is popularized.
MIRI’s warning of uFAI continues in obscurity.
The latter just seems a ton less likely to mitigate uFAI risks than the former.
It could be useful to attach a note along the lines of, “If you didn’t like or agree with the contents of this pamphlet, please tell us why at,” to any given pamphlet.
Personally, I’d find it easier to just look at the contents of the pamphlet with the understanding that 99% of people will ignore it, and see whether a second draft has the same flaws.
That would probably upset many existing Christians. Clearly Jesus’ second coming is in AI form.
How effective do you consider chain letters to be at stopping NSA spying? Do you think they will be more effective at stopping the NSA from developing the AIs that analyse that data?
NSA spying isn’t a chain letter topic that is likely to succeed, no. A strong AI chain letter that makes itself sound like it’s just against NSA spying doesn’t seem like an effective approach. The intent of a chain letter about strong AI is that all such projects are a danger. If people come to the conclusion that the NSA is likely to develop an AI while being aware of the danger of uFAI, then they would write letters or seek to start a movement to ensure that any AI built by the NSA, or any government organization for that matter, is made friendly to the best of their abilities. The NSA doesn’t need to be mentioned in the uFAI chain mail in order for any NSA AI projects to be forced to comply with friendliness principles.
If you want to do something, you can earn to give and give money to MIRI.
That is not a valid path if MIRI is willfully ignoring valid solutions.
You don’t get points for pressuring people to address arguments. That doesn’t prevent a uFAI from killing you.
It does if the people addressing those arguments learn or accept the danger of unfriendliness in the process of being pressured to do so.
We probably don’t have to solve it in the next 5 years.
Five years may be the time it takes for the chain mail to effectively popularize the issue to the point where the pressure is on to ensure friendliness, whether we solve it decades from then or not. What is your estimate for when uFAI will be created if MIRI’s warning isn’t properly heeded?
Politics is the mindkiller.
Really, it’s not. Tons of people discuss politics without getting their briefs in a knot about it. It’s only people who consider themselves highly intelligent that get mind-killed by it. The tendency to dismiss your opponent out of hand as unintelligent isn’t that common elsewhere. People, by and large, are willing to seriously debate political issues. “Politics is the mind-killer” is a result of some pretty severe selection bias.
Even ignoring that, you’ve only stated that we should do our best to ensure it does not become a hot political issue. Widespread attention to the idea is still useful; if we can’t get the concept to penetrate the academic circles where AI is likely to be developed, we’re not yet mitigating the threat. A thousand angry letters demanding that this research “stop at once” or “address the issue of friendliness” isn’t something that is easy to ignore, no matter how bad you think the arguments for uFAI are.
You’re not the only one expressing hesitation at the idea of widespread acceptance of uFAI risk, but unless you can really provide arguments for exactly what negative effects it is very likely to have, some of us are about ready to start a chain mail of our own volition. Your hesitation is understandable, but we need to do something to mitigate the risk here, or the risk just remains unmitigated and all we did was talk about it. People researching AI who’ve argued with Yudkowsky before and failed to be convinced might begrudge that Yudkowsky’s argument has gained widespread attention, but if it pressures them to properly address Yudkowsky’s arguments, then it has legitimately helped.
Letting plants grow their own pesticides for killing off the things that eat them sounds to me like a bad strategy if you want healthy food.
Is there reason to believe someone in the field of genetic engineering would make such a mistake? Shouldn’t someone in the field be more aware of that and other potential dangers, despite the GE FUD they’ve no doubt encountered outside of academia? It seems like the FUD should just be motivating them to understand the risks even more—if for no other reason than simply to correct people’s misconceptions on the issue.
Your reasoning for why the “bad” publicity would have severe (or any notable) repercussions isn’t apparent.
If you have a lot of people making bad arguments for why UFAI is a danger, smart MIT people might just say, “Hey, those people are wrong; I’m smart enough to program an AGI that does what I want.”
This just doesn’t seem very realistic when you consider all the variables.
While not doubting the accuracy of the assertion, why precisely do you believe Kurzweil isn’t taken seriously anymore, and in what specific ways is this a bad thing for him/his goals/the effect it has on society?
Right, but what damage is really being done to GE? Does all the FUD stop the people who go into the science from understanding the dangers? If uFAI is popularized, academia will pretty much be forced to seriously address the issue. Ideally, this is something we’ll only need to do once; after it’s known and taken seriously, the people who work on AI will be under intense pressure to ensure they’re avoiding the dangers here.
Google probably already has an AI (and AI-risk) team internally that they’ve simply had no reason to publicize. If uFAI becomes a widespread worry, you can bet they’d make it known that they were taking their own precautions.
Ask all of MIRI’s donors, all LW readers, HPMOR subscribers, friends and family, etc., to forward that one document to their friends.
There has got to be enough writing by now that an effective chain mail can be written.
ETA: The chain mail suggestion isn’t knocked down in luke’s comment. If it’s not relevant or worthy of acknowledging, please explain why.
ETA2: As annoying as some chain mail might be, it does work because it does get around. It can be a very effective method of spreading an idea.
Is “bad publicity” worse than “good publicity” here? If strong AI became a hot political topic, it would raise awareness considerably. The fiction surrounding strong AI should bias the population towards understanding it as a legitimate threat. Each political party in turn will have their own agenda, trying to attach whatever connotations they want to the issue, but if the public at large started really worrying about uFAI, that’s kind of the goal here.
How specifically? Easy. Because LessWrong is highly dismissive, and because I’ve been heavily signalling that I don’t have any actual arguments or criticisms. I do, obviously, but I’ve been signalling that that’s just a bluff on my part, up to and including this sentence. Nobody’s supposed to read this and think, “You know, he might actually have something that he’s not sharing.” Frankly, I’m surprised that with all the attention this article got, I haven’t been downvoted a hell of a lot more. I’m not sure where I messed up such that LessWrong isn’t hammering me and is actually bothering to ask for specifics, but you’re right; it doesn’t fit the pattern I’ve seen prior to this thread.
I’m not yet sure where the limits of LessWrong’s patience lie, but I’ve come too far to stop trying to figure that out now.
People also wrote unrealistic books about magicians flying through the air and scrying on each other with crystal balls. Yet we have planes and webcams.
Naturally, some of the ideas fiction holds are feasible. In order for your analogy to apply, however, we’d need a comprehensive run-down of how many and which fictional concepts have become feasible to date. I’d love to see some hard analysis across the span of human history. While I believe there is merit in nano-scale technology, I’m not holding my breath for femtoengineering. Nevertheless, if such things were as readily predictable as people seem to think, you have to ask why we don’t have the technology already. The answer is that actually translating our ideas into physical reality is non-trivial, and by direct consequence, potentially non-viable.
The human mind is finite, and there are infinitely many possible concepts.
I need backing on both of these points. As far as I know, there isn’t enough verified neuroscience to determine whether our brains are conceptually limited in any way, primarily because we don’t actually know how abstract mental concepts map onto physical neurons. Setting aside that (contrary to memetic citation) the adult brain does grow new neural cells and repair itself: even if the number of neurons is finite, the number of potential connections between them is astronomical. We simply don’t know the maximum conceptual complexity of the human brain.
As far as there being infinitely many concepts, “flying car” isn’t terribly more complicated than “car” and “flying.” Even if something in the far future is given a name other than “car,” we can still grasp the concept of “transportation device” paired with any number of accessory concepts like “cup holder,” “flies,” “transforms,” “teleports,” and so on. Maybe it’s closer to a “suit” than anything we would currently call a “car”; some sort of “jetpack” or other. I’d need an expansion on “concept” before you could effectively communicate that concept-space is infinite. Countably infinite or uncountably infinite? All the formal math I’m aware of indicates that things like conceptual language are uncomputable, give rise to paradoxes, or run into some other such problem that would make “infinite” simply be inapplicable or nonsensical.
This doesn’t actually counter my argument, for two main reasons:
1. That wasn’t my argument.
2. That doesn’t counter anything.
Please don’t bother replying to me unless you’re going to actually explain something. Anything else is useless, and you know it. I want to know how you justify to yourself that LessWrong is anything but childish. If you’re not willing to explain that, I’m not interested.
What, do you just ignore it?
What, and you just ignore it?
No, I suppose you’ll need a fuller description to see why the similarity is relevant.
LessWrong is sci-fi. Check what’s popular. Superintelligent AI, space travel, suspended animation, hyper-advanced nanotech...
These concepts straight out of sci-fi have next to zero basis. Who is to say there even are concepts that the human mind simply can’t grasp? I can’t visualize in n-dimensional space, but I can certainly understand the concept. Grey goo? Sounds plausible, but then again, there is zero evidence that physics can create anything like stable nanites. How fragile will the molecular bonds be? Are generation ships feasible? Is there some way to warp space to go fast enough that you don’t need an entire ecosystem on board? If complex information processing nanites aren’t feasible, is reanimation? These concepts aren’t new, they’ve been around for ages. It’s Magic 2.0.
If it’s not about evidence, what is it about? I’m not denying any of these possibilities, but aside from being fun ideas, we are nowhere near proving them legitimate. It’s not something people believe in because “it only makes sense.” It’s fantasy at its base, and if it turns out to be halfway possible, great. What if it doesn’t? Is there going to be some point in the future where LessWrong lets go of these childish ideas of simulated worlds and supertechnological abilities? 100 years from now, if we don’t have AI and utility fog, is LessWrong going to give up these ideas? No. Because that just means we’re closer to finally realizing the technology! Grow up already. This stuff isn’t reasonable, it’s just plausible, and our predictions are nothing more than mere predictions. LessWrong believes this stuff because LessWrong wants to believe in this stuff. At this moment in time, it is pure fiction.
If it’s not rationa—No, you’ve stopped following along by now. It’s not enough to point out that the ideas are pure fiction that humanity has dreamed about for ages. I can’t make an argument within the context that it’s irrational because you’ve heard it all before. What, do you just ignore it? Do you have an actual counter-point? Do you just shrug it off because “it’s obvious” and you don’t like the implications?
Seriously. Grow up. If there’s a reason for me to think LessWrong isn’t filled with children who like to believe in Magic 2.0, I’m certainly not seeing it.
That’s true. The process does rely on finding a solution to the worst-case scenario. If you’re going to be crippled by fear or anxiety, it’s probably a very bad practice to emulate.
Christ, is it hard to stop constantly refreshing here and ignore what I know will be a hot thread.
I’ve voted on the article, read a few comments, cast a few votes, and made a few replies myself. I’m precommitting to never returning to this thread and going to bed immediately. If anyone catches me commenting here after the day of this comment, please downvote it.
Damn I hope nobody replies to my comments...
Do you find yourself refusing to yield in the latter case but not the former case? Or is this observation of mutually unrelenting parties purely an external observation?
If there is a bug in your behavior (inconsistencies and double standards), then some introspection should yield potential explanations.