The one “uncredible” claim mentioned—about Eliezer being “hit by a meteorite”—sounds as though it is the kind of thing he might plausibly think. Not too much of a big deal, IMO.
As with many charities, it is easy to think the SIAI might be having a negative effect—simply because it occupies the niche of another organisation that could be doing a better job—but what to do? Things could be worse as well—probably much worse.
I suggested what to do about this problem in my post: withhold funding from SIAI, and make it clear to them why you’re withholding funding from them, and promise to fund them if the issue is satisfactorily resolved to incentivize them to improve.
I suggested what to do about this problem in my post: withhold funding from SIAI.
Right—but that’s only advice for those who are already donating. Others would presumably seek reform or replacement. The decision there seems non-trivial.
Will you do this?
I’m definitely interested in funding an existential risk organization. SIAI would have to be a lot more transparent than it is right now for me to be interested in funding SIAI. For me personally, it wouldn’t be enough for SIAI to just take measures to avoid poisoning the meme; I would need to see a lot more evidence that SIAI is systematically working to maximize its impact on existential risk reduction.
As things stand I prefer to hold out for a better organization. But if SIAI exhibited levels of transparency and accountability similar to those of GiveWell (welcoming and publicly responding to criticism, regularly posting detailed plans of action, seeking out feedback from subject matter specialists and making this public when possible, etc.) I would definitely fund SIAI and advocate that others do so as well.
“transparency”? I thought the point of your post was that SIAI members should refrain from making some of their beliefs easily available to the public?
I see; maybe I should have been clearer. The point of my post is that SIAI members should not express controversial views without substantiating them with abundant evidence. If SIAI provided compelling evidence that Eliezer’s work has higher expected value to humanity than what virtually everybody else is doing, then I would think Eliezer’s comment appropriate.
As things stand SIAI has not provided such evidence. Eliezer himself may have such evidence, but if so he’s either unwilling or unable to share it.
There are a lot of second and higher order effects in PR. You can always shape your public statements for one audience and end up driving away (or failing to convince) another one that’s more important. If Eliezer had shied away from stating some of the more “uncredible” ideas because there wasn’t enough evidence to convince a typical smart person, it would surely prompt questions of “what do you really think about this?” or fail to attract people who are currently interested in SIAI because of those ideas.
If SIAI provided compelling evidence that Eliezer’s work has higher expected value to humanity than what virtually everybody else is doing, then I would think Eliezer’s comment appropriate.
Suppose Eliezer hadn’t made that claim, and somebody asks him, “do you think the work SIAI is doing has higher expected value to humanity than what everybody else is doing?”, which somebody is bound to, given that Eliezer is asking for donations from rationalists. What is he supposed to say? “I can’t give you the answer because I don’t have enough evidence to convince a typical smart person?”
I think you make a good point that it’s important to think about PR, but I’m not at all convinced that the specific pieces of advice you give are the right ones.
Thanks for your feedback. Several remarks:
You can always shape your public statements for one audience and end up driving away (or failing to convince) another one that’s more important.
This is of course true. I myself am fairly certain that SIAI’s public statements are driving away the people who it’s most important to interest in existential risk.
Suppose Eliezer hadn’t made that claim, and somebody asks him, “do you think the work SIAI is doing has higher expected value to humanity than what everybody else is doing?”, which somebody is bound to, given that Eliezer is asking for donations from rationalists. What is he supposed to say? “I can’t give you the answer because I don’t have enough evidence to convince a typical smart person?”
• It’s standard public relations practice to reveal certain information only if asked.
• An organization that has the strongest case for room for more funding need not be an organization that’s doing something of higher expected value to humanity than what everybody else is doing. In particular, I simultaneously believe that there are politicians who have higher expected value to humanity than all existential risk researchers alive and that the cause of existential risk has the greatest room for more funding.
• One need not be confident in one’s belief that funding one’s organization has the highest expected value to humanity to believe that funding one’s organization has the highest expected value to humanity. A major issue that I have with Eliezer’s rhetoric is that he projects what I perceive to be an unreasonably high degree of confidence in his beliefs.
• Another major issue that I have with Eliezer’s rhetoric is that, even putting issues of PR aside, I personally believe that funding SIAI does not have anywhere near the highest expected value to humanity out of all possible uses of money. So from my point of view, I see no upside to Eliezer making extreme claims of the sort that he has—it looks to me as though Eliezer is making false claims and damaging public relations for existential risk as a result.
I will be detailing my reasons for thinking that SIAI’s research does not have high expected value in a future post.
The level of certainty is not up for grabs. You are as confident as you happen to be; this can’t be changed. You can change the appearance, but not your actual level of confidence. And changing the apparent level of confidence is equivalent to lying.
But it isn’t perceived as so by the general public—it seems to me that the usual perception of “confidence” has more to do with status than with probability estimates.
The non-technical people I work with often say that I use “maybe” and “probably” too much (I’m a programmer—“it’ll probably work” is a good description of how often it does work in practice), as if having confidence in one’s statements were a sign of moral fibre, and not a sign of miscalibration.
Actually, making statements with high confidence is a positive trait, but most people address this by increasing the confidence they express, not by increasing their knowledge until they can honestly make high-confidence statements. And our culture doesn’t correct for that, because errors of calibration are not immediately obvious (as they would be if, say, we had a widespread habit of betting on various things).
That a lie is likely to be misinterpreted or not noticed doesn’t make it not a lie, and conversely.
Oh, I fully agree with your point; it’s a pity that high confidence on unusual topics is interpreted as arrogance.
Try this: I prefer my leaders to be confident. I prefer my subordinates to be truthful.
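Returning to the point about betting a couple of comments up: here is a minimal sketch (with made-up probabilities and outcomes) of how routinely scoring stated confidences against what actually happened would expose miscalibration the way a betting habit would.

```python
# Toy illustration (hypothetical numbers): scoring stated confidences against
# outcomes makes overconfidence visible, much as losing bets would.

def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and what happened (0 or 1).
    Lower is better; an overconfident forecaster is punished for the misses."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Ten events, six of which actually happened.
outcomes = [1, 1, 0, 1, 0, 1, 0, 1, 1, 0]

hedger    = [0.7, 0.6, 0.3, 0.7, 0.4, 0.6, 0.3, 0.7, 0.6, 0.4]  # says "probably"
confident = [0.95] * 10                                          # says "definitely"

print("hedger:   ", brier_score(hedger, outcomes))     # ~0.13
print("confident:", brier_score(confident, outcomes))  # ~0.36
```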
higher expected value to humanity than what virtually everybody else is doing,
For what definitions of “value to humanity” and “virtually everybody else”?
If “value to humanity” is assessed as in Bostrom’s Astronomical Waste paper, that hugely favors effects on existential risk vs alleviating current suffering or increasing present welfare (as such, those also have existential risk effects). Most people don’t agree with that view, so asserting that as a privileged frame can be seen as a hostile move (attacking the value systems of others in favor of a value system according to which one’s area of focus is especially important). Think of the anger directed at vegetarians, or those who guilt-trip others about not saving African lives. And of course, it’s easier to do well on a metric that others are mostly not focused on optimizing.
Dispute about what best reduces existential risk, and annoyance at overly confident statements there, is a further issue, but I think that asserting uncommon moral principles (which happen to rank one’s activities as much more valuable than most people would rank them) is a big factor on its own.
In case my previous comment was ambiguous, I should say that I agree with you completely on this point. I’ve been wanting to make a top level post about this general topic for a while. Not sure when I’ll get a chance to do so.
Eliezer himself may have such evidence [that Eliezer’s work has higher expected value to humanity than what virtually everybody else is doing], but if so he’s either unwilling or unable to share it.
Now that is unfair.
Since 1997, Eliezer has published (mostly on mailing lists and blogs but also in monographs) an enormous amount (at least ten novels worth unless I am very mistaken) of writings supporting exactly that point. Of course most of this material is technical, but unlike the vast majority of technical prose, it is accessible to non-specialists and non-initiates with enough intelligence, a solid undergraduate education as a “scientific generalist” and a lot of free time on their hands because in his writings Eliezer is constantly “watching out for” the reader who does not yet know what he knows. (In other words, it is uncommonly good technical exposition.)
So my impression has been that the situation is that
(i) Eliezer’s writings contain a great deal of insightful material.
(ii) These writings do not substantiate the idea [that Eliezer’s work has higher expected value to humanity than what virtually everybody else is doing].
I say this having read perhaps around a thousand pages of what Eliezer has written. I consider the amount of reading that I’ve done to be a good “probabilistic proof” that the points (i) and (ii) apply to the portion of his writings that I haven’t read.
That being said, if there are any particular documents that you would point me to which you feel do provide satisfactory evidence for the idea [that Eliezer’s work has higher expected value to humanity than what virtually everybody else is doing], I would be happy to examine them.
I’m unwilling to read the whole of his opus given how much of it I’ve already read without being convinced. I feel that the time that I put into reducing existential risk can be used to better effect in other ways.
It would help to know what steps in the probabilistic proof don’t have high probability for you.
For example, you might think that the singularity has a good probability of being relatively smooth and some kind of friendly, even without FAI; or you might think that other existential risks may still be a bigger threat; or you may think that Eliezer isn’t putting a dent in the FAI problem.
Or some combination of these and others.
Yes, I agree with you. I plan on making my detailed thoughts on these points explicit. I expect to be able to do so within a month.
But for a short answer, I would say that the situation is mostly that I think that:
This might be a convenient place to collect a variety of reasons why people are FOOM denialists. From my POV:
1. I am skeptical of the claim that safeguards against UFAI (unFriendly AI) will not work. In part because:
2. I doubt that the “takeoff” will be “hard”. Because:
3. I am pretty sure the takeoff will require repeatedly doubling and quadrupling hardware, not just autorewriting software.
4. And hence an effective safeguard would be to simply not give the machine its own credit card!
5. And in any case, the Moore’s law curve for electronics does not arise from delays in thinking up clever ideas, it arises from delays in building machines to incredibly high tolerances.
6. Furthermore, even after the machine has more hardware, it doesn’t yet have higher intelligence until it reads lots more encyclopedias and proves for itself many more theorems. These things take time.
7. And finally, I have yet to see the argument that an FAI protects us from a future UFAI. That is, how does the SIAI help us?
8. Oh, and I do think that the other existential risks, particularly war and economic collapse, put the UFAI risk pretty far down the priority list. Sure, those other risks may not be quite so existential, but if they don’t kill us, they will at least prevent an early singularity.
Edit added two days later: Since writing this, I thought about it some more, shut up for a moment, and did the math. I still think that it is unlikely that the first takeoff will be a hard one; so hard that it gets out of control. But I now estimate something like a 10% chance that the first takeoff will be hard, and I estimate something like a 30% chance that at least one of the first couple dozen takeoffs will be hard. Multiply that by an estimated 10% chance that a hard takeoff will take place without adequate safeguards in place, and another 10% chance that a safeguardless hard takeoff will go rogue, and you get something like a 0.3% chance of a disaster of Forbin Project magnitude. Completely unacceptable.
Originally, I had discounted the chance that a simple software change could cause the takeoff; I assumed you would need to double and redouble the hardware capability. What I failed to notice was that a simple “tuning” change to the (soft) network connectivity parameters—changing the maximum number of inputs per “neuron” from 8 to 7, say—could have an (unexpected) effect on performance of several orders of magnitude simply by suppressing wasteful thrashing or some such thing.
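For reference, here is the estimate in the edit above written out; every probability is the commenter’s own guess rather than an established figure.

```python
# The commenter's own back-of-the-envelope numbers, multiplied out.
p_some_early_takeoff_is_hard = 0.30   # at least one of the first couple dozen is hard
p_no_adequate_safeguards     = 0.10   # a hard takeoff happens without safeguards
p_rogue_given_no_safeguards  = 0.10   # a safeguardless hard takeoff goes rogue

p_disaster = (p_some_early_takeoff_is_hard
              * p_no_adequate_safeguards
              * p_rogue_given_no_safeguards)
print(p_disaster)  # 0.003, i.e. the ~0.3% figure in the comment
```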
I am pretty sure the takeoff will require repeatedly doubling and quadrupling hardware, not just autorewriting software.
Do you think that progress in AI is limited primarily by hardware? If hardware is the limiting factor, then you should think AI soon relatively plausible. If software is the limiting factor (the majority view, and the reason most AI folk reject claims such as those of Moravec), such that we won’t get AI until well beyond the minimum computational requirements, then either early AIs should be able to run fast or with numerous copies cheaply, or there will be a lot of room to reduce bloated hardware demands through software improvements.
Thinking that AI will take a long time (during which hardware will advance mightily towards physical limits) but also be sharply and stably hardware-limited when created is a hard view to defend.
I am imagining that it will work something like the human brain (but not by ‘scan and emulate’). We need to create hardware modules comparable to neurons, we need to have some kind of geometric organization which permits individual hardware modules to establish physical connections to a handful of nearby modules, and we need a ‘program’ (corresponding to human embryonic development) which establishes a few starting connections, and finally we need a training period (like training a neural net, and comparable to what the human brain experiences from the first neural activity in the womb through graduate school) which adds many more physical connections. I’m not sure whether to call these connections hardware or software. Actually, they are a hybrid of both—like PLAs (yeah, I’m way out of date on technology).
So I’m imagining a lot of theoretical work needed to come up with a good ‘neuron’ design (probably several dozen different kinds of neurons), more theoretical work to come up with a good ‘program’ to correspond to the embryonic interconnect, and someone willing to pay for lots and lots of neurons.
So, yeah, I’m thinking that the program will be relatively simple (equivalent to a few million lines of code), but it will take us a long time to find it. Not the 500 million years that it took evolution to come up with that program—apparently 500 million years after it had already invented the neuron. But for human designers, at least a few decades to find and write the program. I hope this explanation helps to make my position seem less weird.
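A toy sketch of the kind of architecture described above: ‘neuron’ modules with only local connectivity, a short developmental ‘program’ that lays down a few starting connections, and a training phase that adds many more. All names and the connection rule here are invented for illustration.

```python
import random

class NeuronModule:
    """Toy stand-in for a hardware 'neuron': it only connects to nearby modules."""
    def __init__(self, index):
        self.index = index
        self.connections = set()

def develop(n_modules, seed_connections, neighbourhood=4):
    """'Embryonic program': create modules and a few geometrically local links."""
    random.seed(0)
    modules = [NeuronModule(i) for i in range(n_modules)]
    for _ in range(seed_connections):
        a = random.randrange(n_modules)
        b = (a + random.randint(1, neighbourhood)) % n_modules  # nearby modules only
        modules[a].connections.add(b)
        modules[b].connections.add(a)
    return modules

def train(modules, experiences, neighbourhood=4):
    """Training phase: each 'experience' adds another local connection."""
    for _ in range(experiences):
        a = random.randrange(len(modules))
        b = (a + random.randint(1, neighbourhood)) % len(modules)
        modules[a].connections.add(b)
        modules[b].connections.add(a)

modules = develop(n_modules=100, seed_connections=50)
train(modules, experiences=2000)
print(sum(len(m.connections) for m in modules) // 2, "connections after training")
```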
4. And hence an effective safeguard would be to simply not give the machine its own credit card!
(Powerful) optimization processes can solve problems by exploiting every possible shortcut, which makes it hard to predict those solutions in advance. There was a recent example of that here: a genetic algorithm found an unexpected solution to a problem by exploiting the analog properties of a particular FPGA chip.
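The FPGA result is the linked example; as a stand-in, here is a minimal toy genetic algorithm (the “loophole” in the fitness function is invented for illustration) showing the general phenomenon: the optimizer converges on whatever the scoring harness actually rewards, not on what the designer had in mind.

```python
import random
random.seed(1)

GENOME_LEN = 16

def fitness(genome):
    """Intended objective: number of 1s in the first 8 bits.
    Unintended loophole (standing in for the FPGA's analog quirks): if the last
    4 bits spell 1111, the harness happens to report a perfect score."""
    if genome[-4:] == [1, 1, 1, 1]:
        return 100
    return sum(genome[:8])

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)   # keep the highest-scoring genomes
    parents = population[:10]
    population = [mutate(random.choice(parents)) for _ in range(50)]

best = max(population, key=fitness)
print(best, fitness(best))  # typically ends in 1111: the loophole, not the intent
```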
3 and 4: hardware, sure—that is improving too—just not as fast, sometimes. A machine may find a way to obtain a credit card—or it will get a human to buy whatever it needs—as happens in companies today.
6: how much time? Surely a better example would be: “perform experiments”—and experiments that can’t be miniaturised and executed at high speeds—such as those done in the LHC.
7: AltaVista didn’t protect us from Google—nor did Friendster protect against MySpace. However, so far Google has mostly successfully crushed its rivals.
8: no way, IMO—e.g. see Matt Ridley. That is probably good advice for all DOOMsters, actually.
Some of the most obvious safeguards are likely to be self-imposed ones:
Can you be more specific than “it’s somewhere beneath an enormous amount of 13 years of material from the very same person whose arguments are scrutinized for evidence”?
This is not sufficient to scare people to the point of having nightmares and to ask them for most of their money.
Do you want me to repeat the links people gave you 24 hours ago?
The person who was scared to the point of having nightmares was almost certainly on a weeks-long or months-long visit to the big house in California where people come to discuss extremely powerful technologies and the far future and to learn from experts on these subjects. That environment would tend to cause a person to take certain ideas more seriously than a person usually would.
Also, are we really discrediting people because they were foolish enough to talk about their deranged sleep-thoughts? I’d sound pretty stupid too if I remembered and advertised every bit of nonsense I experienced while sleeping.
It was more than one person. Anyway, I haven’t read all of the comments yet so I might have missed some specific links. If you are talking about links to articles written by EY himself where he argues about AI going FOOM, I commented on one of them.
Here is an example of the kind of transparency in the form of strict calculations, references and evidence I expect.
As I said, I’m not sure what other links you are talking about. But if you mean the kind of LW posts dealing with antipredictions, I’m not impressed. Predicting superhuman AI to be a possible outcome of AI research is not sufficient. How is that different from claiming the LHC will go FOOM? I’m sure someone like EY would be able to write a thousand posts around such a scenario telling me that the high risk associated with the LHC going FOOM outweighs its low probability. There might be sound arguments to support this conclusion. But it is a conclusion and a framework of arguments based on an assumption that is itself of unknown credibility. So is it too much to ask for some transparent evidence to fortify this basic premise? Evidence that is not somewhere to be found within hundreds of posts not directly concerned with the evidence in question, but rather arguing based on the very assumption it is trying to justify?
Asteroids really are an easier problem: celestial mechanics in vacuum are pretty stable, we have the Moon providing a record of past cratering to calibrate on, etc. There’s still uncertainty about the technology of asteroid deflection (e.g. its potential for military use, or to incite conflict), but overall it’s perhaps the most tractable risk for analysis since the asteroids themselves don’t depend on recent events (save for some smallish anthropic shadow effects).
An analysis for engineered pathogens is harder: we have a lot of uncertainty about the difficulty of engineering various diseases for maximum damage, and about how the technology for detection, treatment and prevention will keep pace. We can make generalizations based on existing diseases and their evolutionary dynamics (selection for lower virulence over time with person-to-person transmission, etc.), current public health measures, the rarity of the relevant motivations, and so on, but you’re still left with many more places where you can’t just plug in well-established numbers and crank forward.
You can still give probability estimates, and plug in well-understood past data where you can, but you can’t get asteroid-level exactitude.
The difference is that we understand both asteroids and particle physics far better than we do intelligence, and there is precedent for both asteroid impacts and high-energy particle collisions (natural ones at far higher energy than in the LHC), while there is none for an engineered human-level intelligence with access to its own source code.
So calculations of the kind you seem to be asking for just aren’t possible at this point (and calculations with exactly that level of evidence won’t be possible right up until it’s too late), while refutations of the kind LHC panic gets aren’t possible either. You should also note that Eliezer takes LHC panic more seriously than most non-innumerate people.
But if you want some calculation anyway: Let’s assume there is a 1% chance of extinction by uFAI within the next 100 years. Let’s also assume that spending $10 million per year (in 2010 dollars, adjusting for inflation) allows us to reduce that risk by 10%, just by the dangers of uFAI being in the public eye and people being somewhat more cautious, and taking the right sort of caution instead of worrying about Skynet or homicidal robots. So $1 billion saves about an expected 1 million lives, a cost of $1000 per life, which is about the level of the most efficient conventional charities. And that’s with Robin’s lowball estimate (which was for a more specific case, not uFAI extinction in general, so even Robin would likely estimate a higher chance in the case considered) and assuming that FAI research won’t succeed.
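Written out, with every input being an assumption taken from the comment above; the lives-at-risk figure is whatever population you decide to count (the comment’s “~1 million expected lives” corresponds to counting on the order of a billion people; using the full world population would make the cost per life lower still).

```python
# All inputs are the assumptions stated in the comment above, not established figures.
p_extinction      = 0.01      # chance of extinction by uFAI within 100 years
risk_reduction    = 0.10      # fraction of that risk removed by the spending
spending_per_year = 10e6      # dollars per year
years             = 100
lives_at_risk     = 1e9       # assumed; the "~1 million expected lives" figure
                              # implies counting roughly this many lives

total_spending = spending_per_year * years                    # $1e9
expected_lives = p_extinction * risk_reduction * lives_at_risk
cost_per_life  = total_spending / expected_lives

print(total_spending, expected_lives, cost_per_life)  # 1e9, 1e6, ~$1000 per life
```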
So calculations of the kind you seem to be asking for just aren’t possible at this point …
I’m asking for whatever calculations should lead people to donate most of their money to the SIAI or get nightmares from stories of distant FAIs. Surely there must be something to outweigh the lack of evidence, or on what basis has anyone decided to take things seriously?
I really don’t want to anger you, but the “let’s assume X” attitude is what I have my problems with here. A 1% chance of extinction by uFAI? I just don’t see this, sorry. I can’t pull this out of my hat and make myself believe it either. I’m not saying this is wrong, but I ask why there isn’t a detailed synopsis of this kind of estimation available? I think this is crucial.
You became aware of a possible danger. You didn’t think it up at random, so you can’t use the heuristic that most complex hypotheses generated at random are wrong. There is no observational evidence, but the hypothesis doesn’t predict any observational evidence yet, so lack of evidence is no evidence against it (unlike, e.g., the way the lack of observations counts against the danger of vampires). The best arguments for and against are about equally good (at least there are no order-of-magnitude differences). There seems to be a way to do something against the danger, but only before it manifests, that is, before there can be any observational evidence either way. What do you do? Just assume that the danger is zero because that’s the default? Even though there is no particular reason to assume that’s a good heuristic in this particular case? (Or do you think there are good reasons in this case? You mentioned the thought that it might be a scam, but it’s not like Eliezer invented the concept of hostile AIs.)
The Bayesian way to deal with it would be to just use your prior (+ whatever evidence the arguments encountered provide, but the result probably mostly depends on your priors in this case). So this is a case where it’s OK to “just make numbers up”. It’s just that you should make them up yourself, or rather base them on what you actually believe (if you can’t have experts you trust assess the issue and supply you with their priors). No one else can tell you what your priors are. The alternative to “just assuming” is “just assuming” zero, or one, or similar (or arbitrarily decide that everything that predicts observations that would be only 5% likely if it was false is true and everything without such observations is false, regardless of how many observations were actually made), purely based on context and how the questions are posed.
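A minimal sketch of the procedure being described, with placeholder numbers you would have to replace with your own prior and your own assessment of the arguments.

```python
# Placeholder numbers: the whole point of the comment is that these are yours to supply.
prior = 0.01                 # your prior probability that the danger is real
likelihood_ratio = 2.0       # how much more likely the arguments you've seen are
                             # if the danger is real than if it isn't

posterior_odds = (prior / (1 - prior)) * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

cost_of_acting = 1.0         # in whatever units you care about
loss_if_real   = 1000.0      # loss if the danger is real and nothing was done

act = posterior * loss_if_real > cost_of_acting
print(round(posterior, 4), act)   # ~0.0198, True under these made-up numbers
```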
This is the kind of summary of a decision procedure I have been complaining about as missing, or hidden within enormous amounts of content. I wish someone with enough skill could write a top-level post about it demanding that the SIAI create an introductory paper exemplifying how to reach the conclusions that (1) the risks are to be taken seriously and (2) you should donate to the SIAI to reduce the risks. There could either be a few papers for different people with different backgrounds or one with different levels of detail. It should feature detailed references to what knowledge is necessary to understand the paper itself. Further, it should feature the formulas, variables and decision procedures you have to follow to estimate the risks posed by unfriendly AI and the incentive to alleviate them. It should also include references to further information from people not associated with the SIAI.
This would allow for the transparency that is required by claims of this magnitude and calls for action, including donations.
I wonder why it took so long until you came along posting this comment.
You didn’t succeed in communicating your problem, otherwise someone else would have explained earlier. I had been reading your posts on the issue and didn’t have even the tiniest hint of an idea that the piece you were missing was an explanation of Bayesian reasoning until just before writing that comment, and even then I was less optimistic about the comment doing anything for you than I had been for earlier comments. I’m still puzzled and unsure whether it actually was Bayesian reasoning or something else in the comment that apparently helped you. If it was, you should read http://yudkowsky.net/rational/bayes and some of the posts here tagged “bayesian”.
I wonder why it took so long until you came along posting this comment.
Because thinking is work, and it’s not always obvious what question needs to be answered.
More generally (and this is something I’m still working on grasping fully), what’s obvious to you is not necessarily obvious to other people, even if you think you have enough in common with them that it’s hard to believe that they could have missed it.
I wouldn’t have said so even a week ago, but I’m now inclined to think that your short attention span is an asset to LW.
Just as Eliezer has said (can someone remember the link?) that science as conventionally set up is too leisurely (not enough thought put into coming up with good hypotheses), LW is set up on the assumption that people have a lot of time to put into the sequences and the ability to remember what’s in them.
arbitrarily decide that everything that predicts observations that would be only 5% likely if it was false is true and everything without such observations is false, regardless of how many observations were actually made
This was hard to parse. I would have named “p-value” directly. My understanding is that a stated “p-value” will indeed depend on the number of observations, and that in practice meta-analyses pool the observations from many experiments. I agree that we should not use a hard p-value cutoff for publishing experimental results.
I should have said “a set of observations” and “sets of observations”. I meant things like this: if you and other groups test lots of slightly different bogus hypotheses, 5% of them will be “confirmed” with statistically significant relations.
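A quick simulation of that point: if many groups each test a bogus (truly null) hypothesis at the p < 0.05 level, about one in twenty gets “confirmed” by chance.

```python
import random, math
random.seed(0)

def fake_experiment(n=100):
    """Two groups drawn from the SAME distribution, i.e. a bogus hypothesis."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(a)/n - sum(b)/n) / math.sqrt(2.0/n)   # known-variance z statistic
    return abs(z) > 1.96                            # "significant at p < 0.05"

trials = 2000
false_positives = sum(fake_experiment() for _ in range(trials))
print(false_positives / trials)   # ~0.05: one in twenty null hypotheses "confirmed"
```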
Got it, and agreed. This is one of the most pernicious forms of dishonesty by professional researchers (lying about how many hypotheses were generated), and is far more common than merely faking everything.
1% chance of extinction by uFAI? I just don’t see this, sorry. I can’t pull this out of my hat and make myself believe it either. I’m not saying this is wrong, but I ask why there isn’t a detailed synopsis of this kind of estimation available? I think this is crucial.
Have you yet bothered to read e.g. this synopsis of SIAI’s position:
“Many AIs will converge toward being optimizing systems, in the sense that, after self-modification, they will act to maximize some goal. For instance, AIs developed under evolutionary pressures would be selected for values that maximized reproductive fitness, and would prefer to allocate resources to reproduction rather than supporting humans. Such unsafe AIs might actively mimic safe benevolence until they became powerful, since being destroyed would prevent them from working toward their goals. Thus, a broad range of AI designs may initially appear safe, but if developed to the point of a Singularity could cause human extinction in the course of optimizing the Earth for their goals.”
Personally, I think that presents a very weak case for there being risk. It argues that there could be risk if we built these machines wrong, and the bad machines became powerful somehow. That is true—but the reader is inclined to respond “so what”. A dam can be dangerous if you build it wrong too. Such observations don’t say very much about the actual risk.
I am very sceptical about that being true for those alive now:
We have been looking for things that might hit us for a long while now—and we can see much more clearly what the chances are for that period than by looking at the historical record. Also, that is apparently assuming no mitigation attempts—which also seems totally unrealistic.
...gives 700 deaths/year for aircraft—and 1,400 deaths/year for 2km impacts—based on the assumption that one quarter of the human population would perish in such an impact.
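Reading that figure backwards (the world-population number below is my assumption; the one-quarter fatality fraction is the one stated above), it corresponds to roughly one 2km impact per million years.

```python
# Back out the implied impact frequency from the expected-fatalities figure above.
deaths_per_year   = 1400
world_population  = 6.5e9                   # assumed; the source's exact figure may differ
deaths_per_impact = world_population / 4    # "one quarter of the human population"

impacts_per_year = deaths_per_year / deaths_per_impact
print(1 / impacts_per_year)                 # ~1.2 million years between 2 km impacts
```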
I’m planning to fund FHI rather than SIAI when I have a stable income (although my preference is for a different organisation that doesn’t exist).
My position is roughly this.
The nature of intelligence (and its capability for FOOMing) is poorly understood.
The correct actions to take depend upon the nature of intelligence.
As such I would prefer to fund an institute that questioned the nature of intelligence, rather than one that has made up its mind that a singularity is the way forward. And it is not just the name that makes me think that SIAI has settled upon this view.
And because the nature of intelligence is the largest wild card in the future of humanity, I would prefer FHI to concentrate on that. Rather than longevity etc.
When I read good popular science books, the people in them tend to come up with some idea. Then they test the idea to destruction, poking and prodding at it until it really can’t be anything but what they say it is.
I want to get the same feeling from the group studying intelligence as I do from that type of research. They don’t need to be running foomable AIs, but truth is entangled, so they should be able to figure out the nature of intelligence from other facets of the world, including physics and the biological examples.
Questions I hope they would be asking:
Is the g factor related to the ability to absorb cultural information? I.e., is the increased ability of people with high g to solve problems due to their being able to get more information about solving problems from cultural information sources?
If it isn’t, then it would be further evidence for something special in one intelligence over another, and it might make sense to call one more intelligent, rather than just having different initial skill sets.
If SIAI had the ethos I’d like, we’d be going over and kicking every one of the supporting arguments for the likelihood of fooming and the nature of intelligence to make sure they were sound. Performing experiments where necessary. However people have forgotten them and moved on to decision theory and the like.
Interesting points. Speaking only for myself, it doesn’t feel as though most of my problem solving or idea generating approaches were picked up from the culture, but I could be kidding myself.
For a different angle, here’s an old theory of Michael Vassar’s—I don’t know whether he still holds it. Talent consists of happening to have a reward system which happens to make doing the right thing feel good.
Talent consists of happening to have a reward system which happens to make doing the right thing feel good.
Definitely not just that. Knowing what the right thing is, and being able to do it before it’s too late, are also required. And talent implies a greater innate capacity for learning to do so. (I’m sure he meant in prospect, not retrospect).
It’s fair to say that some of what we identify as “talent” in people is actually in their motivations as well as their talent-requisite abilities.
If SIAI had the ethos I’d like, we’d be going over and kicking every one of the supporting arguments for the likelihood of fooming and the nature of intelligence to make sure they were sound.
And then, hypothetically, if they found that fooming is not likely at all, and that dangerous fooming can be rendered nearly impossible by some easily enforced precautions/regulations, what then? If they found that the SIAI has no particular unique expertise to contribute to the development of FAI? An organization with an ethos you would like: what would it do then? To make it a bit more interesting, suppose they find themselves sitting on a substantial endowment when they reason their way to their own obsolescence?
How often in human history have organizations announced, “Mission accomplished—now we will release our employees to go out and do something else”?
It doesn’t seem likely. The paranoid can usually find something scary to worry about. If something turns out to be not really-frightening, fear mongers can just go on to the next-most frightening thing in line. People have been concerned about losing their jobs to machines for over a century now. Machines are a big and scary enough domain to keep generating fear for a long time.
I think that what SIAI works on is real and urgent, but if I’m wrong and what you describe here does come to pass, the world gets yet another organisation campaigning about something no-one sane should care about. It doesn’t seem like a disastrous outcome.
From a less cynical angle, building organizations is hard. If an organization has fulfilled its purpose, or that purpose turns out to be a mistake, it isn’t awful to look for something useful for the organization to do rather than dissolving it.
The American charity organization The March of Dimes was originally created to combat polio. Now they are involved with birth defects and other infant health issues.
Since they are the one case I know of (other than ad hoc disaster relief efforts) in which an organized charity accomplished its mission, I don’t begrudge them a few additional decades of corporate existence.
The point of my post is not that there’s a problem of SIAI staff making claims that you find uncredible, the point of my post is that there’s a problem of SIAI making claims that people who are not already sold on taking existential risk seriously find uncredible.
Can you give a few more examples of claims made by SIAI staff that people find uncredible? Because it’s probably not entirely clear to them (or to others interested in existential risk advocacy) what kind of things a typical smart person would find uncredible.
Looking at your previous comments, I see that another example you gave was that AGI will be developed within the next century. Any other examples?
Is accepting multi-universes important to the SIAI argument? There are a very, very large number of smart people who know very little about physics. They give lip service to quantum theory and relativity because of authority—but they do not understand them. Mentioning multi-universes just slams a door in their minds. If it is important then you will have to continue referring to it but if it is not then it would be better not to sound like you have science fiction type ideas.
Is accepting multi-universes important to the SIAI argument?
Definitely not, for the purposes of public relations at least. It may make some difference when actually doing AI work.
If it is important then you will have to continue referring to it but if it is not then it would be better not to sound like you have science fiction type ideas.
Good point. Cryonics probably comes with a worse Sci. Fi. vibe but is unfortunately less avoidable.
Cryonics probably comes with a worse Sci. Fi. vibe
This is a large part of what I implicitly had in mind making my cryonics post (which I guess really rubbed you the wrong way). You might be interested in taking a look at the updated version if you haven’t already done so—I hope it’s more clear than it was before.
AI will be developed by a small team (at this time) in secret
That formal theory involving infinite/near infinite computing power has anything to do with AI and computing in the real world. It might be vaguely useful for looking at computing in the limit (e.g. Galaxy sized computers), but otherwise it is credibility stretching.
AI will be developed by a small team (at this time) in secret
I find this very unlikely as well, but Anna Salamon once put it as something like “9 Fields-Medalist types plus (an eventual) methodological revolution”, which made me raise my probability estimate from “negligible” to “very small”, which I think, given the potential payoffs, is enough for someone to be exploring the possibility seriously.
I have a suspicion that Eliezer isn’t privately as confident about this as he appears, and his apparent confidence is itself a PR strategy.
That formal theory involving infinite/near infinite computing power has anything to do with AI and computing in the real world.
Turing’s theories involving infinite computing power contributed to building actual computers, right? I don’t see why such theories wouldn’t be useful stepping stones for building AIs as well. There’s a lot of work on making AIXI practical, for example (which may be disastrous if they succeeded since AIXI wasn’t designed to be Friendly).
If this is really something that a typical smart person finds hard to believe at first, it seems like it would be relatively easy to convince them otherwise.
I have a suspicion that Eliezer isn’t privately as confident about this as he appears, and his apparent confidence is itself a PR strategy.
The impression I have lingering from SL4 days is that he thinks it the only way to do AI safely.
Turing’s theories involving infinite computing power contributed to building actual computers, right? I don’t see why such theories wouldn’t be useful stepping stones for building AIs as well.
They only generally had infinite memory, rather than infinite processing power. The trouble with infinite processing power is that it doesn’t encourage you to ask what hypotheses should be processed. You just sweep that issue under the carpet and do them all.
I don’t see this as being much of an issue for getting usable AI working: it may be an issue if we demand perfect modeling of reality from a system, but there is no reason to suppose we have that.
As I see it, we can set up a probabilistic model of reality and extend this model in an exploratory way. We would continually measure the relevance of features of the model—how much effect they have on predicted values that are of interest—and we would tend to keep those parts of the model that have high relevance. If we “grow” the model out from the existing model that is known to have high relevance, we should expect it to be more likely that we will encounter further, high-relevance “regions”.
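A minimal sketch of the relevance-measuring half of that idea (the data, the correlation-based relevance measure and the threshold are all placeholders): features of a toy model are kept or dropped according to how much they matter for the predicted quantity. The exploratory “growing” step would just repeat this over newly proposed features near the ones already kept.

```python
import random, math
random.seed(0)

def correlation(xs, ys):
    """Pearson correlation, used here as a crude 'relevance' measure."""
    n = len(xs)
    mx, my = sum(xs)/n, sum(ys)/n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy world: ten candidate features of the model, only two of which actually matter.
n_samples, n_features = 500, 10
X = [[random.gauss(0, 1) for _ in range(n_features)] for _ in range(n_samples)]
y = [row[0] + 0.8 * row[3] + random.gauss(0, 1) for row in X]

# "Grow" the model: keep the features whose measured relevance to the
# predicted quantity is high, drop the rest.
kept = []
for j in range(n_features):
    relevance = abs(correlation([row[j] for row in X], y))
    if relevance > 0.2:            # arbitrary relevance threshold
        kept.append((j, round(relevance, 2)))

print(kept)   # expected: features 0 and 3 stand out
```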
That formal theory involving infinite/near infinite computing power has anything to do with AI and computing in the real world.
I’m not sure why that stretches your credibility. Note, for example, that computability results often tell us not to try something. Thus, for example, the Turing Halting Theorem and related results mean that we know we can’t make a program that will in general tell whether any arbitrary program will crash.
Similarly, theorems about the asymptotic ability of certain algorithms matter. A strong version of P != NP would have direct implications for AIs trying to go FOOM. Similarly, if trapdoor functions or one-way functions exist, they give us possible security procedures for handling young general AIs.
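For what it’s worth, the halting-theorem point can be sketched in a few lines: any claimed general “does this program halt?” checker can be fed a program built to do the opposite of whatever the checker predicts. The decider `halts` below is hypothetical; the point is precisely that no implementation of it can exist.

```python
# Sketch of why a general "will this program halt/crash?" checker can't exist.

def troublemaker(halts):
    """Given a claimed halting-decider halts(program, input), build a program
    that halts exactly when the decider says it doesn't."""
    def paradox():
        if halts(paradox, None):
            while True:      # decider said "halts", so loop forever
                pass
        return               # decider said "loops", so halt immediately
    return paradox

# Whatever `halts` is, it must be wrong about troublemaker(halts),
# which is the contradiction behind the Turing Halting Theorem.
```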
I’m mainly talking about Solomonoff induction here. Especially when Eliezer uses it as part of his argument about what we can expect from Super Intelligences. Or searching through 3^^^3 proofs without blinking an eye.
The point in the linked post doesn’t deal substantially with the limits of arbitrarily large computers. It is just an intuition pump for the idea that a fast moderately bright intelligence could be dangerous.
Is it a good intuition pump? To me it is like using a TM as an intuition pump about how much memory we might have in the future.
We will never have anywhere near infinite memory. We will have a lot more than what we have at the moment, but the concept of the TM is not useful in gauging the scope and magnitude.
I’m trying to find the other post that annoyed me in this fashion. Something to do with simulating universes.
Good question. I’ll get back to you on this when I get a chance, I should do a little bit of research on the topic first. The two examples that you’ve seen are the main ones that I have in mind that have been stated in public, but there may be others that I’m forgetting.
There are some other examples that I have in mind from my private correspondence with Michael Vassar. He’s made some claims which I personally do not find at all credible. (I don’t want to repeat these without his explicit permission.) I’m sold on the cause of existential risk reduction, so the issue in my top level post does not apply here. But in the course of the correspondence I got the impression that he may say similar things in private to other people who are not sold on the cause of existential risk.
I second that question. I am sure there probably are other examples but they for most part wouldn’t occur to me. The main examples that spring to mind are from cases where Robin has disagreed with Eliezer… but that is hardly a huge step away from SIAI mainline!
And if I was to spread the full context of the above and tell anyone outside of the hard core about it, do you seriously think that they would think these kind of reactions are credible?
Amusing anecdote: There was a story about this issue on Slashdot one time, where someone possessing kiddy porn had obscured the faces by doing a swirl distortion, but investigators were able to sufficiently reverse this by doing an opposite swirl and so were able to identify the victims.
Then someone posted a comment to say that if you ever want to avoid this problem, you need to do something like a Gaussian blur, which deletes the information contained in that portion of the image.
Somebody replied to that comment and said, “Yeah. Or, you know, you could just not molest children.”
Please stop doing this. You are adding spaced repetition to something that I, and others, positively do not want to think about. That is a real harm and you do not appear to have taken it seriously.
I’m sorry, but people like Wei force me to do this, as they make this whole movement look completely down-to-earth, when in fact most people, if they knew about the full complexity of beliefs within this community, would laugh out loud.
You have a good point. It would be completely unreasonable to ban topics in such a manner while simultaneously expecting to maintain an image of being down to earth or particularly credible to intelligent external observers. It also doesn’t reflect well on the SIAI if their authorities claim they cannot consider relevant risks due to psychological or psychiatric difficulties. That is incredibly bad PR. It is exactly the kind of problem this post discusses.
Since the success of an organization is partly dependent on its PR, a rational donor should be skeptical of donating to an organization with bad PR. Any organization soliciting donations should keep this principle in mind.
Since the success of an organization is partly dependent on its PR, a rational donor should be skeptical of donating to an organization with bad PR.
So let me see if I understand: if an organization uses its income to make a major scientific breakthrough or to prevent a million people from starving, but does not pay enough attention to avoiding bad PR with the result that the organization ends (but the productive employees take the skills they have accumulated there to other organizations), that is a bad organization, but if an organization in the manner of most non-profits focuses on staying in existence as long as possible to provide a secure personal income for its leaders, which entails paying close attention to PR, that is a good organization?
Well, let us take a concrete example: Doug Engelbart’s lab at SRI International. Doug wasted too much time mentoring the young researchers in his lab with the result that he did not pay enough attention to PR and his lab was forced to close. Most of the young researchers got jobs at Xerox PARC and continued to develop Engelbart’s vision of networked personal computers with graphical user interfaces, work that directly and incontrovertibly inspired the Macintosh computer. But let’s not focus on that. Let’s focus on the fact that Engelbart is a failure because he no longer runs an organization because the organization failed because Engelbart did not pay enough attention to PR and to the other factors needed to ensure the perpetuation of the organization.
I still have a hard time believing it actually happened. I have heard that there’s no such thing as bad publicity—but surely nobody would pull this kind of stunt deliberately. It just seems to be such an obviously bad thing to do.
The “laugh test” is not rational. I think that, if the majority of people fully understood the context of such statements, they would not consider them funny.
The topic was the banned topic and the deleted posts—not the laugh test. If you explained what happened to an outsider—they would have a hard time believing the story—since the explanation sounds so totally crazy and ridiculous.
I’ll try to test that, but keep in mind that my standards for “fully understanding” something are pretty high. I would have to explain FAI theory, AI-FOOM, CEV, what SIAI was, etc.
After all, everyone likes to tell the tale of the forbidden topic and the apprentice being insulted. You are spreading the story around now—increasing the mystery and intrigue of these mythical events about which (almost!) all records have been deleted. The material was left in public for a long time—creating plenty of opportunities for it to “accidentally” leak out.
By allowing partly obfuscated forbidden materials to emerge, you may be contributing to the community folklore, spreading and perpetuating the intrigue.
The trauma caused by imagining torture blackmail is hard to relate to for most people (including me), because it’s so easy to not take an idea like infinite torture blackmail seriously, on the grounds that the likelihood of ever actually encountering such a scenario seems vanishingly small.
I guess those who are disturbed by the idea have excellent imaginations, or more likely, emotional systems that can be fooled into trying to evaluate the idea of infinite torture (“hell”).
Therefore, I agree that it’s possible to make fun of people on this basis. I myself lean more toward accommodation. Sure, I think those hurt by it should have just avoided the discussion, but perhaps having EY speak for them and officially ban something gave them some catharsis. I feel like I’m beginning to make fun now, so I’ll stop.
You don’t seem to realize that claims like the ones in the post in question are a common sort of claim that makes people vulnerable to neuroses develop further problems. Regardless of whether or not the claims are at all reasonable, repeatedly referencing them this way is likely to cause further psychological harm. Please stop.
You don’t seem to realize that claims like the ones in the post in question are a common sort of claim that makes people vulnerable to neuroses develop further problems. Regardless of whether or not the claims are at all reasonable, repeatedly referencing them this way is likely to cause further psychological harm.
However, it seems that in general, the mere fact that certain statements may cause psychological harm to some people is not considered a sufficient ground for banning or even just discouraging such statements here. For example, I am sure that many religious people would find certain views often expressed here shocking and deeply disturbing, and I have no doubt that many of them could be driven into serious psychological crises by exposure to such arguments, especially if they’re stated so clearly and poignantly that they’re difficult to brush off or rationalize away. Or, to take another example, it’s very hard to scare me with hypotheticals, but the post “The Strangest Thing An AI Could Tell You” and the subsequent thread came pretty close; I’m sure that at least a few readers of this blog didn’t sleep well if they happened to read that right before bedtime.
So, what exact sorts of potential psychological harm constitute sufficient grounds for proclaiming a topic undesirable? Is there some official policy about this that I’ve failed to acquaint myself with?
Neither do I, and I’ve thought a lot about religious extremism and other scary views that turn into reality when given to someone in a sufficiently horrible mental state.
The one “uncredible” claim mentioned—about Eliezer being “hit by a meteorite”—sounds as though it is the kind of thing he might plausibly think. Not too much of a big deal, IMO.
As with many charities, it is easy to think the SIAI might be having a negative effect—simply because it occupies the niche of another organisation that could be doing a better job—but what to do? Things could be worse as well—probably much worse.
I suggested what to do about this problem in my post: withhold funding from SIAI, and make it clear to them why you’re withholding funding from them, and promise to fund them if the issue is satisfactorily resolved to incentivize them to improve.
Right—but that’s only advice for those who are already donating. Others would presumably seek reform or replacement. The decision there seems non-trivial.
Will you do this?
I’m definitely interested in funding an existential risk organization. SIAI would have to be a lot more transparent than it is now right now for me to be interested in funding SIAI. For me personally, it wouldn’t be enough for SIAI to just take measures to avoid poisoning the meme, I would need to see a lot more evidence that SIAI is systematically working to maximize its impact on existential risk reduction.
As things stand I prefer to hold out for a better organization. But if SIAI exhibited transparency and accountability of levels similar to those of GiveWell (welcoming and publically responding to criticism regularly, regularly posting detailed plans of action, seeking out feedback from subject matter specialists and making this public when possible, etc.) I would definitely fund SIAI and advocate that others do so as well.
“transparency”? I thought the point of your post was that SIAI members should refrain from making some of their beliefs easily available to the public?
I see, maybe I should have been more clear. The point of my post is that SIAI members should not express controversial views without substantiating them with abundant evidence. If SIAI provided compelling evidence that Eliezer’s work has higher expected value to humanity than what virtually everybody else is doing, then I would think Eliezer’s comment appropriate.
As things stand SIAI has not provided such evidence. Eliezer himself may have such evidence, but if so he’s either unwilling or unable to share it.
There are a lot of second and higher order effects in PR. You can always shape your public statements for one audience and end up driving away (or failing to convince) another one that’s more important. If Eliezer had shied away from stating some of the more “uncredible” ideas because there wasn’t enough evidence to convince a typical smart person, it would surely prompt questions of “what do you really think about this?” or fail to attract people who are currently interested in SIAI because of those ideas.
Suppose Eliezer hadn’t made that claim, and somebody asks him, “do you think the work SIAI is doing has higher expected value to humanity than what everybody else is doing?”, which somebody is bound to, given that Eliezer is asking for donations from rationalists. What is he supposed to say? “I can’t give you the answer because I don’t have enough evidence to convince a typical smart person?”
I think you make a good point that it’s important to think about PR, but I’m not at all convinced that the specific advice you give are the right ones.
Thanks for your feedback. Several remarks:
This is of course true. I myself am fairly certain that SIAI’s public statements are driving away the people who it’s most important to interest in existential risk.
•It’s standard public relations practice to reveal certain information only if asked.
•An organization that has the strongest case for room for more funding need not be an organization that’s doing something of higher expected value to humanity than what everybody else is doing. In particular, I simultaneously believe that there are politicians who have higher expected value to humanity than all existential risk researchers alive and that the cause of existential risk has the greatest room for more funding.
•One need not be confident in one’s belief that funding one’s organization has highest expected value to humanity to believe that funding one’s organization has highest expected to humanity. A major issue that I have with Eliezer’s rhetoric is that he projects what I perceive to be an unreasonably high degree of confidence in his beliefs.
•Another major issue with Eliezer’s rhetoric that I have is that even putting issues of PR aside, I personally believe that funding SIAI does not have anywhere near the highest expected value to humanity out of all possible uses of money. So from my point of view, I see no upside to Eliezer making extreme claims of the sort that he has—it looks to me as though Eliezer is making false claims and damaging public relations for existential risk as a result.
I will be detailing my reasons for thinking that SIAI’s research does not have high expected value in a future post.
The level of certainty is not up for grabs. You are as confident as you happen to be, this can’t be changed. You can change the appearance, but not your actual level of confidence. And changing the apparent level of confidence is equivalent to lying.
But it isn’t perceived as so by the general public—it seems to me that the usual perception of “confidence” has more to do with status than with probability estimates.
The non-technical people I work with often say that I use “maybe” and “probably” too much (I’m a programmer—“it’ll probably work” is a good description of how often it does work in practice) - as if having confidence in one’s statements was a sign of moral fibre, and not a sign of miscalibration.
Actually, making statements with high confidence is a positive trait, but most people address this by increasing the confidence they express, not by increasing their knowledge until they can honestly make high-confidence statements. And our culture doesn’t correct for that, because errors of calibration are not immediatly obvious (as they would be if, say, we had a widespread habit of betting on various things).
That a lie is likely to be misinterpreted or not noticed doesn’t make it not a lie, and conversely.
Oh, I fully agree with your point; it’s a pity that high confidence on unusual topics is interpreted as arrogance.
Try this: I prefer my leaders to be confident. I prefer my subordinates to be truthful.
For what definitions of “value to humanity” and “virtually everybody else”?
If “value to humanity” is assessed as in Bostrom’s Astronomical Waste paper, that hugely favors effects on existential risk vs alleviating current suffering or increasing present welfare (as such, those also have existential risk effects). Most people don’t agree with that view, so asserting that as a privileged frame can be seen as a hostile move (attacking the value systems of others in favor of a value system according to which one’s area of focus is especially important). Think of the anger directed at vegetarians, or those who guilt-trip others about not saving African lives. And of course, it’s easier to do well on a metric that others are mostly not focused on optimizing.
Dispute about what best reduces existential risk, and annoyance at overly confident statements there, is a further issue, but I think that asserting uncommon moral principles (which happen to rank one’s activities as much more valuable than most people would rank them) is a big factor on its own.
In case my previous comment was ambiguous, I should say that I agree with you completely on this point. I’ve been wanting to make a top level post about this general topic for a while. Not sure when I’ll get a chance to do so.
Now that is unfair.
Since 1997, Eliezer has published (mostly on mailing lists and blogs, but also in monographs) an enormous amount of writing (at least ten novels’ worth, unless I am very mistaken) supporting exactly that point. Of course most of this material is technical, but unlike the vast majority of technical prose, it is accessible to non-specialists and non-initiates with enough intelligence, a solid undergraduate education as a “scientific generalist”, and a lot of free time on their hands, because in his writings Eliezer is constantly “watching out for” the reader who does not yet know what he knows. (In other words, it is uncommonly good technical exposition.)
So my impression has been that the situation is that
(i) Eliezer’s writings contain a great deal of insightful material.
(ii) These writings do not substantiate the idea that [that Eliezer’s work has higher expected value to humanity than what virtually everybody else is doing].
I say this having read perhaps around a thousand pages of what Eliezer has written. I consider the amount of reading that I’ve done to be a good “probabilistic proof” that the points (i) and (ii) apply to the portion of his writings that I haven’t read.
That being said, if there are any particular documents that you would point me to which you feel do provide a satisfactory evidence for the idea [that Eliezer’s work has higher expected value to humanity than what virtually everybody else is doing], I would be happy to examine them.
I’m unwilling to read the whole of his opus given how much of it I’ve already read without being convinced. I feel that the time that I put into reducing existential risk can be used to better effect in other ways.
It would help to know what steps in the probabilistic proof don’t have high probability for you.
For example, you might think that the singularity has a good probability of being relatively smooth and some kind of friendly, even without FAI. Or you might think that other existential risks may still be a bigger threat, or you may think that Eliezer isn’t putting a dent in the FAI problem.
Or some combination of these and others.
Yes, I agree with you. I plan on making my detailed thoughts on these points explicit. I expect to be able to do so within a month.
But for a short answer, I would say that the situation is mostly that I think that:
This might be a convenient place to collect a variety of reasons why people are FOOM denialists. From my POV:
I am skeptical that safeguards against UFAI (unFAI) will not work. In part because:
I doubt that the “takeoff” will be “hard”. Because:
I am pretty sure the takeoff will require repeatedly doubling and quadrupling hardware, not just autorewriting software.
And hence an effective safeguard would be to simply not give the machine its own credit card!
And in any case, the Moore’s law curve for electronics does not arise from delays in thinking up clever ideas, it arises from delays in building machines to incredibly high tolerances.
Furthermore, even after the machine has more hardware, it doesn’t yet have higher intelligence until it reads lots more encyclopedias and proves for itself many more theorems. These things take time.
And finally, I have yet to see the argument that an FAI protects us from a future UFAI. That is, how does the SIAI help us?
Oh, and I do think that the other existential risks, particularly war and economic collapse, put the UFAI risk pretty far down the priority list. Sure, those other risks may not be quite so existential, but if they don’t kill us, they will at least prevent an early singularity.
Edit added two days later: Since writing this, I thought about it some more, shut up for a moment, and did the math. I still think that it is unlikely that the first takeoff will be a hard one; so hard that it gets out of control. But I now estimate something like a 10% chance that the first takeoff will be hard, and I estimate something like a 30% chance that at least one of the first couple dozen takeoffs will be hard. Multiply that by an estimated 10% chance that a hard takeoff will take place without adequate safeguards in place, and another 10% chance that a safeguardless hard takeoff will go rogue, and you get something like a 0.3% chance of a disaster of Forbin Project magnitude. Completely unacceptable.
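For concreteness, a minimal sketch of the arithmetic behind that figure; every input below is the guess stated in the comment above, not an established number:

```python
# Rough sketch of the estimate above; all inputs are the commenter's stated guesses.
p_hard_any_early = 0.30   # chance at least one of the first couple dozen takeoffs is hard
p_no_safeguards  = 0.10   # chance a hard takeoff happens without adequate safeguards
p_goes_rogue     = 0.10   # chance a safeguardless hard takeoff goes rogue

p_disaster = p_hard_any_early * p_no_safeguards * p_goes_rogue
print(f"Chance of a Forbin-Project-scale disaster: {p_disaster:.1%}")  # -> 0.3%
```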
Originally, I had discounted the chance that a simple software change could cause the takeoff; I assumed you would need to double and redouble the hardware capability. What I failed to notice was that a simple “tuning” change to the (soft) network connectivity parameters—changing the maximum number of inputs per “neuron” from 8 to 7, say, could have an (unexpected) effect on performance of several orders of magnitude simply by suppressing wasteful thrashing or some such thing.
Do you think that progress in AI is limited primarily by hardware? If hardware is the limiting factor, then you should think AI soon relatively plausible. If software is the limiting factor (the majority view, and the reason most AI folk reject claims such as those of Moravec), such that we won’t get AI until well beyond the minimum computational requirements, then either early AIs should be able to run fast or with numerous copies cheaply, or there will be a lot of room to reduce bloated hardware demands through software improvements.
Thinking that AI will take a long time (during which hardware will advance mightily towards physical limits) but also be sharply and stably hardware-limited when created is a hard view to defend.
I am imagining that it will work something like the human brain (but not by ‘scan and emulate’). We need to create hardware modules comparable to neurons, we need to have some kind of geometric organization which permits individual hardware modules to establish physical connections to a handful of nearby modules, and we need a ‘program’ (corresponding to human embryonic development) which establishes a few starting connections, and finally we need a training period (like training a neural net, and comparable to what the human brain experiences from the first neural activity in the womb through graduate school) which adds many more physical connections. I’m not sure whether to call these connections hardware or software. Actually, they are a hybrid of both—like PLAs (yeah, I’m way out of date on technology).
So I’m imagining a lot of theoretical work needed to come up with a good ‘neuron’ design (probably several dozen different kinds of neurons), more theoretical work to come up with a good ‘program’ to correspond to the embryonic interconnect, and someone willing to pay for lots and lots of neurons.
So, yeah, I’m thinking that the program will be relatively simple (equivalent to a few million lines of code), but it will take us a long time to find it. Not the 500 million years that it took evolution to come up with that program—apparently 500 million years after it had already invented the neuron. But for human designers, at least a few decades to find and write the program. I hope this explanation helps to make my position seem less weird.
(Powerful) optimization processes can find ways of solving problems that exploit every possible shortcut, which makes those ways hard to predict in advance. There was an example of that here recently: a genetic algorithm found an unexpected solution to a problem by exploiting the analog properties of a particular FPGA chip.
7-8 aren’t hard-takeoff-denialist ideas; they’re SIAI noncontribution arguments. Good summary, though.
Phew! First, my material on the topic:
http://alife.co.uk/essays/the_singularity_is_nonsense/
http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/
Then a few points—which I may add to later.
3 and 4: hardware, sure—that is improving too—just not as fast, sometimes. A machine may find a way to obtain a credit card—or it will get a human to buy whatever it needs—as happens in companies today.
6: how much time? Surely a better example would be: “perform experiments”—and experiments that can’t be miniaturised and executed at high speeds—such as those done in the LHC.
7: AltaVista didn’t protect us from Google—nor did Friendster protect against MySpace. However, so far Google has mostly successfully crushed its rivals.
8: no way, IMO—e.g. see Matt Ridley. That is probably good advice for all DOOMsters, actually.
Some of the most obvious safeguards are likely to be self-imposed ones:
http://alife.co.uk/essays/stopping_superintelligence/
...though a resilient infrastructure would help too. We see rogue agents (botnets) “eating” the internet today—and it is not very much fun!
Incidentally, a much better place for this kind of comment on this site would be:
http://lesswrong.com/lw/wf/hard_takeoff/
Can you be more specific than “it’s somewhere beneath an enormous amount of 13 years of material from the very same person whose arguments are scrutinized for evidence”?
This is not sufficient to scare people up to the point of having nightmares and ask them for most of their money.
Do you want me to repeat the links people gave you 24 hours ago?
The person who was scared to the point of having nightmares was almost certainly on a weeks-long or months-long visit to the big house in California where people come to discuss extremely powerful technologies and the far future and to learn from experts on these subjects. That environment would tend to cause a person to take certain ideas more seriously than a person usually would.
Also, are we really discrediting people because they were foolish enough to talk about their deranged sleep-thoughts? I’d sound pretty stupid too if I remembered and advertised every bit of nonsense I experienced while sleeping.
It was more than one person. Anyway, I haven’t read all of the comments yet so I might have missed some specific links. If you are talking about links to articles written by EY himself where he argues about AI going FOOM, I commented on one of them.
Here is an example of the kind of transparency in the form of strict calculations, references and evidence I expect.
As I said, I’m not sure what other links you are talking about. But if you mean the kind of LW posts dealing with antipredictions, I’m not impressed. Predicting superhuman AI to be a possible outcome of AI research is not sufficient. How is that different from claiming the LHC will go FOOM? I’m sure someone like EY would be able to write a thousand posts around such a scenario, telling me that the high risk associated with the LHC going FOOM outweighs its low probability. There might be sound arguments to support this conclusion. But it is a conclusion, and a framework of arguments, based on an assumption that is itself of unknown credibility. So is it too much to ask for some transparent evidence to fortify this basic premise? Evidence that is not somewhere to be found within hundreds of posts not directly concerned with the evidence in question, but rather arguing based on the very assumption it is trying to justify?
Asteroids really are an easier problem: celestial mechanics in vacuum are pretty stable, we have the Moon providing a record of past cratering to calibrate on, etc. There’s still uncertainty about the technology of asteroid deflection (e.g. its potential for military use, or to incite conflict), but overall it’s perhaps the most tractable risk for analysis since the asteroids themselves don’t depend on recent events (save for some smallish anthropic shadow effects).
An analysis for engineered pathogens is harder: we have a lot of uncertainty about the difficulty of engineering various diseases for maximum damage, and about how the technology for detection, treatment, and prevention will keep pace. We can make generalizations based on existing diseases and their evolutionary dynamics (selection for lower virulence over time with person-to-person transmission, etc.), current public health measures, the rarity of the relevant motivations, and so on, but you’re still left with many more places where you can’t just plug in well-established numbers and crank forward.
You can still give probability estimates, and plug in well-understood past data where you can, but you can’t get asteroid-level exactitude.
The difference is that we understand both asteroids and particle physics far better than we do intelligence, and there is precedent for both asteroid impacts and high-energy particle collisions (natural ones at far higher energy than in the LHC), while there is none for an engineered human-level intelligence with access to its own source code.
So calculations of the kind you seem to be asking for just aren’t possible at this point (and calculations with exactly that level of evidence won’t be possible right up until it’s too late), while refutations of the kind LHC panic gets aren’t possible either. You should also note that Eliezer takes LHC panic more seriously than most non-innumerate people do.
But if you want some calculation anyway: let’s assume there is a 1% chance of extinction by uFAI within the next 100 years. Let’s also assume that spending $10 million per year (in 2010 dollars, adjusting for inflation) allows us to reduce that risk by 10%, just by the dangers of uFAI being in the public eye and people being somewhat more cautious, and taking the right sort of caution instead of worrying about Skynet or homicidal robots. So $1 billion saves about an expected 1 million lives, a cost of $1,000 per life, which is about the level of the most efficient conventional charities. And that’s with Robin’s low-ball estimate (which was for a more specific case, not uFAI extinction in general, so even Robin would likely estimate a higher chance in the case considered) and assuming that FAI research won’t succeed.
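Spelling the arithmetic out as a rough sketch, using the assumed numbers above (none of which are established estimates):

```python
# Back-of-the-envelope version of the calculation above, using the
# commenter's assumed numbers rather than established estimates.
annual_spend = 10e6            # dollars per year (2010 dollars)
years = 100
expected_lives_saved = 1e6     # the comment's figure for expected lives saved

total_spend = annual_spend * years                 # $1 billion over the century
cost_per_life = total_spend / expected_lives_saved
print(f"Total spend: ${total_spend:,.0f}; cost per expected life saved: ${cost_per_life:,.0f}")
# -> about $1,000 per expected life, comparable to the most efficient conventional charities
```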
I’m asking for whatever calculations should lead people to donate most of their money to the SIAI or get nightmares from stories of distant FAIs. Surely there must be something to outweigh the lack of evidence, or on what basis has anyone decided to take things seriously?
I really don’t want to anger you, but the “let’s assume X” attitude is what I have a problem with here. A 1% chance of extinction by uFAI? I just don’t see it, sorry. I can’t just pull a number out of a hat and make myself believe it either. I’m not saying this is wrong, but I ask why there isn’t a detailed synopsis of these kinds of estimates available. I think this is crucial.
So what’s the alternative?
You became aware of a possible danger. You didn’t think it up at random, so you can’t use the heuristic that most complex hypotheses generated at random are wrong. There is no observational evidence, but the hypothesis doesn’t predict any observational evidence yet, so the lack of evidence is not evidence against it (the way, e.g., the lack of observations is evidence against the danger of vampires). The best arguments for and against are about equally good (at least there are no order-of-magnitude differences). There seems to be a way to do something against the danger, but only before it manifests, that is, before there can be any observational evidence either way. What do you do? Just assume that the danger is zero because that’s the default? Even though there is no particular reason to assume that’s a good heuristic in this particular case? (Or do you think there are good reasons in this case? You mentioned the thought that it might be a scam, but it’s not like Eliezer invented the concept of hostile AIs.)
The Bayesian way to deal with it would be to just use your prior (plus whatever evidence the arguments encountered provide, though the result probably mostly depends on your priors in this case). So this is a case where it’s OK to “just make numbers up”. It’s just that you should make them up yourself, or rather base them on what you actually believe (if you can’t have experts you trust assess the issue and supply you with their priors). No one else can tell you what your priors are. The alternative to “just assuming” is “just assuming” zero, or one, or similar (or arbitrarily deciding that everything that predicts observations that would be only 5% likely if it were false is true, and everything without such observations is false, regardless of how many observations were actually made), purely based on context and how the questions are posed.
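As an illustration only, here is a minimal sketch of the sort of expected-value comparison just described; every number in it is a placeholder standing in for your own priors and values, not anything published by SIAI or anyone else:

```python
# Illustration only: the sort of expected-value comparison described above.
# All numbers are placeholders standing in for your own priors and values.
p_danger = 0.01              # your subjective prior that the danger is real
p_mitigation_helps = 0.10    # your prior that acting before any evidence arrives actually helps
value_of_averting = 1e9      # how much you value averting the outcome (arbitrary units)
cost_of_acting = 1e5         # cost of acting now (same units)

expected_benefit = p_danger * p_mitigation_helps * value_of_averting
if expected_benefit > cost_of_acting:
    print("Under these priors, acting now has positive expected value.")
else:
    print("Under these priors, the cost outweighs the expected benefit.")
```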
This is the kind of summary of a decision procedure I have been complaining about being missing, or hidden within enormous amounts of content. I wish someone with enough skill could write a top-level post about it, demanding that the SIAI create an introductory paper exemplifying how to reach the conclusions that (1) the risks are to be taken seriously and (2) you should donate to the SIAI to reduce the risks. There could either be a few papers for different people with different backgrounds, or one with different levels of detail. It should feature detailed references to whatever knowledge is necessary to understand the paper itself. Further, it should feature the formulas, variables, and decision procedures you have to follow to estimate the risks posed by unfriendly AI and the incentive to alleviate them. It should also include references to further information from people not associated with the SIAI.
This would allow for the transparency that is required by claims of this magnitude and calls for action, including donations.
I wonder why it took so long until you came along posting this comment.
You didn’t succeed in communicating your problem, otherwise someone else would have explained earlier. I had been reading your posts on the issue and didn’t have even the tiniest hint of an idea that the piece you were missing was an explanation of Bayesian reasoning until just before writing that comment, and even then I was less optimistic about the comment doing anything for you than I had been for earlier comments. I’m still puzzled and unsure whether it actually was Bayesian reasoning or something else in the comment that apparently helped you. If it was, you should read http://yudkowsky.net/rational/bayes and some of the posts here tagged “bayesian”.
Because thinking is work, and it’s not always obvious what question needs to be answered.
More generally (and this is something I’m still working on grasping fully), what’s obvious to you is not necessarily obvious to other people, even if you think you have enough in common with them that it’s hard to believe that they could have missed it.
I wouldn’t have said so even a week ago, but I’m now inclined to think that your short attention span is an asset to LW.
Just as Eliezer has said (can someone remember the link?) that science as conventionally set up is too leisurely (not enough thought put into coming up with good hypotheses), LW is set up on the assumption that people have a lot of time to put into the sequences and the ability to remember what’s in them.
This isn’t quite what you’re talking about, but a relatively accessible intro doc:
http://singinst.org/riskintro/index.html
This seems like a summary of the idea of there being significant risk:
Anna Salamon at Singularity Summit 2009 - “Shaping the Intelligence Explosion”
http://www.vimeo.com/7318055
Good comment.
However,
This was hard to parse. I would have named “p-value” directly. My understanding is that a stated “p-value” will indeed depend on the number of observations, and that in practice meta-analyses pool the observations from many experiments. I agree that we should not use a hard p-value cutoff for publishing experimental results.
I should have said “a set of observations” and “sets of observations”. I meant things like this: if you and other groups test lots of slightly different bogus hypotheses, about 5% of them will be “confirmed” with statistically significant relations.
Got it, and agreed. This is one of the most pernicious forms of dishonesty by professional researchers (lying about how many hypotheses were generated), and is far more common than merely faking everything.
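A quick simulation makes the point concrete; this is a generic sketch of multiple testing (the t-test and sample sizes are arbitrary choices, not a model of any particular study):

```python
# Generic sketch of the multiple-testing point above: test enough bogus (null)
# hypotheses and roughly 5% come back "significant" at p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_hypotheses, n_samples = 1000, 30
false_positives = 0
for _ in range(n_hypotheses):
    # Both groups come from the same distribution, so any "effect" is bogus by construction.
    a = rng.normal(size=n_samples)
    b = rng.normal(size=n_samples)
    _, p_value = stats.ttest_ind(a, b)
    if p_value < 0.05:
        false_positives += 1

print(f"{false_positives / n_hypotheses:.1%} of bogus hypotheses 'confirmed'")  # roughly 5%
```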
Have you yet bothered to read e.g. this synopsis of SIAI’s position:
http://singinst.org/riskintro/index.html
I’d also strongly recommend this from Bostrom:
http://www.nickbostrom.com/fut/evolution.html
(Then of course there are longer and more comprehensive texts, which I won’t recommend because you would just continue to ignore them.)
The core of:
http://singinst.org/riskintro/
...that talks about risk appears to be:
“Many AIs will converge toward being optimizing systems, in the sense that, after self-modification, they will act to maximize some goal. For instance, AIs developed under evolutionary pressures would be selected for values that maximized reproductive fitness, and would prefer to allocate resources to reproduction rather than supporting humans. Such unsafe AIs might actively mimic safe benevolence until they became powerful, since being destroyed would prevent them from working toward their goals. Thus, a broad range of AI designs may initially appear safe, but if developed to the point of a Singularity could cause human extinction in the course of optimizing the Earth for their goals.”
Personally, I think that presents a very weak case for there being risk. It argues that there could be risk if we built these machines wrong, and the bad machines became powerful somehow. That is true—but the reader is inclined to respond “so what”. A dam can be dangerous if you build it wrong too. Such observations don’t say very much about the actual risk.
This calculation places no value on the future generations whose birth depends on averting existential risk. That’s not how I see things.
That claims that “the lifetime risk of dying from an asteroid strike is about the same as the risk of dying in a commercial airplane crash”.
It cites:
Impacts on the Earth by asteroids and comets: assessing the hazard:
http://www.nature.com/nature/journal/v367/n6458/abs/367033a0.html
I am very sceptical about that being true for those alive now:
We have been looking for things that might hit us for a long while now—and we can see much more clearly what the chances are for that period than by looking at the historical record. Also, that is apparently assuming no mitigation attempts—which also seems totally unrealistic.
Looking further:
http://users.tpg.com.au/users/tps-seti/spacegd7.html
...gives 700 deaths/year for aircraft—and 1,400 deaths/year for 2km impacts—based on assumption that one quarter of the human population would perish in such an impact.
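A rough sanity check of those quoted figures, treating the population and the one-quarter casualty fraction as the source’s assumptions:

```python
# Rough sanity check of the quoted figures; the population and the
# one-quarter casualty fraction are the source's assumptions, not mine.
world_population = 6e9      # approximate world population at the time
fraction_killed = 0.25      # assumed fraction perishing in a 2 km impact
deaths_per_year = 1400      # the quoted annualized death rate

deaths_per_impact = world_population * fraction_killed   # 1.5 billion
implied_interval_years = deaths_per_impact / deaths_per_year
print(f"Implied 2 km impact interval: roughly {implied_interval_years:,.0f} years")
# -> about one such impact per million years
```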
Yet, does the SIAI provide evidence on par with the paper I linked to?
What—about the chances of superintelligence causing THE END OF THE WORLD?!?
Of course not! How could they be expected to do that?
If there really was “abundant evidence” there probably wouldn’t be much of a controversy.
With machine intelligence, you probably want to be on the winning side—if that is possible.
Until it is clearer who that is going to be, many will want to hedge.
I’m planning to fund FHI rather than SIAI, when I have a stable income (although my preference is for a different organisation that doesn’t exist).
My position is roughly this.
The nature of intelligence (and its capability for FOOMing) is poorly understood
The correct actions to take depend upon the nature of intelligence.
As such I would prefer to fund an institute that questioned the nature of intelligence, rather than one that has made up its mind that a singularity is the way forward. And it is not just the name that makes me think that SIAI has settled upon this view.
And because the nature of intelligence is the largest wild card in the future of humanity, I would prefer FHI to concentrate on that. Rather than longevity etc.
What would the charity you’d like to contribute to look like?
When I read good popular science books, the people in them tend to come up with some idea. Then they test the idea to destruction, poking and prodding at it until it really can’t be anything but what they say it is.
I want to get the same feeling off the group studying intelligence as I do from that type of research. They don’t need to be running foomable AIs, but truth is entangled so they should be able to figure out the nature of intelligence from other facets of the world, including physics and the biological examples.
Questions I hope they would be asking:
Is the g factor related to the ability to absorb cultural information? I.e., is people’s increased ability to solve problems if they have a high g due to their being able to get more information about solving problems from cultural information sources?
If it weren’t, then it would be further evidence for something special in one intelligence over another, and it might make sense to call one more intelligent, rather than just having different initial skill sets.
If SIAI had the ethos I’d like, we’d be going over and kicking every one of the supporting arguments for the likelihood of fooming and the nature of intelligence to make sure they were sound. Performing experiments where necessary. However people have forgotten them and moved on to decision theory and the like.
Interesting points. Speaking only for myself, it doesn’t feel as though most of my problem solving or idea generating approaches were picked up from the culture, but I could be kidding myself.
For a different angle, here’s an old theory of Michael Vassar’s—I don’t know whether he still holds it. Talent consists of happening to have a reward system which happens to make doing the right thing feel good.
Definitely not just that. Knowing what the right thing is, and being able to do it before it’s too late, are also required. And talent implies a greater innate capacity for learning to do so. (I’m sure he meant in prospect, not retrospect).
It’s fair to say that some of what we identify as “talent” in people is actually in their motivations as well as their talent-requisite abilities.
And then, hypothetically, if they found that fooming is not likely at all, and that dangerous fooming can be rendered nearly impossible by some easily enforced precautions/regulations, what then? If they found that the SIAI has no particular unique expertise to contribute to the development of FAI? An organization with an ethos you would like: what would it do then? To make it a bit more interesting, suppose they find themselves sitting on a substantial endowment when they reason their way to their own obsolescence?
How often in human history have organizations announced, “Mission accomplished—now we will release our employees to go out and do something else”?
It doesn’t seem likely. The paranoid can usually find something scary to worry about. If something turns out to be not really frightening, fear mongers can just go on to the next-most frightening thing in line. People have been concerned about losing their jobs to machines for over a century now. Machines are a big and scary enough domain to keep generating fear for a long time.
I think that what SIAI works on is real and urgent, but if I’m wrong and what you describe here does come to pass, the world gets yet another organisation campaigning about something no-one sane should care about. It doesn’t seem like a disastrous outcome.
From a less cynical angle, building organizations is hard. If an organization has fulfilled its purpose, or that purpose turns out to be a mistake, it isn’t awful to look for something useful for the organization to do rather than dissolving it.
The American charity organization The March of Dimes was originally created to combat polio. Now they are involved with birth defects and other infant health issues.
Since they are the one case I know of (other than ad hoc disaster relief efforts) in which an organized charity accomplished its mission, I don’t begrudge them a few additional decades of corporate existence.
I like this concept.
Assume your theory will fail in some places, and keep pressing it until it does, or you run out of ways to test it.
FHI?
The Future of Humanity Institute.
Nick Bostrom’s personal website probably gives you the best idea of what they produce.
A little too philosophical for my liking, but still interesting.
The point of my post is not that there’s a problem of SIAI staff making claims that you find uncredible, the point of my post is that there’s a problem of SIAI making claims that people who are not already sold on taking existential risk seriously find uncredible.
Can you give a few more examples of claims made by SIAI staff that people find uncredible? Because it’s probably not entirely clear to them (or to others interested in existential risk advocacy) what kind of things a typical smart person would find uncredible.
Looking at your previous comments, I see that another example you gave was that AGI will be developed within the next century. Any other examples?
Is accepting multi-universes important to the SIAI argument? There are a very, very large number of smart people who know very little about physics. They give lip service to quantum theory and relativity because of authority—but they do not understand them. Mentioning multi-universes just slams a door in their minds. If it is important then you will have to continue referring to it but if it is not then it would be better not to sound like you have science fiction type ideas.
Definitely not, for the purposes of public relations at least. It may make some difference when actually doing AI work.
Good point. Cryonics probably comes with a worse Sci. Fi. vibe but is unfortunately less avoidable.
This is a large part of what I implicitly had in mind making my cryonics post (which I guess really rubbed you the wrong way). You might be interested in taking a look at the updated version if you haven’t already done so—I hope it’s more clear than it was before.
Things that stretch my credibility.
AI will be developed by a small team (at this time) in secret
That formal theory involving infinite/near infinite computing power has anything to do with AI and computing in the real world. It might be vaguely useful for looking at computing in the limit (e.g. Galaxy sized computers), but otherwise it is credibility stretching.
I find this very unlikely as well, but Anna Salamon once put it as something like “9 Fields-Medalist types plus (an eventual) methodological revolution” which made me raise my probability estimate from “negligible” to “very small”, which I think given the potential payoffs, is enough for someone to be exploring the possibility seriously.
I have a suspicion that Eliezer isn’t privately as confident about this as he appears, and his apparent confidence is itself a PR strategy.
Turing’s theories involving infinite computing power contributed to building actual computers, right? I don’t see why such theories wouldn’t be useful stepping stones for building AIs as well. There’s a lot of work on making AIXI practical, for example (which may be disastrous if they succeeded since AIXI wasn’t designed to be Friendly).
If this is really something that a typical smart person finds hard to believe at first, it seems like it would be relatively easy to convince them otherwise.
The impression I have lingering from Sl4 days is that he thinks it the only way to do AI safely.
They generally only had infinite memory, rather than infinite processing power. The trouble with infinite processing power is that it doesn’t encourage you to ask what hypotheses should be processed. You just sweep that issue under the carpet and do them all.
I don’t see this as being much of an issue for getting usable AI working: it may be an issue if we demand perfect modeling of reality from a system, but there is no reason to suppose we have that.
As I see it, we can set up a probabilistic model of reality and extend this model in an exploratory way. We would continually measure the relevance of features of the model—how much effect they have on predicted values that are of interest—and we would tend to keep those parts of the model that have high relevance. If we “grow” the model out from the existing model that is known to have high relevance, we should expect it to be more likely that we will encounter further, high-relevance “regions”.
I feel we are going to get stuck in an AI bog. However… This seems to neglect linguistic information.
Let us say that you were interested in getting somewhere. You know you have a bike and a map and have cycled there many times.
What is the relevance of the fact that the word “car” refers to cars to this model? None directly.
Now if I was to tell you that “there is a car leaving at 2pm”, then it would become relevant assuming you trusted what I said.
A lot of real world AI is not about collecting examples of basic input output pairings.
AIXI deals with this by simulating humans and hoping that that is the smallest world.
I’m not sure why that stretches your credibility. Note for example, that computability results often tell us not to try something. Thus for example, the Turing Halting Theorem and related results mean that we know we can’t make a program that will in general tell if any arbitrary program will crash.
Similarly, theorems about the asymptotic ability of certain algorithms matter. A strong version of P != NP would have direct implications for AIs trying to go FOOM. Similarly, if trapdoor functions or one-way functions exist, they give us possible security procedures for handling young general AIs.
I’m mainly talking about Solomonoff induction here. Especially when Eliezer uses it as part of his argument about what we can expect from Super Intelligences. Or searching through 3^^^3 proofs without blinking an eye.
The point in the linked post doesn’t deal substantially with the limits of arbitrarily large computers. It is just an intuition pump for the idea that a fast moderately bright intelligence could be dangerous.
Is it a good intuition pump? To me it is like using a TM as an intuition pump about how much memory we might have in the future.
We will never have anywhere near infinite memory. We will have a lot more than what we have at the moment, but the concept of the TM is not useful in gauging the scope and magnitude.
I’m trying to find the other post that annoyed me in this fashion. Something to do with simulating universes.
Good question. I’ll get back to you on this when I get a chance, I should do a little bit of research on the topic first. The two examples that you’ve seen are the main ones that I have in mind that have been stated in public, but there may be others that I’m forgetting.
There are some other examples that I have in mind from my private correspondence with Michael Vassar. He’s made some claims which I personally do not find at all credible. (I don’t want to repeat these without his explicit permission.) I’m sold on the cause of existential risk reduction, so the issue in my top level post does not apply here. But in the course of the correspondence I got the impression that he may say similar things in private to other people who are not sold on the cause of existential risk.
I second that question. I am sure there probably are other examples but they for most part wouldn’t occur to me. The main examples that spring to mind are from cases where Robin has disagreed with Eliezer… but that is hardly a huge step away from SIAI mainline!
Ok, I will provide a claim even if I get banned for it:
http://xixidu.net/lw/03.png
http://xixidu.net/lw/04.png
And if I was to spread the full context of the above and tell anyone outside of the hard core about it, do you seriously think that they would think these kind of reactions are credible?
The form of blanking out you use isn’t secure. Better to use pure black rectangles.
Pure black rectangles are not necessarily secure, either.
Amusing anecdote: There was a story about this issue on Slashdot one time, where someone possessing kiddy porn had obscured the faces by doing a swirl distortion, but investigators were able to sufficiently reverse this by doing an opposite swirl and so were able to identify the victims.
Then someone posted a comment to say that if you ever want to avoid this problem, you need to do something like a Gaussian blur, which deletes the information contained in that portion of the image.
Somebody replied to that comment and said, “Yeah. Or, you know, you could just not molest children.”
Brilliant.
Nice link. (It’s always good to read articles where ‘NLP’ doesn’t refer, approximately, to Jedi mind tricks.)
That document was knocking around on a public website for several days.
Using very much security would probably be pretty pointless.
Please stop doing this. You are adding spaced repetition to something that I, and others, positively do not want to think about. That is a real harm and you do not appear to have taken it seriously.
I’m sorry, but people like Wei force me to do this, as they make this whole movement look completely down-to-earth, when in fact most people, if they knew about the full complexity of beliefs within this community, would laugh out loud.
You have a good point. It would be completely unreasonable to ban topics in such a manner while simultaneously expecting to maintain an image of being down to earth or particularly credible to intelligent external observers. It also doesn’t reflect well on the SIAI if its authorities claim they cannot consider relevant risks due to psychological or psychiatric difficulties. That is incredibly bad PR. It is exactly the kind of problem this post discusses.
Since the success of an organization is partly dependent on its PR, a rational donor should be skeptical of donating to an organization with bad PR. Any organization soliciting donations should keep this principle in mind.
So let me see if I understand: if an organization uses its income to make a major scientific breakthrough or to prevent a million people from starving, but does not pay enough attention to avoiding bad PR, with the result that the organization ends (though the productive employees take the skills they have accumulated there to other organizations), that is a bad organization; but if an organization, in the manner of most non-profits, focuses on staying in existence as long as possible to provide a secure personal income for its leaders, which entails paying close attention to PR, that is a good organization?
Well, let us take a concrete example: Doug Engelbart’s lab at SRI International. Doug wasted too much time mentoring the young researchers in his lab with the result that he did not pay enough attention to PR and his lab was forced to close. Most of the young researchers got jobs at Xerox PARC and continued to develop Engelbart’s vision of networked personal computers with graphical user interfaces, work that directly and incontrovertibly inspired the Macintosh computer. But let’s not focus on that. Let’s focus on the fact that Engelbart is a failure because he no longer runs an organization because the organization failed because Engelbart did not pay enough attention to PR and to the other factors needed to ensure the perpetuation of the organization.
Yes, that would be an example. In general, organizations tend to need some level of PR to convince people to align with their goals.
I still have a hard time believing it actually happened. I have heard that there’s no such thing as bad publicity—but surely nobody would pull this kind of stunt deliberately. It just seems to be such an obviously bad thing to do.
The “laugh test” is not rational. I think that, if the majority of people fully understood the context of such statements, they would not consider them funny.
The context asked ‘what kind of things a typical smart person would find uncredible’. This is a perfect example of such a thing.
A typical smart person would find the laugh test credible? We must have different definitions of “smart.”
The topic was the banned topic and the deleted posts—not the laugh test. If you explained what happened to an outsider—they would have a hard time believing the story—since the explanation sounds so totally crazy and ridiculous.
I’ll try to test that, but keep in mind that my standards for “fully understanding” something are pretty high. I would have to explain FAI theory, AI-FOOM, CEV, what SIAI was, etc.
(Voted you back up to 0 here.)
I think you are right about the laugh test itself.
Perhaps that was a marketing effort.
After all, everyone likes to tell the tale of the forbidden topic and the apprentice being insulted. You are spreading the story around now—increasing the mystery and intrigue of these mythical events about which (almost!) all records have been deleted. The material was left in public for a long time—creating plenty of opportunities for it to “accidentally” leak out.
By allowing partly obfuscated forbidden materials to emerge, you may be contributing to the community folklore, spreading and perpetuating the intrigue.
Sure, but it was fair of him to give evidence when challenged, whether or not he baited that challenge.
The trauma caused by imagining torture blackmail is hard to relate to for most people (including me), because it’s so easy to not take an idea like infinite torture blackmail seriously, on the grounds that the likelihood of ever actually encountering such a scenario seems vanishingly small.
I guess those who are disturbed by the idea have excellent imaginations, or more likely, emotional systems that can be fooled into trying to evaluate the idea of infinite torture (“hell”).
Therefore, I agree that it’s possible to make fun of people on this basis. I myself lean more toward accommodation. Sure, I think those hurt by it should have just avoided the discussion, but perhaps having EY speak for them and officially ban something gave them some catharsis. I feel like I’m beginning to make fun now, so I’ll stop.
You don’t seem to realize that claims like the ones in the post in question are a common sort of claim that can cause people vulnerable to neuroses to develop further problems. Regardless of whether or not the claims are at all reasonable, repeatedly referencing them this way is likely to cause further psychological harm. Please stop.
JoshuaZ:
However, it seems that in general, the mere fact that certain statements may cause psychological harm to some people is not considered a sufficient ground for banning or even just discouraging such statements here. For example, I am sure that many religious people would find certain views often expressed here shocking and deeply disturbing, and I have no doubt that many of them could be driven into serious psychological crises by exposure to such arguments, especially if they’re stated so clearly and poignantly that they’re difficult to brush off or rationalize away. Or, to take another example, it’s very hard to scare me with hypotheticals, but the post “The Strangest Thing An AI Could Tell You” and the subsequent thread came pretty close; I’m sure that at least a few readers of this blog didn’t sleep well if they happened to read that right before bedtime.
So, what exact sorts of potential psychological harm constitute sufficient grounds for proclaiming a topic undesirable? Is there some official policy about this that I’ve failed to acquaint myself with?
That’s a very valid set of points and I don’t have a satisfactory response.
Neither do I, and I’ve thought a lot about religious extremism and other scary views that turn into reality when given to someone in a sufficiently horrible mental state.