The point of my post is not that there’s a problem of SIAI staff making claims that you find uncredible; it’s that there’s a problem of SIAI making claims that people who are not already sold on taking existential risk seriously find uncredible.
Can you give a few more examples of claims made by SIAI staff that people find uncredible? Because it’s probably not entirely clear to them (or to others interested in existential risk advocacy) what kind of things a typical smart person would find uncredible.
Looking at your previous comments, I see that another example you gave was that AGI will be developed within the next century. Any other examples?
Is accepting multiple universes important to the SIAI argument? There are a very, very large number of smart people who know very little about physics. They pay lip service to quantum theory and relativity because of authority, but they do not understand them. Mentioning multiple universes just slams a door in their minds. If it is important, then you will have to keep referring to it; but if it is not, it would be better not to sound like you have science-fiction-type ideas.
Definitely not, for the purposes of public relations at least. It may make some difference when actually doing AI work.
Good point. Cryonics probably comes with a worse sci-fi vibe but is unfortunately less avoidable.
This is a large part of what I implicitly had in mind when making my cryonics post (which I guess really rubbed you the wrong way). You might be interested in taking a look at the updated version if you haven’t already done so; I hope it’s clearer than it was before.
Things that strain my credulity:
That AI will be developed (at this time) by a small team working in secret.
That formal theory involving infinite or near-infinite computing power has anything to do with AI and computing in the real world. It might be vaguely useful for looking at computing in the limit (e.g. galaxy-sized computers), but otherwise it strains credulity.
I find this very unlikely as well, but Anna Salamon once put it as something like “9 Fields-Medalist types plus (an eventual) methodological revolution”, which made me raise my probability estimate from “negligible” to “very small”, which, given the potential payoffs, I think is enough for someone to be exploring the possibility seriously.
I have a suspicion that Eliezer isn’t privately as confident about this as he appears, and his apparent confidence is itself a PR strategy.
Turing’s theories involving infinite computing power contributed to building actual computers, right? I don’t see why such theories wouldn’t be useful stepping stones for building AIs as well. There’s a lot of work on making AIXI practical, for example (which may be disastrous if they succeed, since AIXI wasn’t designed to be Friendly).
If this is really something that a typical smart person finds hard to believe at first, it seems like it would be relatively easy to convince them otherwise.
The impression I have lingering from SL4 days is that he thinks it’s the only way to do AI safely.
They generally had only infinite memory, rather than infinite processing power. The trouble with infinite processing power is that it doesn’t encourage you to ask which hypotheses should be processed. You just sweep that issue under the carpet and process them all.
I don’t see this as being much of an issue for getting usable AI working: it may be an issue if we demand perfect modeling of reality from a system, but there is no reason to suppose we need that.
As I see it, we can set up a probabilistic model of reality and extend this model in an exploratory way. We would continually measure the relevance of features of the model—how much effect they have on predicted values that are of interest—and we would tend to keep those parts of the model that have high relevance. If we “grow” the model out from the existing model that is known to have high relevance, we should expect it to be more likely that we will encounter further, high-relevance “regions”.
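For concreteness, here is a minimal sketch of the relevance-driven model growth described above, written as greedy forward feature selection on synthetic data. The dataset, the 0.05 relevance threshold, and the function names are illustrative assumptions, not anything specified in the comment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "reality": the value we care about depends only on features 2 and 5.
n_samples, n_features = 500, 10
X = rng.normal(size=(n_samples, n_features))
y = 3.0 * X[:, 2] - 2.0 * X[:, 5] + rng.normal(scale=0.1, size=n_samples)

def prediction_error(features):
    """Mean squared error of a least-squares fit using only `features`."""
    if not features:
        return float(np.mean((y - y.mean()) ** 2))
    A = X[:, features]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.mean((y - A @ coef) ** 2))

def relevance(kept, candidate):
    """How much adding `candidate` improves the predictions we care about."""
    return prediction_error(kept) - prediction_error(kept + [candidate])

# Grow the model outward from what already works: repeatedly add the most
# relevant remaining feature, and stop once nothing clears a relevance bar.
kept, remaining = [], list(range(n_features))
while remaining:
    gains = {c: relevance(kept, c) for c in remaining}
    best = max(gains, key=gains.get)
    if gains[best] < 0.05:   # low relevance: leave it out of the model
        break
    kept.append(best)
    remaining.remove(best)

print("features kept:", sorted(kept))   # expect [2, 5]
```

The design choice being illustrated is simply that candidate parts of the model earn their place by measurably improving the predictions of interest; anything that does not is pruned.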
I feel we are going to get stuck in an AI bog. However… this seems to neglect linguistic information.
Let us say that you were interested in getting somewhere. You know you have a bike and a map, and have cycled there many times.
What is the relevance to this model of the fact that the word “car” refers to cars? None directly.
Now if I were to tell you that “there is a car leaving at 2pm”, it would become relevant, assuming you trusted what I said.
A lot of real-world AI is not about collecting examples of basic input-output pairings.
AIXI deals with this by simulating humans and hoping that that is the smallest world.
I’m not sure why that strains your credulity. Note, for example, that computability results often tell us not to try something: the Turing halting theorem and related results mean we know we can’t make a program that will, in general, tell whether an arbitrary program will crash.
Similarly, theorems about the asymptotic behavior of certain algorithms matter. A strong version of P != NP would have direct implications for AIs trying to go FOOM. Likewise, if trapdoor functions or one-way functions exist, they give us possible security procedures for handling young general AIs.
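Since the halting-theorem point above is doing real work in the argument, here is the standard diagonalization sketched in Python. `halts` is a hypothetical oracle that cannot actually be implemented; the stub and the name `troublemaker` exist only to make the sketch concrete.

```python
def halts(program, argument):
    """Hypothetical total decider: True iff program(argument) would halt.
    The whole point is that no such function can exist; this stub is here
    only so the sketch below is well-formed Python."""
    raise NotImplementedError("no general halting decider exists")

def troublemaker(program):
    """Do the opposite of whatever halts() predicts about program(program)."""
    if halts(program, program):
        while True:          # halts() said we would halt, so loop forever
            pass
    return "halted"          # halts() said we would loop, so halt instead

# Feeding troublemaker to itself forces halts() to be wrong either way:
# if halts(troublemaker, troublemaker) returns True, troublemaker never halts;
# if it returns False, troublemaker halts immediately. Either answer is a
# contradiction, so a general, always-correct halts() cannot be written.
```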
I’m mainly talking about Solomonoff induction here, especially when Eliezer uses it as part of his argument about what we can expect from superintelligences. Or searching through 3^^^3 proofs without blinking an eye.
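For readers who have not met the term, here is a deliberately crude caricature of Solomonoff-style prediction, restricted to toy “repeat this bit-string forever” hypotheses so that it actually runs. The hypothesis class, the function names, and the 12-bit cap are illustrative assumptions; the real construction mixes over all programs and is uncomputable.

```python
from itertools import product

def toy_hypotheses(max_len):
    """Toy 'programs' of the form 'output this bit-string repeated forever',
    one per non-empty bit-string up to max_len, with prior weight 2**-length."""
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            yield "".join(bits), 2.0 ** -length

def predict_next_bit(observed, max_len=12):
    """Mix the predictions of every hypothesis consistent with `observed`,
    weighted by its 2**-length prior, and return normalised probabilities."""
    weight = {"0": 0.0, "1": 0.0}
    for pattern, prior in toy_hypotheses(max_len):
        generated = pattern * (len(observed) // len(pattern) + 2)
        if generated.startswith(observed):             # hypothesis still fits
            weight[generated[len(observed)]] += prior  # credit its next bit
    total = weight["0"] + weight["1"]
    return {bit: w / total for bit, w in weight.items()}

print(predict_next_bit("010101010"))   # strongly favours "1"

# Even this toy version checks 2 + 4 + ... + 2**12 = 8190 hypotheses; the real
# Solomonoff mixture ranges over *all* programs and is uncomputable, which is
# why it appears only in arguments about idealised agents.
```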
The point in the linked post doesn’t deal substantially with the limits of arbitrarily large computers. It is just an intuition pump for the idea that a fast moderately bright intelligence could be dangerous.
Is it a good intuition pump? To me it is like using a Turing machine as an intuition pump for how much memory we might have in the future.
We will never have anywhere near infinite memory. We will have a lot more than we have at the moment, but the concept of the Turing machine is not useful for gauging the scope and magnitude.
I’m trying to find the other post that annoyed me in this fashion. Something to do with simulating universes.
Good question. I’ll get back to you on this when I get a chance, I should do a little bit of research on the topic first. The two examples that you’ve seen are the main ones that I have in mind that have been stated in public, but there may be others that I’m forgetting.
There are some other examples that I have in mind from my private correspondence with Michael Vassar. He’s made some claims which I personally do not find at all credible. (I don’t want to repeat these without his explicit permission.) I’m sold on the cause of existential risk reduction, so the issue in my top level post does not apply here. But in the course of the correspondence I got the impression that he may say similar things in private to other people who are not sold on the cause of existential risk.
I second that question. I am sure there are probably other examples, but for the most part they wouldn’t occur to me. The main examples that spring to mind are cases where Robin has disagreed with Eliezer… but that is hardly a huge step away from the SIAI mainline!
Ok, I will provide a claim even if I get banned for it:
http://xixidu.net/lw/03.png
http://xixidu.net/lw/04.png
And if I were to spread the full context of the above and tell anyone outside of the hard core about it, do you seriously think that they would find these kinds of reactions credible?
The form of blanking out you use isn’t secure. Better to use pure black rectangles.
Pure black rectangles are not necessarily secure, either.
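For reference, a minimal sketch of the pixel-overwriting approach being suggested, assuming the Pillow library; the filename and coordinates are placeholders. Even this only protects the pixel data itself: as the link above suggests, a solid rectangle can still leak information indirectly, for example through the width of the redacted region and the surrounding context.

```python
from PIL import Image, ImageDraw

# Placeholder filename and coordinates; adjust for the actual screenshot.
img = Image.open("screenshot.png").convert("RGB")
box = (40, 120, 400, 160)   # (left, upper, right, lower) region to hide

# Overwrite the pixel values themselves, rather than drawing a shape on a
# separate layer or applying a reversible distortion such as a swirl.
ImageDraw.Draw(img).rectangle(box, fill=(0, 0, 0))

# Saving the modified raster to a fresh file leaves only the fill colour in
# the covered region: unlike pixelation or a light blur, no transformed copy
# of the original pixels survives to be recovered later.
img.save("redacted.png")
```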
Amusing anecdote: There was a story about this issue on Slashdot one time, where someone possessing kiddy porn had obscured the faces by doing a swirl distortion, but investigators were able to sufficiently reverse this by doing an opposite swirl and so were able to identify the victims.
Then someone posted a comment to say that if you ever want to avoid this problem, you need to do something like a Gaussian blur, which deletes the information contained in that portion of the image.
Somebody replied to that comment and said, “Yeah. Or, you know, you could just not molest children.”
Brilliant.
Nice link. (It’s always good to read articles where ‘NLP’ doesn’t refer, approximately, to Jedi mind tricks.)
That document was knocking around on a public website for several days.
Using very much security would probably be pretty pointless.
Please stop doing this. You are adding spaced repetition to something that I, and others, positively do not want to think about. That is a real harm and you do not appear to have taken it seriously.
I’m sorry, but people like Wei force me to do this, because they make this whole movement look completely down-to-earth, when in fact most people, if they knew the full complexity of beliefs within this community, would laugh out loud.
You have a good point. It would be completely unreasonable to ban topics in such a manner while simultaneously expecting to maintain an image of being down to earth or particularly credible to intelligent external observers. It also doesn’t reflect well on the SIAI if its authorities claim they cannot consider relevant risks due to psychological or psychiatric difficulties. That is incredibly bad PR. It is exactly the kind of problem this post discusses.
Since the success of an organization is partly dependent on its PR, a rational donor should be skeptical of donating to an organization with bad PR. Any organization soliciting donations should keep this principle in mind.
So let me see if I understand: if an organization uses its income to make a major scientific breakthrough or to prevent a million people from starving, but does not pay enough attention to avoiding bad PR, with the result that the organization ends (though the productive employees take the skills they have accumulated there to other organizations), that is a bad organization; whereas if an organization, in the manner of most non-profits, focuses on staying in existence as long as possible to provide a secure personal income for its leaders, which entails paying close attention to PR, that is a good organization?
Well, let us take a concrete example: Doug Engelbart’s lab at SRI International. Doug wasted too much time mentoring the young researchers in his lab with the result that he did not pay enough attention to PR and his lab was forced to close. Most of the young researchers got jobs at Xerox PARC and continued to develop Engelbart’s vision of networked personal computers with graphical user interfaces, work that directly and incontrovertibly inspired the Macintosh computer. But let’s not focus on that. Let’s focus on the fact that Engelbart is a failure because he no longer runs an organization because the organization failed because Engelbart did not pay enough attention to PR and to the other factors needed to ensure the perpetuation of the organization.
Yes, that would be an example. In general, organizations tend to need some level of PR to convince people to align with their goals.
I still have a hard time believing it actually happened. I have heard that there’s no such thing as bad publicity—but surely nobody would pull this kind of stunt deliberately. It just seems to be such an obviously bad thing to do.
The “laugh test” is not rational. I think that, if the majority of people fully understood the context of such statements, they would not consider them funny.
The context asked ‘what kind of things a typical smart person would find uncredible’. This is a perfect example of such a thing.
A typical smart person would find the laugh test credible? We must have different definitions of “smart.”
The topic was the banned topic and the deleted posts—not the laugh test. If you explained what happened to an outsider—they would have a hard time believing the story—since the explanation sounds so totally crazy and ridiculous.
I’ll try to test that, but keep in mind that my standards for “fully understanding” something are pretty high. I would have to explain FAI theory, AI-FOOM, CEV, what SIAI was, etc.
(Voted you back up to 0 here.)
I think you are right about the laugh test itself.
Perhaps that was a marketing effort.
After all, everyone likes to tell the tale of the forbidden topic and the apprentice being insulted. You are spreading the story around now—increasing the mystery and intrigue of these mythical events about which (almost!) all records have been deleted. The material was left in public for a long time—creating plenty of opportunities for it to “accidentally” leak out.
By allowing partly obfuscated forbidden materials to emerge, you may be contributing to the community folklore, spreading and perpetuating the intrigue.
Sure, but it was fair of him to give evidence when challenged, whether or not he baited that challenge.
The trauma caused by imagining torture blackmail is hard to relate to for most people (including me), because it’s so easy to not take an idea like infinite torture blackmail seriously, on the grounds that the likelihood of ever actually encountering such a scenario seems vanishingly small.
I guess those who are disturbed by the idea have excellent imaginations, or more likely, emotional systems that can be fooled into trying to evaluate the idea of infinite torture (“hell”).
Therefore, I agree that it’s possible to make fun of people on this basis. I myself lean more toward accommodation. Sure, I think those hurt by it should have just avoided the discussion, but perhaps having EY speak for them and officially ban something gave them some catharsis. I feel like I’m beginning to make fun now, so I’ll stop.
You don’t seem to realize that claims like the ones in the post in question are the sort of claim that commonly causes people vulnerable to neuroses to develop further problems. Regardless of whether or not the claims are at all reasonable, repeatedly referencing them this way is likely to cause further psychological harm. Please stop.
JoshuaZ:
However, it seems that in general, the mere fact that certain statements may cause psychological harm to some people is not considered a sufficient ground for banning or even just discouraging such statements here. For example, I am sure that many religious people would find certain views often expressed here shocking and deeply disturbing, and I have no doubt that many of them could be driven into serious psychological crises by exposure to such arguments, especially if they’re stated so clearly and poignantly that they’re difficult to brush off or rationalize away. Or, to take another example, it’s very hard to scare me with hypotheticals, but the post “The Strangest Thing An AI Could Tell You” and the subsequent thread came pretty close; I’m sure that at least a few readers of this blog didn’t sleep well if they happened to read that right before bedtime.
So, what exact sorts of potential psychological harm constitute sufficient grounds for proclaiming a topic undesirable? Is there some official policy about this that I’ve failed to acquaint myself with?
That’s a very valid set of points and I don’t have a satisfactory response.
Neither do I, and I’ve thought a lot about religious extremism and other scary views that turn into reality when given to someone in a sufficiently horrible mental state.