Guys… I am in the final implementation stages of the general intelligence algorithm. Though I had intellectual property concerns regarding working within the academic network (especially with the quality of people I was exposed to), I am willing to work with people that I perceive as intelligent and emotionally mature enough to respect my accomplishments. Although I do not perceive an absolute need to work with anyone else, I do have the following concerns about finishing this project alone:
Ethical considerations—Review of my approach to handling the potential dangers, including some not typically talked about (DNA encoding is an instance of the GIA, which means publishing could open the door to genetic programming). How to promote its use for quantifying the social sciences as much as or more than its use for sheer technological purposes, which could lead to a poorly understood MAD situation between any individual and the human race. Example: The question of how/when to render assistance to others without harming their self-determination could be reduced to an optimization problem.
I could just read everything written on this subject but most of it is off base.
Productivity considerations—Considering the potential implications, having accountability to others for meeting deadlines etc. could increase productivity… every day may matter in this case. I have pretty much been working in a vacuum, other than deducing information from comments made by others, looking up data on the internet, and occasionally debating incognito with people about their opinions of how/why certain things work or do not work.
If anyone is willing and able to meet to talk about this in the southeast, I would consider it, based on a supplied explanation of how best to protect oneself from the loss of credit for one’s ideas when working with others (which I would then compare to my own understanding of the subject for honesty and accuracy). If there is no reason for anyone to want to collaborate under these circumstances then so be it, but I feel like more emotionally mature and intelligent people would not feel this way.
Please don’t take this as a personal attack, but, historically speaking, everyone who has said “I am in the final implementation stages of the general intelligence algorithm” has been wrong so far. Their algorithms never quite worked out. Is there any evidence you can offer that your work is any different? I understand that this is a tricky proposition, since revealing your work could set off all kinds of doomsday scenarios (assuming that it performs as you expect it to); still, surely there must be some way for you to convince skeptics that you can succeed where so many others have failed.
Sadly, I think the general trend you note is correct, but the first developers to succeed may do so in relative secrecy.
As time goes on it becomes increasingly possible that some small group or lone researcher is able to put the final pieces together and develop an AGI. Assuming a typical largely selfish financial motivation, a small self-sufficient developer would have very little to gain from pre-publishing or publicizing their plan.
Eventually of course they may be tempted to publicize, but there is more incentive to do that later, if at all. Unless you work on it for a while and it doesn’t go much of anywhere. Then of course you publish.
As time goes on it becomes increasingly possible that some small group or lone researcher is able to put the final pieces together and develop an AGI.
Why do you think this is the case? Is this just because the overall knowledge level concerning AI goes up over time? If so, what makes you think that that rate of increase is anything large enough to be significant?
Yes. This is just the way of invention in general: steady incremental evolutionary progress.
A big, well-funded team can throw more computational resources into their particular solution for the problem, but the returns are sublinear (for any one particular solution) even without Moore’s law.
Yes, but on this forum there should be some reasonable immunity against instances of Pascal’s wager/mugging like that. The comment in question does not rise above the noise level, so treating it seriously shows how far many regulars still have to go in learning the basics.
Congratulations on your insights, but please don’t snrk implement them until snigger you’ve made sure that oh heck I can’t keep a straight face anymore.
The reactions to the parent comment are very amusing. We have people sarcastically supporting the commenter, people sarcastically telling the commenter they’re a threat to the world, people sarcastically telling the commenter to fear for their life, people non-sarcastically telling the commenter to fear for their life, people honestly telling the commenter they’re probably nuts, and people failing to get every instance of the sarcasm. Yet at bottom, we’re probably all (except for private_messaging) thinking the same thing: that FinalState almost certainly has no way of creating an AGI and that no-one involved need feel threatened by anyone else.
Yet at bottom, we’re probably all (except for private_messaging) thinking the same thing: that FinalState almost certainly has no way of creating an AGI
Nah, I stated that the probability of him creating AGI is epsilon (my probability for his project hurting me is a microscopic epsilon, while SI hurting him somehow is a larger epsilon; I only stated the relation that the latter is larger than the former). The probability of a person going unfriendly is way, way higher than the probability of a person creating an AGI that kills us all.
I think we’re all here for various sarcastic or semi-sarcastic points; my point is that, given the SI stance, AGI researchers would (and have to) try to keep away from SI, especially those who have some probability of creating an AGI, given the combination of the probability of a useful contribution by SI versus the probability of SI going nuts.
that FinalState almost certainly has no way of creating an AGI
I actually meant that I thought you disagreed with:
and that no-one involved need feel threatened by anyone else.
Sorry for the language ambiguity. If you think the probability of SI hurting FinalState is epsilon, I misunderstood you. I thought you thought it was a large enough probability to be worth worrying about and warning FinalState about.
I am in the final implementation stages of the general intelligence algorithm.
Do you mean “I am in the final writing stages of a paper on a general intelligence algorithm?” If you were in the final implementation stages of what LW would recognize as the general intelligence algorithm, the very last thing you would want to do is mention that fact here; and the second-to-last thing you’d do would be to worry about personal credit.
I am open to arguments as to why that might be the case, but unless you also have the GIA, I should be telling you what things I would want to do first and last. I don’t really see what the risk is, since I haven’t given anyone any unique knowledge that would allow them to follow in my footsteps.
A paper? I’ll write that in a few minutes after I finish the implementation. Problem statement → pseudocode → implementation. I am just putting some finishing touches on the data structure cases I created to solve the problem.
As far as I understand, the SIAI folks believe that the risk is, “you push the Enter key, your algorithm goes online, bootstraps itself to transhuman superintelligence, and eats the Earth with nanotechnology” (nanotech is just one possibility among many, of course). I personally don’t believe we’re in any danger of that happening any time soon, but these guys do. They have made it their mission in life to prevent this scenario from happening. Their mission and yours appear to be in conflict.
That is just wrong. SAI doesn’t really work like that. Those people have seen too many sci fi movies. It’s easy to psychologically manipulate an AI if you are smart enough to create one in the first place. To use terms I have seen tossed around, there is no difference between tool and agent AI. The agent only does things that you program it to do. It would take a malevolent genius to program something akin to a serial killer to cause that kind of scenario.
Retraction means that you no longer endorse the contents of a comment. The comment is not deleted so that it will not break existing conversations. Retracted comments are no longer eligible for voting. Once a comment is retracted, it can be revisited at which point there is a ‘delete’ option, which removes the comment permanently.
If you are concerned about Intellectual Property rights, by all means have a confidentiality agreement signed before revealing any proprietary information. Any reasonable person would not have a problem signing such an agreement.
Expect some skepticism until a working prototype is available.
My recommendation: stay away from SIAI as there is considerable probability that they are at worst nutjobs or at best a fraud. Either way it is dangerous for you, as in, actual risk to your safety. Do not reveal your address. I am bloody serious. It is not a game here.
edit: supplemental information (not so much on the potential dangers but on the usefulness of communication):
Note: I fully believe that the risk to your safety (while small) outweighs the risk to all of us from your software project. All of us includes me, all my relatives, all people I care for, the world, etc.
So your argument that visiting a bunch of highly educated pencil-necked white nerds is physically dangerous boils down to… one incident of ineffective online censorship mocked by most of the LW community and all outsiders, and some criticism of Yudkowsky’s computer science & philosophical achievements.
I see.
I would literally have had more respect for you if you had used racial slurs like “niggers” in your argument, since that is at least tethered to reality in the slightest bit.
one incident of ineffective online censorship mocked by most of the LW community and all outsiders
Where a single incident seems grotesquely out of character, one should attempt to explain the single incident’s cause. What’s troubling is that Eliezer Yudkowsky has: 1) never admitted his mistake; 2) never shown (at least to my knowledge) any regret over how he handled it; and 3) most importantly, never explained his response (practically impossible without admitting his mistake).
The failure to address a wrongdoing or serious error over many years means it should be taken seriously, despite the lapse of time. The failure of self-analysis raises real questions about a lack of intellectual honesty—that is, a lack of epistemic rationality.
I don’t think it’s hard to explain at all: Eliezer prioritized a donor (presumably long-term and one he knew personally) over an article. I disagree with it, but you know what, I saw this sort of thing all the time on Wikipedia, and I don’t need to go looking for theories of why administrators were crazy and deleted Daniel Brandt’s article. I know why they did, even though I strongly disagreed.
3) most importantly, never explained his response (practically impossible without admitting his mistake).
He or someone else must have explained at some point, or I wouldn’t know his reason was that the article was giving a donor nightmares.
Is deleting one post such an issue to get worked up over? Or is this just discussed because it’s the best criticism one can come up with besides “he’s a high school dropout who hasn’t yet created an AI and so must be completely wrong”?
Has he said anywhere that the individual with nightmares was a donor? Note incidentally that having content that is acting as that much of a cognitive basilisk might be a legitimate reason to delete (although I’m inclined to think that it wasn’t).
Is deleting one post such an issue to get worked up over? Or is this just discussed because it’s the best criticism one can come up with besides “he’s a high school dropout who hasn’t yet created an AI and so must be completely wrong”?
Like JoshuaZ, I hadn’t known a donor was involved. What’s the big deal? People donate to SIAI because they trust Eliezer Yudkowsky’s integrity and intellect. So it’s natural to ask whether he’s someone you can count on to deliver the truth. Caving to donors is inauspicious.
In a related vein, I also found it disturbing that Eliezer Yudkowsky repeated his claim that that Loosemore guy “lied.” Having had years to cool off, he still hasn’t summoned the humility to admit he stretched the evidence for Loosemore’s deceitfulness: Loosemore is obviously a cognitive scientist.
These two examples paint a picture of Eliezer Yudkowsky as a person subject to strong personal loyalties and animosities that exceed his dedication to the truth. In the first incident, his loyalty to a donor induced him to suppress information; in the Loosemore incident, his longstanding animosity toward Loosemore made him unable to adjust his earlier opinion.
I hope these impressions aren’t accurate. But one thing seems for sure: Eliezer Yudkowsky is not a person for serious self-criticism. Has he admitted any significant intellectual error since he became a rationalist? [Serious question.]
It’s also a double-bind. If you do nothing, you are valuing donors at less than some random speculation which is unusually dubious even by LessWrong’s standards, resting as it does on a novel speculative decision theory (acausal trade) whose most obvious requirement (implementing sufficiently similar algorithms) is beyond blatantly false when applied to humans and FAIs. (If you actually believe that SIAI is a good charity, pissing off donors over something like this is a really bad idea, and if you don’t believe SIAI is a good charity, well, that’s even more damning, isn’t it?) And if you delete it, well, you get exactly this stupid mess which is still being dragged up years later.
I hope these impressions aren’t accurate. But one thing seems for sure: Eliezer Yudkowsky is not a person for serious self-criticism. Has he admitted any significant intellectual error since he became a rationalist? [Serious question.]
Repudiating most of his long-form works like CFAI and LOGI and CEV isn’t admission of error?
Personally, when he was writing the Sequences, I found it a little obnoxious how he kept saying “I was totally on the wrong track and mistaken before I was enlightened & came to understand Bayesian statistics, but now I have a chance of being less wrong”—once is enough, we get it already, I’m not that interested in your intellectual evolution.
Sorry, I wasn’t clear. I meant links to the repudiations. I’ve read some of the material in CFAI and CEV, but not the retraction, and not yet any of LOGI.
Personally, when he was writing the Sequences, I found it a little obnoxious how he kept saying “I was totally on the wrong track and mistaken before I was enlightened & came to understand Bayesian statistics, but now I have a chance of being less wrong”—once is enough, we get it already, I’m not that interested in your intellectual evolution.
Hmm, and the foom belief (for instance) is based on Bayesian statistics how?
That’s pretty damn interesting, because I’ve understood Bayesian statistics for ages, understood how wrong you are without it, and also understood how computationally expensive it is—just think what sort of data you need to attach to each proposition to avoid double-counting evidence, to avoid any form of circular updates, to avoid naive Bayesian mistakes… even worse, how prone it is to making faulty conclusions from a partial set of propositions (as generated by e.g. exploring ideas, which btw introduces another form of circularity, as you tend to use ideas which you think are probable as a starting point more often).
Seriously, he should try to write software that would do updates correctly on a graph with cycles and with correlated propositions. That might result in another enlightenment, hopefully the one not leading to increased confidence, but to decreased confidence. Statistics isn’t easy to do right. And relatively minor bugs easily lead to major errors.
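To make the double-counting worry concrete, here is a minimal sketch (my own toy example, not anything from the thread): two reports are treated as independent evidence even though the second is just a copy of the first, so a naive update applies the same likelihood ratio twice.

```python
# Toy illustration of double-counting correlated evidence in naive Bayesian updating.
# Assumed setup: two reports each favour hypothesis H 4:1, but the second report is
# a copy of the first (perfectly correlated), so it carries no new information.

def odds(p):
    return p / (1 - p)

def prob(o):
    return o / (1 + o)

prior = 0.5             # prior probability of H
likelihood_ratio = 4.0  # each report favours H 4:1

# Naive update: treat the two reports as independent and multiply both ratios in.
naive = prob(odds(prior) * likelihood_ratio * likelihood_ratio)

# Correct update: only one likelihood ratio applies, since the second report is redundant.
correct = prob(odds(prior) * likelihood_ratio)

print(f"naive posterior:   {naive:.3f}")    # 0.941
print(f"correct posterior: {correct:.3f}")  # 0.800
```

Keeping track of which propositions share sources (and avoiding cycles in the update graph) is exactly the bookkeeping that makes exact inference expensive in general.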
Hmm, and the foom belief (for instance) is based on Bayesian statistics how?
I don’t think it’s based on Bayesian statistics any more than any other belief may (or may not) be based. To take Eliezer specifically, he was interested in the Singularity—specifically, the Good/Vingean observation that a machine more intelligent than us ought to be better than us at creating a still more intelligent machine—long before he had his ‘Bayesian enlightenment’, so his shift to subjective Bayesianism may have increased his belief in intelligence explosions, but certainly didn’t cause it.
Eliezer, I upvoted you and was about to apologize for contributing to this rumor myself, but then found this quote from a copy of the Roko post that’s available online:
Meanwhile I’m banning this post so that it doesn’t (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I’m not sure I know the sufficient detail.)
Perhaps your memory got mixed up because Roko subsequently deleted all of his other posts and comments? (Unless “banning” meant something other than “deleting”?)
Now I’ve got no idea what I did. Maybe my own memory was mixed up by hearing other people say that the post was deleted by Roko? Or Roko retracted it after I banned it, or it was banned and then unbanned and then Roko retracted it?
I retract my grandparent comment; I have little trust for my own memories. Thanks for catching this.
A lesson learned here. I vividly remembered your “Meanwhile I’m banning this post” comment and was going to remind you, but chickened out due to the caps in the great-grandparent which seemed to signal that you Knew What You Were Talking About and wouldn’t react kindly to correction. Props to Wei Dai for having more courage than I did.
I’m surprised and disconcerted that some people might be so afraid of being rebuked by Eliezer as to be reluctant to criticize/correct him even when such incontrovertible evidence is available showing that he’s wrong. Your comment also made me recall another comment you wrote a couple of years ago about how my status in this community made a criticism of you feel like a “huge insult”, which I couldn’t understand at the time and just ignored.
I wonder how many other people feel this strongly about being criticized/insulted by a high status person (I guess at least Roko also felt strongly enough about being called “stupid” by Eliezer to contribute to him leaving this community a few days later), and whether Eliezer might not be aware of this effect he is having on others.
Your comment also made me recall another comment you [Kip] wrote a couple of years ago about how my status in this community made a criticism of you feel like a “huge insult”, which I couldn’t understand at the time and just ignored.
My brain really, really does not want to update on the numerous items of evidence available to it that it can hit people much much harder now, owing to community status, than when it was 12 years old.
(nods) I’ve wondered this many times. I have also at times wondered if EY is adopting the “slam the door three times” approach to prospective members of his community, though I consider this fairly unlikely given other things he’s said.
Somewhat relatedly, I remember when lukeprog first joined the site, he and EY got into an exchange that, from my perspective as a completely uninvolved third party, involved luke earnestly trying to offer assistance and EY being confidently dismissive of any assistance someone like luke could provide. At the time I remember feeling sort of sorry for luke, who it seemed to me was being treated a lot worse than he deserved, and surprised that he kept at it.
The way that story ultimately turned out led me to decide that my model of what was going on was at least importantly incomplete, and quite possibly fundamentally wrongheaded, but I haven’t further refined that model.
I wonder how many other people feel this strongly about being criticized/insulted by a high status person (I guess at least Roko also felt strongly enough about being called “stupid” by Eliezer to contribute to him leaving this community a few days later), and whether Eliezer might not be aware of this effect he is having on others.
As a data point here, I tend to empathize with the recipient of such barrages to what I subjectively estimate as about 60% of the degree of emotional affect that I would experience if it were directed at myself. Particularly if said recipient is someone I respect as much as Roko and when the insults are not justified—less if they do not have my respect, and if the insults are justified I experience no empathy. It is the kind of thing that I viscerally object to having in my tribe, and where possible I try to ensure that the consequences to the high-status person for their behavior are as negative as possible—or at least minimize the reward they receive if the tribe is one that tends to reward bullying.
There are times in the past—let’s say 4 years ago—when such an attack would certainly have prompted me to leave a community, even if the community was otherwise moderately appreciated. Now I believe I am unlikely to leave over such an incident. I would say I am more socially resilient and also more capable of understanding social politics as a game, and so take it less personally. For instance, when I received the more mildly expressed declaration from Eliezer “You are not safe to even associate with!” I don’t recall experiencing any flight impulses—more surprise.
I’m surprised and disconcerted that some people might be so afraid of being rebuked by Eliezer as to be reluctant to criticize/correct him even when such incontrovertible evidence is available showing that he’s wrong.
I was a little surprised at first too at reading of komponisto’s reticence. Until I thought about it and reminded myself that in general I err on the side of not holding my tongue when I ought. In fact, the character “wedrifid” on wotmud.org with which I initially established this handle was banned from the game for 3 months for making exactly this kind of correction based off incontrovertible truth. People with status are dangerous and in general highly epistemically irrational in this regard. Correcting them is nearly always foolish.
I must emphasize that part of my initial surprise at kompo’s reticence is due to my model of Eliezer as not being especially corrupt in this kind of regard. In response to such correction I expect him to respond positively and update. Eliezer may be arrogant and a tad careless when interacting with people at times, but he is not an egotistical jerk enforcing his dominance in his domain with dick moves. That’s both high praise (by my way of thinking) and a reason for people to err less on the side of caution with him and to take less personally any ‘abrupt’ things he may say. Eliezer being rude to you isn’t a precursor to him beating you to death with a metaphorical rock to maintain his power—as our instincts may anticipate. He’s just being rude.
I’m surprised and disconcerted that some people might be so afraid of being rebuked by Eliezer as to be reluctant to criticize/correct him even when such incontrovertible evidence is available showing that he’s wrong.
People have to realize that critically examining his output is very important, given the nature and scale of what he is trying to achieve.
Even people with comparatively modest goals like trying to become the president of the United States of America should face and expect a constant and critical analysis of everything they are doing.
Which is why I am kind of surprised how often people ask me if I am on a crusade against Eliezer or find fault with my alleged “hostility”. Excuse me? That person is asking for money to implement a mechanism that will change the nature of the whole universe. You should be looking for possible shortcomings as well!
Everyone should be critical of Eliezer and SIAI, even if they agree with almost everything. Why? Because if you believe that it is incredibly important and difficult to get friendly AI just right, then you should be wary of any weak spot. And humans are the weak spot here.
That’s why outsiders think it’s a circlejerk. I’ve heard of Richard Loosemore, who as far as I can see was banned over corrections on the “conjunction fallacy”; not sure what exactly went on, but of course, having spent time reading the Roko thing (and having assumed that there was something sensible I did not hear of, and then learning that there wasn’t), it’s kind of obvious where my priors are.
Maybe try keeping statements more accurate by qualifying your generalizations (“some outsiders”), or even just saying “that’s why I think this is a circlejirk.” That’s what everyone ever is going to interpret it as anyhow (intentional).
Maybe you guys are too careful with qualifying everything as ‘some outsiders’, and then you end up with outsiders like Holden forming negative views which you could have predicted if you generalized more (and have the benefit of Holden’s anticipated feedback without him telling people not to donate).
Maybe. Seems like you’re reaching, though: Maybe something bad comes from us being accurate rather than general about things like this, and maybe Holden criticizing SIAI is a product of this on LessWrong for some reason, and therefore it is in fact better for you to say inaccurate things like “outsiders think it’s a circlejrik.” Because you… care about us?
You guys are only being supposedly ‘accurate’ when it feels good. I have not said, ‘all outsiders’, that’s your interpretation which you can subsequently disagree with.
SI generalized from the agreement of self-selected participants onto the opinions of outsiders like Holden, subsequently approaching him and getting back the same critique they’ve been hearing from rare ‘contrarians’ here for ages but assumed to be some sorta fringe views and such. I don’t really care what you guys do with this; you can continue as is and be debunked big time as cranks, your choice. edit: actually, you can see Eliezer himself said that most AI researchers are lunatics. What did SI do to distinguish themselves from what you guys call ‘lunatics’? What is here that can shift probabilities from the priors? Absolutely nothing. The focus on safety with made-up fears is no indication of sanity whatsoever.
You guys are only being supposedly ‘accurate’ when it feels good. I have not said, ‘all outsiders’, that’s your interpretation which you can subsequently disagree with.
You’re misusing language by not realizing that most people treat “members of group A think X” as “a sizable majority of members of group A think X”, or not caring and blaming the reader when they parse it the standard way. We don’t say “LWers are religious” or even “US citizens vote Democrat”, even though there’s certainly more than one religious person on this site or Democrat voter in the US.
And if you did intend to say that, you’re putting words into Manfred’s mouth by assuming he’s talking about ‘all’ instead.
I do think that the ‘sizable majority’ hypothesis has not been ruled out, to say the least. SI is working to help build a benevolent ruler bot, to save the world from a malevolent bot. That sounds as crazy as things can be. Prior track record doing anything relevant? None. Reasons for SI to think they can make any progress? None.
I think most sceptically minded people do see that kind of stuff in a pretty negative light, but of course that’s my opinion; you can disagree. Actually, who cares. SI should just go on, ‘fix’ what Holden pointed out, increase visibility, and get listed on crackpot/pseudoscience pages.
I’m not talking about SI (which I’ve never donated money to), I’m talking about you.
I can talk about you too. The statement “That’s why outsiders think it’s a circlejerk” does not have a ‘sizable majority’, or ‘significant minority’, or ‘all’, or ‘some’ qualifier, nor does it have any kind of implied qualifier, nor does it need qualifying with a vague “some”; that is entirely needless verbosity (as the ‘some’ can range from 0.00001% to 99.999%), and the request to add “some” is clearly rhetorical, which we both realize equally well. (It is the case, though, that I think the most likely case is “significant majority of rational people”, i.e. I expect a greater than 50% chance of a strong negative opinion of SI if it is presented to a rational person.)
And you’re starting to repeat yourself.
The other day someone told me my argument was shifting like wind.
I’m talking about you. And you’re starting to repeat yourself.
Does that mean it is time to stop feeding him?
I had decided when I finished my hiatus recently that the account in question had already crossed the threshold beyond which I couldn’t reply to him without predicting that I was just causing more noise.
I wonder how many other people feel this strongly about being criticized/insulted by a high status person (I guess at least Roko also felt strongly enough about being called “stupid” by Eliezer to contribute to him leaving this community a few days later), and whether Eliezer might not be aware of this effect he is having on others.
I don’t feel insulted at all. He is much smarter than me. But I am also not trying to accomplish the same as him. If he calls me stupid for criticizing him, that’s as if someone who wants to become a famous singer is telling me that I can’t sing when I criticized their latest song. No shit Sherlock!
IIRC Roko deleted the speculation-about-superintelligences part of the post shortly after its publication, but discussion in the comments raged on, so you subsequently banned the whole post/discussion.
And a few days later, primarily for unrelated reasons but probably with this incident as a trigger, Roko deleted his account, which on that version of LW meant that the text of all his comments disappeared (on the current version of LW, only author’s name gets removed when account is deleted, comments don’t disappear).
Surely not individually (there were probably thousands, and IIRC it was also happening to other accounts, so it wasn’t the result of running a self-made destructive script); what you’re seeing is just what “deletion of account” performed on the old version of LW looks like on the current version of LW.
No, I don’t think so; in fact I don’t think it was even possible for users to delete their own accounts on the old version of LW. (See here.) SilasBarta discovered Roko in the process of deleting his comments, before they had been completely deleted.
I don’t think it was even possible for users to delete their own accounts on the old version of LW. (See here.)
That post discusses the fact that account deletion was broken at one time in 2011, and a decision was being made about how to handle account deletion in the future. It doesn’t say anything relevant about how it worked in 2010.
“April last year” in that comment is when LW was started; I don’t believe it refers to incomplete deletion. The comments before that date that remained could be those posted under a different username (account), automatically copied from overcomingbias along with the Sequences.
Here is clearer evidence that account deletion simply did nothing back then. My understanding is the same as komponisto’s: Roko wrote a script to delete all of his posts/comments individually.
This comment was written 3 days before the post komponisto linked to, which discussed the issue of account deletion feature having been broken at that time (Apr 2011); the comment was probably the cause of that post. I don’t see where it indicates the state of this feature around summer 2010. Since “nothing happens” behavior was indicated as an error (in Apr 2011), account deletion probably did something else before it stopped working.
IIRC Roko deleted the speculation-about-superintelligences part of the post shortly after its publication, but discussion in the comments raged on, so you subsequently banned the whole post/discussion.
This sounds right to me, but I still have little trust in my memories.
Or little interest in rational self-improvement by figuring what actually happened and why?
[You’ve made an outrageously self-assured false statement about this, and you were upvoted—talk about sycophancy—for retracting your falsehood, while suffering no penalty for your reckless arrogance.]
This sounds right to me, but I still have little trust in my memories.
To clarify for those new here—“retract” here is meant purely in the usual sense, not in the sense of hitting the “retract” button, as that didn’t exist at the time.
Are there no server logs or database fields that would clarify the mystery? Couldn’t Trike answer the question? (Yes, this is a use of scarce time—but if people are going to keep bringing it up, a solid answer is best.)
Your point is well taken, but since part of the concern about that whole affair was your extreme language and style, maybe stating this in normal caps might be a reasonable step for PR.
He or someone else must have explained at some point, or I wouldn’t know his reason was that the article was giving a donor nightmares.
This is half the truth. Here is what he wrote:
For those who have no idea why I’m using capital letters for something that just sounds like a random crazy idea, and worry that it means I’m as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us.
Please rot13 the part from “potentially” onwards, and add a warning as in this comment (with “decode the rot-13′d part” instead of “follow the links”), because there are people here who’ve said they don’t want to know about that thing.
Eliezer prioritized a donor (presumably long-term and one he knew personally) over an article.
Note that the post in question has already been seen by the donor, and has effectively advocated donating all spare money to SI. I imagine the donor was not a mind upload and the point was not deleted from the donor’s memory, but I do know that its deletion from public space resulted in a lack of rebuttals.
In any case my point was not that censorship was bad, but that a nonsense threat utterly lacking in any credibility was taken very seriously (to the point of nightmares, you say?). It is dangerous to have anyone seriously believe your project is going to kill everyone, even if that person is a pencil-necked white nerd.
“he’s a high school dropout who hasn’t yet created an AI and so must be completely wrong”?
Strawman. A Bayesian reasoner should update on such evidence, especially as the combination of ‘high school dropout’ and ‘no impressive technical accomplishments’ is a very strong indicator (of a lack of world-class genius) for that age category. It is the case that this evidence, post update, shifts estimates significantly in direction of ‘completely wrong or not even wrong’ for all insights that require world class genius level intelligence, such as, incidentally, forming opinion on AI risk which most world class geniuses did not form.
In any case I did not even say what you implied. To me the Roko incident is evidence that some people here take that kind of nonsense seriously enough to have nightmares about it (to delete it, etc. etc.), and as such it is unsafe if such people get told that a particular software project is going to kill us all, while the list of accomplishments was something to perform an update on when evaluating that probability.
I have never seen where the person-with-nightmares was revealed as a donor, or indeed any clue as to who they were other than ‘someone Eliezer knows’. I would like some evidence, if there is any.
Also, Eliezer did not drop out of high school; he never attended in the first place, commonly known as ‘skipping it’, which is more common among “geniuses” (though I dislike that description).
Please note that none of the evidence shows the donor status of the anonymous people/person who actually had nightmares, and the two named individuals did not say it gave them nightmares, but used a popular TVTropes idiom, “Nightmare Fuel”, as an adjective.
Very few people are so smart they are in the category of ‘too smart for high school and any university’… many more are less smart, and some have practical issues (need to work to feed family, for example). There are some very serious priors from the normal distribution for the evidence to shift. Successful self-education is fairly uncommon, especially outside the context of ‘had to feed family’.
Does it really? Do I have to repeat myself more? Is it against some unwritten rule to mention the bell curve prior which I have had from the start?
What is your purpose?
What do you think? Feedback. I do actually think he’s nuts, you know? I also think he’s terribly miscalibrated, which is probably the cause of the overconfidence in his foom belief (and it is ultimately the overconfidence that is nutty; the same beliefs with appropriate confidence would be just mildly weird in a good way). It is also probably the case that politeness results in biased feedback.
Well, there’s also the matter of why I’d think he’s nuts when facing the “either he’s a supergenius or he’s nuts” dilemma created by overly high confidence expressed in overly speculative arguments. But yeah, I’m not sure it’s getting anywhere; the target audience is just EY himself, and I do expect he’d read this at least out of curiosity to see how he’s being defended, but with low confidence, so I’m done.
It is the case that this evidence, post update, shifts estimates significantly in direction of ‘completely wrong or not even wrong’ for all insights that require world class genius level intelligence, such as, incidentally, forming opinion on AI risk which most world class geniuses did not form.
Most “world class geniuses” have not opinionated on AI risk. So “forming opinion on AI risk which most world class geniuses did not form” is hardly a task which requires “world class genius level intelligence”.
For a “Bayesian reasoner”, a piece of writing is its own sufficient evidence concerning its qualities. Said reasoner does not need to rely much on indirect evidence concerning the author, after the reasoner has read the actual writing itself.
Most “world class geniuses” have not opinionated on AI risk.
Nonetheless, the risk in question is also a personal risk of death for every genius… now I don’t know how we define geniuses here, but obviously most geniuses could be presumed pretty good at preventing their own deaths, or the deaths of their families. I should have said: forming a valid opinion.
For a “Bayesian reasoner”, a piece of writing is its own sufficient evidence concerning its qualities. Said reasoner does not need to rely much on indirect evidence concerning the author, after the reasoner has read the actual writing itself.
Assuming that absolutely nothing in the writing had to be taken on faith. True for mathematical proofs. False for almost everything else.
Nonetheless, the risk in question is also a personal risk of death for every genius… now I don’t know how we define geniuses here, but obviously most geniuses could be presumed pretty good at preventing their own deaths, or the deaths of their families.
That seems like a pretty questionable presumption to me. High IQ is linked to reduced mortality according to at least one study, but that needn’t imply that any particular fatal risk be likely to be uncovered, let alone prevented, by any particular genius; there’s no physical law stating that lethal threats must be obvious in proportion to their lethality. And that’s especially true for existential threats, which almost by definition must be without experiential precedent.
You’d have a stronger argument if you narrowed your reference class to AI researchers. Not a terribly original one in this context, but a stronger one.
Go dig for numbers yourself, and assume he is a genius until you find numbers; that will be very rational. Meanwhile, most people have a general feel for how rare it would be that a person with supposedly genius-level untested insights into a technical topic (in so much as most geniuses fail to have those insights) would have nothing impressive that was tested, at the age of, what, 32? edit: Then also, the geniuses know of that feeling and generally produce the accomplishments in question if they want to be taken seriously.
Starting a nonprofit on a subject unfamiliar to most and successfully soliciting donations, starting an 8.5-million-view blog, writing over 2 million words on wide-ranging controversial topics so well that the only sustained criticism to be made is “it’s long” and minor nitpicks, writing an extensive work of fiction that dominated its genre, and making some novel and interesting inroads into decision theory all seem, to me, to be evidence in favour of genius-level intelligence. These are evidence because the overwhelming default in every case for simply ‘smart’ people is to fail.
It provides evidence in favour of him being correct. If there weren’t other sources of information on Hubbard’s activities, I’d expect him to be of genius-level intelligence.
You’re familiar with the concept that someone looking like Hitler doesn’t make them fascist, right?
It provides evidence in favour of him being correct. If there weren’t other sources of information on Hubbard’s activities, I’d expect him to be of genius-level intelligence.
Honestly, I wouldn’t be surprised if he was; he clearly had an almost uniquely good understanding of what it takes to build a successful cult (though his early links with the OTO probably helped). New religious movements start all the time, and not one in a hundred reaches Scientology’s level of success. You can be both a genius and a charlatan. It’s easier to be the latter if you’re the former, actually.
Although his writing’s admittedly pretty terrible.
I wouldn’t expect genius-level technical intelligence. Self-deception is an important part of effective deception; you have to believe a lie to build a good lie. Avoiding self-deception is an important part of technical accomplishment.
Furthermore, knowing that someone has no technical accomplishments is very different from not knowing if someone has technical accomplishments.
Yes. Worked at 3 failed start-ups, founded a successful start-up (and know of several more failed ones). Self-deception is incredibly destructive to any accomplishment that does not involve deceiving other people. You need to know how good your skill set is, how good your product is, how good your idea is. You can’t be falling in love with brainfarts.
In any case, talents require extensive practice with feedback (they are massively enhanced by it), and no technical accomplishments past the age of 30 pretty much excludes any possibility of technical talent of any significance nowadays. (Yes, some odd case may discover they are an awesome inventor past 30, but they suffer from a lack of earlier practice, and it’d be incredibly foolish of anyone who has known of their own natural talent since their teens not to practice properly.)
I’d also point out that if you read the investigative Hubbard biographies, you see many classic signs of con artistry: constant changes of location, careers, ideologies, bankruptcies or court cases in their wake, endless lies about their credentials, and so on. Most of these do not match Eliezer at all—the only similarities are flux in ideas and projects which don’t always pan out (like Flare), but that could be said of an ordinary academic AI researcher as well. (Most academic software is used for some publications and abandoned to bitrot.)
To clarify it better: the Roko incident illustrates how seriously some members of LW take nonsense conjectured threats. The fact of censorship is quite irrelevant. I was not really making a stab at Eliezer with the Roko incident (even though I can see how you could picture it as such, as it is easier to respond to the statement under that interpretation).
The HS dropping out and lack of accomplishments are a piece of evidence, and a rational Bayesian agent is better off knowing about such evidence. Especially given all the pieces of other evidence lying around, such as ‘world’s foremost expert on self-improvement’ and other introductions like http://www.youtube.com/watch?v=MwriJqBZyoM , which are normally indicative of far greater accomplishments (such as making something which self-improved) than the ones which took place.
To clarify it better: the Roko incident illustrates how seriously some members of LW take nonsense conjectured threats. The fact of censorship is quite irrelevant.
You can’t have it both ways. If it’s nonsense, then the importance is that someone took it seriously (like a donor), not anyone’s reaction to that someone taking it seriously (like Eliezer). If it’s not nonsense, then someone taking it seriously is not the issue, but someone’s reaction to taking it seriously (the censorship). Make up your mind.
The HS dropping out and lack of accomplishments are a piece of evidence, and a rational Bayesian agent is better off knowing about such evidence.
I don’t believe at any point in my comment did I claim the dropping out of school represented precisely 0 Bayesian evidence...
You can’t have it both ways. If it’s nonsense, then the importance is that someone took it seriously (like a donor), not anyone’s reaction to that someone taking it seriously (like Eliezer). If it’s not nonsense, then someone taking it seriously is not the issue, but someone’s reaction to taking it seriously (the censorship). Make up your mind.
If it is dangerous nonsense then it is important that there is a rebuttal (ideally one that works on people who would fall for the nonsense in the first place). Haven’t seen one.
If it is not nonsense, then it outlines that certain decision theories should not be built into FAI.
I don’t believe at any point in my comment did I claim the dropping out of school represented precisely 0 Bayesian evidence...
you really didn’t like me pointing it out, though.
I should also link trolley problem discussions perhaps.
Trolley problems are a standard type of problem discussed in intro psychology and intro philosophy classes in colleges. And they go further, with many studies just about how people respond to or think about them. That LW would want to discuss trolley problems, or that different people would have wildly conflicting responses to them, shouldn’t be surprising; that’s what makes them interesting. Using them as evidence that LW is somehow bad seems strange.
I’m not private_messaging, but I think he has a marginally valid point, even though I disagree with his sensational style.
I personally would estimate FinalState’s chances of building a working AGI at approximately epsilon, given the total absence of evidence. My opinion doesn’t really matter, though, because I’m just some guy with a LessWrong account.
The SIAI folks, on the other hand, have made it their mission in life to prevent the rise of un-Friendly AGI. Thus, they could make FinalState’s life difficult in some way, in order to fulfill their core mission. In effect, FinalState’s post could be seen as a Pascal’s Mugging attempt vs. SIAI.
The social and opportunity costs of trying to suppress a “UFAI attempt” as implausible as FinalState’s are far higher than the risk of failing to do so. There are also decision-theoretic reasons never to give in to Pascal-Mugging-type offers. SIAI knows all this and therefore will ignore FinalState completely, as well they should.
The social and opportunity costs of trying to suppress a “UFAI attempt” as implausible as FinalState’s are far higher than the risk of failing to do so.
I think that depends on what level of suppression one is willing to employ, though in general I agree with you. FinalState had admitted to being a troll, but even if he was an earnest crank, the magnitude of the expected value of his work would still be quite small, even when you do account for SIAI’s bias.
There are also decision-theoretic reasons never to give in to Pascal-Mugging-type offers
What are they, out of curiosity? I think I missed that part of the Sequences...
What are they, out of curiosity? I think I missed that part of the Sequences...
It’s not in the main Sequences; it’s in the various posts on decision theory and Pascal’s Muggings. I hope our resident decision theory experts will correct me if I’m wrong, but my understanding is this. If an agent is of the type that gives in to Pascal’s Mugging, then other agents who know that have an incentive to mug them. If all potential muggers know that they’ll get no concessions from an agent, they have no incentive to mug them. I don’t think this covers “Pascal’s Gift” scenarios where an agent is offered a tiny probability of a large positive utility, but it covers scenarios involving a small chance of a large disutility.
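A minimal toy model of that incentive argument (my own sketch; the payoff numbers and the tiny probability are illustrative assumptions, not anything taken from the decision-theory posts): a mugger with no real power only bothers issuing the threat if the agent’s publicly known policy is to pay.

```python
# Toy model: does a powerless mugger threaten an agent, given the agent's known policy?
# All constants below are illustrative assumptions.

PAYMENT = 5.0             # what the mugger demands
MUGGING_EFFORT = 0.1      # cost to the mugger of making the threat
P_THREAT_IS_REAL = 1e-30  # assumed probability the threatened harm is genuine
THREAT_DISUTILITY = 1e20  # assumed disutility if the threat were carried out

def mugger_threatens(agent_pays: bool) -> bool:
    # A powerless mugger gains PAYMENT only if the agent's policy is to pay up.
    expected_gain = PAYMENT if agent_pays else 0.0
    return expected_gain > MUGGING_EFFORT

def agent_expected_loss(agent_pays: bool) -> float:
    if not mugger_threatens(agent_pays):
        return 0.0  # no threat is ever made
    if agent_pays:
        return PAYMENT  # the threat is made and the agent hands over the payment
    return P_THREAT_IS_REAL * THREAT_DISUTILITY

for policy in (True, False):
    print(f"policy pays={policy}: expected loss {agent_expected_loss(policy)}")
# pays=True  -> expected loss 5.0 (the mugger has an incentive to threaten)
# pays=False -> expected loss 0.0 (no incentive to threaten, so no threat is made)
```

As the comment above notes, this only captures the incentive-removal point for large-disutility threats, not the “Pascal’s Gift” case.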
What probability would you give to FinalState’s assertion of having a working AGI?
0%, since it apparently isn’t finished yet.
Will it be finished in a year? 2%, as all other attempts that have reached the “final stages” have failed to build working AGI. The most credible of those attempts were done with groups; it appears FinalState is working alone.
That’s the kind of probability I would’ve assigned to EURISKO destroying the world back when Lenat was the first person ever to try to build anything self-improving. For a random guy on the Internet it’s off by… maybe five orders of magnitude? I would expect a pretty tiny fraction of all worlds to have the names of homebrew projects carved on their tombstones, and there are many random people on the Internet claiming to have AGI.
People like this are significant, not because of their chances of creating AGI, but because of what their inability to stop or take any serious precautions, despite their belief that they are about to create AGI, tells us about human nature.
Understanding “random guy on the Internet” to mean something like an Internet user all I know about whom is that they are interested in building AGI and willing to put some concerted effort into the project… hrm… yeah, I’ll accept e-7 as within my range.
My estimate for an actual random person on the Internet building AGI in, say, the next decade, has a ceiling of e-10 or so, but I don’t have a clue what its lower bound is.
That said, I’m not sure how well-correlated the willingness of a “random guy on the Internet” (meaning 1) to try to build AGI without taking precautions is to the willingness of someone whose chances are orders of magnitude higher to do so.
Then again, we have more compelling lines of evidence leading us to expect humans to not take precautions.
My estimate for an actual random person on the Internet building AGI in, say, the next decade, has a ceiling of e-10 or so, but I don’t have a clue what its lower bound is.
(I had to read that three times before getting why that number was 1000 times smaller than the other one, because I kept on misinterpreting “random person”. Try “randomly-chosen person”.)
I have no idea what you understood “random person” to mean, if not randomly chosen person. I’m also curious now as to whether whatever-that-is is what EY meant in the first place.
A stranger, esp. one behaving in weird ways; this appears to me to be the most common meaning of that word in 21st-century English when applied to a person. (Older speakers might be unfamiliar with it, but the median LWer is 25 years old, as of the latest survey.) And I also had taken the indefinite article to be an existential quantifier; hence, I had effectively interpreted the statement as “at least one actual strange person on the Internet building AGI in the next decade”, for which I thought such a low probability would be ridiculous.
but because of what their inability to stop or take any serious precautions, despite their belief that they are about to create AGI, tells us about human nature.
Are these in any way a representative sample of normal humans? In order to be in this category one generally needs to be pretty high on the crank scale along with some healthy Dunning-Kruger issues.
That’s always been the argument, that future AGI scientists won’t be as crazy as the lunatics presently doing it—that the current crowd of researchers are self-selected for incaution—but I wouldn’t put too much weight on that; it seems like a very human behavior: some of the smarter ones with millions of dollars don’t seem of below-average competence in any other way, and the VCs funding them are similarly incapable of backing off even when they say they expect human-level AGI to be created.
Sorry, I’m confused. By “people like this” did you mean people like FinalState or did you mean professional AI researchers? I interpreted it as the first.
Before people downvote PM’s comment above, note that Eliezer’s comment prior to editing was a hierarchy of different AI researchers with lowest being people like FinalState, the second highest being professional AI researchers and the highest being “top AI researchers”.
With that out of the way, what do you think you are accomplishing with this remark? You have a variety of valid points to make, but I fail to see what is contained in this remark that does anything at all.
Me or Eliezer? I’m making some point by direct demonstration. It’s a popular ranking system, ya know? He used it on FinalState. A lot of people use it on him.
This just shifts the question to how you slotted FinalState into such a promising reference class? Conservatively, tens of academic research programs, tens of PhD dissertations, hundreds of hobbyist projects, hundreds of undergraduate term projects, and tens of business ventures have attempted something similar to AGI and none have succeeded.
As far as I can tell, the vast majority of academic projects (particularly those of undergrads) have worked on narrow AI, which this is supposedly not.
However, reading the post again, it doesn’t sound as though they have the support of any academic institution; I misread the bit around “academic network”. It sounds more as though this is a homebrew project, in which case I need to go two or three orders of magnitude lower.
As far as I can tell, the vast majority of academic projects (particularly those of undergrads) have worked on narrow AI, which this is supposedly not.
That’s definitely a reasonable assessment. I dialed all those estimates down by about an order of magnitude from when I started writing that point as I thought through just how unusual attempting general AI is. But over sixty years and hundreds of institutions where one might get a sufficiently solid background in CS to implement something big, there are going to be lots of unusual people trying things out.
The Rule of Succession, if I’m not mistaken, assumes a uniform prior from 0 to 1 for the probability of success. That seems unreasonable; it shouldn’t be extremely improbable (even before observing failure) that fewer than one in a thousand such claims result in a working AGI. So you have to adjust downward somewhat from there, but it’s hard to say how much.
(This is in addition to the point that user:othercriteria makes in the sibling comment.)
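For reference, the rule being invoked, under that uniform Beta(1,1) prior (a textbook restatement; the choice of prior is exactly what is being questioned):

$$P(\text{success on trial } n+1 \mid s \text{ successes in } n \text{ trials}) = \frac{s+1}{n+2},$$

which with $s = 0$ gives $1/(n+2)$. Note also that the uniform prior assigns $P(p < 10^{-3}) = 10^{-3}$, which is the sense in which it may understate how rare success could really be.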
You’re correct, but where would I find a better prior? I’d rather be too conservative than resort to wild guessing (which it would be, since I’m not an expert on AGI).
(A variant of this is rhollerith_dot_com’s objection below, that I failed to take into account whatever the probability of working AGI leading to death is. Presumably that changes the prior as well.)
A. Many commonly used priors are listed in the Handbook of Chemistry and Physics.
Q. Where do priors originally come from?
A. Never ask that question.
Q. Uh huh. Then where do scientists get their priors?
A. Priors for scientific problems are established by annual vote of the AAAS. In recent years the vote has become fractious and controversial, with widespread acrimony, factional polarization, and several outright assassinations. This may be a front for infighting within the Bayes Council, or it may be that the disputants have too much spare time. No one is really sure.
Q. I see. And where does everyone else get their priors?
A. They download their priors from Kazaa.
Q. What if the priors I want aren’t available on Kazaa?
A. There’s a small, cluttered antique shop in a back alley of San Francisco’s Chinatown. Don’t ask about the bronze rat.
Isn’t the lesson of the Quantum Physics sequence that ordinary humans today should get their priors from the least complex (and falsifiable?) statements that aren’t inconsistent with empirical knowledge?
I don’t know where to get a good prior. I suppose you might look at past instances where someone claimed to be close to doing something that seemed about as difficult and confusing as AGI seems to be (before taking into account a history of promises that didn’t pan out, but after taking into account what we know about the confusingness of the problem, insofar as that knowledge doesn’t itself come from the fact of failed promises). I don’t know what that prior would look like, but it seems like it would assign (if you randomly selected a kind of feat) a substantially greater than 1⁄10 probability of seeing at least 10 failed predictions of achieving that feat for every successful such prediction, a substantially greater than 1⁄100 probability of seeing at least 100 failed predictions for every successful prediction, and so on.
Hmm yeah, I read the post again and, if it’s a troll, it’s a way-more-subtle-than-typical one. Still, my posterior probability assignment on him being serious/sincere is in the 0.40s (extraordinary claims require extraordinary evidence) -- though this means that the probability that he succeeds given that he’s serious is the same order of magnitude as the probability that he succeeds given everything I know.
If you know you probably would not have survived the sun’s having failed to rise, you cannot just apply the Rule of Succession to your knowledge of past sunrises to calculate the probability that the sun will rise tomorrow because that would be ignoring relevant information, namely the existence of a severe selection bias. (Sadly, I do not know how to modify the Rule of Succession to account for the selection bias.)
On the other hand, if you have so much background knowledge about the Sun that you can think about the selection effects involved, the Rule of Succession is a moot & incomplete analysis to begin with.
Regarding your second paragraph, Sir Gwern: if we switch the example to the question of whether the US and Russia will launch nukes at each other this year, I have a lot of information about the strength of the selection bias (including, for example, Carl Sagan's work on nuclear winter) that I might put to good use if I knew how to account for selection effects. But I would still be sorely tempted to use something like the Rule of Succession (modified to account for the selection bias, and where the analog of a day on which the sun might or might not rise is the start of the part of the career of someone in the military or in politics during which he or she can influence whether or not an attempt at a first strike is made), because my causal model of the mental processes behind the decision to launch is so unsatisfactory.
This might be a good place for me to point out that I never bought into the common wisdom, which I have never seen anyone object to or distance themselves from in print, that the chances of a nuclear exchange between the US and Russia went down considerably after the collapse of the Soviet Union in 1991.
This might be a good place for me to point out that I never bought into the common wisdom, which I have never seen anyone object to or distance themselves from in print, that the chances of a nuclear exchange between the US and Russia went down considerably after the collapse of the Soviet Union in 1991.
Nuclear war isn’t the same situation, though. We can survive nuclear war at all sorts of levels of intensity, so the selection filter is not nearly the same as “the Sun going out”, which is ~100% fatal. Bostrom’s shadow paper might actually work for nuclear war, from the perspective of a revived civilization, but I’d have to reread it to see.
The selection filter does not have to be total or near total for my point to stand, namely, Rule-of-Succession-like calculations can be useful even when one has enough information to think about the selection effects involved (provided that Rule-of-Succession-like calculations are ever useful).
And parenthetically selection effects on observations about whether nuclear exchanges happened in the past can be very strong. Consider for example a family who has lived in Washington, D.C., for the last 5 decades: Washington, D.C., is such an important target that it is unlikely the family would have survived the launch of most or all of the Soviet/Russian arsenal at the U.S. So, although I agree with you that the human race as a whole would probably have survived almost any plausible nuclear exchange, that does not do the family in D.C. much good. More precisely, it does not do much good for the family’s ability to use historical data on whether or not nukes were launched at the U.S. in the past to refine their probability of launches in the future.
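(A toy illustration of the survivorship point, not an attempt at the anthropic correction discussed above: the per-year launch probability and the assumption that any launch removes the observer are made up for the example.)

```python
import random

def simulate(p_launch_per_year=0.02, years=50, histories=100_000, seed=0):
    """Toy survivorship model: a launch in any year removes the observer
    (the D.C. family), so only launch-free histories yield survivor data."""
    rng = random.Random(seed)
    histories_with_launch = 0
    for _ in range(histories):
        if any(rng.random() < p_launch_per_year for _ in range(years)):
            histories_with_launch += 1
    print(f"histories containing a launch: {histories_with_launch / histories:.2f}")
    print("launches visible in survivor histories: 0.00 (by construction)")

simulate()  # roughly 0.64 of histories contain a launch the family never observes
```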
Me too. This value is several orders of magnitude above my own estimate.
That said, it depends on your definition of “finished”. For example, it is much more plausible (relatively speaking) that FinalState will fail to produce an AGI, but will nevertheless produce an algorithm that performs some specific task—such as character recognition, unit pathing, natural language processing, etc. -- better than the leading solutions. In this case, I suppose one could still make the argument that FinalState’s project was finished somewhat successfully.
The thread about EY's failure to make many falsifiable predictions is better ad hominem.
I meant to provide priors for the expected value of communication with SI. Sorry, it can't be done in a non-ad-hominem way. There's been a video or two where Eliezer was called the "world's foremost expert on recursive self improvement", which normally implies having made something self-improve.
the speculation about launching terrorist attacks on fab plants is a much more compelling display of potential risk to life and property.
Ahh right, I should have also linked this one. I see it was edited, replacing 'we' with 'world government' and 'sabotage' with sanctions and military action. BTW, that speculation is by gwern; is he working at SIAI?
What probability would you give to FinalState’s assertion of having a working AGI?
AGI is ill-defined. For something that would foom so as to pose a potential danger: infinitesimally small.
Ultimately: I think risk to his safety is small, and payoff is negligible, while the risk from his software is pretty much nonexistent.
It nonetheless results in significant presentation bias, whatever the cause.
My priors, for one thing, were way off in SI's favour. My own cascade of updates was triggered by seeing Alexei say that he plans to make a computer game to make money to donate to SIAI. Before that, I sort of assumed that the AI discussions here were about some sort of infinite-power superintelligence out of sci-fi, not unlike Vinge's Beyond: an intellectually pleasurable game of wits (I even participated a little once or twice, along the lines of how you can't debug a superintelligence). I assumed that Eliezer had achievements from which he got the attitude (I sort of confused him with Hanson to some extent), etc. I have since looked into it more carefully.
“And if Novamente should ever cross the finish line, we all die.”
And yet SIAI didn’t do anything to Ben Goertzel (except make him Director of Research for a time, which is kind of insane in my judgement, but obviously not in the sense you intend).
Ben Goertzel’s projects are knowably hopeless, so I didn’t too strongly oppose Tyler Emerson’s project from within SIAI’s then-Board of Directors; it was being argued to have political benefits, and I saw no noticeable x-risk so I didn’t expend my own political capital to veto it, just sighed. Nowadays the Board would not vote for this.
And it is also true that, in the hypothetical counterfactual conditional where Goertzel’s creations work, we all die. I’d phrase the email message differently today to avoid any appearance of endorsing the probability, because today I understand better that most people have trouble mentally separating hypotheticals. But the hypothetical is still true in that counterfactual universe, if not in this one.
Also, in the hypothetical counterfactual conditional where Goertzel’s creations work, we all die
What about the hypothetical counterfactual conditional where you run into some AGI software that you think will work? Should I assume a zero positive rate for 'you think it works'?
I’d phrase the email message differently today to avoid any appearance of endorsing the probability, because today I understand better that most people have trouble mentally separating hypotheticals.
Really? So it is invalid to pose the hypothetical that, if someone has a project you think will work, you may think we are all going to die unless that project is stopped?
Did I claim they beat him up or what? Ultimately, the more recent opinion which I saw somewhere is that Eliezer ended up considering Ben harmless, as in unlikely to achieve the result. I also see you guys really loving trolley problems, including extreme forms of them (with 3^^^3 dust specks in 3^^^3 eyes).
Having it popularly told that your project is going to kill everyone is already a risk given all the other nutjobs:
Even if later atoned for by making you head of SI or something (with unclear motivation, which may well be creepy in nature).
See, I did not say he was going to definitely get killed or something. I said there was a risk. Yeah, nothing happening to Ben Goertzel's persona is proof positive that the risk is zero. Geez, why won't you for once reason like this about AI risk, for example.
Ultimately: encounters with a nutjob* who may, after presentation of technical details, believe you are going to kill everyone, are about as safe as making credible death threats against a normal person and his relatives and his family, etc. Or less safe, even. Neither results in a 100% probability of anything happening.
*though of course the point may be made that he doesn’t believe the stuff he says he believes, or that a sane portion of his brain will reliably enact akrasia over the decision, or something.
The existence of third-party anti-technology terrorists adds something to the conversation beyond the risks FinalState can directly pose to SIAI-folk and vice versa. I’m curious about gwern’s response, especially, given his interest in Death Note, which describes a world where law enforcement can indirectly have people killed just by publishing their identifying information.
He explicitly said that the aggregate risk from the project would be way, way smaller than the personal risk from SIAI. Trying to convince people to stop gives SIAI an impression of power and so increases its resource-acquisition possibilities, which is considered bad.
Why do you think this is the case? Is this just because the overall knowledge level concerning AI goes up over time? If so, what makes you think that that rate of increase is anything large enough to be significant?
Yes. This is just the way of invention in general: steady incremental evolutionary progress.
A big, well-funded team can throw more computational resources at their particular solution to the problem, but the returns are sublinear (for any one particular solution), even without Moore's law.
It's both amusing and disconcerting that people on this forum treat such a comment seriously.
I try to treat all comments with some degree of seriousness, which can be expressed as a floating-point number between 0 and 1 :-)
Isn’t the SIAI founded on the supposition that a scenario like this is possible?
Yes, but on this forum there should be some reasonable immunity against instances of Pascal’s wager/mugging like that. The comment in question does not rise above the noise level, so treating it seriously shows how far many regulars still have to go in learning the basics.
If this works, it’s probably worth a top-level post.
Upvoted for humor: “probably”.
Cheers! Some find my humor a little dry.
Congratulations on your insights, but please don’t snrk implement them until snigger you’ve made sure that oh heck I can’t keep a straight face anymore.
The reactions to the parent comment are very amusing. We have people sarcastically supporting the commenter, people sarcastically telling the commenter they’re a threat to the world, people sarcastically telling the commenter to fear for their life, people non-sarcastically telling the commenter to fear for their life, people honestly telling the commenter they’re probably nuts, and people failing to get every instance of the sarcasm. Yet at bottom, we’re probably all (except for private_messaging) thinking the same thing: that FinalState almost certainly has no way of creating an AGI and that no-one involved need feel threatened by anyone else.
Nah, I stated that the probability of him creating AGI is epsilon (my probability for his project hurting me is a microscopic epsilon, while SI hurting him somehow is a larger epsilon; I only stated the relation that the latter is larger than the former. The probability of a person going unfriendly is way, way higher than the probability of a person creating an AGI that kills us all).
I think we're all here for various sarcastic or semi-sarcastic points; my point is that, given the SI stance, AGI researchers would (and have to) try to keep away from SI, especially those who have some probability of creating an AGI, given the combination of the probability of a useful contribution by SI versus the probability of SI going nuts.
I never thought you disagreed with:
I actually meant that I thought you disagreed with:
Sorry for the language ambiguity. If you think the probability of SI hurting FinalState is epsilon, I misunderstood you. I thought you thought it was a large enough probability to be worth worrying about and warning FinalState about.
You’ll have to forgive Eliezer for not responding; he’s busy dispatching death squads.
Not funny.
Of course not, why send death squads when you can send Death Eaters. It just takes a single spell to solve this problem.
Indeed not.
Do you mean “I am in the final writing stages of a paper on a general intelligence algorithm?” If you were in the final implementation stages of what LW would recognize as the general intelligence algorithm, the very last thing you would want to do is mention that fact here; and the second-to-last thing you’d do would be to worry about personal credit.
I am open to arguments as to why that might be the case, but unless you also have the GIA, I should be telling you what things I would want to do first and last. I don’t really see what the risk is, since I haven’t given anyone any unique knowledge that would allow them to follow in my footsteps.
A paper? I’ll write that in a few minutes after I finish the implementation. Problem statement → pseudocode → implementation. I am just putting some finishing touches on the data structure cases I created to solve the problem.
As far as I understand, the SIAI folks believe that the risk is, “you push the Enter key, your algorithm goes online, bootstraps itself to transhuman superintelligence, and eats the Earth with nanotechnology” (nanotech is just one possibility among many, of course). I personally don’t believe we’re in any danger of that happening any time soon, but these guys do. They have made it their mission in life to prevent this scenario from happening. Their mission and yours appear to be in conflict.
That is just wrong. SAI doesn’t really work like that. Those people have seen too many sci fi movies. It’s easy to psychologically manipulate an AI if you are smart enough to create one in the first place. To use terms I have seen tossed around, there is no difference between tool and agent AI. The agent only does things that you program it to do. It would take a malevolent genius to program something akin to a serial killer to cause that kind of scenario.
The people who created Deep Thought have no problem beating it at chess.
What on earth is this retraction nonsense?
Retraction means that you no longer endorse the contents of a comment. The comment is not deleted so that it will not break existing conversations. Retracted comments are no longer eligible for voting. Once a comment is retracted, it can be revisited at which point there is a ‘delete’ option, which removes the comment permanently.
I didn’t realize that I was receiving all that mail...
Yummy 11 more tears
If you are not totally incompetent or lying out of your ass, please stop. Do not turn it on. At least consult SI.
Don’t feed the… um, crank.
A Pascal's mugging is worth at least a comment.
If you are concerned about intellectual property rights, by all means have a confidentiality agreement signed before revealing any proprietary information. Any reasonable person would not have a problem signing such an agreement.
Expect some skepticism until a working prototype is available.
Good luck with your project!
My recommendation: stay away from SIAI as there is considerable probability that they are at worst nutjobs or at best a fraud. Either way it is dangerous for you, as in, actual risk to your safety. Do not reveal your address. I am bloody serious. It is not a game here.
edit: supplemental information (not so much on the potential dangers but on the usefulness of communication):
the ‘roko incident’: http://rationalwiki.org/wiki/Talk:LessWrong#Hell
the founder: http://lesswrong.com/lw/6dr/discussion_yudowskys_actual_accomplishments/
Note: I fully believe that the risk to your safety (while small) outweighs the risk to all of us from your software project. All of us includes me, all my relatives, all the people I care for, the world, etc.
So your argument that visiting a bunch of highly educated pencil-necked white nerds is physically dangerous boils down to… one incident of ineffective online censorship mocked by most of the LW community and all outsiders, and some criticism of Yudkowsky’s computer science & philosophical achievements.
I see.
I would literally have had more respect for you if you had used racial slurs like “niggers” in your argument, since that is at least tethered to reality in the slightest bit.
Where a single incident seems grotesquely out of character, one should attempt to explain the single incident’s cause. What’s troubling is that Eliezer Yudkowsky has: 1) never admitted his mistake; 2) never shown (at least to my knowledge) any regret over how he handled it; and 3) most importantly, never explained his response (practically impossible without admitting his mistake).
The failure to address a wrongdoing or serious error over many years means it should be taken seriously, despite the lapse of time. The failure of self-analysis raises real questions about a lack of intellectual honesty—that is, a lack of epistemic rationality.
I don’t think it’s hard to explain at all: Eliezer prioritized a donor (presumably long-term and one he knew personally) over an article. I disagree with it, but you know what, I saw this sort of thing all the time on Wikipedia, and I don’t need to go looking for theories of why administrators were crazy and deleted Daniel Brandt’s article. I know why they did, even though I strongly disagreed.
He or someone else must have explained at some point, or I wouldn’t know his reason was that the article was giving a donor nightmares.
Is deleting one post such an issue to get worked up over? Or is this just discussed because it’s the best criticism one can come up with besides “he’s a high school dropout who hasn’t yet created an AI and so must be completely wrong”?
Please cite your claim that the affected person was a donor.
Has he said anywhere that the individual with nightmares was a donor? Note incidentally that having content that is acting as that much of a cognitive basilisk might be a legitimate reason to delete (although I’m inclined to think that it wasn’t).
Like JoshuaZ, I hadn't known a donor was involved. What's the big deal? People donate to SIAI because they trust Eliezer Yudkowsky's integrity and intellect. So it's natural to ask whether he's someone you can count on to deliver the truth. Caving to donors is inauspicious.
In a related vein, I also found it disturbing that Eliezer Yudkowsky repeated his claim that that Loosemore guy "lied." Having had years to cool off, he still hasn't summoned the humility to admit he stretched the evidence for Loosemore's deceitfulness: Loosemore is obviously a cognitive scientist.
These two examples paint a picture of Eliezer Yudkowsky as a person subject to strong personal loyalties and animosities that exceed his dedication to the truth. In the first incident, his loyalty to a donor induced him to suppress information; in the Loosemore incident, his longstanding animosity toward Loosemore made him unable to adjust his earlier opinion.
I hope these impressions aren’t accurate. But one thing seems for sure: Eliezer Yudkowsky is not a person for serious self-criticism. Has he admitted any significant intellectual error since he became a rationalist? [Serious question.]
It’s also a double-bind. If you do nothing, you are valuing donors at less than some random speculation which is unusually dubious even by LessWrong’s standards, resting as it does on a novel speculative decision theory (acausal trade) whose most obvious requirement (implementing sufficiently similar algorithms) is beyond blatantly false when applied to humans and FAIs. (If you actually believe that SIAI is a good charity, pissing off donors over something like this is a really bad idea, and if you don’t believe SIAI is a good charity, well, that’s even more damning, isn’t it?) And if you delete it, well, you get exactly this stupid mess which is still being dragged up years later.
Repudiating most of his long-form works like CFAI and LOGI and CEV isn’t admission of error?
Personally, when he was writing the Sequences, I found it a little obnoxious how he kept saying “I was totally on the wrong track and mistaken before I was enlightened & came to understand Bayesian statistics, but now I have a chance of being less wrong”—once is enough, we get it already, I’m not that interested in your intellectual evolution.
As someone who hasn’t been around that long, it would be interesting to have links. I’m having trouble coming up with useful search terms.
Creating Friendly AI, Levels of Organization in General Intelligence, and Coherent Extrapolated Volition.
Sorry, I wasn’t clear. I meant links to the repudiations. I’ve read some of the material in CFAI and CEV, but not the retraction, and not yet any of LOGI.
Oh. I don’t remember, then, besides the notes about them being obsolete.
Hmm, and the foom belief (for instance) is based on Bayesian statistics how?
That's pretty damn interesting, because I've understood Bayesian statistics for ages, understood how wrong you are without it, and also understood how computationally expensive it is—just think what sort of data you need to attach to each proposition to avoid double-counting evidence, to avoid any form of circular updates, to avoid naive Bayesian mistakes… and, even worse, how prone it is to producing faulty conclusions from a partial set of propositions (as generated by e.g. exploring ideas, which, by the way, introduces another form of circularity, as you tend to use ideas you think are probable as a starting point more often).
Seriously, he should try to write software that does updates correctly on a graph with cycles and correlated propositions. That might result in another enlightenment, hopefully one leading not to increased confidence but to decreased confidence. Statistics isn't easy to do right, and relatively minor bugs easily lead to major errors.
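(A minimal sketch of the double-counting failure being alluded to; the numbers are arbitrary.)

```python
def update(prior_odds, likelihood_ratio):
    """One Bayesian update in odds form."""
    return prior_odds * likelihood_ratio

to_prob = lambda odds: odds / (1 + odds)

prior_odds = 1.0   # P(H) = 0.5
lr = 4.0           # one piece of evidence E favours H at 4:1

# Correct: E is a single observation, so update once.
correct = update(prior_odds, lr)

# Naive: the same E reaches the reasoner through two correlated propositions
# in the graph and gets counted twice.
double_counted = update(update(prior_odds, lr), lr)

print(f"correct posterior:        {to_prob(correct):.2f}")         # 0.80
print(f"double-counted posterior: {to_prob(double_counted):.2f}")  # 0.94
```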
I don’t think it’s based on Bayesian statistics any more than any other belief may (or may not) be based. To take Eliezer specifically, he was interested in the Singularity—specifically, the Good/Vingean observation that a machine more intelligent than us ought to be better than us at creating a still more intelligent machine—long before he had his ‘Bayesian enlightenment’, so his shift to subjective Bayesianism may have increased his belief in intelligence explosions, but certainly didn’t cause it.
Once again: ROKO DELETED HIS OWN POST. NO OUTSIDE CENSORSHIP WAS INVOLVED.
This is how rumors evolve, ya know.
Eliezer, I upvoted you and was about to apologize for contributing to this rumor myself, but then found this quote from a copy of the Roko post that’s available online:
Perhaps your memory got mixed up because Roko subsequently deleted all of his other posts and comments? (Unless “banning” meant something other than “deleting”?)
Now I’ve got no idea what I did. Maybe my own memory was mixed up by hearing other people say that the post was deleted by Roko? Or Roko retracted it after I banned it, or it was banned and then unbanned and then Roko retracted it?
I retract my grandparent comment; I have little trust for my own memories. Thanks for catching this.
A lesson learned here. I vividly remembered your “Meanwhile I’m banning this post” comment and was going to remind you, but chickened out due to the caps in the great-grandparent which seemed to signal that you Knew What You Were Talking About and wouldn’t react kindly to correction. Props to Wei Dai for having more courage than I did.
I’m surprised and disconcerted that some people might be so afraid of being rebuked by Eliezer as to be reluctant to criticize/correct him even when such incontrovertible evidence is available showing that he’s wrong. Your comment also made me recall another comment you wrote a couple of years ago about how my status in this community made a criticism of you feel like a “huge insult”, which I couldn’t understand at the time and just ignored.
I wonder how many other people feel this strongly about being criticized/insulted by a high status person (I guess at least Roko also felt strongly enough about being called “stupid” by Eliezer to contribute to him leaving this community a few days later), and whether Eliezer might not be aware of this effect he is having on others.
My brain really, really does not want to update on the numerous items of evidence available to it that it can hit people much much harder now, owing to community status, than when it was 12 years old.
(nods) I’ve wondered this many times.
I have also at times wondered if EY is adopting the “slam the door three times” approach to prospective members of his community, though I consider this fairly unlikely given other things he’s said.
Somewhat relatedly, I remember when lukeprog first joined the site, he and EY got into an exchange that, from what I recall of my perspective as a completely uninvolved third party, involved luke earnestly trying to offer assistance and EY being confidently dismissive of any assistance someone like luke could provide. At the time I remember feeling sort of sorry for luke, who it seemed to me was being treated a lot worse than he deserved, and surprised that he kept at it.
The way that story ultimately turned out led me to decide that my model of what was going on was at least importantly incomplete, and quite possibly fundamentally wrongheaded, but I haven’t further refined that model.
As a data point here, I tend to empathize with the recipient of such barrages to what I subjectively estimate as about 60% of the degree of emotional affect that I would experience if it were directed at myself. Particularly if said recipient is someone I respect as much as Roko and when the insults are not justified; less if they do not have my respect, and if the insults are justified I experience no empathy. It is the kind of thing that I viscerally object to having in my tribe, and where possible I try to ensure that the consequences to the high-status person for their behavior are as negative as possible—or at least to minimize the reward they receive if the tribe is one that tends to reward bullying.
There are times in the past—let's say 4 years ago—when such an attack would certainly have prompted me to leave a community, even if the community was otherwise moderately appreciated. Now I believe I am unlikely to leave over such an incident. I would say I am more socially resilient and also more capable of understanding social politics as a game, and so take it less personally. For instance, when I received the more mildly expressed declaration from Eliezer, "You are not safe to even associate with!", I don't recall experiencing any flight impulses—more surprise.
I was a little surprised at first too at reading of komponisto’s reticence. Until I thought about it and reminded myself that in general I err on the side of not holding my tongue when I ought. In fact, the character “wedrifid” on wotmud.org with which I initially established this handle was banned from the game for 3 months for making exactly this kind of correction based off incontrovertible truth. People with status are dangerous and in general highly epistemically irrational in this regard. Correcting them is nearly always foolish.
I must emphasize that part of my initial surprise at kompo's reticence is due to my model of Eliezer as not being especially corrupt in this kind of regard. In response to such a correction I expect him to respond positively and update. Eliezer may be arrogant and a tad careless when interacting with people at times, but he is not an egotistical jerk enforcing his dominance in his domain with dick moves. That's both high praise (by my way of thinking) and a reason for people to err less on the side of caution with him and to take less personally any 'abrupt' things he may say. Eliezer being rude to you isn't a precursor to him beating you to death with a metaphorical rock to maintain his power—as our instincts may anticipate. He's just being rude.
People have to realize that to critically examine his output is very important due to the nature and scale of what he is trying to achieve.
Even people with comparatively modest goals like trying to become the president of the United States of America should face and expect a constant and critical analysis of everything they are doing.
Which is why I am kind of surprised how often people ask me if I am on a crusade against Eliezer or find fault with my alleged “hostility”. Excuse me? That person is asking for money to implement a mechanism that will change the nature of the whole universe. You should be looking for possible shortcomings as well!
Everyone should be critical of Eliezer and SIAI, even if they agree with almost everything. Why? Because if you believe that it is incredibly important and difficult to get friendly AI just right, then you should be wary of any weak spot. And humans are the weak spot here.
That's why outsiders think it's a circlejerk. I've heard of Richard Loosemore, who as far as I can see was banned over corrections on the "conjunction fallacy"; I'm not sure what exactly went on, but of course, having spent time reading the Roko thing (and having assumed that there was something sensible I had not heard of, and then learning that there wasn't), it's kind of obvious where my priors are.
Maybe try keeping statements more accurate by qualifying your generalizations ("some outsiders"), or even just saying "that's why I think this is a circlejerk." That's what everyone ever is going to interpret it as anyhow (intentional).
Maybe you guys are too careful with qualifying everything as "some outsiders", and then you end up with outsiders like Holden forming negative views which you could have predicted if you generalized more (and you would have had the benefit of Holden's anticipated feedback without him telling people not to donate).
Maybe. Seems like you're reaching, though: maybe something bad comes from us being accurate rather than general about things like this, and maybe Holden criticizing SIAI is a product of this on LessWrong for some reason, and therefore it is in fact better for you to say inaccurate things like "outsiders think it's a circlejerk." Because you… care about us?
You guys are only being supposedly 'accurate' when it feels good. I have not said 'all outsiders'; that's your interpretation, which you can subsequently disagree with.
SI generalized from the agreement of self-selected participants onto the opinions of outsiders like Holden, subsequently approaching him and getting back the same critique they've been hearing from rare 'contrarians' here for ages but assumed to be some sort of fringe view. I don't really care what you guys do with this; you can continue as is and be debunked big time as cranks, your choice. edit: actually, you can see Eliezer himself said that most AI researchers are lunatics. What did SI do to distinguish themselves from what you guys call 'lunatics'? What is there here that can shift probabilities from the priors? Absolutely nothing. The focus on safety with made-up fears is no indication of sanity whatsoever.
You’re misusing language by not realizing that most people treat “members of group A think X” as “a sizable majority of members of group A think X”, or not caring and blaming the reader when they parse it the standard way. We don’t say “LWers are religious” or even “US citizens vote Democrat”, even though there’s certainly more than one religious person on this site or Democrat voter in the US.
And if you did intend to say that, you’re putting words into Manfred’s mouth by assuming he’s talking about ‘all’ instead.
I do think that the 'sizable majority' hypothesis has not been ruled out, to say the least. SI is working to help build a benevolent ruler bot, to save the world from a malevolent bot. That sounds about as crazy as things can get. Prior track record of doing anything relevant? None. Reasons for SI to think they can make any progress? None.
I think most sceptically minded people do see that kind of stuff in a pretty negative light, but of course that's my opinion; you can disagree. Actually, who cares: SI should just go on, 'fix' what Holden pointed out, increase visibility, and get listed on crackpot/pseudoscience pages.
I’m not talking about SI (which I’ve never donated money to), I’m talking about you. And you’re starting to repeat yourself.
I can talk about you too. The statement "That's why outsiders think it's a circlejerk" does not have a 'sizable majority', 'significant minority', 'all', or 'some' qualifier, nor does it have any kind of implied qualifier, nor does it need qualifying with a vague "some"; that is entirely needless verbosity (as the 'some' can range from 0.00001% to 99.999%), and the request to add "some" is clearly rhetorical, which we both realize equally well. (It is the case, though, that I think the most likely case is 'significant majority of rational people', i.e. I expect a greater than 50% chance of a strong negative opinion of SI if it is presented to a rational person.)
The other day someone told me my argument was shifting like wind.
Does that mean it is time to stop feeding him?
I had decided, when I finished my hiatus recently, that the account in question had already crossed the threshold past which I could no longer reply to him without predicting that I was just causing more noise.
Good point.
I don’t feel insulted at all. He is much smarter than me. But I am also not trying to accomplish the same as him. If he calls me stupid for criticizing him, that’s as if someone who wants to become a famous singer is telling me that I can’t sing when I criticized their latest song. No shit Sherlock!
IIRC Roko deleted the speculation-about-superintelligences part of the post shortly after its publication, but discussion in the comments raged on, so you subsequently banned the whole post/discussion.
And a few days later, primarily for unrelated reasons but probably with this incident as a trigger, Roko deleted his account, which on that version of LW meant that the text of all his comments disappeared (on the current version of LW, only author’s name gets removed when account is deleted, comments don’t disappear).
Roko never deleted his account; he simply deleted all of his comments individually.
Surely not individually (there were probably thousands, and IIRC it was also happening to other accounts, so it wasn't the result of running a self-made destructive script); what you're seeing is just how "deletion of account" performed on the old version of LW looks on the current version of LW.
No, I don’t think so; in fact I don’t think it was even possible for users to delete their own accounts on the old version of LW. (See here.) SilasBarta discovered Roko in the process of deleting his comments, before they had been completely deleted.
That post discusses the fact that account deletion was broken at one time in 2011, and a decision was being made about how to handle account deletion in the future. It doesn’t say anything relevant about how it worked in 2010.
“April last year” in that comment is when LW was started, I don’t believe it refers to incomplete deletion. The comments before that date that remained could be those posted under a different username (account), automatically copied from overcomingbias along with the Sequences.
Here is clearer evidence that account deletion simply did nothing back then. My understanding is the same as komponisto’s: Roko wrote a script to delete all of his posts/comments individually.
This comment was written 3 days before the post komponisto linked to, which discussed the issue of account deletion feature having been broken at that time (Apr 2011); the comment was probably the cause of that post. I don’t see where it indicates the state of this feature around summer 2010. Since “nothing happens” behavior was indicated as an error (in Apr 2011), account deletion probably did something else before it stopped working.
Ok, I guess I could be wrong then. Maybe somebody who knows Roko could ask him?
This sounds right to me, but I still have little trust in my memories.
Or little interest in rational self-improvement by figuring what actually happened and why?
[You’ve made an outrageously self-assured false statement about this, and you were upvoted—talk about sycophancy—for retracting your falsehood, while suffering no penalty for your reckless arrogance.]
To clarify for those new here—“retract” here is meant purely in the usual sense, not in the sense of hitting the “retract” button, as that didn’t exist at the time.
Are there no server logs or database fields that would clarify the mystery? Couldn’t Trike answer the question? (Yes, this is a use of scarce time—but if people are going to keep bringing it up, a solid answer is best.)
Your point is well taken, but since part of the concern about that whole affair was your extreme language and style, maybe stating this in normal caps might be a reasonable step for PR.
This is half the truth. Here is what he wrote:
Please rot13 the part from “potentially” onwards, and add a warning as in this comment (with “decode the rot-13′d part” instead of “follow the links”), because there are people here who’ve said they don’t want to know about that thing.
Note that the post in question had already been seen by the donor, and effectively advocated donating all spare money to SI. I imagine the donor was not a mind upload and the point was not deleted from the donor's memory, but I do know that deleting it from public space resulted in a lack of rebuttals.
In any case my point was not that censorship was bad, but that a nonsense threat utterly lacking in any credibility was taken very seriously (to the point of nightmares, you say?). It is dangerous to have anyone seriously believe your project is going to kill everyone, even if that person is a pencil-necked white nerd.
Strawman. A Bayesian reasoner should update on such evidence, especially as the combination of 'high school dropout' and 'no impressive technical accomplishments' is a very strong indicator (of a lack of world-class genius) for that age category. It is the case that this evidence, post-update, shifts estimates significantly in the direction of 'completely wrong or not even wrong' for all insights that require world-class-genius-level intelligence, such as, incidentally, forming an opinion on AI risk which most world-class geniuses did not form.
In any case I did not even say what you implied. To me the Roko incident is evidence that some people here take that kind of nonsense seriously enough to have nightmares about it (to delete it, etc.), and as such it is unsafe if such people get told that a particular software project is going to kill us all; the list of accomplishments, meanwhile, was something to perform an update on when evaluating the probability.
I have never seen where the person-with-nightmares was revealed as a donor, or indeed any clue as to who they were other than ‘someone Eliezer knows’. I would like some evidence, if there is any.
Also, Eliezer did not drop out of high school; he never attended in the first place, commonly known as ‘skipping it’, which is more common among “geniuses” (though I dislike that description).
I sent you 3 pieces of evidence via private message. Including two names.
Thank you for the links.
Please note that none of the evidence shows the donor status of the anonymous people/person who actually had nightmares, and the two named individuals did not say it gave them nightmares, but used a popular TVTropes idiom, “Nightmare Fuel”, as an adjective.
Very few people are so smart that they are in the category of 'too smart for high school and any university'… many more are less smart, and some have practical issues (needing to work to feed a family, for example). There are some very serious priors from the normal distribution for the evidence to shift. Successful self-education is fairly uncommon, especially outside the context of 'had to feed family'.
Your criticism shifts as the wind.
What is your purpose?
Does it really? Do I have to repeat myself more? Is it against some unwritten rule to mention the bell-curve prior which I have had from the start?
What do you think? Feedback. I do actually think he's nuts, you know? I also think he's terribly miscalibrated, which is probably the cause of the overconfidence in his foom belief (and it is ultimately the overconfidence that is nutty; the same beliefs with appropriate confidence would be just mildly weird, in a good way). It is also probably the case that politeness results in biased feedback.
If your purpose is “let everyone know I think Eliezer is nuts”, then you have succeeded, and may cease posting.
Well, there's also the matter of why I'd think he's nuts when facing the "either he's a supergenius or he's nuts" dilemma created by overly high confidence expressed in overly speculative arguments. But yeah, I'm not sure it's getting anywhere; the target audience is just EY himself, and I do expect he'd read this at least out of curiosity to see how he's being defended, but with low confidence, so I'm done.
Most "world class geniuses" have not opined on AI risk. So "forming an opinion on AI risk which most world-class geniuses did not form" is hardly a task which requires "world-class-genius-level intelligence".
For a “Bayesian reasoner”, a piece of writing is its own sufficient evidence concerning its qualities. Said reasoner does not need to rely much on indirect evidence concerning the author, after the reasoner has read the actual writing itself.
Nonetheless, the risk in question is also a personal risk of death for every genius… now, I don't know how we define geniuses here, but obviously most geniuses could be presumed pretty good at preventing their own deaths, or the deaths of their families. I should have said: forming a valid opinion.
Assuming that absolutely nothing in the writing had to be taken on faith. True for mathematical proofs. False for almost everything else.
That seems like a pretty questionable presumption to me. High IQ is linked to reduced mortality according to at least one study, but that needn’t imply that any particular fatal risk be likely to be uncovered, let alone prevented, by any particular genius; there’s no physical law stating that lethal threats must be obvious in proportion to their lethality. And that’s especially true for existential threats, which almost by definition must be without experiential precedent.
You’d have a stronger argument if you narrowed your reference class to AI researchers. Not a terribly original one in this context, but a stronger one.
Numbers?
Go dig for numbers yourself, and assume he is a genius until you find numbers; that will be very rational. Meanwhile, most people have a general feel for how rare it would be that a person with supposedly genius-level untested insights into a technical topic (insomuch as most geniuses fail to have those insights) would have nothing impressive that was tested, at the age of, what, 32? edit: Then also, geniuses know of that feeling and generally produce the accomplishments in question if they want to be taken seriously.
Starting a nonprofit on a subject unfamiliar to most and successfully soliciting donations, starting an 8.5-million-view blog, writing over 2 million words on wide-ranging controversial topics so well that the only sustained criticism to be made is “it’s long” and minor nitpicks, writing an extensive work of fiction that dominated its genre, and making some novel and interesting inroads into decision theory all seem, to me, to be evidence in favour of genius-level intelligence. These are evidence because the overwhelming default in every case for simply ‘smart’ people is to fail.
Many a con man accomplishes this.
The overwhelming default for those capable of significant technical accomplishment is not to spend time on such activities.
Ultimately there are many more successful ventures like this, such as Scientology, and if I use this kind of metric on L. Ron Hubbard…
It provides evidence in favour of him being correct. If there weren’t other sources of information on Hubbard’s activities, I’d expect him to be of genius-level intelligence.
You’re familiar with the concept that someone looking like Hitler doesn’t make them fascist, right?
Honestly, I wouldn’t be surprised if he was; he clearly had an almost uniquely good understanding of what it takes to build a successful cult (though his early links with the OTO probably helped). New religious movements start all the time, and not one in a hundred reaches Scientology’s level of success. You can be both a genius and a charlatan. It’s easier to be the latter if you’re the former, actually.
Although his writing’s admittedly pretty terrible.
I wouldn't expect genius-level technical intelligence. Self-deception is an important part of effective deception; you have to believe a lie to build a good lie. Avoiding self-deception is an important part of technical accomplishment.
Furthermore, knowing that someone has no technical accomplishments is very different from not knowing if someone has technical accomplishments.
This does not seem obvious to me, in general. Do you have experience making technical accomplishments?
Yes. I worked at 3 failed start-ups and founded a successful start-up (and know of several more failed ones). Self-deception is incredibly destructive to any accomplishment that does not involve deceiving other people. You need to know how good your skill set is, how good your product is, how good your idea is. You can't be falling in love with brainfarts.
In any case, talents require extensive practice with feedback (they are massively enhanced by it), and no technical accomplishments above the age of 30 pretty much excludes any possibility of technical talent of any significance nowadays. (Yes, the odd person may discover they are an awesome inventor past 30, but they suffer from the lack of earlier practice, and it would be incredibly foolish of anyone who has known of their own natural talent since their teens not to practice properly.)
I’d also point out that if you read the investigative Hubbard biographies, you see many classic signs of con artistry: constant changes of location, careers, ideologies, bankruptcies or court cases in their wake, endless lies about their credentials, and so on. Most of these do not match Eliezer at all—the only similarities are flux in ideas and projects which don’t always pan out (like Flare), but that could be said of an ordinary academic AI researcher as well. (Most academic software is used for some publications and abandoned to bitrot.)
To clarify it better: the Roko incident illustrates how seriously some members of LW take nonsense conjectured threats. The fact of censorship is quite irrelevant. I was not really making a stab at Eliezer with the Roko incident (even though I can see how you can picture it as such, as it is easier to respond to the statement under this interpretation).
The dropping out of high school and the lack of accomplishments are pieces of evidence, and a rational Bayesian agent is better off knowing about such evidence. Especially given all the other pieces of evidence lying around, such as "world's foremost expert on self improvement" and other introductions like http://www.youtube.com/watch?v=MwriJqBZyoM , which are normally indicative of far greater accomplishments (such as making something which self-improved) than the ones which took place.
You can’t have it both ways. If it’s nonsense, then the importance is that someone took it seriously (like a donor), not anyone’s reaction to that someone taking it seriously (like Eliezer). If it’s not nonsense, then someone taking it seriously is not the issue, but someone’s reaction to taking it seriously (the censorship). Make up your mind.
I don’t believe at any point in my comment did I claim the dropping out of school represented precisely 0 Bayesian evidence...
If it is dangerous nonsense, then it is important that there is a rebuttal (ideally one that works on the people who would fall for the nonsense in the first place). I haven't seen one.
If it is not nonsense, then it outlines that certain decision theories should not be built into FAI.
you really didn’t like me pointing it out, though.
How highly educated?
One incident of being batshit insane (in the form of taking utter nonsense very seriously). Perhaps I should also link the trolley problem discussions.
You’ve already gone down this road with Wei Dai. More FUD.
Trolley problems are a standard type of problem discussed in intro psychology and intro philosophy classes in colleges. And they go farther, with many studies just about how people respond or think about them. That LW would want to discuss trolley problems or that different people would have wildly conflicting responses to them shouldn’t be surprising- that’s what makes them interesting. Using them as evidence that LW is somehow bad seems strange.
Well, LW takes those fairly seriously, and stopping deadly AI is a form of trolley problem.
At least try harder in your fear-mongering. The thread about EY's failure to make many falsifiable predictions is better ad hominem, and the speculation about launching terrorist attacks on fab plants is a much more compelling display of potential risk to life and property.
I agree that this is not a game, although you should note that you are doing EY/SIAI/LessWrong’s work for it by trying to scare FinalState.
What probability would you give to FinalState’s assertion of having a working AGI?
I’m not private_messaging, but I think he has a marginally valid point, even though I disagree with his sensational style.
I personally would estimate FinalState’s chances of building a working AGI at approximately epsilon, given the total absence of evidence. My opinion doesn’t really matter, though, because I’m just some guy with a LessWrong account.
The SIAI folks, on the other hand, have made it their mission in life to prevent the rise of un-Friendly AGI. Thus, they could make FinalState’s life difficult in some way, in order to fulfill their core mission. In effect, FinalState’s post could be seen as a Pascal’s Mugging attempt vs. SIAI.
The social and opportunity costs of trying to suppress a "UFAI attempt" as implausible as FinalState's are far higher than the risk of failing to do so. There are also decision-theoretic reasons never to give in to Pascal-Mugging-type offers. SIAI knows all this and therefore will ignore FinalState completely, as well they should.
I think that depends on what level of suppression one is willing to employ, though in general I agree with you. FinalState had admitted to being a troll, but even if he was an earnest crank, the magnitude of the expected value of his work would still be quite small, even when you do account for SIAI’s bias.
What are they, out of curiosity ? I think I missed that part of the Sequences...
It's not in the main Sequences; it's in the various posts on decision theory and Pascal's Mugging. I hope our resident decision theory experts will correct me if I'm wrong, but my understanding is this: if an agent is of the type that gives in to Pascal's Mugging, then other agents who know that have an incentive to mug them. If all potential muggers know that they'll get no concessions from an agent, they have no incentive to mug them. I don't think this covers "Pascal's Gift" scenarios where an agent is offered a tiny probability of a large positive utility, but it covers scenarios involving a small chance of a large disutility.
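(A toy payoff sketch of that incentive argument; the payout and cost figures are arbitrary.)

```python
def mugger_expected_profit(agent_concedes: bool,
                           payout: float = 5.0,
                           cost_of_mugging: float = 1.0) -> float:
    """Expected profit for a would-be mugger facing an agent whose policy
    toward Pascal-style threats is publicly known."""
    return (payout if agent_concedes else 0.0) - cost_of_mugging

print(mugger_expected_profit(True))   #  4.0 -> conceding agents attract muggers
print(mugger_expected_profit(False))  # -1.0 -> refusers give muggers no incentive
```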
I’m not sure that that is in fact an admission of being a troll… it reads as fairly ambiguous to me. Do other people have readings on this?
0%, since it apparently isn’t finished yet.
Will it be finished in a year? 2%, as all other attempts that have reached the “final stages” have failed to build working AGI. The most credible of those attempts were done with groups; it appears FinalState is working alone.
2%?
Seriously?
I am curious as to why your estimate is so high.
That’s the kind of probability I would’ve assigned to EURISKO destroying the world back when Lenat was the first person ever to try to build anything self-improving. For a random guy on the Internet it’s off by… maybe five orders of magnitude? I would expect a pretty tiny fraction of all worlds to have the names of homebrew projects carved on their tombstones, and there are many random people on the Internet claiming to have AGI.
People like this are significant, not because of their chances of creating AGI, but because of what their inability to stop or take any serious precautions, despite their belief that they are about to create AGI, tells us about human nature.
Understanding “random guy on the Internet” to mean something like an Internet user about whom all I know is that they are interested in building AGI and willing to put some concerted effort into the project… hrm… yeah, I’ll accept e-7 as within my range.
My estimate for an actual random person on the Internet building AGI in, say, the next decade, has a ceiling of e-10 or so, but I don’t have a clue what its lower bound is.
That said, I’m not sure how well-correlated the willingness of a “random guy on the Internet” (meaning 1) to try to build AGI without taking precautions is with the willingness of someone whose chances are orders of magnitude higher to do so.
Then again, we have more compelling lines of evidence leading us to expect humans to not take precautions.
(I had to read that three times before getting why that number was 1000 times smaller than the other one, because I kept on misinterpreting “random person”. Try “randomly-chosen person”.)
I have no idea what you understood “random person” to mean, if not randomly chosen person. I’m also curious now as to whether whatever-that-is is what EY meant in the first place.
A stranger, esp. one behaving in weird ways; this appears to me to be the most common meaning of that word in 21st-century English when applied to a person. (Older speakers might be unfamiliar with it, but the median LWer is 25 years old, as of the latest survey.) I had also taken the indefinite article to be an existential quantifier; hence, I had effectively interpreted the statement as “at least one actual strange person on the Internet building AGI in the next decade”, for which I thought such a low probability would be ridiculous.
Thanks for clarifying.
Are these in any way a representative sample of normal humans? In order to be in this category one generally needs to be pretty high on the crank scale along with some healthy Dunning-Kruger issues.
That has always been the argument, that future AGI scientists won’t be as crazy as the lunatics presently doing it, i.e. that the current crowd of researchers is self-selected for incaution. But I wouldn’t put too much weight on that: it seems like a very human behavior, some of the smarter ones with millions of dollars don’t seem to be of below-average competence in any other way, and the VCs funding them are similarly incapable of backing off even when they say they expect human-level AGI to be created.
Sorry, I’m confused. By “people like this” did you mean people like FinalState or did you mean professional AI researchers? I interpreted it as the first.
AGI researchers sound a lot like FinalState when they think they’ll have AGI cracked in two years.
Eliezer < anyone with actual notable accomplishments. Edit: damn it, you edited your message.
Over 140 posts and 0 total karma; that’s persistence.
private_messaging says he’s Dmytry, who has positive karma. It’s possible that the more anonymous-sounding name encourages worse behaviour though.
Before people downvote PM’s comment above, note that Eliezer’s comment prior to editing was a hierarchy of different AI researchers with lowest being people like FinalState, the second highest being professional AI researchers and the highest being “top AI researchers”.
With that out of the way, what do you think you are accomplishing with this remark? You have a variety of valid points to make, but I fail to see what is contained in this remark that does anything at all.
Me or Eliezer? I’m making some point by direct demonstration. It’s a popular ranking system, ya know? He used it on FinalState. A lot of people use it on him.
There’s got to be a level beyond “arguments as soldiers” to describe your current approach to ineffective contrarianism.
I volunteer “arguments as cannon fodder.”
Laplace’s Rule of Succession, assuming around fifty failures under similar or more favorable circumstances.
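For concreteness, the arithmetic behind that 2% (with the roughly-fifty-failures count being the assumption above, nothing more):

# Laplace's Rule of Succession: after s successes in n trials, the posterior
# mean probability of success on the next trial (under a uniform prior) is
# (s + 1) / (n + 2).
def rule_of_succession(successes: int, trials: int) -> float:
    return (successes + 1) / (trials + 2)

# About fifty "final stages" claims, none of which produced a working AGI:
print(rule_of_succession(successes=0, trials=50))  # 1/52, roughly 0.02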
This just shifts the question to how you slotted FinalState into such a promising reference class. Conservatively, tens of academic research programs, tens of PhD dissertations, hundreds of hobbyist projects, hundreds of undergraduate term projects, and tens of business ventures have attempted something similar to AGI, and none have succeeded.
As far as I can tell, the vast majority of academic projects (particularly those of undergrads) have worked on narrow AI, which this is supposedly not.
However, reading the post again, it doesn’t sound as though they have the support of any academic institution; I misread the bit around “academic network”. It sounds more as though this is a homebrew project, in which case I need to go two or three orders of magnitude lower.
That’s definitely a reasonable assessment. I dialed all those estimates down by about an order of magnitude from when I started writing that point as I thought through just how unusual attempting general AI is. But over sixty years and hundreds of institutions where one might get a sufficiently solid background in CS to implement something big, there are going to be lots of unusual people trying things out.
Of those who attempted, fewer thought they were close, but fifty still seems very generous.
The Rule of Succession, if I’m not mistaken, assumes a uniform prior from 0 to 1 for the probability of success. That seems unreasonable; it shouldn’t be extremely improbable (even before observing failure) that fewer than one in a thousand such claims result in a working AGI. So you have to adjust downward somewhat from there, but it’s hard to say how much.
(This is in addition to the point that user:othercriteria makes in the sibling comment.)
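To make the dependence on the prior concrete, here is a small sketch of my own (the Beta(1, 1000) prior is just an illustrative stand-in for “most such claims fail”, not a principled choice), showing how a more skeptical prior pulls the estimate down:

# Posterior mean of the success probability under a Beta(a, b) prior,
# after s successes in n trials: (a + s) / (a + b + n).
# Beta(1, 1) is the uniform prior that the Rule of Succession assumes.
def posterior_mean(a: float, b: float, successes: int, trials: int) -> float:
    return (a + successes) / (a + b + trials)

n = 50  # failed "final stages" claims, as above

print(posterior_mean(1, 1, 0, n))     # uniform prior: about 0.019 (the 2% figure)
print(posterior_mean(1, 1000, 0, n))  # skeptical prior: about 0.001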
You’re correct, but where would I find a better prior? I’d rather be too conservative than resort to wild guessing (which it would be, since I’m not an expert on AGI).
(A variant of this is rhollerith_dot_com’s objection below, that I failed to take into account whatever the probability of working AGI leading to death is. Presumably that changes the prior as well.)
http://yudkowsky.net/rational/bayes
Isn’t the lesson of the Quantum Physics sequence that ordinary humans today should get their priors from the least complex (and falsifiable?) statements that aren’t inconsistent with empirical knowledge?
I don’t know where to get a good prior. I suppose you might look at past instances where someone claimed to be close to doing something that seemed about as difficult and confusing as AGI seems to be (before taking into account a history of promises that didn’t pan out, but after taking into account what we know about the confusingness of the problem, insofar as that knowledge doesn’t itself come from the fact of failed promises). I don’t know what that prior would look like, but it seems like it would assign (if you randomly selected a kind of feat) a substantially greater than 1⁄10 probability of seeing at least 10 failed predictions of achieving that feat for every successful such prediction, a substantially greater than 1⁄100 probability of seeing at least 100 failed predictions for every successful prediction, and so on.
And why do you think FinalState is in such a circumstance, rather than just bullshitting us?
I was being charitable. Also, I misread the original post; see the comments below.
Hmm yeah, I read the post again and, if it’s a troll, it’s a way-more-subtle-than-typical one. Still, my posterior probability assignment on him being serious/sincere is in the 0.40s (extraordinary claims require extraordinary evidence) -- though this means that the probability that he succeeds given that he’s serious is the same order of magnitude as the probability that he succeeds given everything I know.
If you know you probably would not have survived the sun’s having failed to rise, you cannot just apply the Rule of Succession to your knowledge of past sunrises to calculate the probability that the sun will rise tomorrow because that would be ignoring relevant information, namely the existence of a severe selection bias. (Sadly, I do not know how to modify the Rule of Succession to account for the selection bias.)
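A toy illustration of how strong that distortion can be (the numbers are my own assumptions, and this only exhibits the bias; it does not supply the missing correction):

# Observers exist only in worlds where the "sun" has risen every day so far,
# so their observed history is spotless regardless of the true failure rate.
TRUE_FAILURE_PROB = 0.01   # assumed per-day probability, purely illustrative
DAYS = 1000

# Fraction of worlds whose observers survive to do the calculation:
survival_fraction = (1 - TRUE_FAILURE_PROB) ** DAYS   # about 4e-5

# Each surviving observer, applying the Rule of Succession naively to a
# spotless record, estimates the failure probability at 1 / (DAYS + 2),
# roughly 0.001, an order of magnitude below the true 0.01.
naive_estimate = 1 / (DAYS + 2)
print(survival_fraction, naive_estimate)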
Bostrom has made a stab at compensating, although I don’t think http://www.nickbostrom.com/papers/anthropicshadow.pdf works for the sun example.
On the other hand, if you have so much background knowledge about the Sun that you can think about the selection effects involved, the Rule of Succession is a moot & incomplete analysis to begin with.
Regarding your second paragraph, Sir Gwern: if we switch the example to the question of whether the US and Russia will launch nukes at each other this year, I have a lot of information about the strength of the selection bias (including, for example, Carl Sagan’s work on nuclear winter) that I might put to good use if I knew how to account for selection effects. Even so, I would be sorely tempted to use something like the Rule of Succession (modified to account for the selection bias, and where the analog of a day on which the sun might or might not rise is the start of the part of the career of someone in the military or in politics during which he or she can influence whether or not an attempt at a first strike is made), because my causal model of the mental processes behind the decision to launch is so unsatisfactory.
This might be a good place for me to point out that I never bought into the common wisdom, which I have never seen anyone object to or distance themselves from in print, that the chances of a nuclear exchange between the US and Russia went down considerably after the collapse of the Soviet Union in 1991.
What’s your line of thought?
Nuclear war isn’t the same situation, though. We can survive nuclear war at all sorts of levels of intensity, so the selection filter is not nearly the same as “the Sun going out”, which is ~100% fatal. Bostrom’s shadow paper might actually work for nuclear war, from the perspective of a revived civilization, but I’d have to reread it to see.
The selection filter does not have to be total or near total for my point to stand, namely, Rule-of-Succession-like calculations can be useful even when one has enough information to think about the selection effects involved (provided that Rule-of-Succession-like calculations are ever useful).
And parenthetically selection effects on observations about whether nuclear exchanges happened in the past can be very strong. Consider for example a family who has lived in Washington, D.C., for the last 5 decades: Washington, D.C., is such an important target that it is unlikely the family would have survived the launch of most or all of the Soviet/Russian arsenal at the U.S. So, although I agree with you that the human race as a whole would probably have survived almost any plausible nuclear exchange, that does not do the family in D.C. much good. More precisely, it does not do much good for the family’s ability to use historical data on whether or not nukes were launched at the U.S. in the past to refine their probability of launches in the future.
An interesting bracket style. How am I supposed to know where the parenthetical ends?
Me too. This value is several orders of magnitude above my own estimate.
That said, it depends on your definition of “finished”. For example, it is much more plausible (relatively speaking) that FinalState will fail to produce an AGI, but will nevertheless produce an algorithm that performs some specific task (such as character recognition, unit pathing, or natural language processing) better than the leading solutions. In this case, I suppose one could still argue that FinalState’s project was finished somewhat successfully.
Oh, I see that I misinterpreted FinalState’s statement as an indication of only being a few minutes away from having a working implementation.
I meant to provide priors for the expected value of communication with SI. Sorry, that can’t be done in a non-ad-hominem way. There has been a video or two where Eliezer was called the “world’s foremost expert on recursive self-improvement”, which normally implies having made something self-improve.
Ahh right, I should have also linked this one. I see it was edited, replacing ‘we’ with ‘world government’ and ‘sabotage’ with ‘sanctions and military action’. BTW, that speculation is by gwern; is he working at SIAI?
AGI is ill-defined. For something that would foom hard enough to pose a potential danger: infinitesimally small.
Ultimately: I think the risk to his safety is small and the payoff negligible, while the risk from his software is pretty much nonexistent.
This usually happens when the person being introduced wasn’t consulted about the choice of introduction.
It nonetheless results in significant presentation bias, whatever the cause.
My priors, for one thing, were way off in SI’s favour. My own cascade of updates was triggered by seeing Alexei say that he plans to make a computer game to earn money to donate to SIAI. Before that, I had sort of assumed that the AI discussions here were about some sort of infinitely powerful sci-fi superintelligence, not unlike Vinge’s Beyond: an intellectually pleasurable game of wits (I even participated a little once or twice, along the lines of how you can’t debug a superintelligence). I assumed that Eliezer had achievements from which he got the attitude (I confused him with Hanson to some extent), etc. I have since looked into it more carefully.
The Roko incident has absolutely nothing to do with this at all. Roko did not claim to be on the verge of creating an AGI.
Once again you’re spreading FUD about SI. Presumably moderation will come eventually, no doubt accompanied by a hue and cry about censoring contrarians.
The Roko incident allows one to evaluate the sanity of the people he’d be talking to.
Other relevant link:
http://acceleratingfuture.com/sl4/archive/0501/10613.html
“And if Novamente should ever cross the finish line, we all die.”
Ultimately, you can present your arguments, I can present my arguments, and then he can decide, to talk to you guys, or not.
And yet SIAI didn’t do anything to Ben Goertzel (except make him Director of Research for a time, which is kind of insane in my judgement, but obviously not in the sense you intend).
Ben Goertzel’s projects are knowably hopeless, so I didn’t too strongly oppose Tyler Emerson’s project from within SIAI’s then-Board of Directors; it was being argued to have political benefits, and I saw no noticeable x-risk so I didn’t expend my own political capital to veto it, just sighed. Nowadays the Board would not vote for this.
And it is also true that, in the hypothetical counterfactual conditional where Goertzel’s creations work, we all die. I’d phrase the email message differently today to avoid any appearance of endorsing the probability, because today I understand better that most people have trouble mentally separating hypotheticals. But the hypothetical is still true in that counterfactual universe, if not in this one.
There is no contradiction here.
To clarify, by “kind of insane” I didn’t mean you personally, but was commenting on SIAI’s group rationality at that time.
What about the hypothetical counterfactual conditional where you run into some AGI software that you think will work? Should I assume a zero positive rate for ‘you think it works’?
Really? So it is invalid to pose the hypothetical that, if someone has a project you think will work, you may think that we are all going to die unless that project is stopped?
Did I claim they beat him up or something? Ultimately, a more recent opinion I saw somewhere is that Eliezer ended up considering Ben harmless, as in unlikely to achieve the result. I also see you guys really loving trolley problems, including extreme forms of them (with 3^^^3 dust specks in 3^^^3 eyes).
Having it popularly said that your project is going to kill everyone is already a risk, given all the other nutjobs:
http://www.nature.com/news/2011/110822/full/476373a.html
Even if it is later atoned for by making you head of SI or something (with unclear motivation, which may well be creepy in nature).
See, I did not say he was definitely going to get killed or something; I said there was a risk. Yeah, as if nothing happening to Ben Goertzel is proof positive that the risk is zero. Geez, why won’t you for once reason like this about AI risk, for example?
Ultimately: encounters with a nutjob* who may, after presentation of technical details, believe you are going to kill everyone are about as safe as making credible death threats against a normal person and his relatives and his family, etc. Or even less safe. Neither results in a 100% probability of anything happening.
*though of course the point may be made that he doesn’t believe the stuff he says he believes, or that a sane portion of his brain will reliably enact akrasia over the decision, or something.
The existence of third-party anti-technology terrorists adds something to the conversation beyond the risks FinalState can directly pose to SIAI-folk and vice versa. I’m curious about gwern’s response, especially, given his interest in Death Note, which describes a world where law enforcement can indirectly have people killed just by publishing their identifying information.
Yes, the most that has ever happened to anyone who talked to EY about building an AGI is some mild verbal/textual abuse.
I agree with gwern’s assessment of your arguments.
EDIT: Also, I am not affiliated with the SI.
You are not trying very hard. You missed the third alternative: use your crazy fear-mongering to convince him to stop, not just to avoid SI.
I hope you’re not just using this as a rhetorical opportunity to spread fear about SI.
He explicitly said that the aggregate risk from the project would be far smaller than the personal risk from SIAI. Trying to convince people to stop gives the impression that SIAI has power, and so increases its resource-acquisition possibilities, which is considered bad.