He believes that the Singularity is proof that the Universe was created by an Intelligent Creator (who happens to be the Christian God), and that it is further evidence of Young Earth Creationism (YEC).
MatthewB
I think the comment that LWers suck at politics is the more apt description.
Politics is the art of the possible; it deals with WHAT IS, regardless of whether that is “rational.”
And attempting to demand that it conform to rationality standards dictated by this community guarantees that this community will lack political clout.
Especially if it becomes known that the main beneficiaries and promoters of the Singularity have a particularly pathological politics.
Peter Thiel may well be a Libertarian Hero, but his name is instant death even in mainstream GOP circles, and he is seen as a fascist by progressives.
Glenn Beck is seen as a dangerous and irrationally delusional ideologue by mainstream politicians.
That sort of endorsement isn’t going to help the cause if it becomes well known.
It will tar the Singularity as an ideological enclave of techno-supremacists.
NO ONE at Less Wrong seems to be aware of the stigma attached to the Singularity after the performance of David Rose at the “Human Being in an Inhuman Age” conference at Bard College in 2010. I was there, and got to witness the reactions of academics and political analysts from New York and Washington DC (some very powerful people in policy circles) who sat, mouths agape, at what David Rose was saying.
When these people discover that Glenn Beck is promoting the Singularity (and Glenn Beck has some very specific agendas in promoting it, agendas that are very selfish and probably quite offensive to the ideals of Less Wrong), they will be even more convinced that the Singularity is a techno-cult composed of some very dangerous individuals.
Being influential is not necessarily a good thing.
Especially when Glenn Beck’s influence is in delusional conspiracy theories, evangelical Christianity, and Young Earth Creationism.
Glenn Beck is hardly someone whose enthusiasm you should welcome.
He has a creationist agenda that he has found a way to support with the ideas surrounding the topic of the Singularity.
This is not exactly “success.”
There are some populations that will pervert the things they get in their hands.
Glenn Beck was one of the first TV personalities to interview Ray Kurzweil.
The interview is on YouTube, and it is very informative as to Glenn’s objectives and agenda.
Primarily, he wishes to use the ideology behind the Singularity as support for “Intelligent Design.” In the interview, he makes an explicit statement to that effect.
Glenn Beck is hardly “rational” as per the definition of “Less Wrong.”
Yes, I have read many of the various Less Wrong Wiki entries on the problems surrounding Friendly AI.
Unfortunately, I am in the process of getting an education in Computational Modeling and Neuroscience. (I was supposed to have started at UC Berkeley this fall, but budget cuts in the Community Colleges of CA resulted in the loss of two classes necessary for transfer, so I will have to wait till next fall to start… And I am now thinking of going to UCSD, which has the Institute of Computational Neuroscience (or something like that—it’s where Terry Sejnowski teaches), among other things that make it an excellent choice for what I wish to study.) This rather precludes being able to focus much on the issues that tend to come up often among many people on Less Wrong (particularly those from the SIAI, who I feel are myopically focused upon FAI to the detriment of other things).
While I would eventually like to see if it is even possible to build some of the Komodo-Dragon-like superintelligences, I will probably wait until our native intelligence is a good deal greater than it is now.
This touches upon an issue that I first learned of from Ben. The SIAI seems to be putting forth the opinion that AI is going to spring fully formed from someplace, in the same fashion that Athena sprang fully formed (and clothed) from the head of Zeus.
I just don’t see that happening. I don’t see any Constructed Intelligence as being something that will spontaneously emerge outside of any possible human control.
I am much more in line with people like Henry Markram, Dharmendra Modha, and Jeff Hawkins, who believe that the kinds of minds we will be working towards (models of the mammalian brain) will trend toward Constructed Intelligences (CI, as opposed to AI) that naturally prefer our company, even if we are a bit “dull witted” in comparison.
I don’t so much buy the “Ant/Amoeba to Human” comparison, simply because mammals (almost all of them) tend to have some qualities that ants and amoebas don’t… They tend to be cute and fuzzy, and to like other cute/fuzzy things. A CI modeled after a mammalian intelligence will probably share that trait. That doesn’t mean it is necessarily so, but it does seem more likely than not.
And, considering it will be my job to design computational systems that model cognitive architectures, I would prefer to work toward that end until such a time as it is shown that ANY such work is dangerous enough that it should not be done.
I think major infrastructure rebuilding is probably closer to the case than “maintenance.”
Yes, that is close to what I am proposing.
No, I am not aware of any facts about progress in decision theory that would give any guarantees of the future behavior of AI. I still think that we need to be far more concerned with people’s behaviors in the future than with AI. People are improving systems as well.
As far as the Komodo Dragon, you missed the point of my post, and the Komodo dragon just kinda puts the period on that:
“Gorging upon the stew of...”
From Ben Goertzel,
And I think that theory is going to emerge after we’ve experimented with some AGI systems that are fairly advanced, yet well below the “smart computer scientist” level.
At the second Singularity Summit, I heard this same sentiment from Ben, Robin Hanson, and from Rodney Brooks, and from Cynthia Breazeal (at the Third Singularity Summit), and from Ron Arkin (at the “Human Being in an Inhuman Age” Conference at Bard College on Oct 22nd ¹), and from almost every professor I have had (or will have for the next two years).
It was a combination of Ben, Robin, and several professors at Berkeley and UCSD that led me to the conclusion that we probably won’t know how dangerous an AGI is until we have put a lot more time into building AI (or CI) systems that will reveal more about the problems they attempt to address. (CGI—Constructed General Intelligence—seems to be a term I have heard used by more than one person in the last year instead of AI/AGI. They prefer it to AI, as the word Artificial seems to imply that the intelligence is not real, and the word Constructed is far more accurate.)
Sort of like how the Wright Brothers didn’t really learn how they needed to approach building an airplane until they began to build airplanes. The final Wright Flyer didn’t just leap out of a box. It is not likely that an AI will just leap out of a box either (whether it is being built at a huge Corporate or University lab, or in someone’s home lab).
Also, it is possible that AI may come in the form of a sub-symbolic system which is so opaque that even it won’t be able to easily tell what can or cannot be optimized.
Ron Arkin (From Georgia Tech) discussed this briefly at the conference at Bard College I mentioned.
MB
¹ I should really write up something about that conference here. I was shocked at how many highly educated people so completely missed the point, and became caught up in something that makes The Scary Idea seem positively benign in comparison.
I agree.
I doubt you would remember this, but we talked about this at the Meet and Greet at the Singularity Summit a few months ago (in addition to CBGBs and Punk Rock and Skaters).
James Hughes also mentioned you at a conference in NY where we discussed this very issue.
One thing that you mentioned at the Summit (well, in conversation) was that The Scary Idea tends to cause paranoia among people who might otherwise be contributing more to the development of AI (of course, you also seemed pretty hostile to brain emulation), and that it tends to slow funding that could be going to AI.
Well… That is hard to communicate now, as I will need to disentangle the problems from the specifics that were communicated to me (in confidence)...
Let’s see...
1) That there is a dangerous political movement in the USA that seems to prefer revealed knowledge to scientific understanding and investigation.
2) Poverty.
3) Education.
4) Hunger (I myself suffer from this problem—I am disabled, on a fixed income, and while I am in school again and doing quite well, I still sometimes have to choose between necessities… And I am quite well off compared to some I know).
5) The lack of political dialog and the preference for ideological certitude over pragmatic solutions and realistic uncertainty.
6) The great amount of white-collar crime that goes both unchecked and, when it is exposed, unpunished (Madoff was a fluke in that regard).
7) The various “Wars” we declare on things (Drugs, Terrorism, etc.). “War” is a poor paradigm to use, and it leads to more damage than it corrects (especially in the two instances I cited).
8) The real wars that are happening right now (and not just those waged by the USA and its allies).
Some of these were explicitly discussed.
Some will eventually be resolved, but that doesn’t mean that they should be ignored until that time. That would be akin to seeing a man dying of starvation, while one has the capacity to feed him, yet thinking “Oh, he’ll get some food eventually.”
And some may just be perennial problems that we will have to deal with for some time to come.
At the Singularity Summit’s “Meet and Greet”, I spoke with both Ben Goertzel and Eliezer Yudkowsky (among others) about this specific problem.
I am FAR more in line with Ben’s position than with Eliezer’s (probably because Ben and I are both working or studying directly on the “how to do” aspect of AI, rather than just concocting philosophical conundrums for AI, such as Eliezer’s “Paperclip Maximizer” scenario, which I find highly dubious).
AI isn’t going to spring fully formed out of some box of parts. It may be an emergent property of something, but if we worry about all of the possible places from which it could emerge, then we might as well worry about things like ghosts and goblins that we cannot see (and haven’t seen) popping up suddenly as a threat.
At Bard College on the weekend of October 22nd, I attended a conference where this topic was discussed a bit. I spoke to James Hughes, head of the IEET (Institute for Ethics and Emerging Technologies), about this problem as well. He believes that the SIAI tends to be overly dramatic about Hard Takeoff scenarios at the expense of more important ethical problems… And he and I also discussed the specific problems of “The Scary Idea,” which tends to ignore the gradual progress in understanding human values and cognition, and how these are being incorporated into AI as we move toward the creation of a Constructed Intelligence (CI, as opposed to AI) that is equivalent to human intelligence.
Also, WRT this comment:
For another example, you can’t train tigers to care about their handlers. No matter how much time you spend with them and care for them, they sometimes bite off arms just because they are hungry. I understand most big cats are like this.
You CAN train tigers and other big cats to care about their handlers (though “training” is not quite the right word for it). It requires a kind of rearing and teaching that goes on from birth, but there are plenty of big cats who don’t attack their owners or handlers simply because they are hungry, or for some other similar reason. They might accidentally injure a handler because they do not have the capacity to understand the fragility of a human being, but that is a lack of cognitive capacity, not a case of a higher intelligence accidentally damaging something fragile… A more intelligent mind would be capable of understanding things like physical frailty and taking steps to avoid damaging a more fragile body… But the point still stands: big cats can and do form deep emotional bonds with humans, and will even go so far as to try to protect and defend those humans (which can sometimes lead to injury of the human in its own right).
And, I know this from having worked with a few big cats, and having a sister who is a senior zookeeper at the Houston Zoo (and head curator of the SW US Zoo’s African Expedition) who works with big cats ALL the time.
Back to the point about AI.
It is going to be next to impossible to solve the problem of “Friendly AI” without first creating AI systems that have social cognitive capacities. Just sitting around “Thinking” about it isn’t likely to be very helpful in resolving the problem.
That would be what Bertrand Russell calls “Gorging upon the Stew of every conceivable idea.”
But it would also not serve the function of letting others who may struggle with certain concepts know that they are not alone in struggling.
Candidate 2 (that admitting one is wrong is a win for an argument) is one of my oldest bits of helpful knowledge.
If one admits that one is wrong, one instantly ceases to be wrong (or at least ceases to be wrong in the way that one was wrong; it could still be the case that the other person in an argument is also wrong, but for the purposes of this point, we are assuming that they are “correct”), because one is then in possession of more accurate (i.e. “right”) information/knowledge.
How about Eliezer, Peter Thiel, Peter Diamandis… done. I know that Peter Diamandis would NOT be turned away by Hitchens… Now it is just a matter of getting ahold of a few millionaire/billionaire types…
I have had the EXACT same idea!
However, my plan was to contact his publicist through Alcor or one of the other cryonics companies (all one of them, I think).
Now, I am not certain about this, but we have to examine that code before we know its outcome.
While this isn’t “running” the code in the traditional sense of computation as we are familiar with it today, it does seem that the code is sort of run by our brains as a simulation as we scan it.
As a sort of meta-process, if you will...
I could be so wrong about that though… eh...
Also, that code is really useless, except maybe as a wait function… It doesn’t really do anything (not sure why Unknowns gets voted up in the first post above, and down below)...
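For illustration only (this is a hypothetical Python sketch I am making up, not the snippet actually under discussion in the thread): code of the “useless wait function” sort, where the only way to see that it has no real effect is to trace it in your head, which is exactly the kind of simulation-while-scanning I mean.

```python
# Hypothetical illustration (not the code from the thread): a function whose
# computation is discarded, so its only observable effect is the time it burns.

def busy_wait(iterations):
    """Do throwaway arithmetic; useful only as a crude delay."""
    total = 0
    for i in range(iterations):
        total += i * i  # the running total is never used outside this loop
    # nothing is returned and no external state is changed

if __name__ == "__main__":
    busy_wait(10_000_000)
    print("done")  # the program's output is identical with or without the call above
```

Reading that and convincing yourself it does nothing is, in miniature, the sort of mental “running” of code I am pointing at.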
Also, leaping from some code to the Entirety of an AI’s source code seems to be a rather large leap.
It isn’t stuff that made it into the modern canon, but in the early Christian Church, myths of this type appeared all over the place, drawn from Jewish sources, in attempts to integrate them into various Christian sects.
To be fair this stuff isn’t Christian mythology in the way that Adam and Eve, or Loaves and Fishes is Christian mythology. It’s just religious fiction.
Isn’t it ALL just religious fiction?
Also, don’t forget that humans will be improving just as rapidly as the machines.
My own studies (Cognitive Science and Cybernetics at UCLA) tend to support the conclusion that machine intelligence will never be a threat to humanity. Humanity will have become something else by the time that machines could become an existential threat to current humans.