I agree on most of this, but would you mind explaining why you think neuroscience is “mostly useless?” My intuition is the opposite. Also agreed that pure mathematics seems useful.
nhamann
Would you mind tabooing the word “preference” and re-writing this post? It’s not clear to me that the research cited in your “crash course” post actually supports what you seem to be claiming here.
If you can come up with better images to represent Friendly AI, please let me know!
How about an image of a paper clip?
Apologies for the pedantry that follows.
Today, we know how Hebb’s mechanism works at the molecular level.
This quote gives the impression that there is a unitary learning mechanism at work in the brain called “Hebbian learning,” and that how it works is well understood. It is my understanding that this is not accurate.
For example, spike-timing-dependent plasticity is a Hebbian learning rule which has been postulated to underlie at least some forms of long-term potentiation and long-term depression. However, there is ongoing debate as to how accurate/useful this concept is, including one recent attempt at a re-formulation of classical STDP.
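To make the classical pair-based STDP rule concrete, here is a minimal sketch. The exponential form (potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise) is the standard textbook formulation; the parameter values below are illustrative placeholders, not fit to any data.

```python
import math

# Illustrative parameters for a pair-based exponential STDP rule.
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants in ms

def stdp_dw(dt_ms):
    """Weight change for a single pre/post spike pair.

    dt_ms = t_post - t_pre. Positive dt (pre fires before post)
    potentiates the synapse; negative dt depresses it.
    """
    if dt_ms > 0:
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    elif dt_ms < 0:
        return -A_MINUS * math.exp(dt_ms / TAU_MINUS)
    return 0.0
```

The debates mentioned above are partly about whether this simple pairwise picture is adequate; triplet and voltage-based reformulations replace `stdp_dw` with rules that depend on more than a single spike pair.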
With regard to molecular mechanisms, it was my understanding that even fundamental issues like whether LTP/LTD primarily involve presynaptic or postsynaptic modifications (or both) have not yet been cleared up.
I think your statement should be changed to something like “Though there are likely a variety of Hebbian learning mechanisms at work in the brain, neuroscientists are beginning to understand the few of them that have been discovered so far.”
That thread is way too long, so I’m not going to read it, but I did a quick search and didn’t see any discussion of what I consider the dealbreaker when weighing the evidence for or against most religions (especially any flavor of Christianity): the existence of “souls.” Simply put, the “soul” hypothesis doesn’t jibe with current evidence from physics, and it doesn’t pay rent with regard to observations from neuroscience (or any kind of observations, for that matter). I strongly suspect that the Book of Mormon doesn’t deal with evidence from neuroscience, and since the “soul” hypothesis is fairly central to Christian belief (it is the postulated mechanism by which a person is judged for “sins” committed in their life), that means you don’t have to read it.
As an aside, I consider this line of reasoning to be something like “atheism for dummies” since most religions that I’ve seen depend on humans having something like a soul.
Isn’t 12.0 something like quadruple-beta of the “Stable” version of Chrome?
I’m not entirely sure what you mean here. It’s the current stable release.
OP: For the record, I’m on Chrome 13 and I haven’t noticed anything like you mentioned here. The graphical glitches make me think something is up with your video card or its drivers, but if it’s only happening for LW... I’m not sure what to tell you.
In the past year I’ve been involved in two major projects at SIAI. Steve Rayhawk and I were asked to review existing AGI literature and produce estimates of development timelines for AGI.
You seem to suggest that this work is incomplete, but I’m curious: is this available anywhere, or is it still a work in progress? I would be very interested in reading this, even if it’s incomplete. I would even be interested in just seeing a bibliography.
I’m interested in … winning arguments …
Ack, that won’t do. It is generally detrimental to be overly concerned with winning arguments. Aside from that, though, welcome to LW!
What. That quote seems to be directly at odds with the entire idea of “Friendly AI”. And of course it is, as a later version of Eliezer refuted it:
(In April 2001, Eliezer said that these comments no longer describe his opinions, found at “Friendly AI”.)
I’m also not sure it makes sense to call SIAI a “closed-source” machine intelligence outfit, given that I’m pretty sure there’s no code yet.
They appear to be aiming for whole brain emulation, trying to scale up previous efforts that simulated a rat neocortical column.
Here’s another interim report on the longitudinal effects of CR on rhesus monkeys, this one a bit more recent (2009) than the one linked in the OP. From the abstract:
We report findings of a 20-year longitudinal adult-onset CR study in rhesus monkeys aimed at filling this critical gap in aging research. In a population of rhesus macaques maintained at the Wisconsin National Primate Research Center, moderate CR lowered the incidence of aging-related deaths. At the time point reported 50% of control fed animals survived compared with 80% survival of CR animals. Further, CR delayed the onset of age-associated pathologies. Specifically, CR reduced the incidence of diabetes, cancer, cardiovascular disease, and brain atrophy. These data demonstrate that CR slows aging in a primate species.
Have you read A Human’s Guide to Words? You seem to be confused about how words work.
Looking back at your posts in this sequence so far, it seems like it’s taken you four posts to say “Philosophers are confused about meta-ethics, often because they spend a lot of time disputing definitions.” I guess they’ve been well-sourced, which is worth something. But it seems like we’re still waiting on substantial new insights about metaethics, sadly.
“Save the world” has icky connotations for me. I also suspect that it’s too vague for there to be much benefit to people announcing that they would like to do so. Better to discuss concrete problems, and then ask who is interested/concerned with those problems and who would like to try to work on them.
Good reminder that reversed stupidity is not intelligence.
Adding to the list: Hans Berger invented the EEG while trying to investigate telepathy, which he was convinced was real. Even fools can make important discoveries.
Won’t music-theoretic analysis be basically irrelevant to a description of why some people enjoy, for instance, Merzbow?
One thing I didn’t see you mention is neuroscience. My understanding is that some AGI researchers are currently taking this route; e.g. Shane Legg, mentioned in another comment, is an AGI researcher who is currently studying theoretical neuroscience with Peter Dayan. Demis Hassabis is another person interested in AGI who’s taking the neuroscience route (see his talk on this subject from the most recent Singularity Summit). I’m personally interested in FAI, and I suspect that we need to study the brain to understand in more detail the nature of human preference. In terms of a career path, it’s possible I’ll go to graduate school at some point in the future, but my current plans are to just get a programming job and study neuroscience in my free time.
Have you given any thought to just taking the day-job route? There are some problems, as I’ve found more than a few journal articles locked behind paywalls, but there are ways of dealing with this. Furthermore, I’ve found that a surprising number of recent neuro articles are available through open-access journals like PNAS and Frontiers, and through other routes (Google, Google Scholar, CiteSeerX, author websites). If you’re interested more in CS research, I suspect you’ll have even less trouble; for some reason, recent CS papers seem to almost always be available on the internet.
What about the case where the first punch constitutes total devastation, and there is no last punch, i.e., the creation of an unfriendly AI? It would seem preferable to initiate aggression instead of adhering to “you should never throw the first punch” and subsequently dying/losing the future.
Edit: In concert with this comment here, I should make it clear that this comment is purely concerned with a hypothetical situation, and that I definitely do not advocate killing any AGI researchers.
Your account of “proof” is not actually an alternative to the “proofs are social constructs” description, since these are addressing two different aspects of proof. You have focused on the standard mathematical model of proofs, but there is a separate sociological account of how professional mathematicians prove things.
Here is an example of the latter from Thurston’s “On Proof and Progress in Mathematics.”