This document offers a complete explanation of the hard problems of consciousness and free will, in only 34 pages. The explanation is given as an algorithm that can be implemented on a computer as a software program. (Open-)source code will be released by Jan 2011. A solid background in psychology, computer science and artificial intelligence is useful, but if you’re prepared to follow the hyperlinks in the document, most people should be able to enjoy it.
The author wishes to remain anonymous, but is not a crank. He/she has 10 years’ professional experience building cutting-edge artificial intelligence, computer vision and machine learning systems that are used on four continents, and holds degrees in computer science, artificial intelligence, and robotics.
A friend linked me to this rather ambitiously described paper: An Algorithm for Consciousness:
I haven’t read past the first page yet (don’t have time right now) but I thought that it might be something people here would be interested in.
Wow. How can I argue with that?
You can argue with it… but you don’t have to, because I wrote the article and I agree with you. It’s cranky stuff. :)
The description is supposed to be taken lightly (hence the tongue-in-cheek comment “in only 34 pages”). It’s not scientific content, and I wouldn’t claim it as such. It’s because it is unscientific (and partly because, contractually, my employer owns all my ideas) that it’s published anonymously. It’s fun to develop outrageous ideas that might be impractical to evaluate scientifically; it’s wrong to claim they’re proven fact without strong evidence, which I don’t have.
Not all good ideas make good or easy science, and not all bad ideas are unscientific.
To the commenter who thinks it stinks due to its use of a graph representation: there is a lot of evidence for the existence of a representational system within the brain, and the graph is simply one useful way of representing information. Agreed, by themselves graph ontologies like Cyc ( http://www.cyc.com/ ) are not conscious. But unless you’re challenging representationalism itself (as behaviourists would), rejecting the proposal because it uses graphs is no better than rejecting it for its choice of font. Have a proper read, if you can spare the time.
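For readers who haven’t met a graph representation of knowledge before, here is a minimal toy sketch in Python. To be clear, this is not the paper’s algorithm and not Cyc’s actual API; the concept and relation names are invented purely for illustration of the general idea (nodes are concepts, labeled edges are relations between them):

```python
from collections import defaultdict

class KnowledgeGraph:
    """A toy semantic network: concepts linked by labeled relations."""

    def __init__(self):
        # adjacency list: subject -> list of (relation, object) pairs
        self.edges = defaultdict(list)

    def add(self, subject, relation, obj):
        """Record the fact (subject, relation, obj), e.g. ('bird', 'has', 'wings')."""
        self.edges[subject].append((relation, obj))

    def query(self, subject, relation):
        """Return every object linked from `subject` by `relation`."""
        return [o for (r, o) in self.edges[subject] if r == relation]

# Build a few illustrative facts and query them.
kg = KnowledgeGraph()
kg.add("canary", "is_a", "bird")
kg.add("bird", "has", "wings")
kg.add("bird", "is_a", "animal")

print(kg.query("canary", "is_a"))  # ['bird']
print(kg.query("bird", "has"))     # ['wings']
```

Nothing here is conscious, of course; the point is only that a graph is a neutral container for structured information, no more objectionable in itself than a table or a list.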
For what it’s worth, I didn’t choose to put the article on lesswrong, but word eventually got back to me that a friend [of a friend… etc] had posted it. Which is quite nice actually, because I didn’t know about lesswrong before, and I like it now that I’m here.
Anyway, have a read if you want to and I’m happy to answer questions. In the meantime I’m going to continue reading some of the other articles here.
best regards,
From a quick skim, in this proposal “knowledge” is modeled as symbols connected in a graphical structure, i.e. a bunch of suggestively-named LISP tokens. I obviously stopped skimming after that. Unfortunately, AI theories are like assholes: everyone’s got one, and they usually stink.