No, they don’t have to be consciously chosen. The classic example of a simple agent is a thermostat (http://en.wikipedia.org/wiki/Intelligent_agent), which has the goal of keeping the room at a constant temperature. (Or you can say “describing the thermostat as having a goal of keeping the temperature constant is a simpler means of predicting its behaviour than describing its inner workings”.) Goals are necessary but not sufficient for intelligence.
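To make the “goal vs. inner workings” distinction concrete, here’s a minimal sketch (my own illustration, not from the linked article; the class and names are invented for the example). The thermostat’s inner workings are just a couple of comparisons, yet describing it as “trying to keep the room near its setpoint” predicts its behaviour equally well.

```python
# A minimal thermostat "agent" (illustrative sketch only).
# Its inner workings are two comparisons, but describing it as "having the
# goal of keeping the room at the setpoint" predicts the same behaviour.

class Thermostat:
    def __init__(self, setpoint=20.0, tolerance=0.5):
        self.setpoint = setpoint    # the "goal" temperature, in degrees C
        self.tolerance = tolerance  # how far the room may drift before acting

    def act(self, room_temperature):
        """Return 'heat', 'cool', or 'idle' given the current room temperature."""
        if room_temperature < self.setpoint - self.tolerance:
            return "heat"
        if room_temperature > self.setpoint + self.tolerance:
            return "cool"
        return "idle"

t = Thermostat()
print(t.act(17.0))  # -> "heat"
print(t.act(23.0))  # -> "cool"
```

The goal-level description (“keep the room near 20°C”) and the mechanism-level description (the two comparisons above) pick out exactly the same actions; the former is just shorter.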
Intelligence is a spectrum, not either/or—a newborn baby is about as intelligent as some mammals. Although it doesn’t have any conscious goals, its behaviour (hungry → cry, nipple → suck) can be explained in terms of it having the goal of staying alive.
A sleeping person—I didn’t actually think of that. What do you think?
Hmm, I feel like I should have made it clearer that this post is just a high-level summary of what I wrote on my blog. Seriously, people, read the full post if you have time; I explain things in quite a bit more depth there.
Thanks for the pointers—this post is still at the “random idea” stage rather than the “well-constructed argument” stage, so I do appreciate feedback on where I might have gone astray.
I’ve read some of the Sequences, but they’re quite long. What particular articles did you mean?
This post inspired me to work on my Mandarin study habits—I’ve been stuck at a low-intermediate plateau for a while and am not sure how to advance. I’ve just started work on this mindmap, http://www.mindmeister.com/maps/show/98440507, based on the ideas in this article.
I’ve also recently started following GTD (the productivity system), which emphasises choosing specific actions to follow, rather than big and vague projects. I think this article’s approach is similar.
Two counters to the majoritarian argument:
First, it is being mentioned in the mainstream—there was a New York Times article about it recently.
Secondly, I can think of another monumental, civilisation-filtering event that took a long time to enter mainstream thought—nuclear war. I’ve been reading Bertrand Russell’s autobiography recently, and am up to the point where he begins campaigning against the possibility of nuclear destruction. In 1948 he made a speech to the House of Lords (the UK’s upper chamber), explaining that more and more nations would attempt to acquire nuclear weapons, until mutual annihilation seemed certain. His fellow Lords agreed with this, but believed the matter to be a problem for their grandchildren.
Looking back even further, for decades after the concept of a nuclear bomb was first formulated, the possibility of nuclear war was only seriously discussed amongst physicists.
I think your second point is stronger. However, I don’t think a single AI rewiring itself is the only way it can go FOOM. Assume the AI is as intelligent as a human; put it on faster hardware (or let it design its own faster hardware) and you’ve got something that’s like a human brain, but faster. Let it replicate itself, and you’ve got the equivalent of a team of humans, but one with the advantages of shared memory and instantaneous communication.
Now, if humans can design an AI, surely a team of 1,000,000 human equivalents running 1000x faster can design an improved AI?
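To put rough numbers on that (using only the illustrative figures above, not estimates of real hardware), here’s the back-of-the-envelope arithmetic:

```python
# Back-of-the-envelope arithmetic for the "team of fast copies" scenario.
# The figures (1,000,000 copies, 1000x speed-up) are the illustrative ones
# from the comment above, not claims about real hardware.

copies = 1_000_000   # human-equivalent AIs running in parallel
speedup = 1_000      # each runs 1000x faster than a biological brain

person_years_per_calendar_year = copies * speedup
print(person_years_per_calendar_year)  # 1,000,000,000 subjective person-years per calendar year
```

Even if coordination overhead eats most of that, it is still an enormous amount of design effort compressed into each calendar year.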
Living Systems is from this guy: http://en.wikipedia.org/wiki/Living_systems. Even if he goes too far with his theorising, the basic idea makes sense—living systems are those which self-replicate, or maintain their structure in an environment that tries to break them down.
Thanks for pointing out that my use of terms was sloppy. I break down the concepts of “intelligent” and “alive” a bit more in the blog articles I linked. (I should point out that I see both concepts as a spectrum, not either/or.) By “intrinsic goals” I mean self-defined goals—goals that arose through a process of evolution rather than being built in by some external designer.
My thoughts on these topics are still confused, so I’m in the process of clarifying them. Cheers for the feedback.