One of the obvious extensions of this thought experiment is to posit a laser-powered blue goo that absorbs laser energy, and uses it to grow larger.
This thought experiment also reminds me: Omohundro’s arguments regarding likely uFAI behavior are based on the AI having goals of some sort—that is, something we would recognize as goals. It’s entirely possible that we wouldn’t perceive it as having goals at all, merely behavior.
Please share your thoughts more often; that wasn’t obvious to me at all.
I not infrequently say things I think are obvious and get surprised by extreme positive or negative reactions to them.
If it isn’t goal-directed, it isn’t intelligent. Intelligence is goal-directed, by most definitions.
To the extent that intelligence solves problems, then yes, problem-solving modules of an intelligent entity have at least short-term goals. But whether the entity itself has goals is a different question.
I can imagine a sufficiently dangerous uFAI having no consistent goals we would recognise, and all of its problem-solving behaviour, however powerful and adaptable, being at a level we wouldn’t call intelligence (e.g. the uFAI could include a strategic module like a super-powerful but clearly non-sentient chess computer, which would not in itself have any awareness or intent, just problem-solving behaviour).
Actually, I’m not going to disagree with you about definitions of intelligence. But I suspect most of them are place-holders until we understand enough to dissolve the question “what is intelligence?” a bit.
Be suspicious of arguments from definition. Why must intelligence be goal-directed? Why is this an integral part of the definition of intelligence, if it must be?