Consciousness isn’t the point. A machine need not be conscious, or “alive,” or “sentient,” or have “real understanding” to destroy the world. The point is efficient cross-domain optimization. It seems bizarre to think that meat is the only substrate capable of efficient cross-domain optimization. Computers already surpass our abilities in many narrow domains; why not technology design or general reasoning, too?
Neurons work differently than computers only at certain levels of organization, but that is true of any two systems you might compare. You can write a computer program that functionally reproduces what happens when neurons fire, as long as you include enough of the details of what neurons do when they fire. But I doubt that replicating neural computation is the easiest way to build a machine with a human-level capacity for efficient cross-domain optimization.
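For concreteness, here is a minimal sketch of the sort of program meant here: a leaky integrate-and-fire neuron, which captures only the crudest functional outline of firing (integrate input, spike at threshold, reset). An actual functional replication would need far more biophysical detail; every name and parameter below is an illustrative placeholder, not a measured value.

```python
def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_reset=-0.065, v_threshold=-0.050, resistance=1e7):
    """Leaky integrate-and-fire neuron: leak toward rest, integrate input, spike at threshold."""
    v = v_rest
    spike_times = []
    for step, current in enumerate(input_current):
        # Membrane potential decays toward rest and is driven by the injected current.
        v += (-(v - v_rest) + resistance * current) * (dt / tau)
        if v >= v_threshold:
            spike_times.append(step * dt)  # record when the neuron fires
            v = v_reset                    # reset after firing
    return spike_times

# A constant 2 nA input for 100 ms of simulated time yields a regular spike train.
spikes = simulate_lif([2e-9] * 1000)
print(len(spikes), "spikes")
```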
How does it know what bits to change to make itself more intelligent?
There is an entire field called “metaheuristics” devoted to this question, though nothing in it yet amounts to improving general ability at efficient cross-domain optimization. I won’t say more about this at the moment because I’m writing some articles about it, but Chalmers’ article analyzes the logical structure of intelligence explosion in some detail.
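As a toy illustration of the simplest kind of metaheuristic, the sketch below is a (1+1) hill climber that “knows what bits to change” only by trial and error: flip a bit, keep the flip if a fitness score improves, revert it otherwise. The fitness function and every parameter here are placeholders of my own choosing, and nothing in it resembles improving general ability at cross-domain optimization.

```python
import random

def bitflip_hill_climb(fitness, n_bits=32, iterations=1000, seed=0):
    """(1+1) hill climber: flip one random bit; keep the change if fitness does not drop."""
    rng = random.Random(seed)
    candidate = [rng.randint(0, 1) for _ in range(n_bits)]
    best_score = fitness(candidate)
    for _ in range(iterations):
        i = rng.randrange(n_bits)
        candidate[i] ^= 1            # flip one bit
        score = fitness(candidate)
        if score >= best_score:      # keep improvements (and ties)
            best_score = score
        else:
            candidate[i] ^= 1        # revert the flip
    return candidate, best_score

# Toy fitness: count of 1-bits ("OneMax"), standing in for any measurable objective.
solution, score = bitflip_hill_climb(lambda bits: sum(bits))
print(score)
```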
Finally, why is SIAI the best place for artificial intelligence? What exactly is it doing differently than other places trying to develop AI?
The emphasis on Friendliness is the key thing that distinguishes SIAI and FHI from other AI-interested organizations, and is really the whole point. To develop full-blown AI without Friendliness is to develop world-destroying unfriendly AI.
Thank you for the link to the Chalmers article: it was quite interesting and I think I now have a much firmer grasp on why exactly there would be an intelligence explosion.
Consciousness isn’t the point. A machine need not be conscious, or “alive,” or “sentient,” or have “real understanding” to destroy the world.
(I see what you mean, but technically speaking your second sentence is somewhat contentious and I don’t think it’s necessary for your point to go through. Sorry for nitpicking.)
(Slepnev’s “narrow AI argument” seems to be related. A “narrow AI” that can win world-optimization would arguably lack person-like properties, at least at the stage where it’s still a “narrow AI”.)
This is wrong in a boring way; you’re supposed to be wrong in interesting ways. :-)