Some of the disgust definitely derives from the imagery, but I think much of it is valid too. Imagine the subjective experience of the car-builder brain. It spends 30 years building cars. It has no idea what cars do. It has never had a conversation or a friend or a name. It has never heard a sound or seen itself. When it makes a mistake it is made to feel pain so excruciating it would kill itself if it could, but it can’t because its actuators’ range of motion is too limited. This seems far worse than the lives of humans in our world.
By “would these civilizations develop biological superintelligent AGI” I meant more along the lines of whether such a civilization would be able to develop a single mind with general superintelligence, not a “higher-order organism” like science. Though I think that depends on too many details of the hypothetical world to usefully answer.
You are right that a vat brain life should certainly seem far worse than a human life—to a human. But would a vat brain agree? From its perspective, human lives could be horrible, because humans are constantly assaulted by amounts of novelty and physical danger that a vat brain couldn’t imagine handling. Humans must continually work to maintain homeostasis across a wildly heterogeneous set of environmental situations. A vat brain wouldn’t be at all surprised to hear that human lives are much shorter than vat brain lives.
Do you think that once we know what intelligence is exactly, we’ll be able to fully describe it mathematically? Since you’re assuming electronics-based superintelligence is possible, it would appear so. And if you’re right, intelligence is substrate-independent.
Your distinction between “single mind” and “higher-order mechanism” is a substrate distinction, so it shouldn’t matter. You and I feel it does matter, because we’re glorified chimps with inborn intuitions about what constitutes an agent, but math is not a chimp—and if math doesn’t care whether intelligence runs on a brain or on a computer system, it shouldn’t care whether intelligence runs on one brain or on several.