Machine Pastoralism
This idea has occurred to me before, but I dismissed it at the time and then forgot about it. Since it has come back more or less unprompted, I am writing it down.
We usually talk about animals and their intelligence as a way to interrogate intelligence in general, or as a model for possible other minds. It occurred to me that our relationship with animals is therefore a model for our relationship with other forms of intelligence.
In the mode of Prediction Machines, the mapping is straightforward: prediction engines in lieu of dogs, to track and give warning; teaching/learning systems in lieu of horses, for exploring the map; analysis engines to provide our solutions, instead of cattle or sheep to provide our sustenance. The idea here is just to map animals-as-capital onto the information economy, according to what each does for us.
Alongside what they do for us is the question of how we manage them. To me, the Software 2.0 lens of adjusting weights to search program space reads closer to animal husbandry than to building a new beast from the ground up, gear by gear, each time. It allows for a notion of lineage, and we can envision working groups of machines with subtle variations, or entirely different machines in combination.
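To make the husbandry reading concrete, here is a minimal, hypothetical sketch in Python (every name in it is mine for illustration, not any particular library's API): a "founder" set of weights is bred into a small herd of subtly varied descendants, each descendant's parentage is recorded as a lineage, and the herd is then worked in combination by majority vote. It is a toy picture of the analogy, not a serious training method.

import numpy as np

rng = np.random.default_rng(0)

def make_offspring(parent_weights, noise_scale=0.01):
    """Produce a descendant by slightly perturbing the parent's weights,
    the way a breeder selects among small heritable variations."""
    return parent_weights + rng.normal(0.0, noise_scale, size=parent_weights.shape)

def predict(weights, x):
    """A stand-in 'beast': a single linear model with a sign nonlinearity."""
    return np.sign(x @ weights)

# A founding animal: one trained (here, random) weight vector.
founder = rng.normal(size=8)

# Breed a small herd, each member a subtle variation on some ancestor,
# recording lineage as {child index: parent index}.
herd = [founder]
lineage = {0: None}
for child in range(1, 5):
    parent = int(rng.integers(len(herd)))
    herd.append(make_offspring(herd[parent]))
    lineage[child] = parent

# Work the herd in combination: majority vote across all members.
x = rng.normal(size=8)
votes = np.array([predict(w, x) for w in herd])
print("individual votes:", votes)
print("herd decision:", np.sign(votes.sum()))
print("lineage:", lineage)

The point of the sketch is only that weights, unlike gears, copy and mutate cheaply, which is what makes the lineage-and-herd framing available at all.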
This analogy also does a reasonable job of priming intuition about where dangerous thresholds might lie. How smart does a single AI have to be before it is dangerous? Tiger-ish? We can also think about relative intelligence: primates with better tool use and more powerful communication were able to establish patronage, and then total domestication, over packs of dogs and herds of horses, cattle, and sheep. How big is that gap, exactly, and what does it imply about the threshold for doing the same to humans? Historically we have been perfectly capable of doing it to ourselves, so the threshold may actually sit below our own level.