By picking these conditions precisely, we might cause the mold to spread only to the northern hemisphere of Mars, or to grow only at low altitudes, or only at high altitudes. In each case, the only thing we are transporting to Mars is a single specimen the size of a small rock. We are not ourselves spreading the mold over a mountain range or over the low-altitude parts of the planet, but by tweaking the configuration of atoms within this initial specimen we can choose how and where the mold will spread. In this sense the mold has expanding steerable consequences: a physically small specimen can be altered in a way that predictably steers large-scale effects over a long time horizon.
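To make the steerability concrete, here is a toy sketch in Python (entirely my own illustration; the policy names and the altitude threshold are invented): the seed carries a tiny growth policy, and flipping that policy, without changing anything else about the payload, determines which regions end up colonized.

```python
# Toy model: the "configuration of atoms" in the specimen is reduced to a
# one-word policy string, and that policy steers the planet-scale outcome.
def spreads_to(cell_altitude_m: float, seed_policy: str) -> bool:
    if seed_policy == "low_altitude":
        return cell_altitude_m < 1000
    if seed_policy == "high_altitude":
        return cell_altitude_m >= 1000
    return True  # unconstrained growth

terrain = [0, 500, 1500, 3000]  # altitudes of four regions, in meters
print([spreads_to(a, "low_altitude") for a in terrain])   # [True, True, False, False]
print([spreads_to(a, "high_altitude") for a in terrain])  # [False, False, True, True]
```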
I actually don’t think this is a very good example because once the mold takes root on Mars, Darwinian processes will take over and any mutations with a strong reproductive fitness advantage (such as those that allow the mold to expand to new environments) will be selected for.
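A back-of-the-envelope sketch of that takeover dynamic (toy numbers, purely illustrative): even a 5% per-generation reproductive edge lets a single mutant lineage overtake a million-fold larger wild population within a few hundred generations.

```python
# Toy simulation of selection: a rare mutant with a small per-generation
# growth advantage eventually dominates a static wild-type population.
wild, mutant = 1_000_000.0, 1.0   # starting population sizes
r_wild, r_mutant = 1.00, 1.05     # per-generation growth factors

for gen in range(1, 501):
    wild *= r_wild
    mutant *= r_mutant
    if mutant > wild:
        print(f"mutant lineage dominates by generation {gen}")
        break
```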
But I agree with your general point and think the mold analogy is apt. There’s a line from an interview Elon Musk gave where he jokingly said “we’re just the biological bootloaders for digital superintelligence.” I kind of wonder what that life will look like. It will probably be so unimaginable that it’s useless to think about, but given the kinds of reinforcement learning systems we have today, I can’t help but wonder whether the ultimate aim of future digital systems will be not some grand ambition or even reproductive immortality, but some silly, poorly thought-out human goal that is mindlessly pursued until the heat death of the universe.
In some ways, that’s the true fear of AI researchers: not just that we will misalign AI, but that we will misalign it so badly that the digital gods we create pursue some existentially tragic goal, chewing up every resource in their light cone.
Yes, that is what I fear for the future of AI.
I agree re Darwinian selection of the mold. Perhaps a better example would be a deliberately designed reproducing nanofactory with error correction sufficient to prevent viable mutations.
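As a sketch of what “error correction sufficient to prevent viable mutations” might mean mechanically (a toy model of mine, with a hash standing in for whatever redundant encoding real hardware would use): each unit verifies its inherited blueprint before replicating, so a corrupted copy is sterile and selection never gets a lineage to act on.

```python
import hashlib

# Hypothetical sketch, not any real system: a self-replicator that refuses
# to copy itself unless its blueprint matches the original design's checksum.
BLUEPRINT = b"grow; build collectors; replicate"
CHECKSUM = hashlib.sha256(BLUEPRINT).hexdigest()

class Replicator:
    def __init__(self, blueprint: bytes):
        self.blueprint = blueprint

    def is_intact(self) -> bool:
        # Any mutation to the blueprint changes its hash, failing the check.
        return hashlib.sha256(self.blueprint).hexdigest() == CHECKSUM

    def replicate(self, copy_error: bytes = b""):
        if not self.is_intact():
            return None  # corrupted units are sterile by design
        child_blueprint = self.blueprint + copy_error  # simulate a copying mutation
        return Replicator(child_blueprint)

parent = Replicator(BLUEPRINT)
mutant = parent.replicate(copy_error=b"; expand to new environments")
print(mutant.is_intact())          # False: the mutation is detected
print(mutant.replicate() is None)  # True: the mutated lineage ends here
```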