Is maintaining sufficient individuality likely to be a problem for the synthetic agents?
Only if they are built to want individuality. We will probably start off with collective systems—because if you have one agent, it is easy to make another one the same, whereas it is not easy to make an agent with a brain twice as big (unless you are trivially adding memory or something). So: collective systems are easier to get off the ground with—they are the ones we are likely to build first.
You can see this in most data centres—they typically contain thousands of small machines, loosely linked together.
Maybe they will ultimately find ways to plug their brains into each other and more comprehensively merge together—but that seems a bit further down the line.
I was concerned that synthetic agents might become so similar to each other that the advantages of different points of view would get lost. You brought up the possibility that they might start out very similar to each other.
If they started out similar, such agents could still come to differ culturally. So, one might be a hardware expert, another might be a programmer, and another might be a tester, as a result of exposure to different environments.
However, today we build computers in a range of sizes, optimised for different applications, so the outcome will probably be more like that.
There’s a limit to how similar people can be made to each other, but if there are efforts to optimise all the testers (for example) towards a single design, it could be a problem.
Well, I doubt machines being too similar to each other will cause too many problems. The main case where that does cause problems is with resistance to pathogens—and let’s hope we do a good job of designing most of those out of existence. Apart from that, being similar is usually a major plus point. It facilitates mass production, streamlined and simplified support, etc.