How can you be certain at this point, when we are nowhere near achieving it, that AI won’t be in the same league of complexity as the spaghetti brain?
It’s not really an issue of complexity; it’s about whether designed or engineered solutions are easier to modify and maintain. Since modularity and maintainability can be design criteria, it seems pretty obvious that a system built from the ground up with those in mind will be easier to maintain. The only issue I see is whether the “redesign-from-scratch” approach can catch up with the billions of years of evolutionary R&D. I think it can—and that it will happen early this century for brains.
It’s always tempting for programmers to throw away a huge tangled codebase when they first start working on it, but that is almost never the right approach.
It seems like a misleading analogy. Programmers usually face code written by other human programmers, in languages designed to facilitate maintenance.
In this case, brain hackers are messing with a wholly evolved system. The only kind of maintenance it expects is random gene flipping.
Yes, we could scale up the human brain. Create egg-head humans that can hardly hold their heads up. Fuse the skulls of clones together in a matrix to produce a brain farm. Grow human brain tissue in huge vats. However, the yuck factor is substantial. Even if we go full throttle at such projects—stifling the revulsion humans feel for them with the belief that we are working to preserve at least some fragment of humanity—a designed-from-scratch approach without evolution’s baggage would still probably win in the end.