“You want to hack evolution’s spaghetti code? Good luck with that. Let us know if you get FDA approval.”
I think I’ve seen Eli make this same point. How can you be certain at this point, when we are nowhere near achieving it, that AI won’t be in the same league of complexity as the spaghetti brain? I would admit that there are likely artifacts of the brain that are unnecessarily kludgy (or plain irrelevant), but not necessarily in a way that excessively obfuscates the primary design. It’s always tempting for programmers to want to throw away a huge tangled code base when they first have to start working on it, but that is almost never the right approach.
I expect advances in understanding how to build intelligence to serve as the groundwork for hypotheses about how the brain functions, and vice versa.
On the friendliness issue, isn’t the primary logical way to avoid problems to create a network of competitive systems and goals? If one system wants to tile the universe with smileys, that is almost certainly going to get in the way of the goal sets of the millions of other intelligences out there. They should then logically see value in reporting, or acting upon, their belief that a rival AI is making their jobs harder. I’d be surprised if humans don’t devote half their cognitive power to anticipating rivals’ actions and manipulating their expectations.