Which transhumanist ideas are “not even wrong”?
And do you mean simply ‘not well specified enough’? Or more like ‘unfalsifiable’?
You also seem to be implying that scientists cannot discuss topics outside of their field, or even outside its current reach.
My philosophy on language is that people can generally discuss anything. For any words that we have heard (and indeed, many we haven’t), we have some clues as to their meaning, e.g. based on the context in which they’ve been used and similarity to other words.
Also, would you consider being cautious an inherently good thing?
Finally, from my experience as a Master’s student in AI, many people are happy to give opinions on transhumanism; it’s just that many of those opinions are negative.
“Which transhumanist ideas are ‘not even wrong’?”
The Technological Singularity, for example (as defined on Wikipedia). In my view, it is just an atheistic version of the Rapture, or of The End Of The World As We Know It endemic in various cults, and it is about equally likely.
The reason is that recursive self-improvement is not possible, since it requires perfect self-knowledge and self-understanding. In reality, an AI will be a black box to itself, just as our brains are black boxes to ourselves.
More precisely, my claim is that any mind, at any level of complexity, is insufficient to understand itself. It is possible for a more advanced mind to understand a simpler mind, but that obviously does not help very much in the context of direct self-improvement.
An AI with any self-preservation instinct would be as likely to willingly perform direct self-modification of its own mind as you would be to volunteer for an ice pick through the eye socket.
So any AI improvement would have to be done the old way. The slow way. No fast takeoff. No intelligence explosion. No Singularity.
Our brains are mysterious to us not simply because they’re our brains and no one can fully understand themselves, but because our brains are the result of millions of years of evolutionary kludges and because they’re made out of hard-to-probe meat. We are baffled by chimpanzee brains or even rabbit brains in many of the same ways as we’re baffled by human brains.
Imagine an intelligent agent whose thinking machinery is designed differently from ours. It’s cleanly and explicitly divided into modules. It comes with source code and comments and documentation and even, in some cases, correctness proofs. Maybe there are some mysterious black boxes; they come with labels saying “Mysterious Black Box #115. Neural network trained to do X. Empirically appears to do X reliably. Other components assume only that it does X within such-and-such parameters.” Its hardware is made out of (notionally) discrete components with precise specifications, and comes with some analysis to show that if the low-level components meet the spec then the overall function of the hardware should be as documented.
Suppose that’s your brain. You might, I guess, be reluctant to experiment on it in place in any way, but you might feel quite comfortable changing EXPLICIT_FACT_STORAGE_SIZE from 4GB to 8GB, or reimplementing the hardware on a new semiconductor substrate you’ve designed that lets every component run at twice the speed while remaining within the appropriately-scaled specifications, and making a new instance. If that causes disaster, you can probably tell; if not, you’ve got a New Smarter You up and running.
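To make that a bit more concrete, here is a purely illustrative Python sketch of the kind of design being described. Everything in it is hypothetical (the module names, the contract checks, the spawn_modified_copy helper); only EXPLICIT_FACT_STORAGE_SIZE and the clock-doubling come from the paragraph above. The point it tries to capture is that modifying an explicitly documented design means building a new instance with one parameter changed and checking it against the published contracts, rather than performing blind surgery on a running, opaque brain.

```python
# Illustrative only: a toy "explicitly modular agent" whose parts carry
# documented contracts, so a copy with a changed parameter can be checked
# against those contracts before being trusted. All names are hypothetical.

from dataclasses import dataclass, field


@dataclass
class ModuleSpec:
    """Documentation attached to a component: what it claims to do,
    plus a cheap empirical check of that claim."""
    name: str
    claim: str
    check: callable  # returns True if the component behaves as documented


@dataclass
class AgentConfig:
    # The documented, tunable parameters of the design.
    explicit_fact_storage_bytes: int = 4 * 2**30   # "EXPLICIT_FACT_STORAGE_SIZE = 4GB"
    clock_multiplier: float = 1.0                  # hardware speed relative to reference spec


@dataclass
class Agent:
    config: AgentConfig
    modules: list[ModuleSpec] = field(default_factory=list)

    def passes_documented_checks(self) -> bool:
        """Run every component's documented acceptance check."""
        return all(spec.check(self) for spec in self.modules)


def fact_storage_check(agent: Agent) -> bool:
    # Hypothetical contract: fact storage must be a positive power of two.
    size = agent.config.explicit_fact_storage_bytes
    return size > 0 and (size & (size - 1)) == 0


def timing_check(agent: Agent) -> bool:
    # Hypothetical contract: components are rated up to 2x the reference clock.
    return 0 < agent.config.clock_multiplier <= 2.0


def spawn_modified_copy(old: Agent, **changes) -> Agent:
    """Build a NEW instance with some parameters changed; the original is left
    untouched, which is what distinguishes this from in-place self-surgery."""
    new_config = AgentConfig(**{**vars(old.config), **changes})
    candidate = Agent(config=new_config, modules=old.modules)
    if not candidate.passes_documented_checks():
        raise ValueError("modified copy violates a documented contract; discard it")
    return candidate


if __name__ == "__main__":
    me = Agent(
        config=AgentConfig(),
        modules=[
            ModuleSpec("fact_storage", "stores explicit facts up to configured size", fact_storage_check),
            ModuleSpec("hardware_timing", "meets spec up to 2x reference clock", timing_check),
        ],
    )
    # "Change EXPLICIT_FACT_STORAGE_SIZE from 4GB to 8GB and run everything twice as fast."
    smarter_me = spawn_modified_copy(
        me,
        explicit_fact_storage_bytes=8 * 2**30,
        clock_multiplier=2.0,
    )
    print("New instance passes its documented checks:", smarter_me.passes_documented_checks())
```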
Of course, maybe you couldn’t tell if some such change caused disasters of a sufficiently subtle kind. That’s a reasonable concern. But this isn’t an ice-pick-through-the-eye-socket sort of concern, and it isn’t the sort of concern that makes it obvious that “recursive self-improvement is not possible”.
While I agree with the overall thrust of your comment, this brought to mind an old anecdote...
Such things are why I said “maybe you couldn’t tell if some such change caused disasters of a sufficiently subtle kind”.