Most transhumanist ideas fall under the category of “not even wrong.” Drexler’s Nanosystems is ignored because it’s a work of “speculative engineering” that doesn’t address any of the questions a chemist would pose (i.e., regarding synthesis). It’s a non-event. It shows that you can make fancy molecular structures under certain computational models. SI is similar. What do you expect a scientist to say about SI? Sure, they can’t disprove the notion, but there’s nothing for them to discuss either. The transhumanist community has a tendency to argue for its positions along the lines of “you can’t prove this isn’t possible” which is completely uninteresting from a practical viewpoint.
If I were going to unpack “you should get a PhD” I’d say the intention is along the lines of: you should attempt to tackle something tractable before you start speculating on Big Ideas. If you had a PhD, maybe you’d be more cautious. If you had a PhD, maybe you’d be able to step outside the incestuous milieu of pop-sci musings you find yourself trapped in. There are two things you get from a formal education: one is broad, you’re exposed to a variety of subject matter that you’re unlikely to encounter as an autodidact; the other is specific, you’re forced to focus on problems you’d likely dismiss as trivial as an autodidact. Both offer strong correctives to preconceptions.
As for why people are less likely to express the same concern when the topic is rationality: there’s a long tradition of disrespect for formal education when it comes to dispensing advice. Your discussions of rationality usually have the format of sage advice rather than scientific analysis. Nobody cares if Dr. Phil is a real doctor.
“There are two things you get from a formal education: one is broad, you’re exposed to a variety of subject matter that you’re unlikely to encounter as an autodidact;”
As someone who has a Ph.D., I have to disagree here. Most of my own breadth of knowledge has come from pursuing topics on my own initiative outside of the classroom, simply because they interested me or because they seemed likely to help me solve some problem I was working on. In fact, as a grad student, most of the things I needed to learn weren’t being taught in any of the classes available to me.
The choice isn’t between being an autodidact or getting a Ph.D.; I don’t think you can really earn the latter unless you have the skills of the former.
But being a grad student gave you the need to learn them.
Or a common factor caused both.
That sounds like it’s less “Once you get a Ph.D., I’ll believe you,” than “Once you get a Ph.D., you’ll stop believing that.”
Of course, those aren’t so different: if I expect that getting a Ph.D. would make one less likely to believe X, then believing X after getting a Ph.D. is a stronger signal than simply believing X.
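One way to make that “stronger signal” point concrete is as a likelihood ratio; the numbers below are entirely made up for illustration and don’t come from anything in this thread.

```latex
% Illustrative only: invented numbers, just to show the shape of the argument.
\[
  \mathrm{LR} \;=\; \frac{P(\text{believes } X \mid X \text{ true})}
                         {P(\text{believes } X \mid X \text{ false})}
\]
% Without a Ph.D. (say):  LR = 0.50 / 0.25 = 2.
% With a Ph.D. that weeds out most belief in false X:  LR = 0.40 / 0.05 = 8.
% On these assumptions, belief in X that survives a Ph.D. carries four times
% the evidential weight of belief in X alone.
```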
Which transhumanist ideas are “not even wrong”?
And do you mean simply ‘not well specified enough’? Or more like ‘unfalsifiable’?
You also seem to be implying that scientists cannot discuss topics outside of their field, or even outside its current reach.
My philosophy on language is that people can generally discuss anything. For any words that we have heard (and indeed, many we haven’t), we have some clues as to their meaning, e.g. based on the context in which they’ve been used and similarity to other words.
Also, would you consider being cautious an inherently good thing?
Finally, from my experience as a Masters student in AI, many people are happy to give opinions on transhumanism; it’s just that many of those opinions are negative.
“Which transhumanist ideas are “not even wrong”?”
The technological Singularity, for example (as defined on Wikipedia). In my view, it is just an atheistic version of the Rapture, or of The End Of The World As We Know It endemic in various cults, and it is equally likely.
The reason is that recursive self-improvement is not possible, since it requires perfect self-knowledge and self-understanding. In reality, an AI will be a black box to itself, just as our brains are black boxes to ourselves.
More precisely, my claim is that any mind, at any level of complexity, is insufficient to understand itself. It is possible for a more advanced mind to understand a simpler mind, but that obviously does not help very much in the context of direct self-improvement.
An AI with any self-preservation instinct would be as likely to willingly perform direct self-modification on its own mind as you would be to take an icepick through the eye socket.
So any AI improvement would have to be done the old way. The slow way. No fast takeoff. No intelligence explosion. No Singularity.
Our brains are mysterious to us not simply because they’re our brains and no one can fully understand themselves, but because our brains are the result of millions of years of evolutionary kludges and because they’re made out of hard-to-probe meat. We are baffled by chimpanzee brains or even rabbit brains in many of the same ways as we’re baffled by human brains.
Imagine an intelligent agent whose thinking machinery is designed differently from ours. It’s cleanly and explicitly divided into modules. It comes with source code and comments and documentation and even, in some cases, correctness proofs. Maybe there are some mysterious black boxes; they come with labels saying “Mysterious Black Box #115. Neural network trained to do X. Empirically appears to do X reliably. Other components assume only that it does X within such-and-such parameters.” Its hardware is made out of (notionally) discrete components with precise specifications, and comes with some analysis to show that if the low-level components meet the spec then the overall function of the hardware should be as documented.
Suppose that’s your brain. You might, I guess, be reluctant to experiment on it in place in any way, but you might feel quite comfortable changing EXPLICIT_FACT_STORAGE_SIZE from 4GB to 8GB, or reimplementing the hardware on a new semiconductor substrate you’ve designed that lets every component run at twice the speed while remaining within the appropriately-scaled specifications, and making a new instance. If it causes disaster, you can probably tell; if not, you’ve got a New Smarter You up and running.
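To caricature the thought experiment in code: a purely hypothetical sketch, in which every name (ComponentSpec, MindConfig, EXPLICIT_FACT_STORAGE_SIZE, regression_suite) is invented for illustration, and nothing is claimed about how a real agent of this kind would actually be built.

```python
# Hypothetical sketch of the "documented modular mind" thought experiment.
# All names here are invented for illustration.
from dataclasses import dataclass, replace
from typing import Callable, Optional


@dataclass(frozen=True)
class ComponentSpec:
    """A documented component: what it does, and the range it is rated for."""
    name: str
    documented_behaviour: str
    min_value: int
    max_value: int


@dataclass(frozen=True)
class MindConfig:
    """Explicit, inspectable configuration of the agent -- no opaque meat."""
    explicit_fact_storage_gb: int
    clock_multiplier: float


def within_spec(spec: ComponentSpec, value: int) -> bool:
    """Only attempt a change that stays inside the documented envelope."""
    return spec.min_value <= value <= spec.max_value


def build_new_instance(config: MindConfig,
                       checks: Callable[[MindConfig], bool]) -> Optional[MindConfig]:
    """Instantiate a modified copy; keep it only if the checks pass."""
    return config if checks(config) else None


if __name__ == "__main__":
    storage_spec = ComponentSpec(
        name="EXPLICIT_FACT_STORAGE_SIZE",
        documented_behaviour="stores explicit facts; other modules assume >= 4 GB",
        min_value=4,
        max_value=64,
    )
    current = MindConfig(explicit_fact_storage_gb=4, clock_multiplier=1.0)

    # Proposed change: 4GB -> 8GB, as in the thought experiment above.
    proposed = replace(current, explicit_fact_storage_gb=8)

    # Stand-in for the empirical checks you would run on the new instance
    # before trusting it ("if it causes disaster, you can probably tell").
    def regression_suite(cfg: MindConfig) -> bool:
        return cfg.explicit_fact_storage_gb >= current.explicit_fact_storage_gb

    if within_spec(storage_spec, proposed.explicit_fact_storage_gb):
        new_instance = build_new_instance(proposed, regression_suite)
        print("upgrade kept" if new_instance else "upgrade rolled back")
```

The point is only structural: once interfaces and rated ranges are explicit and checkable, a change like 4GB to 8GB looks like ordinary engineering rather than self-surgery.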
Of course, maybe you couldn’t tell if some such change caused disasters of a sufficiently subtle kind. That’s a reasonable concern. But this isn’t an ice-pick-through-the-eye-socket sort of concern, and it isn’t the sort of concern that makes it obvious that “recursive self-improvement is not possible”.
While I agree with the overall thrust of your comment, this brought to mind an old anecdote...
Such things are why I said “maybe you couldn’t tell if some such change caused disasters of a sufficiently subtle kind”.