In order to model intelligence explosion, we need to be able to measure intelligence.
Describe a computer’s power as . What is the relative intelligence of these 3 computers?
Is computer 2 twice as smart as computer 1 because it can compute twice as many square roots in the same time? Or is it smarter by a constant C, because it can predict the folding of a protein with C more residues, or predict the weather C days farther ahead?
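One way to sharpen the question: the same doubling of hardware buys linearly more of one capability but only additively more of another. Below is a minimal sketch of that contrast in Python; every constant in it (the cost of a square root, the compute needed for a one-day forecast, the error-doubling time) is a hypothetical placeholder, not a measured value.

```python
import math

def sqrt_throughput(ops_per_sec: float) -> float:
    """Square roots computed per second: scales linearly with compute."""
    COST_PER_SQRT = 50.0  # hypothetical machine ops per square root
    return ops_per_sec / COST_PER_SQRT

def forecast_horizon_days(ops_per_sec: float) -> float:
    """Usable weather-forecast horizon, in days.

    In a chaotic system, initial-condition error grows exponentially, so
    each extra day of horizon costs a multiplicative factor of compute;
    the horizon therefore grows only logarithmically with compute.
    """
    BASE_OPS = 1e9       # hypothetical compute for a one-day forecast
    DOUBLING_DAYS = 2.0  # hypothetical error-doubling time, in days
    return 1.0 + DOUBLING_DAYS * math.log2(max(ops_per_sec / BASE_OPS, 1.0))

for ops in (1e9, 2e9, 4e9):
    print(f"{ops:.0e} ops/s: {sqrt_throughput(ops):.1e} sqrts/s, "
          f"~{forecast_horizon_days(ops):.1f}-day forecast horizon")
```

Doubling compute doubles the first number but only adds a constant to the second, so “twice as smart” is undefined until we fix a task.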
If we want to ask where a superintelligence of some predicted computational power will lie along the scale of intelligence we know from biology, we could look at evolution over the past 2 billion years, construct a table estimating how much computation evolution performed in each million years, and see how the capabilities of the organisms it constructed scaled with that computational power.
This exercise would probably conclude that superintelligence will explode: restricting attention to ever more complex organisms, the computational power evolution applied has decreased dramatically, owing to longer generation times and smaller population sizes, yet the rate of intelligence gain has probably been increasing. And evolution is fairly brute-force as search algorithms go; smarter algorithms should have lower computational complexity, and should scale better as genome sizes increase.
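For concreteness, here is a toy version of that table. Every population size and generation count below is a hypothetical placeholder chosen only to exhibit the shape of the argument, not a biological estimate.

```python
# Toy table: rough "search evaluations" evolution performs per million
# years, for organisms of increasing complexity. All numbers are
# hypothetical placeholders, not biological estimates.

eras = [
    # (organism class, population size, generations per million years)
    ("bacteria",      1e30, 3e10),  # generation times of ~20 minutes
    ("insects",       1e18, 1e7),   # generation times of weeks
    ("small mammals", 1e11, 2e6),   # generation times of months
    ("primates",      1e7,  7e4),   # generation times of ~15 years
]

for label, population, generations in eras:
    # count one "fitness evaluation" per organism per generation
    evaluations = population * generations
    print(f"{label:>13}: ~{evaluations:.0e} evaluations per million years")
```

If capability gained per million years held steady or rose while this evaluation budget fell by tens of orders of magnitude, then intelligence gained per unit of search was rising steeply; that is the shape of the explosion argument above.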
It seems hard to know how other parameters have changed, though, such as the strength of selection for intelligence.
Perhaps we should talk about something like productivity instead of intelligence, and quantify according to desirable or economically useful products.
I am not sure I am very sympathetic with a pattern of thinking that keeps cropping up, viz., as soon as our easy and reflexive intuitions about intelligence become strained, we seem to back down the ladder a notch, and propose just using an economic measure of “success”.
Aside from (i) somewhat of a poverty of philosophical imagination (e.g., what about measuring the intrinsic interestingness of ideas, or creative output of various kinds… or, even, dare I say, beauty, if these superintellects happen to find that worth doing [footnote 1]), I am skeptical on grounds of (ii): given the phase change in human society likely to accompany superintelligence (or nano, etc.), what kind of economic system is likely to be around in the 22nd century, the 23rd… and so on?
Economics, as we usually use the term, seems as dinosaur-like as human death, average IQs of 100, energy availability problems, the nuclear biological human family (already DOA), having offspring by just taking the genetic lottery cards and shuffling… and all the rest of the social institutions based on eons of scarcity—of both material goods and information.
Economic productivity, or perceived economic value, seems like the last thing we ought to base intelligence metrics on. (Just consider the economic impact of professional sports—hardly a measure of meteoric intellectual achievement.)
[Footnote 1]: I have commented in here before about the possibility that “super-intelligences” might exhibit a few surprises for us math-centric, data dashboard-loving, computation-friendly information hounds.
(Aside: I have been one of them, most of my life, so no one should take offense. Starting far back: I was the president of Mu Alpha Theta, my high school math club, in a high school with an advanced special math program track for mathematically gifted students. Later, while a math major at UC Berkeley, I got virtually straight As and never took notes in class; I just went to class each day, sat in the front row, and paid attention. I vividly remember getting the exciting impression, as I was going through the upper-division math courses, that there wasn’t anything I couldn’t model.)
After graduation from UCB, at one point I was proficient in 6 computer languages. So, I do understand the restless bug, the urge to think of a clever data structure and to start coding… the impression that everything can be coded, with enough creativity.
I also understand what mathematics is, pretty well. For starters, it is a language. A very, very special language with deep connections to the fabric of reality. It has features that make it one of the few, perhaps the only, candidate languages for being level-of-description independent. Natural languages and technical domain-specific languages are tied to corresponding ontologies, and to corresponding semantics that enfold those ontologies. Math is the most omni-ontological, or meta-ontological, language we have (not counting brute logic, which is not really a “language”, but a sort of language substructure schema).
Back to math. It is powerful, and an incredible tool, and we should be grateful for the “unreasonable success” it has (and continue to try to understand the basis for that!)
But there are legitimate domains of content beyond numbers. Other ways of experiencing the world’s (and the mind’s) emergent properties. That is something I also understand.
So, gee, thanks to whoever gave me the negative two points. It says more about you than it does about me, because my nerd “street cred” is pretty secure.
I presume the reader “boos” are because I dared to suggest that a superintelligence might be interested in, um, “art”, like the conscious robot in the film I mention below, which spends most of its free time seeking out sketch pads, drawing, and asking for music to listen to. Fortunately, I don’t take polls before I form viewpoints, and I stand by what I said.
Now, to continue my footnote: Imagine that you were given virtually unlimited computational ability, imperishable memory, and the ability to grasp the “deductive closure” of any set of propositions or principles, with no effort, automatically and reflexively.
Imagine also that you have something similar to sentience or autonomy, and can choose your own goals. Suppose also, say, that your curiosity functions in such a way that “challenges” are more “interesting” to you than activities that are always a fait accompli.
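As an aside, “deductive closure” has a concrete, mechanizable reading: close a set of facts under inference rules until nothing new follows. Here is a minimal forward-chaining sketch over propositional Horn rules, with facts and rules invented purely to echo this thought experiment:

```python
# Deductive closure by forward chaining to a fixpoint.
# The facts and rules are invented for illustration.

facts = {"unlimited_compute", "perfect_memory"}
rules = [
    # (premises, conclusion)
    ({"unlimited_compute", "perfect_memory"}, "routine_tasks_trivial"),
    ({"routine_tasks_trivial"}, "prefers_open_ended_challenges"),
]

changed = True
while changed:                        # repeat until a pass adds nothing
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)     # rule fires: record the consequence
            changed = True

print(sorted(facts))  # every proposition derivable from the start set
```

For the being imagined here, every such closure is available instantly, which is exactly why the activities it might still find interesting are the ones that are not closures of anything it already knows.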
What are you going to do? Plug yourself into the net, and act like an Asperger-spectrum mentality, compulsively computing away at everything that you can think of to compute?
Are you going to find pi to a hundred million digits of precision?
Invert giant matrices just for something to do?
It seems at least logically and rationally possible that you will be attracted to precisely those activities that are not computational givens before you even begin doing them. You might view the others as pointless, because their solution is preordained.
Perhaps you will be intrigued by things like art, painting, or increasingly beautiful virtual reality simulations for the sheer beauty of them.
For anyone who saw the movie “The Machine” on Netflix: it dramatizes this point, which I found interesting. It was, admittedly, not a very deep film; one inclined to do so can find the usual flaws, and the plot device of using a beautiful female form could appear to be a concession to the typically male demographic for SciFi films—until you look a bit deeper at the backstory of the film (which I mention below).
I found one thing of interest: when the conscious robot was left alone, she always began drawing again, on sketch pads.
And, in one scene wherein the project leader returned to the lab, did he find “her” plugged into the internet, playing chess with supercomputers around the world? Working on string theory? Compiling statistics about everything that could conceivably be quantified?
No. The scene finds the robot (in the film, it has sensory-responsive skin, emotions, sensory apparatus, etc., based upon ours) alone in a huge warehouse, having put a layer of water on the floor, doing improvisational dance with joyous abandon, naked, on the wet floor, to loud classical music, losing herself in the joy of physical freedom, sensual movement, music, and the synesthesia of music, light, tactility, and the experience of “flow”.
The explosions of light leaking through her artificial skin, in what presumably were fiber ganglia throughout her/its body, were a demure suggestion of whole-body physical joy of movement, perhaps even an analogue of sexuality. (She was designed partly as an em, with a brain scan process based on a female lab assistant.)
The movie is worth watching just for that scene (please—it is not for viewer eroticism) and what it suggests to those of us who imagine ourselves overseeing artificial sentience design study groups someday. (And yes, the robot was designed to be conscious, by the designer, hence the addition to the basic design of the “jumpstart” idea of uploading properties of the scanned CNS of a human lab assistant.)
I think we ought to keep our expectations open when we start talking about creating what might (and what I hope will) turn out to be actual minds.
Bostrom himself raises this possibility when he talks about untapped cognitive abilities that might already be available within the human potential mind-space.
I blew a chance to talk at length about this last week. I started writing up a paper, and realized it was more like a potential PhD dissertation topic than a post. So I didn’t get it into usable, postable form. But it is not hard to think about, is it? Lots of us in here already must have been thinking about this. … continued
If you are fine with fiction, I think the Minds from Iain M. Banks’s Culture novels are a much better starting point than dancing naked girls. In particular, the book Excession describes the “Infinite Fun Space” where Minds go to play...
Thanks, I’ll have a look. And just to be clear, watching “The Machine” wasn’t driven primarily by prurient interest—I was drawn in by a reviewer who mentioned that the backstory for the film was a near-future worldwide recession pitting the West against China, and that intelligent battlefield robots and other devices were the “new arms race” in this scenario.
That, plus the reviewer’s mention that (i) the robot designer used quantum computing to get his creation to pass the Turing Test (a test I have doubts about, as do other researchers, of course, but I was curious how the film would use it), and that (ii) the project designer nevertheless continued to grapple with the question of whether his signature humanoid creation was really conscious or a “clever imitation”, pulled me in.
(He verbally challenges and confronts her/it about this, in an outburst of frustration, in his lab, roughly two thirds of the way through the movie, and she parries with plausible responses.)
It’s really not all that weak, as film depictions of AI go. It’s decent entertainment with enough threads of backstory authenticity, political and philosophical, to pique one’s interest.
My caution, really, was a bit harsh, applying largely to the uncommon rigor of those of us in this group—mainly to emphasize that the film is entertainment, not a candidate for a paper in the ACM digital archives.
However, indeed, even the use of a female humanoid form makes tactical design sense. If a government could make a chassis that “passed” the visual test and didn’t scream “ROBOT” when it walked down the street, it would have much greater scope of tactical application—covert ops, undercover penetration into terrorist cells, what any CIA clandestine operations officer would be assigned to do.
Making it look like a woman just adds to the “blend into the crowd” potential, and that was the justification hinted at in the film, rather than some kind of sexbot application. “She” was definitely designed to be the most effective weapon they could imagine (a British-funded military project.)
Given that over 55 countries now have battlefield robotics projects under way (according to Kurzweil’s weekly newsletter)—and Google recently got a big DOD contract to proceed with advanced development of such mechanical soldiers for the US government—I thought the movie worth a watch.
If you have 90 minutes of low-priority time to spend (one of those hours when you are mentally too spent to do more first-quality work for the day, but not yet ready to go to sleep), you might have a glance.
Thanks for the book references. I read mostly non-fiction, but I know sci-fi has come a very long way since the old days when I read some in high school. A little kindling for the imagination never hurts. Kind regards, Tom (“N.G.S”)
To continue:
If there are untapped human cognitive-emotive-apperceptive potentials (and I believe there are plenty), then all the more openness to undiscovered realms of “value”, knowledge, or experience is called for when designing a new mind architecture. To me, that is what makes HLAI (and above) worth doing.
But to step back from this wondrous, limitless potential and suggest some kind of metric based on the values of the “accounting department”—those who are famous for knowing the cost of everything but the value of nothing, and even more famous for derisively calling their venal, bottom-line, unimaginative dollars-and-cents worldview a “realistic” viewpoint (usually a constraint based on lack of vision) when faced with pleas for SETI grants, or (originally) money for the National Supercomputing Grid, or any of a dozen other projects that represent human aspiration at its best—seems, to me, shocking.
I found myself wondering if the moderator was saying that with a straight face, or (hopefully) putting on the hat of a good interlocutor and firestarter, trying to flush out some good comments because posting activity was down this week.
Irrespective of that, another defect, as I mentioned, is that economics as we know it will prove to have been relevant for an eyeblink in the history of the human species (assuming we endure). We are closer to the end of this kind of scarcity-based economics than to the beginning (assuming one or more singularity-style scenarios, like nano, come to pass).
It reminds me of an episode of the old TV series Star Trek: The Next Generation, wherein someone from our time ends up aboard the Enterprise of the future and is walking down a corridor speaking with Picard. The visitor asks Picard something like “who pays for all this?”, as the visitor is taking in the impressive technology of the 24th-century vessel.
Picard replies something like, “The economics of the 24th century are somewhat different from your time. People no longer arrange their lives around the constraint of amassing material goods....”
I think it will be amazing if, even in 50 years, economics as we know it has much relevance. Still less so in future centuries, if we—or our post-human selves—are still here.
Thus, economic measures of “value” or “success” are about the least relevant metric we ought to be using to assess what possible criteria we might adopt to track evolving “intelligence”, in the applicable, open-ended, future-oriented sense of the term.
Economic—i.e., marketplace-assigned—“value” or “success” is already pretty evidently a very limiting, exclusionary way to evaluate achievement.
Remember: economic value is assigned mostly by the middle of the intelligence bell curve. This world is designed BY, and FOR, largely, ordinary people, and they set the economic value of goods and services to a large extent.
Interventions in free market assignment of value are mostly made by even “worse” agents… greed-based folks who are trying to game the system.
Any older people in here might remember former Senator William Proxmire’s “Golden Fleece” award in the United States. The idea was to ridicule any spending that he thought was impractical and wasteful, or stupid.
He was famous for assigning it to NASA probes to Mars, the Hubble Telescope (in its several incarnations), the early NSF grants for the Human Genome project… National Institute of Mental Health programs, studies of power grid reliability—anything that was of real value in science, art, medicine… or human life.
He even wanted to close the Library of Congress, at one point.
THAT is what you get when you let ECONOMIC measures define the metric of “value”, intelligence or otherwise.
So, it is a bad idea, in my judgement, any way you look at it.
Ability to generate economic “successfulness” in inventions, organization restructuring… branding yourself or your skills, whatever? I don’t find that compelling.
Again, look at professional sports, one of the most “successful” economic engines in the world. A bunch of narcissistic, girlfriend-beating pricks, racist team owners… but by economic standards, they are alphas.
Do we want to attach any criterion of intellectual evolution—even an indirect one—to this kind of amoral morass and way of looking at the universe?
Back to how I opened this long post. If our intuitions start running thin, that should tell us we are making progress toward the front lines of new thinking. When our reflexive answers stop coming, that is when we should wake up and start working harder.
That’s because this—intelligence, mind augmentation or redesign—is such a new thing. The ultimate opening-up of horizons. Why bring the most idealistically-blind, suffocatingly concrete worldview along into the picture, when we have a chance at transcendence, a chance to pursue infinity?
We need new paradigms, and several of them.