Ayn Rand noticed this too, and was a strong proponent of the idea that colleges indoctrinate as much as they teach. While I believe this is true, and that the indoctrination has a large and mostly negative effect on people who mindlessly accept self-contradicting ideas into their philosophy and moral self-identity, I still believe it’s good to get a college education in STEM. STEM majors will benefit from the useful things they learn more than they will be hurt or held back by the evil, self-contradictory things they “learn” (are indoctrinated with).
I’m strongly in agreement with libertarian investment researcher Doug Casey’s comments on education. I also agree that the average indoctrinated idiot or “pseudo-intellectual” is more likely to have a college degree than not. Unfortunately, these conformity-reinforcing nodes in the system then drag entire networks populated by conformists down to “lowest-common-denominator” pseudo-philosophical thinking: uncritically accepted and regurgitated memes reproduced by political sophistry.
Of course, I think that people who are true self-starters have little need for most courses at most universities, but a real need for specific courses in specific, narrow subject areas. Khan Academy and other MOOCs are now eliminating even that necessity. Generally, the argument is that “it’s a young man’s world.” This will become truer and truer, until the initial learning curve once again becomes a barrier to achieving anything beyond what well-educated “ultra-intelligences” already know, and beyond the experience and wisdom (advanced survival and optimization skills) they already have. I believe that even long past the singularity, there will be a need for direct learning from biology, ecosystems, and other incredibly complex phenomena. Ideally there will be a “core skill set” shared by all human+ sentiences at that time, but there will still be specialization for project-oriented work, driven by the specifics of each complex situation.
For the foreseeable future, the world will likely become a more and more dangerous place, until either the human race is efficiently rubbed out by military AGI (and we all find out what it’s like to be on the receiving end of systemic oppression, like a Jew in Hitler’s Germany or a Native American at Wounded Knee), or there emerges a strongly self-regulating, post-enlightenment marketplace civilization containing many “enlightened” “ultraintelligent machines” that decentralize power away from one another and from their sub-systems.
I’m interested to find out whether those machines will have memorized “Human Action,” or whether they will simply appeal directly to massive data sets gleaned from nature. (Or, more likely, both.)
One aspect of the problem now is that the government encourages a lot of people who should not go to college to go to college, skewing the numbers against the value of legitimate education. Some people have college degrees that mean nothing; a few people have college degrees that are worth every penny. Also, the licensed practice of medicine is a perverse shadow of a profession, built on “jumping through regulatory hoops” that have little or nothing to do with the pure, free-market practice of medicine in “instantly evolving marketplaces at computation-driven innovation speeds.”
Forming a full picture of the incentives that govern U.S. college education, the social expectations that push people toward various majors, and the skill levels actually associated with those majors is a very complex task. The pattern-recognition skills of average human intelligence probably prevent a very useful emergent pattern from being generated at all. Any pattern recognized would likely cover only some small sub-aspect of college education, and even then, human brains wouldn’t do a very good job of seeing its dominant features and analyzing them intelligently.
I’ll leave that to I.J. Good’s “ultraintelligent machines.” Also, I’ve always been far more of a fan of Hayek, but I haven’t read everything that he and Mises wrote, so I’m reserving final judgment on their relative ranking until I have.
Bryan Caplan, Norbert Wiener, Kevin Warwick, Kevin Kelly, Peter Voss (in his latest video interview), and Ray Kurzweil have important ideas that enhance those of Hayek, but Hayek and Mises got things mostly right.
Great to see the quote here. Certainly, the coercively funded institutions whose bars of acceptance are very low are dominant now, and their days are numbered by the rise of cheaper, better alternatives. However, if the bar for what constitutes “renowned universities” is raised, Mises’ statement becomes less true, but only for STEM courses, in which doctors and other licensed professionals often do not participate. Learning how to game a licensing system doesn’t mean you have the best skills the market will support; it means you’re of low enough intelligence to be willing to participate in the suppression of your competition.
A lot of people who are unfamiliar with AI dismiss ideas inherent in the strong AGI argument. I think it’s always good to include the “G” or to qualify your explanation, with something like “the AGI formulation of AI, also known as ‘strong AI.’”
AGI’s intelligence. AI such as Numenta’s Grok can possess unbelievable neocortical intelligence, but without a reptile brain, or a hippocampus and thalamus that shift between goals, it “just follows orders.” In fact, what does the phrase “just following orders” remind you of? I’m not sure that we want a limited-capacity AGI that follows human goal structures. What if those humans are sociopaths?
I think, as does Peter Voss, that AGI is likely to improve human morality, rather than to threaten it.
Agreed, and this represents MIRI’s position well. MIRI is a little light on “bottom-up” paths to AGI that are likely to be benevolent, such as AGIs that are “raised as human children.” I think Voss is even more right about these, given sufficient care, respect, and attention.
I disagree here, for the same reasons Voss disagrees. I think “most” overstates the case for most responsible pathways forward. One pathway that does generate a lot of sociopathic options (ones lacking mirror neurons and human connectivity) is the “algorithmic design” or “provably friendly, top-down design” approach. This is possibly highly ironic.
Does most of MIRI agree with this point? I know Eliezer has written about reasons why this is likely the case, but there appears to be a large “biological school” or “firm takeoff” school at MIRI as well. …And I’m not just talking about Voss’s adherents, either. Some of Moravec’s ideas are similar, as are some of Rodney Brooks’ ideas. (And Philip K. Dick’s “Second Variety” is a more realistic version of this kind of dystopia than “The Terminator.”)
Agreed there. Well-worded. And this should get the journalists thinking at least at the level of Omohundro’s introductory speech.
Also good.
I prefer “might be” or “will likely be” or “has several reasons to be” to the words “will be.” I don’t think LW can predict the future, but I think they can speak very intelligently about predictable risks the future might hold.
I think everyone here agrees with this statement, but there are a few more approaches that I believe are likely to be valid, beyond the “intentionally-built-in-safety” approach. Moreover, these approaches, as Yudkowsky fearfully notes, have less “overhead” than the “intentionally-built-in-safety” approach. However, I believe this is as likely to save us as it is to doom us. I think Voss agrees with this, but I don’t know for sure.
I know that evolution had a tendency to weed out the sociopaths it produced, and it produced them quite frequently. Without that inherent biological expiration date, a big screwup could be an existential risk. I’d like a sentence that summed this last point up, because I think it might get the journalists thinking at a higher level. This is Hans Moravec’s primary point when he urges us to become a “seafaring people” as the “tide of machine intelligence rises.”
If the AGI is “nanoteched,” it could become militarily superior to all humans, without much effort, within a few days of achieving super-intelligence.