The statement the above post refers to:
http://www.singinst.org/overview/whyworktowardthesingularity
This statement seems to me to be extraordinarily (relative to the capabilities of the presumed authors) ungrounded in empiricism. All sorts of ideas in it are framed as declarative fact, when I think they should be more accurately presented as conjecture or aspirations of unknown certainty. I’m very interested in the Singularity Institute people at overcomingbias addressing these concerns directly.
Maybe the Brazilian Appeals Court was right?
http://apnews.myway.com/article/20070718/D8QEV3703.html
I’d like to lobby for a new open thread to be created weekly.
It may be a fair question whether better outcomes result when a substantial portion of the population is taught to follow directions rather than to think critically, sort of like how the Straussians approach religion and how the armed forces approach chain of command.
Robin, good point. At the same time, there might be a large functional vs. optimal gap in the degree to which school is fulfilling its real purposes. Although the best way to optimize it might not be to brainstorm about how to get it closer to its stated purposes -so point well-taken on that end.
Great post, Eliezer (you’ve earned my approval). I think tied for worst school-nurtured habit, along with parroting things back, is the emphasis on what we think we know, as opposed to what we don’t know. I think school science and history subjects would be a lot more interesting, and more accurately presented, if at least equal time were given to all the problems and areas where we don’t know what’s going on, and for which there are various competing theories. Unfortunately one doesn’t usually get this presentation of the state of things until one is working as a research assistant in college or grad school.
Nick and Eliezer, are you still Singularitarians?
http://en.wikipedia.org/wiki/Singularitarian
The idea that people are actively working to bring about self-improving, smarter-than-humanity intelligences scares me, because I think you’re blind to your own ruthless selfishness (not meant pejoratively) and thus think that by creating something smarter than us (and therefore you) it can also attempt to be kind to us, as you perceive yourself to be attempting to be kind to people generally.
In contrast, I don’t see either of you as Gandhi-types (here I’m referring to the archetypal elements of Gandhi’s self-cultivated image, not his actual life-in-practice). It may be a hubris-derived bias that makes you think otherwise. I don’t see any Singularitarians attempting to engage in minimal pleasurable resource use to maximize their ability to save currently existing lives. Instead I see thousands or millions of people dying daily, permanently, while leading Singularitarians enjoy a variety of life’s simple pleasures.
My prescriptive solution: more selfishness, fear, and paranoia on your end. Be thankful that you’re apparently (big caveat) among the smartest entities in apparent reality, and that there’s apparently nothing of much greater intelligence seeking resources in your shared environment. Rather than consciously trying to bring about a singularity, I think we should race against a naturally occurring singularity to understand the various existential threats to us and to minimize them.
At the same time, I think we should try to realistically assess more mundane existential threats and threats to our personal persistence, and try to minimize these too with what seems to be the best proportionate energy and effort.
But the rationalizations for why people are trying to intentionally create a self-improving intelligence smarter than humanity seem to me to be very, very weak, and acting on them could be unnecessarily catastrophic to our existence.
This makes notions of representative democracy, at least in the USA, seem a bit silly:
http://andrewsullivan.theatlantic.com/the_daily_dish/2007/07/one-problem-wit.html
The link details evidence that most Americans have very low knowledge levels of the basics of American government.
Mark, alarmingly high? I don’t see how that probability can be calculated as any higher than the existential threat of quantum flux or other simple, random end to our apparent reality, but I’d be interested in seeing the paper.
Mark, until I read Kurzweil’s interesting argument that we’re most likely living in a simulation (within a simulation, etc., almost all the way down), I thought it more likely than not that there was no intelligent creator of our apparent reality. Now the stronger argument seems to me to be that our apparent reality is a simulation of some other intelligence’s reality, with some abstractions/reductions of their more complex reality, just as we’ve already filled the Earth with various (and increasingly better) simulations of the universe and of our own apparent reality.
forgot to include the link:
http://www.commentarymagazine.com/cm/main/viewArticle.html?id=10916&page=all
I thought Cochran and Harpending’s letter was the most interesting. As for Murray, I think he tends to mythologize more than give primacy to empiricism. I find a Murray vs. Patricia Williams type dialectic to be annoying, performative, and mostly about manufacturing American cultural norms (while drowning out more interesting and critical voices). So I’m glad the discussion on the topics related to human intelligence is expanding, and expanding beyond some narrow Left/Right performance.
I’m interested in responses to these lines:
...
“But I think it’s a bit arbitrary that freedom can be curtailed to forestall death from a threat in one hour’s time, or one day’s time, or one week’s time, but not in a few decades’ time (as would be attempted with the compulsory medical trial participation example).”
and
“I don’t think randomly drafting people into medical experiments to benefit human health/medical knowledge would just help society. I think it helps all of us individuals at risk of being so drafted, provided it’s structured in such a way that our risk of disease and death ends up net lower than if human medical experimentation wasn’t being done in this way.
I’d think economists might look at our humoring of various “moral intuitions”/biases as a sort of luxury spending, or waste. There also might be a cost in terms of human life, health, etc. that could legitimately be described as morally horrific.
It goes to the problem of how people often think shooting and killing 3 people is much worse than fraud, corruption, or waste that wipes out hundreds of millions of dollars of wealth, although objectively that reduction in global wealth might mean a much greater negative impact on human life and health.”
...
I think it’s worth looking into whether waste from eww-bias-derived moral intuitions on topics such as freedom actually results in social waste such that the net freedom for all humans is lower. For example, we all may be more likely to die as a result of failing to have randomized compulsory medical trials at this stage of human history. Thus, by not engaging in this temporary fix, are we reducing net freedom 50 years from now?
The valuing of freedom now more than freedom later (if that’s what this is) parallels a classic bias: preferring less money now to more money later, beyond what the time value of money justifies.
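The time-value-of-money point can be made concrete with a small sketch (the amounts and the 5% rate are purely hypothetical, chosen only for illustration): rational discounting says to compare offers by present value, and preferring the smaller sum beyond that point is the bias in question.

```python
def present_value(amount, annual_rate, years):
    """Discount a future amount back to its value today at a given annual rate."""
    return amount / (1 + annual_rate) ** years

now = 100.0                            # $100 offered today
later = present_value(120.0, 0.05, 1)  # $120 offered in one year, at a 5% rate

# later ≈ 114.29 > 100, so preferring the $100 "now" goes beyond any
# justification the time value of money alone can supply.
print(round(later, 2))
```

The analogous claim about freedom would be that discounting future freedom more steeply than any such rate can justify is a bias, not a principle.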
What’s the point of freedom? Is it god-given? An illusion? Is it utilitarian (for example, promoting innovation and economic growth through market participation) within certain threshold levels, to the degree that it helps maximize our mutual odds of persistence? Personally, I lean at least toward the latter justification for promoting certain amounts of free agency for people in society. But I think it’s a bit arbitrary that freedom can be curtailed to forestall death from a threat in one hour’s time, or one day’s time, or one week’s time, but not in a few decades’ time (as would be attempted with the compulsory medical trial participation example).
I’m sure readers other than me have pet ideas that they’d like to see exposed to community scrutiny so I hope some other readers throw out some bombs, too.
Another interest is a better version of the “Nobel Prize Sperm Bank”: a version individualists could support, structured around voluntary participation and financial incentives. It would incorporate donated (or purchased) eggs, sperm, surrogate wombs, and adoptive parents, with the genetic material selected from those most talented at solving the existential threats humanity faces (not necessarily Nobel Prize winners). The surrogates and adoptive parents would probably be less talented, but still the best at some combination of nurturing and existential-threat-solving. Each offspring would have an endowed trust that gives them financial rewards for each stage of education and professional development they choose to complete, geared toward making them an expert at solving existential threats. I think all this could be done within current laws and social norms in the West. If the singularity is coming, this is all probably unnecessary (or, more ominously, useless), but if there are barriers to AI of which we’re currently unaware, this could speed up solving the challenges of our current aging/SENS problem in particular, and various other difficult existential problems, whether currently known or unknown.
I think this relates to overcoming bias, because I’m not sure of any objections to doing something like this other than a social-aesthetic bias that it would be yucky, or an assumption that people smart at solving the difficult challenges humanity faces arise magically.
Eliezer, I don’t think the approach I’m suggesting needs to be done through government. For example, it could be done extragovernmentally, and then preventing it would require an exercise of government power against the extragovernmental agents carrying it out.
TGGP, it sounds like you’re saying that if the social arrangements that would optimize your personal odds of persistence (and I understand that maximizing general odds is different from maximizing personal odds) become too yucky, then you’d rather die (or at least accept increased odds of death)? I can’t say I relate to that point of view at all.
Nathan, I think the reason we don’t have compulsory medical trials is probably explained more by “functional not optimal” than the possibility that it doesn’t pass cost-benefit. Here I’m specifically making randomized compulsory medical trials contingent on the degree that they pass cost-benefit. It seems to me to be such a naturally beneficial idea (at least on some levels) that I’m curious if utilitarians like Singer have at least done the analysis.
Anders, Thanks for the really interesting response. Perhaps I should be pitching this idea to leading utilitarians and finding out the groundwork they’ve already laid in this area.
I do think many “moral intuitions” fall neatly with already articulated biases, such as Eww bias.
One thing I’m not sure if you picked up on from my post. I don’t think randomly drafting people into medical experiments to benefit human health/medical knowledge would just help society. I think it helps all of us individuals at risk of being so drafted, provided it’s structured in such a way that our risk of disease and death ends up net lower than if human medical experimentation wasn’t being done in this way.
I’d think economists might look at our humoring of various “moral intuitions”/biases as a sort of luxury spending, or waste. There also might be a cost in terms of human life, health, etc. that could legitimately be described as morally horrific.
It goes to the problem of how people often think shooting and killing 3 people is much worse than fraud, corruption, or waste that wipes out hundreds of millions of dollars of wealth, although objectively that reduction in global wealth might mean a much greater negative impact on human life and health.
Adding Adam Crowe as another person who I’d like to hear from on this topic.
I agree with Nathan, but I think 1 or 2 per week would be ideal. What do people think about moving to a system of laws and social norms focused on rationally minimizing our odds of death or harm, rather than on maintaining certain principles?
To take an example that gets extreme negative reactions: human societies don’t force random sets of people involuntarily into medical experiments that could adversely impact their health, even though every individual human might have better expected health outcomes if we did have such a policy. Does that make us currently irrational for not pursuing such a policy? I think it does. If each individual human would, odds-wise, be better off healthwise if we engaged in compulsory drafting into medical experiments than if we didn’t, then I think it’s irrational for human societies not to do this. And I think this general principle applies widely to other areas of rule-making and social policy.
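The expected-value structure of this argument can be sketched in a few lines. Every number below is hypothetical, chosen only to show the shape of the comparison (a tiny chance of being drafted times the added risk, against a small risk reduction shared by everyone), not to estimate real risks.

```python
# All figures are invented for illustration; none estimate real-world risks.
population = 300_000_000
drafted = 10_000                 # hypothetical draftees into trials per year
trial_death_risk = 0.001         # assumed added annual death risk if drafted
baseline_death_risk = 0.008      # assumed annual death risk without the policy
risk_reduction = 0.0005          # assumed reduction for everyone from faster research

p_drafted = drafted / population

# Expected annual death risk for a random individual under each policy:
without_policy = baseline_death_risk
with_policy = (baseline_death_risk - risk_reduction) + p_drafted * trial_death_risk

print(with_policy < without_policy)  # True under these assumed numbers
```

Under these assumptions the policy lowers each individual’s expected risk; whether real numbers would come out the same way is exactly the empirical question the comment is asking utilitarians to analyze.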
Is any expert in the fields of applied ethics and social policy studying this? Or has anyone done so in the past (no cheap throwaway lines about the Nazis or Tuskegee, please)? Directions to links and publications are welcome.
I’m especially interested in responses from Anders Sandberg and TGGP. Contributors are welcome to respond in this thread on this topic anonymously for obvious reasons.
Anna, If you’re talking about real dragons, the theory that made the most intuitive sense to me (I think I read it in an E.O. Wilson writing?) is that dragons are an amalgamation of things we’ve been naturally selected to biologically fear: snakes and birds of prey (I think rats may have also been part of the list). Dragons don’t incorporate an element of them that looks like a handgun or a piping hot electric stove, probably because they’re too new as threats for us to be naturally selected to fear things with those properties.