Doug S.
I’m interested in learning more about extremely early readers. I would be grateful if you contacted me at
EconomicProf@Yahoo.com
High-functioning autism might in part be caused by an “overclocking” of the brain.
My evidence:
(1) Autistic children have, on average, larger brains than neurotypical children do.
(2) High-IQ parents are more likely than average to have autistic children.
(3) An extremely disproportionate number of mathematical geniuses have been autistic.
(4) Some children learn to read before they are 2.5 years old. From what I know, all of these early readers turn out to be autistic.
Eliezer-
“What justifies the right of your past self to exert coercive control over your future self? There may be overlap of interests, which is one of the typical de facto criteria for coercive intervention; but can your past self have an epistemic vantage point over your future self?”
In general I agree. But werewolf contracts protect against temporary lapses in rationality. My level of rationality varies. Even assuming that I remain in good health for eternity there will almost certainly exist some hour in the future in which my rationality is much lower than it is today. My current self, therefore, will almost certainly have an “epistemic vantage point over [at least a small part of my] future self.” Given that I could cause great harm to myself in a very short period of time I am willing to significantly reduce my freedom in return for protecting myself against future temporary irrationality.
Having my past self exert coercive control over my future self will reduce my future information costs. For example, when you download something from the web you must often agree to a long list of conditions. Under current law, if these terms and conditions included something like “you must give Microsoft all of your wealth,” the term wouldn’t be enforced. If the law did enforce such terms, then you would have to spend a lot of time examining the terms of everything you agreed to. You would be much better off if your past self prevented your current self from giving away too much in the fine print of agreements.
“If you constrain the contracts that can be written, then clearly you have an idea of good or bad mindstates apart from the raw contract law, and someone is bound to ask why you don’t outlaw the bad mindstates directly.”
The set of possible future mindstates / world state combinations is very large. It’s too difficult to figure out in advance which combinations are bad. It’s much more practical to sign a Werewolf contract which gives your guardian the ability to look at the mindstate / worldstate you are in and then decide if you should be forced to move to a different mindstate.
“why force Phaethon to sacrifice his pride, by putting him in that environment?”
Phaethon placed greater weight on freedom than on pride, and your type of paternalism would reduce his freedom.
But in general I agree that if most humans alive today were put in the Golden Age world, then many would do great harm to themselves, and in such a world I would prefer that the Sophotechs exercise some paternalism. But if such paternalism didn’t exist, then Werewolf contracts would greatly reduce the type of harm you refer to.
ShardPhoenix wrote “Doesn’t the choice of a perfect external regulator amount to the same thing as directly imposing restrictions on yourself, thereby going back to the original problem?”
No, because if there are many possible future states of the world it wouldn’t be practical for you to specify in advance what restrictions you would want in every possible future state. It’s much more practical for you to appoint a guardian who will make decisions after it has observed which state of the world has come to pass. Also, you might pick a regulator who would impose different restrictions on you than you would impose if you acted without a regulator.
ShardPhoenix also wrote “Another way to do it might be to create many copies of yourself (I’m assuming this scenario takes place inside a computer) and let majority (or 2/3s majority or etc) rule when it comes to ‘rescuing’ copies that have made un-self-recoverable errors.”
Good idea, except that in the Golden Age world these copies would become free individuals who could modify themselves. You would also be financially responsible for all of these copies until they became adults.
You are forgetting about “Werewolf Contracts” in the Golden Age. Under these contracts you can appoint someone who can “use force, if necessary, to keep the subscribing party away from addictions, bad nanomachines, bad dreams or other self-imposed mental alterations.”
If you sign such a contract then, unlike what you wrote, it’s not true that “one moment of weakness is enough to betray you.”
Non-lawyers often believe that lawyers and judges believe that laws and contracts should be interpreted literally.
“Eliezer, I’d advise no sudden moves; think very carefully before doing anything.”
But about 100 people die every minute!
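A quick back-of-the-envelope check of that figure, assuming roughly 55 million deaths worldwide per year (an approximate number used only for illustration):

```python
# Rough check of the "about 100 people die every minute" figure,
# assuming roughly 55 million deaths worldwide per year (approximate).
annual_deaths = 55_000_000
minutes_per_year = 365 * 24 * 60          # 525,600 minutes
print(annual_deaths / minutes_per_year)   # roughly 105 deaths per minute
```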
I have signed up with Alcor. When I suggest to other people that they should sign up, the common response has been that they wouldn’t want to be brought back to life after they died.
I don’t understand this response. I’m almost certain that if most of these people found out they had cancer and would die unless they got a treatment and (1) with the treatment they would have only a 20% chance of survival, (2) the treatment would be very painful, (3) the treatment would be very expensive, and (4) if the treatment worked they would be unhealthy for the rest of their lives; then almost all of these cryonics rejectors would take the treatment.
One of the primary costs of cryonics is the “you seem insane” tax one has to pay if people find out you have signed up. Posts like this will hopefully reduce the cryonics insanity tax.
You and Robin seem to be focused on different time periods. Robin is claiming that after ems are created one group probably won’t get a dominant position. You are saying that post-singularity (or at least post one day before the singularity) there will be either one dominant group or a high likelihood of total war. You are not in conflict if there is a large time gap between when we first have ems and when there is a singularity.
I wrote in this post that such a gap is likely: http://www.overcomingbias.com/2008/11/billion-dollar.html
Have you ever had a job where your boss yelled at you if you weren’t continually working? If not, consider getting a part-time job at a fast food restaurant where you work maybe one day a week for eight hours at a time. Fast food restaurant managers are quite skilled at motivating (and please forgive this word) “lazy” youths.
Think of willpower as a muscle. And think of the fast food manager as your personal trainer.
My guess is your problem arises from never having had to stay up all night doing homework that you found boring, pointless, tedious, and very difficult.
“In real life, I’d expect someone to brute-force an unFriendly AI on one of those super-ultimate-nanocomputers, followed in short order by the end of the world.”
If you believe this, you should favor slowing down AI research and speeding up work on enhancing human intelligence. The smarter we are, the more likely we are to figure out Friendly AI before we have true AI.
Also, if you really believe this shouldn’t you want the CIA to start assassinating AI programmers?
Economists do look at innovation. See my working paper “Teaching Innovation in principles of microeconomics classes.”
http://sophia.smith.edu/~jdmiller/teachinginnovation.pdf
The Real Ultimate Power: Reproduction.
Two compatible users of this ability can create new life forms which possess many of the traits of the two users. And many of these new life forms will themselves be able to reproduce, leading to a potential exponential spreading of the users’ traits. Through reproduction users can obtain a kind of immortality.
Sorry, I misread the question. Ignore my last answer.
We should take into account the costs to a scientist of being wrong. Assume that the first scientist would pay a high price if the second ten data points didn’t support his theory. In this case he would only propose the theory if he was confident it was correct. This confidence might come from his intuitive understanding of the theory and so wouldn’t be captured by us if we just observed the 20 data points.
In contrast, if there will be no more data the second scientist knows his theory will never be proved wrong.
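A toy way to see this signaling effect, with payoff numbers that are purely hypothetical: a scientist who faces a stiff penalty for being contradicted will only propose a theory when his private confidence is high, while a scientist whose theory can never be tested again faces no such filter.

```python
# Toy model with made-up payoffs: a scientist proposes a theory only if the
# expected payoff of proposing is positive.
def proposes(confidence, reward=1.0, penalty=10.0):
    """True if proposing the theory has positive expected payoff."""
    return confidence * reward - (1 - confidence) * penalty > 0

# With a penalty ten times the reward, proposing signals confidence above ~0.91.
print(10.0 / (1.0 + 10.0))         # break-even confidence, about 0.91

# The second scientist faces no penalty (no more data will ever arrive),
# so his proposal is consistent with even a coin-flip level of confidence.
print(proposes(0.5, penalty=0.0))  # True
```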
Carl Shulman,
Under either your (1) or your (2), passable programmers contribute to advancement, so Eliezer’s master’s-in-chemistry guy can (if he learns enough programming to become a programming grunt) help advance the AGI field.
The best way to judge productivity differences is to look at salaries. Would Google be willing to pay Eliezer 50 times more than what it pays its average engineer? I know that managers are often paid more than 50 times what average employees are, but do pure engineers ever get 50 times more? I really don’t know.
The benefits humanity has received from innovations have mostly come about through gradual improvements in existing products rather than through huge breakthroughs. For these kinds of innovations, 50 people with the minimal IQ needed to get a master’s degree in chemistry (even if each of them believes that the Bible is the literal word of God) are far more valuable than one atheist with an Eliezer-level IQ.
Based on my limited understanding of AI, I suspect that AGI will come about through small continuous improvements in services such as Google search. Google search, for example, might get better and better at understanding human requests and slowly acquire the ability to pass a Turing test. And Google doesn’t need a “precise theory to permit stable self-improvement” to continually improve its search engine.
“Maybe someday, the names of people who decide not to start nuclear wars will be as well known as the name of Britney Spears.” should read:
“Maybe someday, the names of people who prevent wars from occurring will be as well known as the names of people who win wars.”
If the probability that the LHC’s design is flawed (and that because of this flaw it will never work) is much, much greater than the probability that the LHC would destroy us if it were to function properly, then no matter how many times the LHC failed we should never give significant weight to the anthropic explanation.
Similarly, if the probability that someone is deliberately sabotaging the LHC is relatively high then we should also ignore the anthropic explanation.
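A rough Bayesian sketch of this point, using invented prior probabilities purely for illustration: if both the “design flaw” and the “anthropic” hypotheses predict the observed failures equally well, the posterior odds between them just equal the prior odds, so a tiny prior on the world-destroying scenario stays tiny no matter how many failures pile up.

```python
# Rough Bayesian sketch with invented priors.
# "flaw": the design is flawed and the LHC will never work.
# "anthropic": the LHC would destroy us if it worked, so we only ever
# observe the branches in which it keeps failing.
prior_flaw = 1e-3        # assumed prior on a fatal design flaw
prior_anthropic = 1e-12  # assumed prior on the world-destroying scenario

# Both hypotheses predict the observed failures with probability ~1, so the
# number of failures cancels out and the posterior odds equal the prior odds.
posterior_anthropic = prior_anthropic / (prior_anthropic + prior_flaw)
print(f"P(anthropic | any number of failures) ~ {posterior_anthropic:.1e}")  # ~1e-9
```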
The safest investment is Treasury Inflation-Protected Securities (TIPS). Ordinary investors should avoid investing in derivative securities such as options. If you are rationally pessimistic, go with TIPS.
Also, you would never get the 1/100 odds because, in a sense, money is more valuable in the state in which the economy is doing poorly. So say there are two bonds, each of which in 30 years has a 99% chance of paying $0 and a 1% chance of paying $1,000. The first bond pays off in a state in which the economy has done very poorly, the second in a state in which the economy has done OK. The first bond will cost a lot more than the second.
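A minimal pricing sketch of that comparison, with marginal-utility numbers invented for illustration (and time discounting ignored, since it scales both prices the same way): a dollar delivered in the bad state is worth more, so the bad-state bond commands the higher price.

```python
# Minimal sketch with invented numbers: a dollar is worth more in the state
# where the economy has done poorly, so the bad-state bond costs more even
# though the two payoff distributions are identical.
prob_payoff = 0.01        # each bond pays $1,000 with 1% probability
payoff = 1_000.0

marginal_utility_bad_state = 3.0  # assumed: economy has done very poorly
marginal_utility_ok_state = 1.0   # assumed: economy has done OK

def price(prob, pay, marginal_utility):
    """Probability-weighted payoff scaled by the state's marginal utility."""
    return prob * pay * marginal_utility

print(price(prob_payoff, payoff, marginal_utility_bad_state))  # 30.0
print(price(prob_payoff, payoff, marginal_utility_ok_state))   # 10.0
```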
If you do want to play with derivative securities, just maintain a short position in the S&P 500. If you think the decline will be gradual rather than all at once, you could just keep buying short-term put options on the S&P 500. As the market declines you will gain wealth, which you could use to increase your short position.
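A toy illustration of the rolling-put idea, with the index path, premium, and reinvestment rule all made up: each expiring put pays off as the index falls, and the gains can fund a slightly larger position in the next round.

```python
# Toy illustration of rolling short-term puts as the index declines.
# Index path, premium, and reinvestment rule are all hypothetical.
index_path = [1000, 950, 900, 850]  # assumed S&P 500 levels at successive expiries
premium = 10.0                      # assumed cost of an at-the-money put
contracts = 1.0
wealth = 0.0

for start, end in zip(index_path[:-1], index_path[1:]):
    strike = start                       # buy an at-the-money put each period
    payoff = max(strike - end, 0.0)      # put payoff at expiry
    gain = contracts * (payoff - premium)
    wealth += gain
    contracts += gain / 1000.0           # plow gains into a larger position (toy rule)
    print(f"index {end}: wealth {wealth:.1f}, contracts {contracts:.2f}")
```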
If you are really, really pessimistic, spend your money stocking up on canned goods and guns.