Hypercomputation doesn’t exist. There’s no evidence for it, nor will there ever be. It’s an irrelevance that few care about. Solomonoff induction is right about this.
Also, competition between humans (with machines as tools) seems far more likely to kill people than a superintelligent runaway. However, it’s (arguably) not so likely to kill everybody. MIRI appears to be focussing on the “killing everybody case”. That is because—according to them—that is a really, really bad outcome.
The idea that losing 99% of humans would be acceptable losses may strike laymen as crazy. However, it might appeal to some of those in the top 1%. People like Peter Thiel, maybe.
Right. So, if we are playing the game of giving counter-intuitive technical meanings to ordinary English words, humans have thrived for millions of years—with their “UnFriendly” peers and their “UnFriendly” institutions. Evidently, “Friendliness” is not necessary for human flourishing.
“8 lives saved per dollar donated to the Machine Intelligence Research Institute. — Anna Salamon”
Nor does the fact that evolution ‘failed’ in its goals in all the people who voluntarily abstain from reproducing (and didn’t, e.g., hugely benefit their siblings’ reproductive chances in the process) imply that evolution is too weak and stupid to produce anything interesting or dangerous.
Failure is a necessary part of mapping out the area where success is possible.
Uploads first? It just seems silly to me.
The movie features a luddite group assassinating machine learning researchers—not a great meme to spread around IMHO :-(
Slightly interestingly, their actions backfire, and they accelerate what they seek to prevent.
Overall, I think I would have preferred Robopocalypse.
One other point I should make: this isn’t just about “someone” being wrong. It’s about an author frequently cited by people in the LessWrong community on an important issue being wrong.
Not experts on the topic of diet. I associated with members of the Calorie Restriction Society some time ago. Many of them were experts on diet. IIRC, Taubes was generally treated as a low-grade crackpot by those folk: barely better than Atkins.
To learn more about this, see “Scientific Induction in Probabilistic Mathematics”, written up by Jeremy Hahn
This line:
Choose a random sentence from S, with the probability that O is chosen proportional to u(O) − 2^-length(O).
...looks like a subtraction operation to the reader. Perhaps use “i.e.” instead of the dash.
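For what it’s worth, on the reading where the dash is meant as “i.e.” rather than a minus sign, the step just weights each sentence by u(O) = 2^-length(O). A toy sketch of that sampling step, with a made-up sentence set purely for illustration:

```python
import random

def sample_sentence(sentences):
    # Sample a sentence O from S with probability proportional to
    # u(O) = 2 ** -length(O) (reading the dash as "i.e.", not subtraction).
    weights = [2.0 ** -len(o) for o in sentences]
    return random.choices(sentences, weights=weights, k=1)[0]

S = ["0=0", "1+1=2", "forall x. x+0=x"]  # made-up sentence set
print(sample_sentence(S))
```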
The paper appears to be arguing against the applicability of the universal prior to mathematics.
However, why not just accept the universal prior—and then update on learning the laws of mathematics?
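Something like this toy sketch, where the hypotheses and the learned fact are placeholders rather than anything from the paper: start from a length-weighted prior, then condition on each mathematical fact as it is learned and renormalise.

```python
def length_prior(hypotheses):
    # Solomonoff-style prior: unnormalised weight 2^-length(h).
    return {h: 2.0 ** -len(h) for h in hypotheses}

def update(prior, is_consistent):
    # Bayesian update with a 0/1 likelihood: discard hypotheses that
    # contradict the newly learned fact, then renormalise the rest.
    posterior = {h: p for h, p in prior.items() if is_consistent(h)}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

hypotheses = ["h1", "h22", "h333"]               # placeholder "theories"
prior = length_prior(hypotheses)
posterior = update(prior, lambda h: h != "h22")  # pretend a fact rules out h22
print(posterior)
```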
why did you bring up the ‘society’ topic in the first place?
A society leads to a structure with advantages of power and intelligence over individuals. It means that we’ll always be able to restrain agents in test harnesses, for instance. It means that the designers will be smarter than the designed—via collective intelligence. If the designers are smarter than the designed, maybe they’ll be able to stop them from wireheading themselves.
If wireheading is plausible, then it’s equally plausible given an alien-fearing government, since wireheading the human race needn’t get in the way of putting a smart AI in charge of neutralizing potential alien threats.
What I was talking about was “the possibility of a totalitarian world government wireheading itself”. The government wireheading itself isn’t really the same as humans wireheading. However, probably any wireheading increases the chances of being wiped out by less-stupid aliens. Optimizing for happiness and optimizing for survival aren’t really the same thing. As Grove said, only the paranoid survive.
We can model induction in a monistic fashion pretty well—although at the moment the models are somewhat lacking in advanced inductive capacity/compression abilities. The models are good enough to be built and actually work.
Agents wireheading themselves or accidentally performing fatal experiments on themselves will probably be handled in much the same way that biology has handled it to date—e.g. by liberally sprinkling aversive sensors around the creature’s brain. The argument that such approaches do not scale up is probably wrong—designers will always be smarter than the creatures they build—and will successfully find ways to avoid undesirable self-modifications. If there are limits, they are obviously well above the human level—since individual humans have very limited self-brain-surgery abilities. If this issue does prove to be a significant problem, we won’t have to solve it without superhuman machine intelligence.
The vision of an agent improving its own brain is probably wrong: once you have one machine intelligence, you will soon have many copies of it—and a society of intelligent machines. That’s the easiest way to scale up—as has been proved in biological systems again and again. Agents will be produced in factories run by many such creatures. No individual agent is likely to do much in the way of fundamental redesign on itself. Instead groups of agents will design the next generation of agent.
That still leaves the possibility of a totalitarian world government wireheading itself—or performing fatal experiments on itself. However, a farsighted organization would probably avoid such fates—in order to avoid eternal oblivion at the hands of less short-sighted aliens.
Naturalized induction is an open problem in Friendly Artificial Intelligence. The problem, in brief: Our current leading models of induction do not allow reasoners to treat their own computations as processes in the world.
I checked. These models of induction apparently allow reasoners to treat their own computations as modifiable processes:
Orseau L., Ring M. - Self-Modification and Mortality in Artificial Agents, Artificial General Intelligence (AGI) 2011, Springer, 2011.
Ring M., Orseau L. - Delusion, Survival, and Intelligent Agents, Artificial General Intelligence (AGI) 2011, Springer, 2011.
Deutsch is interesting. He seems very close to the LW camp, and I think he’s someone LWers should at least be familiar with.
Deutsch seems pretty clueless in the section quoted below. I don’t see why students should be interested in what he has to say on this topic.
It was a failure to recognise that what distinguishes human brains from all other physical systems is qualitatively different from all other functionalities, and cannot be specified in the way that all other attributes of computer programs can be. It cannot be programmed by any of the techniques that suffice for writing any other type of program. Nor can it be achieved merely by improving their performance at tasks that they currently do perform, no matter by how much.
There never was a bloggingheads—AFAIK. There is: Yudkowsky vs Hanson on the Intelligence Explosion—Jane Street Debate. However, I’d be surprised if Yudkowsky makes the same silly mistake as Deutsch. Yudkowsky knows some things about machine intelligence.
But in reality, only a tiny component of thinking is about prediction at all, let alone prediction of our sensory experiences.
My estimate is 80% prediction, with the rest evaluation and tree pruning.
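As a toy illustration of that split (not a claim about how brains actually do it): in a small game-tree search, prediction simulates where a move leads, evaluation scores the resulting positions, and pruning drops branches that aren’t worth expanding.

```python
# Toy game: a state is an integer; a move adds 1, 2 or 3; we want to land
# as close to 10 as possible within a fixed number of moves.

def predict(state, move):    # prediction: what position a move leads to
    return state + move

def evaluate(state):         # evaluation: how good a position is
    return -abs(10 - state)

def prune(moves):            # tree pruning: cheaply discard some branches
    return moves[:2]

def search(state, depth):
    if depth == 0:
        return evaluate(state)
    return max(search(predict(state, m), depth - 1) for m in prune([1, 2, 3]))

print(search(0, 4))          # best reachable score from state 0 in 4 moves
```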
He also says confusing things about induction being inadequate for creativity which I’m guessing he couldn’t support well in this short essay (perhaps he explains better in his books).
He does—but it isn’t pretty.
Here is my review of The Beginning of Infinity: Explanations That Transform the World.
I remember Eliezer making the same point in a bloggingheads video with Robin Hanson.
A Hanson/Yudkowsky bloggingheads?!? Methinks you are mistaken.
So:
What most humans tell you about their goals should be interpreted as public relations material;
Most humans are victims of memetic hijacking;
To give an example of a survivalist, here’s an individual who proposes that we should be highly prioritizing species-level survival:
As you say, this is not a typical human being—since Nick says he is highly concerned about others.
There are many other survivalists out there, many of whom are much more concerned with personal survival.
If you’re dealing with creatures good enough at modeling the world to predict the future and transfer skills, then you’re dealing with memetic factors as well as genetic. That’s rather beyond the scope of natural selection as typically defined.
What?!? Natural selection applies to both genes and memes.
I suppose there are theoretical situations where that argument wouldn’t apply
I don’t think you presented a supporting argument. You referenced “typical” definitions of natural selection. I don’t know of any definitions that exclude culture. Here’s a classic one from 1970 - which explicitly includes cultural variation. Even Darwin recognised this, saying: “The survival or preservation of certain favoured words in the struggle for existence is natural selection.”
If anyone tells you that natural selection doesn’t apply to cultural variation, they are simply mistaken.
I’m having trouble imagining an animal smart enough to make decisions based on projected consequences more than one selection round out, but too dumb to talk about it.
I recommend not pursuing this avenue.
It isn’t a testable hypothesis. Why would anyone attempt to assign probabilities to it?