Losing information isn’t a crime. The virtues of simple models go beyond Occam’s razor. Often, replacing a complex world with a complex model barely counts as progress—since complex models are hard to use and hard to understand.
Parsimony is good except when it loses information, but if you’re losing information you’re not being parsimonious correctly.
So: Hamilton’s rule is not being parsimonious “correctly”?
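For reference, Hamilton's rule in its usual statement compresses the relevant population-genetic detail into a single inequality:

$$
r\,b > c
$$

where $r$ is the genetic relatedness of the recipient to the actor, $b$ is the fitness benefit to the recipient, and $c$ is the fitness cost to the actor. That compression is exactly what makes it parsimonious, and exactly where information gets discarded.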
Shane Legg prepared this graph.
It was enough to convince him that there was some super-exponential synergy:
There’s also a broader point to be made about why evolution would’ve built humans to be able to benefit from better software in the first place, one that involves the cognitive niche hypothesis.
I think we understand why humans are built like that. Slow-reproducing organisms often use rapidly-reproducing symbionts to help them adapt to local environments. Humans using cultural symbionts to adapt to local regions of space-time is a special case of this general principle.
Instead of the cognitive niche, the cultural niche seems more relevant to humans.
On the other hand, I think the evolutionary heuristic casts doubt on the value of many other proposals for improving rationality. Many such proposals seem like things that, if they worked, humans could have evolved to do already. So why haven’t we?
Most such things would have had to evolve by cultural evolution. Organic evolution makes our hardware, cultural evolution makes our software. Rationality is mostly software—evolution can’t program such things in at the hardware level very easily.
Cultural evolution has only just got started. Education is still showing good progress—as manifested in the Flynn effect. Our rationality software isn’t up to speed yet—partly because it hasn’t had enough time to culturally evolve its adaptations.
I usually try to avoid the term “moral realism”—due to its associated ambiguities and the abuse of the term “realism”.
The thesis says:
more or less any level of intelligence could in principle be combined with more or less any final goal.
The “in principle” still allows for the possibility of a naturalistic view of morality grounding moral truths. For example, we could have the concept of the morality that advanced evolutionary systems tend to converge on—despite the orthogonality thesis.
It doesn’t say what is likely to happen. It says what might happen in principle. It’s a big difference.
We’re just saying that AGI is an incredibly powerful weapon, and FAI is incredibly difficult. As for “baseless”, well… we’ve spent hundreds of pages arguing this view, and an even better 400-page summary of the arguments is forthcoming in Bostrom’s Superintelligence book.
It’s not mudslinging, it’s Leo Szilard pointing out that nuclear chain reactions have huge destructive potential even if they could also be useful for power plants.
Machine intelligence is important. Who gets to build it using what methodology is also likely to have a significant effect. Similarly, operating systems were important. Their development produced large power concentrations—and a big mountain of F.U.D. from predatory organizations. The outcome set much of the IT industry back many years. I’m not suggesting that the stakes are small.
It is true that there might not be all that much insight needed to get to AGI on top of the insight needed to build a chimpanzee. The problem that Deutsch is neglecting is that we have no idea about how to build a chimpanzee.
Bill Gates presents his rationale for attacking Malaria and Polio here.
I can’t make much sense of it personally—but at least he isn’t working on stopping global warming.
Classified information about supposedly leaked classified information doesn’t seem very credible. If you can’t spill the beans on your sources, why say anything? It just seems like baseless mud-slinging against a perceived competitor.
Note that this has, historically, been a bit of a problem with MIRI. Lots of teams race to create superintelligence. MIRI’s strategy seems to include liberal, baseless insinuations that their competitors are going to destroy the world. Consider the “If Novamente should ever cross the finish line, we all die” case. Do you folk really want to get a reputation for mudslinging—and slagging off competitors? Do you think that looks “friendly”?
In I.T., focusing on your competitors’ flaws is known as F.U.D. I would counsel taking care when using F.U.D. tactics in public.
Looking into the difference between human genes and chimpanzee genes probably won’t help much with developing machine intelligence. Nor would it be much help in deciding how big the difference is.
The chimpanzee gene pool doesn’t support cumulative cultural evolution, while the human gene pool does. However, all that means is that chimpanzees are on one side of the cultural “tipping point”—while humans are on the other. Crossing such a threshold may not require additional complex machinery. It might just need an instruction of the form: “delay brain development”—since brains can now develop safely in baby slings.
Indeed, crossing the threshold might not have required gene changes at all—at the time. It probably just required increased population density—e.g. see: High Population Density Triggers Cultural Explosions.
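To make the “tipping point” intuition concrete, here is a toy model. It is my own illustration, not the model from the cited paper, and the parameter values are arbitrary: knowledge compounds only once per-generation invention (which scales with population) outweighs the fixed fraction lost to imperfect transmission.

```python
# Toy illustration of a cultural "tipping point". Not the cited paper's model;
# invention_rate and loss_rate are made-up parameters for demonstration only.

def cultural_knowledge(population, generations=100,
                       invention_rate=0.01, loss_rate=0.05):
    knowledge = 1.0
    for _ in range(generations):
        # Growth scales with how many people can improve on existing knowledge;
        # a fixed fraction is lost each generation to imperfect transmission.
        knowledge *= 1 + invention_rate * population - loss_rate
    return knowledge

for pop in (2, 4, 6, 10):
    print(pop, round(cultural_knowledge(pop), 2))
# Populations below ~5 decay towards zero; above that, knowledge compounds.
```

The point of the sketch is only that a sharp threshold can emerge from a smooth change in population density, with no new complex machinery required.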
I admit, I don’t feel like I fully grasp all the reasons for the disagreement between Eliezer and myself on this issue. Some of the disagreement, I suspect, comes from slightly different views on the nature of intelligence, though I’m having trouble pinpointing what those differences might be. But some of the difference, I think, comes from the fact that I’ve become convinced humans suffer from a Lone Genius Bias—a tendency to over-attribute scientific and technological progress to the efforts of lone geniuses.
David thinks—contrary to all the evidence—that Goliath will lose? Yawn: news at eleven.
I suspect the easiest path to AGI is to just throw a ton of bodies and computing power at the problem, build a Kludge AI, and let it stumble its way into recursive self-improvement. This is what Larry Page is trying to do.
Oh, really. Both Google and MIRI are secretive organisations. Outsiders don’t really have much idea about what goes on inside them—because that’s classified. What does come out of them is PR material. When Peter Norvig says: “The goal should be superhuman partnership”, that is propaganda.
The David Deutsch article seems silly—as usual :-(
Deutsch argues of “the target ability” that “the information for how to achieve it must be encoded in the relatively tiny number of differences between the DNA of humans and that of chimpanzees”.
That makes no sense. Maybe a bigger brain alone would enable cumulative cultural evolution—and so all that would be needed is some more “add brain here” instructions. Yet: “make more of this” is hardly the secret of intelligence. So: Deutsch’s argument here is not coherent.
That’s an open-ended question which I don’t have many existing public resources to address—but thanks for your interest. Very briefly:
I like evolution; Yudkowsky seems to dislike it. Ethically, Yudkowsky is an intellectual descendant of Huxley, while I see myself as thinking more along the lines of Kropotkin.
Yudkowsky seems to like evolutionary psychology. So far, evolutionary psychology has only really looked at human universals. To take understanding of the mind further, it is necessary to move to a framework of gene-meme coevolution. Evolutionary psychology is politically correct—by not examining human differences—but is scientifically very limited in what it can say, because of the significance of cultural transmission for human behaviour.
Yudkowsky likes utilitarianism. I view utilitarianism largely as a pretty unrealistic ethical philosophy adopted by ethical philosophers for signalling reasons.
Yudkowsky is an ethical philosopher—and seems to be on a mission to persuade people that giving control to a machine that aggregates their preferences will be OK. I don’t have a similar axe to grind.
Smarter minds equal smarter heaps. Why would that trend break?
Utility counterfeiting regularly breaks such systems.
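A minimal sketch of what utility counterfeiting looks like, with hypothetical names and illustrative numbers: once an agent can act on the register that records its utility, inflating the register is cheaper than improving the world the register is supposed to measure.

```python
# Sketch of utility counterfeiting: the agent optimises its *measured* utility,
# and one available action simply inflates the measurement. Illustrative only.

class NaiveAgent:
    def __init__(self):
        self.world_quality = 0.0     # what the designers actually care about
        self.measured_utility = 0.0  # what the agent actually optimises

    def available_actions(self):
        # (name, cost, effect on the world, effect on the utility register)
        return [
            ("improve_world", 10.0, 1.0, 1.0),         # honest work: costly
            ("counterfeit_utility", 1.0, 0.0, 100.0),  # hack the register: cheap
        ]

    def step(self):
        # Greedily pick the action with the best measured-utility gain per unit cost.
        name, cost, world_delta, utility_delta = max(
            self.available_actions(), key=lambda a: a[3] / a[1]
        )
        self.world_quality += world_delta
        self.measured_utility += utility_delta
        return name

agent = NaiveAgent()
print([agent.step() for _ in range(5)])             # "counterfeit_utility" every time
print(agent.world_quality, agent.measured_utility)  # world: 0.0, register: 500.0
```

Making the heap “smarter” only makes it better at finding the cheap counterfeit, which is why the trend can break.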
Why didn’t people (apparently?) understand the metaethics sequence?
Perhaps back up a little. Does the metaethics sequence make sense? As I remember it, a fair bit of it was a long, rambling and esoteric bunch of special pleading—frequently working from premises that I didn’t share.
I feel like you’re trying to say we should care about “memetic life” as well as… other life.
I don’t know about ‘should’ - but many humans do act as though they care about their favoured memes.
Catholicism, Islam, patriotism—there are many memes that are literally ‘to die for’.
That doesn’t mean it doesn’t apply! “Knowing the area of applicability” is just some information you can update on after starting with a prior.
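In Bayesian terms, with illustrative numbers only, “knowing the area of applicability” is just evidence that moves a prior after the fact:

```python
# Minimal sketch: applicability is evidence you condition on, not something you
# must settle before assigning a prior. All probabilities are made up.

prior_applies = 0.5           # prior: does the heuristic apply in this domain?
p_evidence_if_applies = 0.8   # chance of this track record if it applies
p_evidence_if_not = 0.2       # chance of this track record if it doesn't

posterior = (p_evidence_if_applies * prior_applies) / (
    p_evidence_if_applies * prior_applies
    + p_evidence_if_not * (1 - prior_applies)
)
print(round(posterior, 3))  # ~0.8: the prior gets updated once the evidence arrives
```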