“X as a Y” is an academic idiom. Sounds wrong for the target audience.
Not being able to have any children, or as many as you (later realised you) wanted.
The claim is that it was obvious in advance. The whole reason AI-boxing is interesting is that the AI successes were unexpected, in advance.
the thesis was always glaringly obvious to anyone who was even paying attention to what superintelligence meant
I don’t see that it was obvious, given that none of the AI players are actually superintelligent.
This discussion isn’t getting anywhere, so, all the best :)
OK, demonstrate that the idea of deterrent exists somewhere within their brains.
Evolutionary game theory and punishment of defectors is all the answer you need. You want me to point at a deterrent region, somewhere to the left of Broca’s?
You say that science is useful for truths about the universe, whereas morality is useful for truths useful only to those interested in acting morally. It sounds like you agree with Harris that morality is a subcategory of science.
something can be good science without in any way being moral in a sense that Sam Harris would recognise as ‘moral’.
Still, so what? He’s not saying that all science is moral (in the sense of “benevolent” and “good for the world”). That would be ridiculous, and would be orthogonal to the argument of whether science can address questions of morality.
If you claim that evolutionary reasons are a person’s ‘true preferences’
No, of course not. It’s still wrong to say that deterrent is nowhere in their brains.
Concerning the others:
Scientific inquiry perceives facts which are true and useful except for goals which run directly counter to science. Morality perceives ‘facts’ which are only useful to those who wish to follow a moral route.
I don’t see what “goals which run directly counter to science” could mean. Even if you want to destroy all scientists, are you better off knowing some science or not? Anyway, how does this counter anything Harris says?
Although most people would be outraged, they probably wouldn’t call it unscientific.
Again, so what? How does anything here prevent science from talking about morality?
As far as I can tell, Harris does not account for the well-being of animals.
He talks about well-being of conscious beings. It’s not great terminology, but your inference is your own.
I disagree with all your points, but will stick to 4: “Deterrent is nowhere in their brains” is wrong—read about altruism, game theory, and punishment of defectors, to understand where the desire comes from.
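To make the game-theory point slightly more concrete, here is a minimal payoff sketch (all numbers are my own illustrative assumptions): once defectors are reliably punished by the group, defection stops paying, which is one standard account of where a taste for deterrence could come from.

```python
# Toy payoff comparison with invented numbers: punishment of defectors
# changes the incentives so that cooperation beats defection.
temptation = 5       # payoff for defecting against a cooperator
reward = 3           # payoff for mutual cooperation
punish_prob = 0.8    # assumed chance a defector is punished by third parties
fine = 4             # assumed cost imposed on a punished defector

defect_payoff = temptation - punish_prob * fine   # 5 - 3.2 = 1.8
cooperate_payoff = reward                         # 3

print("defection pays" if defect_payoff > cooperate_payoff else "cooperation pays")
```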
Nevertheless, moral questions aren’t (even potentially) empirical, since they’re obviously seeking normative and not factual answers.
You can’t go from an is to an ought. Nevertheless, some people go from the “well-being and suffering” idea to ideas like consequentialism and utilitarianism, and from there the only remaining questions are factual. Other people are prepared to see a factual basis for morality in neuroscience and game theory. These are regular topics of discussion on LW. So calling it “obvious” begs the whole question.
control over the lower level OS allows for significant performance gains
Even if you got a 10^6 speedup (you wouldn’t), that gain is not compoundable. So it’s irrelevant.
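To illustrate the “not compoundable” point with made-up numbers: a one-off constant-factor speedup, however large, is a fixed multiplier, while even a modest gain that can be re-applied to the improvement process itself keeps compounding.

```python
# Toy comparison, purely illustrative numbers.
capability = 1.0
one_off = capability * 1e6          # constant factor, applied exactly once

recursive = 1.0
for generation in range(100):
    recursive *= 1.5                # 50% gain per generation, compounding

print(f"one-off factor:   {one_off:.3e}")    # 1.000e+06, and it stays there
print(f"after 100 rounds: {recursive:.3e}")  # ~4.066e+17, and still growing
```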
access to a comparatively simple OS and tool chain allows the AI to spread to other systems.
Only if those other systems are kind enough to run the O/S you want them to run.
The unstated assumption is that a non-negligible proportion of the difficulty in creating a self-optimising AI has to do with the compiler toolchain. I guess most people wouldn’t agree with that. For one thing, even if the toolchain is a complicated tower of Babel, why isn’t it good enough to just optimise one’s source code at the top level? Isn’t there a limit to how much you can gain by running on top of a perfect O/S?
(BTW the “tower of Babel” is a nice phrase which gets at the sense of unease associated with these long toolchains, eg Python → RPython → LLVM → ??? → electrons.)
Agreed, but I think given the kind-of self-deprecating tone elsewhere, this was intended as a jibe at OP’s own superficial knowledge rather than at the transportation systems of developing countries.
Ok, but are we optimising the expected case or the worst case? If the former, then the probability of those things happening with no special steps against them is relevant. To take the easiest example: would postponing the “take over the universe” step for 300 years make a big difference in the expected amount of cosmic commons burned before takeover?
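To make the expected-case framing concrete (with completely made-up probabilities and costs): the expected cost weights each outcome by how likely it is, whereas the worst-case framing ignores the probabilities entirely.

```python
# Illustrative numbers only.
scenarios = [
    # (probability, cost in arbitrary units of cosmic commons burned)
    (0.01, 1_000_000),   # early unchecked takeover
    (0.99, 10),          # nothing much happens for 300 years
]

expected_cost = sum(p * cost for p, cost in scenarios)
worst_case = max(cost for _, cost in scenarios)

print(f"expected cost: {expected_cost:,.0f}")  # ~10,010
print(f"worst case:    {worst_case:,}")        # 1,000,000
```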
That page mentions “common sense” quite a bit. Meanwhile, this is the latest research in common sense and verbal ability.
I don’t think it’s useful to think about constructing priors in the abstract. If you think about concrete examples, you see lots of cases where a reasonable prior is easy to find (eg coin-tossing, and the typical breast-cancer diagnostic test example). That must leave some concrete examples where good priors are hard to find. What are they?
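For concreteness, the diagnostic-test case works like this (the prevalence, sensitivity and false-positive rate below are illustrative assumptions, not figures from any particular study): the prior comes straight from population prevalence, and Bayes’ theorem turns it into a posterior.

```python
# Standard diagnostic-test example: an easy-to-find prior (base rate)
# combined with test accuracy via Bayes' theorem. Numbers are made up.
prior = 0.01          # P(cancer): assumed base rate in the screened population
sensitivity = 0.80    # P(positive | cancer)
false_pos = 0.10      # P(positive | no cancer)

p_positive = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / p_positive

print(f"P(cancer | positive test) = {posterior:.3f}")  # ~0.075
```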
To be clear, the idea is not that trying to deliberately slow world economic growth would be a maximally effective use of EA resources and better than current top targets; this seems likely to have very small marginal effects, and many such courses are risky. The question is whether a good and virtuous person ought to avoid, or alternatively seize, any opportunities which come their way to help out on world economic growth.
It sounds like status quo bias. If growth were currently 2% higher, should the person then seize on growth-slowing opportunities?
One answer: it could be that any effort is likely to have little success in slowing world growth, but a large detrimental effect on the person’s other projects. Fair enough, but presumably it applies equally to speeding growth.
Another: an organisation that aspires to political respectability shouldn’t be seen to be advocating sabotage of the economy.
Status is far older than Hanson’s take on it, or than Hanson himself. But the idea of seeing status signalling everywhere, as an explanation for everything—that is characteristically Hanson. (Obviously, don’t take my simplification seriously.)
Yes, but the next line mentioned PageRank, which is designed to deal with those types of issues. Lots of inward links doesn’t mean much unless the people (or papers, or whatever, depending on the semantics of the graph) linking to you are themselves highly ranked.
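A toy power-iteration sketch of that idea (the graph and damping factor are invented for illustration): a node’s score depends on the scores of the nodes linking to it, not just on how many there are.

```python
# Toy PageRank: a link from a highly-ranked node is worth more than
# many links from low-ranked ones. Graph and damping factor are made up.
links = {               # node -> list of nodes it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

damping = 0.85
nodes = list(links)
rank = {n: 1.0 / len(nodes) for n in nodes}

for _ in range(50):
    new_rank = {}
    for n in nodes:
        incoming = sum(rank[m] / len(links[m]) for m in nodes if n in links[m])
        new_rank[n] = (1 - damping) / len(nodes) + damping * incoming
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))
```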
I like the principle, but 5% is “extremely unlikely”? Something that happens on the way to work once every three weeks?