The Metaethics Sequence is notoriously ambiguous: what follows is my best attempt at modelling what Eliezer believes (which is not necessarily what I believe, although I agree with most of it).
1) There is a deep psychological uniformity of values shared by all human beings on things like fairness, happiness, freedom, etc.
2) This psychological layer is so ingrained in us that we cannot change it at will: it is impossible, for example, to make “fairness” mean “distributed according to a geometric series” instead of “distributed equally”.
3) Concepts like morality, should, better, fair, etc. should be understood as the basic computations, instantiated by the class of human beings, that motivate their actions. That is: morality = actions(humans).
4) Morality, being a computation, has the ontological status of being ‘subjectively objective’: it is a well-defined point in the space of all possible algorithms, but it does not become a physical pattern unless there is an agent performing it.
To answer your questions:
The most important difference from your point of view is that in (what I believe is) Eliezer’s metaethics, morality is inevitable: since we cannot step outside our moral framework, it is epistemically necessary that whenever we encounter some truly alien entity, a conflict ensues.
That would require a post in itself. I like Eliezer’s approach, although I think it should be relaxed a little; the general framework, I believe, is correct.
I think that “better” here is doing all the work: by point 3, better is to be understood in human terms. You can understand that other species will behave according to a different set of computations, but those computations aren’t moral, and by point 2 it doesn’t even make sense to ascribe a different morality to a paperclip maximizer. Its computations might be paperclippy, but they surely aren’t moral.
Of course a paperclip maximizer will not be moved by our moral arguments (since it’s paperclippy and not moral), and of course our computation is right and Clippy’s computation is wrong (because right/wrong is itself a computation that is part of morality).
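To make that concrete, here is a toy sketch of my own (not anything from the Sequence; the value names and weights are invented purely to show the structure): morality is one particular, fixed computation, Clippy runs a different one, and “right/wrong” is a verdict produced *inside* the first computation rather than a neutral standard floating above both.

```python
# Toy illustration only: "morality" as one fixed, hugely simplified evaluation
# function, and "paperclippy" as a different, equally well-defined function.
# The inputs and weights are made up for the sake of the example.

def morality(outcome):
    """One specific point in algorithm-space: 'right' just means
    scoring well under THIS computation."""
    return (0.4 * outcome["fairness"]
            + 0.4 * outcome["happiness"]
            + 0.2 * outcome["freedom"])

def paperclippy(outcome):
    """Clippy's computation: perfectly well defined, but it is not
    morality, any more than multiplication is addition."""
    return outcome["paperclips"]

world = {"fairness": 0.9, "happiness": 0.8, "freedom": 0.7, "paperclips": 0.0}
tiled = {"fairness": 0.0, "happiness": 0.0, "freedom": 0.0, "paperclips": 1e9}

# "Our computation is right and Clippy's is wrong" is a judgement made by
# morality(); Clippy evaluates with a different function entirely, so no
# argument phrased in morality's terms will move it.
print(morality(world) > morality(tiled))        # True: the tiled world is worse
print(paperclippy(tiled) > paperclippy(world))  # True: Clippy is unmoved
```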
Does the following make sense? A possible intuition pump would be to imagine a specific superintelligent mind with no consciousness, some kind of maximally stripped-down superintelligent algorithm, and ask whether the contents of its objective function constitute a “morality”. The answer to this question is more obviously “no” than if you vaguely wonder about “possible minds”.
Or maybe imagine a superintelligent spider.
Where “superintelligent” doesn’t mean “anthropomorphic”, but it literally means “a spider that is astronomically more intelligent at being a spider”. Like, building webs that use quantum physics. Using chaos theory to predict the movement of flies. Mad spider science beyond our imagination.
But when it comes to humans, the superintelligent spider cares about them exactly as much as a normal spider does, that is, not at all, unless they become a threat. Of course the superintelligent spider is better able to imagine how humans could become a threat in the future, and is better at using its spider science to eliminate humans. But it’s still a spider with spider values (not some kind of human mind trapped in a fluffy eight-legged body). It might overcome the limitations of the spider body, and change its form into a machine or whatever. But its mind would remain (an extrapolation of) a spider mind.