Judging from his posts and comments here, I conclude that EY is less interested in dialectic than in laying out his arguments so that other people can learn from them and build on them. So I wouldn’t expect critically-minded people to necessarily trigger such a dialectic.
That said, perhaps that’s an artifact of discussion happening with a self-selected crowd of Internet denizens… that can exhaust anybody. So perhaps a different result would emerge if a different group of critically-minded people, people EY sees as peers, got involved. The Hanson/Yudkowsky debate about FOOMing had more of a dialectic structure, for example.
With respect to your example, the discussion here might be a starting place, btw. The discussions here and here and here might also be salient.
Incidentally: the anticipated relationship between what humans want, what various subsets of humans want, and what various supersets including humans want, is one of the first questions I asked when I encountered the CEV notion.
I haven’t gotten an explicit answer, but it does seem (based on other posts/discussions) that on EY’s view a nonhuman intelligent species valuing something isn’t something that should motivate our behavior at all, one way or another. We might prefer to satisfy that species’ preferences, or we might not, but either way what should be motivating our behavior on EY’s view is our preferences, not theirs. What matters on this view is what matters to humans; what doesn’t matter to humans doesn’t matter.
I’m not sure if I buy that, but satisfying “all the reasons for action that exist” does seem to be a step in the wrong direction.
TheOtherDave,
Thanks for the links! I don’t know whether “satisfying all the reasons for action that exist” is the solution, but I listed it as an example alternative to Eliezer’s theory. Do you have a preferred solution?
Not really.
Rolling back to fundamentals: reducing questions about right actions to questions about likely and preferred results seems reasonable. So does treating the likely results of an action as an empirical question. So does approaching an individual’s interests empirically, and as distinct from their beliefs about their interests, assuming they have any. The latter also allows for taking into account the interests of non-sapient and non-sentient individuals, which seems like a worthwhile goal.
Extrapolating a group’s collective interests from the individual interests of its members is still unpleasantly mysterious to me, except in the fortuitous special case where individual interests happen to align neatly. Treating this as an optimization problem with multiple weighted goals is the best approach I know of, but I’m not happy with it; it has lots of problems I don’t know how to resolve.
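For concreteness, here’s a toy sketch (in Python, with made-up numbers) of the weighted-goals framing I have in mind. It assumes, contentiously, that each member’s interests can be scored numerically over outcomes, and the weights are exactly the part that stays mysterious; they’re just inputs here:

```python
# A minimal sketch of the "weighted goals" framing, assuming (contentiously)
# that each member's interests can be scored as a utility over outcomes.
# The weights are the mysterious part; here they are simply given as inputs.

def group_score(outcome, utilities, weights):
    """Weighted sum of individual utilities for a candidate outcome."""
    return sum(w * u(outcome) for u, w in zip(utilities, weights))

def best_outcome(outcomes, utilities, weights):
    """Pick the outcome that maximizes the weighted aggregate."""
    return max(outcomes, key=lambda o: group_score(o, utilities, weights))

# Example: two members whose interests don't align neatly.
alice = lambda o: {"park": 3, "mall": 0}[o]
bob   = lambda o: {"park": 1, "mall": 2}[o]

print(best_outcome(["park", "mall"], [alice, bob], weights=[0.5, 0.5]))  # park
print(best_outcome(["park", "mall"], [alice, bob], weights=[0.1, 0.9]))  # mall
```

The fact that the answer flips with the weights, and nothing in the framing itself tells you what the weights should be, is most of what leaves me unhappy with it.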
Much to my chagrin, some method for doing this seems necessary if we are to account for individual interests in groups whose members aren’t peers (e.g., children, infants, fetuses, animals, sufferers of various impairments, minority groups, etc., etc., etc.), which seems like a problem worth addressing.
It’s also at least useful for addressing groups of peers whose interests don’t neatly align… though I’m more sanguine about marketplace competition as an alternative way of addressing that.
Something like this may also turn out to be critical for fully accounting for even an individual human’s interests, if it turns out that the interests of the various sub-agents of a typical human don’t align neatly, which seems plausible.
Accounting for the probable interests of probable entities (e.g., aliens) I’m even more uncertain about. I don’t discount them a priori, but without a clearer understanding of what such an accounting would actually look like, I really don’t know what to say about them. I guess if we have grounds for reliably estimating the probability of a particular interest being had by a particular entity, then it’s just a subset of the general weighting problem, but… I dunno.
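To gesture at what I mean by “a subset of the general weighting problem”: if we could assign a probability to a given entity existing (or having the posited interest), the same toy sketch extends by discounting that entity’s weight accordingly. All the numbers here are made up purely for illustration:

```python
# Hypothetical extension of the weighted-goals sketch: discount each entity's
# contribution by the probability that it exists (or has the posited interest).

def expected_group_score(outcome, entities):
    """entities: list of (utility_fn, weight, existence_probability)."""
    return sum(p * w * u(outcome) for u, w, p in entities)

human = (lambda o: {"a": 2, "b": 1}[o], 1.0, 1.0)   # certainly exists
alien = (lambda o: {"a": 0, "b": 3}[o], 1.0, 0.05)  # probably doesn't

for o in ("a", "b"):
    print(o, expected_group_score(o, [human, alien]))
# a -> 2.0 vs b -> 1.15: the improbable entity barely moves the answer.
```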
I reject accounting for the posited interests of counterfactual entities, although I can see that the line between those and the probabilistic entities above is hard to specify.
Does that answer your question?