Ethics is a human product (though we can discuss how much freedom we had in creating this product, and whether it would be different if we had a different history or biology), and it makes sense only on the human level, not on the level of particles.
I’m happy with the idea that ethics is a human product (since this doesn’t imply that it’s arbitrary or illusory or anything like that). I take this to mean, basically, that ethics concerns the relation of some subsystems to others. There’s no ethical language which makes sense from the ‘top-down’, or from a global perspective. But there’s also nothing to prevent (this is Eliezer’s meaning, I guess) a non-global perspective from being worked out in which ethical language does make sense. And this perspective isn’t arbitrary, because the subsystems working it out have always occupied that perspective as subsystems. To see an algorithm from the inside is to see the world as a whole by seeing it as potentially involved in this algorithm. And this is what leads to the confusion between the global, timeless view and the (no less global, in some sense) timeful inside-an-algorithm view.
If that’s all passably normal (skeptical as I am about the coherence of the idea of ‘adding up to normality’), then the question that remains is what I should do with my idea of things mattering ethically. Maybe the answer here is to see ethical agents as ontologically fundamental or something, though that sounds dangerously anthropocentric. But I don’t know how to justify the idea that physically-fundamental = ontologically-fundamental either.