Isn’t it possible that it only looks like “heuristics all the way down” because we haven’t dug deep enough yet?
Everything’s possible, but it doesn’t seem plausible at this point, and certainly not at the human level. To conclude that something is not a heuristic but the thing itself, one would need more certainty than can be expected on such a question.
What is the nature of morality according to your approach? You say that a human can’t know what they care about (which I assume you use interchangeably with “right”; correct me if I’m wrong here).
I did use that interchangeably.
Is it because they can’t, in principle, fully unfold the logical definition of right, or is it that they can’t even define “right” in any precise way?
Both, but primarily the latter: having an explicit definition would correspond to “preference”, which I discussed in the grandparent comment. But if we ask merely for precision, then at least in principle we could hope to obtain a significantly more precise description, maybe even at the human level, which is what meta-ethics should strive to give us. Every useful heuristic is an element of such a description, and some of the heuristics, such as the laws of physics, are very precise.
How is an agent supposed to design a set of heuristics without such a precise definition to guide it?
Using its current heuristics, its current implementation, which is understood to be fallible.
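(A toy sketch, entirely my own illustration rather than anything from the thread, of what this bootstrapping looks like structurally: the agent’s heuristic is just a weight vector, and the only available judge of a candidate heuristic is the current heuristic itself; there is no external definition of “right” anywhere in the loop.)

    import random

    def score(judge, candidate):
        # How good `candidate` looks according to `judge`; the judge is
        # never anything but the agent's current heuristic.
        return sum(j * c for j, c in zip(judge, candidate))

    def improve(heuristic, steps=1000, noise=0.05):
        current = list(heuristic)
        for _ in range(steps):
            candidate = [w + random.gauss(0, noise) for w in current]
            # The fallible step: adoption is decided by the current
            # heuristic, which is itself the thing under revision.
            if score(current, candidate) > score(current, current):
                current = candidate
        return current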
Also, if the agent itself uses the words “morality” or “right”, what do they refer to?
Don’t know (knowing would give a definition). To the extent it’s known, see the current heuristics (long list), maybe brains.
Essentially, what you’re describing is just the situation that we are actually faced with. I mean, when I use the word “right” I think I mean something, but I don’t know what. And I have to use my current heuristics, my current implementation, without a precise theory to guide me.
And you’re saying that this situation is unlikely to change significantly by the time we build an FAI, so the best we can expect to do is equivalent to a group of uploads improving themselves to the best of their abilities.
I tend to agree with this (although I think I assign a higher probability that someone does make a breakthrough than you perhaps do), but it doesn’t really constitute a meta-ethics, at least not in the sense that Eliezer and philosophers use that word.
Essentially, what you’re describing is just the situation that we are actually faced with.
I’m glad it all adds up to normality, given the amount of ink I spilled getting to this point.
And you’re saying that this situation is unlikely to change significantly by the time we build an FAI, so the best we can expect to do is equivalent to a group of uploads improving themselves to the best of their abilities.
Not necessarily. The uploads construct could in principle be made abstract, with efficient algorithms figuring out the result of the process much more quickly than if it were actually simulated. More specific heuristics could also be figured out that make use of computational resources to make better progress, perhaps worked out in the early stages by the uploads construct itself.
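(To make “abstract rather than simulated” concrete: a minimal Python sketch, my own analogy and not anything from the thread, of a process whose end state an efficient algorithm can recover in closed form without running the steps.)

    def simulate(x0, a, b, steps):
        # Run the process x(t+1) = a*x(t) + b step by step: O(steps) work.
        x = x0
        for _ in range(steps):
            x = a * x + b
        return x

    def solve(x0, a, b, steps):
        # Closed form for the same end state: O(1) work (assumes a != 1).
        an = a ** steps
        return an * x0 + b * (1 - an) / (1 - a)

    assert abs(simulate(10.0, 0.5, 1.0, 10000) - solve(10.0, 0.5, 1.0, 10000)) < 1e-9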
it doesn’t really constitute a meta-ethics, at least not in the sense that Eliezer and philosophers use that word.
I’m not sure about that. If this is indeed all we can say about morality right now, then that’s what we have to say, even if it doesn’t belong to the expected literary genre. It’s too easy to invent fake explanations, and an absence of conclusions invites that, whereas a negative conclusion could focus the effort elsewhere.
(Also, I don’t remember particular points on which my current view disagrees with Eliezer’s sequence, although I’d need to re-read it to have a better idea, which I really should, since I only read it as it was posted, when my understanding of the area was zilch.)