Yeah. This is one of the issues on which my intuitions seem to differ the most with Nesov’s, but I think we each concede that the other might be correct.
Right. If the philosophy/math path turns out to be genuinely impossible in some sense, neuroscience, if correctly interpreted, would be the next best source of detailed knowledge about human value. (I don’t expect this to be feasible without a human-powered WBE singleton vastly stretching the available time before a global catastrophe.)
I’m confused. Isn’t “detailed knowledge about human value” the part of the problem that we’d hand off to an AI implementing CEV?
What do you mean? Are you objecting to seeing a non-math path as even a barely feasible direction (i.e. it must be something that can be handed to an AI, hence not neuroscience), or to the necessity of solving this problem when we already have CEV (i.e. all we need is to hand this problem over to an AI)? Both interpretations seem surprising, so I probably didn’t catch the right one.
Closer to the latter, but I’m not really objecting to anything, so much as just being confused. I assume you’re not saying we have to map out the “thousand shards of desire” before running CEV. I’m not saying there aren’t various aspects of human value that would be useful to understand before we run CEV, but I’m curious what sort of details you had in mind: details you think could be figured out with neuroscience and without philosophy, yet which you expect to take a very long time to work out.
CEV is way too far from a useful design, or even a big-picture sketch. It’s a vague, non-technical description of something that doesn’t automatically fail for obvious reasons, unlike essentially all other descriptions of the human decision problem (friendliness content) published elsewhere. But “running CEV” is like running the sketch of da Vinci’s flying machine.
CEV is a reasonable description at the level where a sketch of a plane doesn’t insist on the plane having a beak or being made entirely out of feathers. It does look very good in comparison to other published sketches. But we don’t have the laws of aerodynamics or wind tunnels. “Running the sketch” is never a plan (which provokes protestations such as this one). One (much preferable) way forward is to figure out the fundamental laws; that’s the decision theory/philosophy/math path. Another is to copy a bird in some sense, collecting all of its properties in as much detail as possible (metaphorically speaking, so that it’s about copying goals and not about emulating brains); that’s the neuroscience path, which I expect isn’t viable no matter how much time is given, since we don’t really know how to learn about goals by looking at brains or behavior.
(Perhaps when we figure out the fundamental laws, it’ll turn out that we want a helicopter, and to the dustbin goes the original sketch.)
I agree that CEV needs conceptual and technical fleshing-out; when I said “run CEV”, I meant “run some suitably fleshed-out version of CEV”. You seem to be saying that to do this fleshing-out, we will need knowledge of some large subset of the details of human value. I’m not saying that’s false, but I’m trying to get at what sort of details you think those are; what variables we’re trying to find out the value of. Again, surely it’s not all the details, or we wouldn’t need to run CEV in the first place.
I do hope the value problem turns out to be solvable with philosophy/math instead of cognitive science. The philosophy/math path is much preferred.