Now I feel stupid for not doing a Google search to see if parts of that sentence were recognised phrases. Of course that’s what it means. In fairness though, this is simply an FAI refinement of my first reading: it doesn’t show what it thinks you want, but somehow scans your utility function and calculates what to show you.
Either way, the Mirror of Erised still seems to be pretty much standard.
Not quite. It won’t show you what you think you want, or even what you really truly want this second—it shows you what you would want, if you were better, smarter, and more the person you wished to be. It’s coherent—you should never look into the Mirror and go “on second thought, that’s a terrible universe.”
For example, Ron would not see himself becoming Prefect or being Head Boy, because in a decade or less he’ll have outgrown such ambitions.
Well… Dumbledore sees his dead family (well, Quirrell, thinking he’s Dumbledore, sees Dumbledore’s dead family). Which is like Ron seeing everything he currently wants, rather than utopia.
Could be because this mirror doesn’t extrapolate very far, could be because Quirrell’s fake Dumbledore doesn’t have full human wish complexity.
Strikes me as the most likely explanation by far.
Not quite. It’s more like Ron seeing what a more mature version of himself would want, but Dumbledore’s pushing 200 and famously wise; he’s not going to get much more mature following the path he’s taken. You could argue that his worldview isn’t self-consistent and that a smarter or less self-deluding version of him would pick up on that, but that seems like it bakes in a conclusion.
I haven’t exactly formalized this, but I have the intuition that CEV would be doing more work in aggregating extrapolated values than in extrapolating values in the first place. We can’t just have it wave a wand (har) and rid ourselves of heuristics and biases to find our true values; too much of human value is wrapped up in those same heuristics and biases, and from an internal viewpoint none of them are any “truer” than any others. We can envision an aggregation process that plays different people’s heuristics and biases against each other in some way to find a least-worst kernel of value; but to do that, it needs those data points.
Another part of coherence is that, for groups, it’s supposed to reconcile differing viewpoints—to only act on what’s shared.
Sooo it could show the coherent desires shared between all Tom Riddles?
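To make the “only act on what’s shared” idea concrete, here is a minimal toy sketch (not any real CEV formalism): each Riddle’s extrapolated values are represented as made-up endorsement scores over named outcomes, the riddle_one/riddle_two dicts and the scores are purely hypothetical, and extrapolate is a stub standing in for the actually hard part.

```python
# Toy illustration only: "aggregation" keeps just the outcomes every
# extrapolated self positively endorses, scored by the weakest endorsement
# (a crude stand-in for "only act on what's shared").

def extrapolate(raw_values):
    """Stand-in for extrapolation: return what a wiser version of this
    person would want. Here it's just a copy of the raw values."""
    return dict(raw_values)

def aggregate(extrapolated):
    """Keep only outcomes endorsed (score > 0) by everyone, valued at the
    minimum endorsement across people."""
    shared_keys = set.intersection(*(set(person) for person in extrapolated))
    coherent = {}
    for key in shared_keys:
        scores = [person[key] for person in extrapolated]
        if all(s > 0 for s in scores):
            coherent[key] = min(scores)
    return coherent

# Hypothetical numbers, just for the example:
riddle_one = {"immortality": 0.9, "rule_the_world": 0.8, "protect_hermione": -0.5}
riddle_two = {"immortality": 0.9, "rule_the_world": -0.2, "protect_hermione": 0.9}

print(aggregate([extrapolate(riddle_one), extrapolate(riddle_two)]))
# -> {'immortality': 0.9}: the one desire both Tom Riddles share
```

On these made-up numbers, the only coherent shared desire is immortality, which is roughly the kind of narrow overlap you’d expect the Mirror to be working with for two Tom Riddles.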
At first that fact didn’t seem interesting to me, since it’s not really expected that more than one person would be looking into the mirror at once.
But then I considered that, as the chapter closed, the mirror appeared to be speaking to BOTH Tom Riddles, and now I’m curious what their collective CEV looks like.