Recursive relevance realization seems to be designed to answer the question of the "quantum of wisdom".
When someone successfully undergoes a wisdom transformation, it often seems to take the form of seeing oneself as having been a fool. That is, you ask whether you are aligned with yourself (ideals, goals, etc.) and find that your actuality is not coherent with your aims. This tends to come with the affordance to set a goal of shaping your actuality (is it, and when is it, appropriate to shape your ideals instead?). Discovering an incoherency seems very different from keeping a model on coherence rails. And aren't there usually at least two coherent paths out of an incoherence point? Does CEV pick one or track both? Or does it refuse to enter genuine transformation processes and treat them as dead ends because it refuses to step into incoherencies?
The high-level concepts seem like high-quality concept work, and when one tries to fill in the details with imagination, it seems workable. But the details are not in yet. If one could bridge the gap from (something like) Bayesian evidence updating, which touches the lower levels of RRR, that would pretty much be it. But, again, the details are not in yet.
Recursive relevance realization seems to be designed to answer the question of the "quantum of wisdom".

It does! But… does it really answer the question? Curious about your thoughts on this.
you ask whether you are aligned with yourself (ideals, goals, etc.) and find that your actuality is not coherent with your aims
Right! Very often, what it means to become wiser is to discover something within yourself that just doesn't make sense, and then to resolve that in some way.
Discovering an incoherency seems very different from keeping a model on coherence rails
True. Eliezer is quite vague about the term “coherent” in his write-ups, and some more recent discussions of CEV drop it entirely. I think “coherent” was originally about balancing the extrapolated volition of many people by finding the places where they agree. But what exactly that means is unclear.
And isn’t there mostly atleast two coherent paths out of an incoherence point?
Yeah, if the incoherence point is caused by a conflict between two things, then there are at least two coherent paths out, namely dropping one or the other of those two things. I have the sense that you can also drop both of them, or sometimes drop some kind of overarching premise that was putting the two in conflict.
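To make the combinatorics concrete, here is a minimal sketch under my own assumptions (the commitments A and B and the premise P are placeholder names, not anything from the CEV write-ups): treat an incoherence point as a set of commitments that conflict jointly, and then every subset that removes the conflict is a coherent exit.

```python
# Toy model of an "incoherence point": a set of commitments that conflict
# as a whole. Every subset that removes the conflict is a coherent exit,
# so there are always several ways out, not one.

from itertools import combinations

def coherent_exits(commitments, is_conflicted):
    """Return all subsets of `commitments` that no longer contain the conflict."""
    exits = []
    for size in range(len(commitments), -1, -1):
        for subset in combinations(commitments, size):
            if not is_conflicted(set(subset)):
                exits.append(set(subset))
    return exits

# Example: an ideal A and an actuality B that clash, but only when a
# background premise P is also held.
def is_conflicted(held):
    return {"A", "B", "P"} <= held

for e in coherent_exits(["A", "B", "P"], is_conflicted):
    print(sorted(e))
# ['A', 'B']   drop the premise that created the conflict
# ['A', 'P']   drop the actuality-side commitment
# ['B', 'P']   drop the ideal
# ...followed by every smaller subset, e.g. dropping both A and B
```

Even this toy version makes the point: nothing inside the conflict itself privileges one exit over the others.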
Does CEV pick one or track both?
It seems that CEV describes a process for resolving incoherencies, rather than a specific formula for which side of an incoherence to pick. That process, very roughly, is to put a model of a person through the kind of transformations that would engender true wisdom if experienced in real life. I do have the sense that this is how living people become wise, but I question whether it can be usefully captured in a model of a person.
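If one tried to write down just the shape of that process, it might look like the loop below. To be clear, this is a sketch under my own assumptions: every name in it (person_model, find_incoherency, transform) is a hypothetical placeholder, since CEV does not specify any such functions. Its main use is to make visible exactly where the pick-one-or-track-both question bites.

```python
# A deliberately crude sketch of the *shape* of the process described above:
# repeatedly expose a model of a person to wisdom-engendering transformations,
# resolving incoherencies as they surface. All names are placeholders.

def extrapolate(person_model, find_incoherency, transform, max_steps=100):
    """Iterate wisdom-engendering transformations until the model is coherent."""
    for _ in range(max_steps):
        incoherency = find_incoherency(person_model)
        if incoherency is None:
            return person_model  # nothing left to resolve: extrapolation done
        # This is where the question above bites: `transform` must either pick
        # one of the several coherent exits or somehow track all of them, and
        # nothing in the process itself says which choice is licensed.
        person_model = transform(person_model, incoherency)
    raise RuntimeError("no coherent fixed point reached within max_steps")
```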
Or does it refuse to enter genuine transformation processes and treat them as dead ends because it refuses to step into incoherencies?
I think that CEV very much tries to step into a genuine transformation process, though whether it actually does is questionable. Specifically, if it does, then one runs into the four questions from the write-up.