To be meaningful, this requires whole-process feedback: we need to judge thoughts by their entire chain of origination. (This is technically challenging, because the easiest way to implement process-level feedback is to create a separate meta-level which oversees the rest of the system; but then this meta-level would not itself be subject to oversight.)
I’d be interested to hear this elaborated further. It seems to me to be technically challenging but not very; it feels like the sort of thing that we could probably solve with a couple of people working on it part-time for a few years. I’m wondering if I’m underestimating the difficulties.

At any rate, it’s fun to think about architectures. Maybe the system keeps a log of its thoughts, and has a process or subcomponent that reads the log, judges it, and then modifies the system accordingly. This component or process is not exempt from all this and occasionally ends up modifying itself. What would go wrong with this? Well, on a practical level, maybe it would be too computationally expensive, and/or it might accidentally neuter itself or otherwise get stuck in attractors where it self-modifies away the ability to make further good self-modifications. But neither of those problems seems insurmountable to me.
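To make that concrete, here is a minimal toy sketch of the log-and-judge loop I have in mind (all names and details here are hypothetical illustration, not a worked-out design). The key move is that the judge is just another value the system carries around: judging appends to the same log, and each oversight pass returns the judge to use next time, so nothing sits outside the loop.

```python
# Toy sketch of the log-and-judge loop described above. All names here
# (Thought, System, default_judge) are hypothetical illustration.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Thought:
    content: str
    source: str  # "object" for ordinary cognition, "judge" for oversight

@dataclass
class System:
    log: List[Thought] = field(default_factory=list)

    def think(self, content: str, source: str = "object") -> None:
        self.log.append(Thought(content, source))

# A judge maps the system to a (possibly different) judge, so the act of
# judging is itself logged and the judge itself is open to modification.
Judge = Callable[[System], "Judge"]

def default_judge(system: System) -> Judge:
    # Judge the whole chain of origination, not just the latest thought.
    flagged = [t for t in system.log if "shortcut" in t.content]
    system.think(
        f"reviewed {len(system.log)} thoughts, flagged {len(flagged)}",
        source="judge",
    )
    # Here the judge trivially reinstates itself, but nothing stops it from
    # returning a modified successor -- including a worse one, which is
    # exactly the "neutering itself" failure mode mentioned above.
    return default_judge

system = System()
system.think("plan step 1")
system.think("tempting shortcut")
judge = default_judge
judge = judge(system)  # one oversight pass; this pass is logged too
```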
I’d be interested to hear this elaborated further. It seems to me to be technically challenging but not very;
I agree; I’m not claiming this is a multi-year obstacle even. Mainly I included this line because I thought “add a meta-level” would be what some readers would think, so, I wanted to emphasize that that’s not a solution.
To elaborate on the difficulty: this is challenging because of the recursive nature of the request. Roughly, you need hypotheses which not only claim things at the object level but also hypothesize a method of hypothesis evaluation, i.e., make claims about process-level feedback. Your belief distribution then needs to incorporate these beliefs. (So how much you endorse a hypothesis can depend on how much you endorse that very hypothesis!) And, on top of that, you need to know how to update that big mess when you get more information. This seems like it almost has to violate Bayes’ Law: when you make an observation, it will not only shift hypotheses via their likelihood ratios for that observation, but also produce secondary effects, where hypotheses get shifted around because other hypotheses which like/dislike them got shifted around. How all of this should work seems quite unclear.
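To make the circularity concrete, here is a toy numeric sketch (my own construction, purely illustrative, not anything from the agenda). Each hypothesis has an ordinary likelihood for the observation, but also an endorsement vector over every hypothesis’s evaluation method, including its own; weights then have to be solved as a fixed point, and re-solving after an observation exhibits the secondary shifts that a single Bayes update wouldn’t produce.

```python
# Toy numeric sketch of the circularity (my own construction, purely
# illustrative): each hypothesis has an object-level likelihood for the
# observation AND an "endorsement" row saying how much it trusts each
# hypothesis's evaluation method -- including its own.
import numpy as np

prior = np.array([0.5, 0.3, 0.2])        # prior over three hypotheses
likelihood = np.array([0.9, 0.2, 0.5])   # P(observation | hypothesis)

# endorse[i, j]: how much hypothesis i endorses hypothesis j's way of
# evaluating hypotheses. The diagonal is self-endorsement.
endorse = np.array([
    [1.0, 0.1, 0.5],
    [0.2, 1.0, 0.1],
    [0.7, 0.3, 1.0],
])

def fixed_point_weights(base, endorse, iters=200):
    """A hypothesis's weight is its base weight times the endorsement it
    receives under the current weighting -- so weights are defined in terms
    of themselves, and we iterate to a fixed point. (This is power iteration
    on a positive matrix, so it converges.)"""
    w = base / base.sum()
    for _ in range(iters):
        received = w @ endorse          # endorsement flowing into each j
        w = base * received
        w = w / w.sum()
    return w

before = fixed_point_weights(prior, endorse)
naive = prior * likelihood / (prior * likelihood).sum()  # ordinary Bayes
after = fixed_point_weights(prior * likelihood, endorse)

print("fixed point before observation:", before.round(3))
print("plain Bayes update:            ", naive.round(3))
print("fixed point after observation: ", after.round(3))
# The gap between the last two lines is the "secondary effect": hypotheses
# move because their endorsers moved, not only via likelihood ratios.
```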
Part of the difficulty is doing this in conjunction with everything else, though. Asking for one thing that’s impossible in the standard paradigm might have an easy answer. Ask for several, and each might individually have an easy answer, but combining those easy answers might not be possible.
Thanks! Well, I for one am feeling myself get nerd-sniped by this agenda. I’m resisting so far (so much else to do! Besides, this isn’t my comparative advantage) but I’ll definitely be reading your posts going forward and if you ever want to bounce ideas off me in a call I’d be down. :)