That either of these would be surprising to people in the field? Or that resolving that dilemma would be useful to build from?
Both.
I think the framing as a trilemma suggests you want to dismiss (1) - is that right?
Yup!
I perceive many of your points as not really grappling with the key arguments in the post, so I’ll step through them. My remarks may come off as aggressive, and I do not mean them as such. I have not yet gained the skill of disagreeing frankly and bluntly without seeming chilly, so I will preface this comment with goodwill!
I think about half of your bullets are probably (1), except via rough proxies (power, scamming, family, status, maybe cheating)
I think that you’re saying “rough proxies” and then imagining the problem solved, somehow, but I don’t see that step.
Whenever I try to imagine a “proxy”, I get stuck. What, specifically, could the proxy be, such that it actually reliably entangles itself with the target learned concept (e.g. “someone’s cheating me”), and such that it explains why people care so robustly about punishing cheaters? Whenever I generate candidate proxies (e.g. detecting physiological anger, or just scanning the brain somehow), the scheme seems pretty implausible to me.
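To be concrete about where I get stuck, here’s a toy sketch of how I’m parsing the “proxy” proposal (all of the signal names and the dict-based “world model” below are invented for illustration, not claims about actual circuitry):

```python
# Toy framing of the question (all names invented for illustration).
# The genome can hardwire circuitry over signals like these: fixed sensory or
# physiological channels that exist at "design time".
accessible_signals = {
    "heart_rate": 92.0,          # physiological arousal
    "skin_conductance": 0.7,
    "retinal_red_patch": False,  # a crude hardcoded sensory detector
}

# The target is a latent variable in a *learned* world model, whose encoding is
# only settled at runtime, after learning. By hypothesis, the genome cannot
# reference it directly -- that is the information-inaccessibility problem.
learned_world_model = {
    "someone_is_cheating_me": True,
    "i_am_losing_a_fair_game": False,
}

def hardwired_proxy_reward(signals):
    """A candidate proxy: punish when physiological arousal is high.

    The question is whether any rule over `signals` alone reliably entangles
    with the learned concept above; this one also fires for exercise, fear,
    spicy food, and so on.
    """
    return -1.0 if signals["heart_rate"] > 90 else 0.0

print(hardwired_proxy_reward(accessible_signals))  # -1.0
```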
Do you disagree?
One clue is that people have quite specific physiological responses to some of these things. Another is that various of these are characterised by different behaviour in different species.
I don’t presently see why “a physiological response is produced” is more likely to come out true in worlds where the genome solves information inaccessibility than in worlds where it doesn’t.
why proxies? It stands to reason, like you’re pointing out here, it’s hard and expensive to specify things exactly. Further, lots of animal research demonstrates hardwired proxies pointing to runtime-learned concepts
Note that all of the imprinting examples rely on direct sensory observables. This is not resolution (1) (“Information inaccessibility is solved by the genome”): these imprinting examples aren’t inaccessible to begin with.
(Except “limbic imprinting”: I can’t make heads or tails of that one. I couldn’t quickly understand what a concrete example would be after skimming a few resources.)
Rather I think they emerge from failure of imagination due to bounded compute.
My first pass is “I don’t feel less confused after reading this potential explanation.” In more detail: “bounded compute” a priori predicts many possible observations; AFAICT it does not concentrate probability onto specific observed biases (like sunk cost or the framing effect). Rather, “bounded compute” can, on its own, explain a vast range of behavior. Since AFAICT this explanation assigns relatively low probability to the observed data, it loses tons of probability mass compared to other hypotheses which more strongly predict that data.
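To put toy numbers on that (purely illustrative, not an estimate of anything):

```python
# Illustrative numbers only. A diffuse explanation spreads its probability over
# many possible behavior patterns, so it assigns little to the specific biases
# we actually observe; a hypothesis that nearly predicts those biases wins the
# likelihood-ratio comparison, holding priors fixed.
p_data_given_bounded_compute = 0.01     # e.g. ~100 behavior patterns treated as roughly equally likely
p_data_given_specific_mechanism = 0.50  # a hypothesis that concentrates mass on the observed bias

bayes_factor = p_data_given_specific_mechanism / p_data_given_bounded_compute
print(bayes_factor)  # 50.0 -- each such observation shifts the odds ~50:1
```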
ontological shifts are just supplementary world abstractions being installed which happen to overlap with preexisting abstractions… they’re just abstractions and we have machinery which forms and manipulates them
This machinery is also presently magic to me. But your quoted portion doesn’t (to my eyes) explain how ontological shifts get handled; this hypothesis seems (to me) to basically be “somehow it happens.” But it, of course, has to happen somehow, by some set of specific mechanisms, and I’m saying that the genome probably isn’t hardcoding those mechanisms (resolution (1)), that the genome is not specifying algorithms by which we can e.g. still love dogs after learning they are made of cells.
Not just because it sounds weird to me. I think it’s just really really hard to pull off, for the same reasons it seems hard to write a priori code which manages ontological shifts for big ML models trained online. Where would one begin? Why should code like that exist, in generality across possible models?
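To gesture at the shape of the code I can’t see how to write a priori (everything below is a made-up stand-in, not a model of the brain or of any proposed mechanism):

```python
# The shape of the problem, with made-up stand-ins throughout.
# A value function is keyed to concepts in whatever ontology the learner
# currently uses; after further learning, "dog" gets re-represented (say, as a
# configuration of cells), and something must carry the old value across.

old_values = {"dog": +10.0}  # cares about dogs under the old ontology

def refactor_ontology(concept):
    # Stand-in for runtime learning that re-expresses a concept in finer-grained terms.
    return {"dog": ["particular_configuration_of_cells"]}.get(concept, [concept])

def transport_values(values, refactor):
    # This is the step I don't see how to specify in advance, in generality
    # across whatever ontologies the learner might end up with.
    raise NotImplementedError("how would the genome hardcode this mapping?")

print(refactor_ontology("dog"))  # ['particular_configuration_of_cells']
# transport_values(old_values, refactor_ontology)  # <- the missing piece
```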