For the record, I do think this is something worth mathematically formalizing. Perhaps someday you should come back to this, or restart this, or even “dump” your notes/thinking on this in an unedited form.
This is a terrible framework/approach to it. Very terrible; I don't often link to this post when I link to the alignment stuff I've written up. I think I was off base: genealogy/lineage is not the right meta-approach/framework, and there's a lot of premature rigour in it that is now useless.
I now have different intuitions about how to approach it, and I have some sketches laying some groundwork for it (the rough thoughts on formalising optimisation on my shortform), but I doubt I'll complete that groundwork anytime soon.
Formalising returns on cognitive reinvestment is not a current research priority for me, but the groundwork does factor through research I see as highly promising for targeting the hard problems of alignment, and once the groundwork is complete, this part would be pretty easy.
It’s also important for formalising my thinking/arguments re: takeoff dynamics (which aren’t relevant to the hard problems, but are very important for governance/strategy).
Good to see you improving your thinking on this and targeting the harder subproblems!