Applause for putting your thoughts out there, and applause for updating. Also, it’s maybe worth “steelmanning” your past self; maybe the intuitions you expressed in the post are still saying something relevant that wasn’t integrated into the picture, even if it wasn’t exactly “actually some humans are literally IGF maximizers”. Like, you said something true about X, and you thought that IGF meant X, but now you don’t think IGF means X, but you still maybe said something worthwhile about X.
I really appreciate that thought! I think there were a few things going on:
Definitions and Degrees: I think in common speech and intuitions, failing to pick the optimal option doesn’t mean something is not an optimizer. I think this goes back to the definition confusion, where ‘optimizer’ in CS or math literally picks the best option to maximize X no matter the other concerns. In daily life, by contrast, if one says they optimize on X, then trading off against lower concerns at some value greater than zero is still considered optimizing. E.g. someone might optimize their life for getting the highest grades in school by spending every waking moment studying or on self-care, except for one evening a week spent with a romantic partner. I think in regular parlance and intuitions, this person is said to be an optimizer because the concept is weighed in degrees (you are optimizing more on X) instead of absolutes (you are disregarding everything else except X). (There’s a toy sketch after these three points to illustrate the distinction.)
Unrepresented internal experience: I do actually experience something related to a conscious IGF optimization drive. All the responses and texts I’ve read so far are from people who say that they don’t, which made me assume the missing piece was people’s awareness of people like myself. I’m not a perfect optimizer (see the definitional considerations above), but there are a lot of experiences and motivations that seemed not to be covered in the original essay or comments. E.g. I experience a strong sense of identity shift where, since I have children, I experience myself as a sort of intergenerational organism. My survival- and flourishing-related needs internally feel secondary to those of the aggregate of the bloodline I’m part of. This shift happened to me during my first pregnancy and was quite a disorienting experience. It seems to point so strongly at IGF optimization that claiming we don’t do that seemed patently wrong. From the examples I can now see that it’s still a matter of degrees, and I still wouldn’t take every possible action to maximize the number of copies of my genes in the next generation.
Where we are now versus where we might end up: people did agree we might end up being IGF maximizers eventually. I didn’t see this point made in the original article, and I thought the concern was that training can never work to create inner alignment. Apparently that wasn’t the point, haha.
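To make the “degrees versus absolutes” point in the first item concrete, here is a minimal toy sketch (my own illustration, with made-up options and scores, not anyone’s formal definition): a strict CS/math-style optimizer picks the argmax of X alone, while the everyday sense of “optimizing on X” lets other concerns keep a small nonzero weight.

```python
# Toy illustration (hypothetical options and scores): strict vs. everyday "optimizing on X".
# Each option is scored on X (here, grades) and on an "other concerns" axis
# (e.g. time with a romantic partner).

options = {
    "study every waking moment":       {"X": 10.0, "other": 0.0},
    "study, one evening off per week": {"X": 9.5,  "other": 2.0},
    "coast through school":            {"X": 4.0,  "other": 5.0},
}

# Strict (CS/math) optimizer: maximize X and ignore everything else.
strict_choice = max(options, key=lambda o: options[o]["X"])

# Everyday "optimizer on X": X dominates, but other concerns keep a small
# nonzero weight, so an option slightly worse on X can still win.
w_other = 0.5
everyday_choice = max(options, key=lambda o: options[o]["X"] + w_other * options[o]["other"])

print(strict_choice)    # -> study every waking moment
print(everyday_choice)  # -> study, one evening off per week
```

Both agents are “optimizing for grades” in the everyday sense; only the first is an optimizer in the strict argmax sense.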
Does that make sense? Curious to hear your thoughts.
I think this goes back to the definition confusion, where ‘optimizer’ in CS or math literally picks the best option to maximize X no matter the other concerns.
I wouldn’t say “picks the best option” is the most interesting thing in the conceptual cluster around “actual optimizer”. A more interesting thing is “runs an ongoing, open-ended, creative, recursive, combinatorial search for further ways to greatly increase X”.
E.g. I experience a strong sense of identity shift where, since I have children, I experience myself as a sort of intergenerational organism ... This shift happened to me during my first pregnancy and was quite a disorienting experience. It seems to point so strongly at IGF optimization that claiming we don’t do that seemed patently wrong.
I mean, certainly this is pointing at something deep and important. But the shift here, I would say, couldn’t be coming from agentic IGF maximization, because agentic IGF maximization would have already, before your pregnancy, cared in the same qualitative way, with the same orientation to the intergenerational organism, about your cousins (though only about 1/8th as much) and about the children of your cousins (about 1/16th as much). Like, of course you care about those people, maybe in a similar way as you care about your children, and maybe connected to IGF in some way; but something got turned on, which looks a lot like genetically programmed mother-child caring, which wouldn’t be an additional event if you’d been an IGF maxer. (One could say: you care about your children mostly intrinsically, not mostly because of an IGF calculation. Yes, this intrinsic care is in some sense put there by evolution for IGF reasons, but that doesn’t make them your reasons.)
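For reference, the 1/8 and 1/16 here track the standard coefficients of relatedness; a rough sketch of that arithmetic is below (the usual kin-selection bookkeeping, with a function name of my own): relatedness halves with each meiotic link, and first cousins are connected through two shared grandparents.

```python
# Rough sketch of the coefficient-of-relatedness arithmetic behind the 1/8 and
# 1/16 figures (standard kin-selection bookkeeping; the function is just mine):
# relatedness halves with each meiotic link, and collateral relatives like
# cousins are connected through two common ancestors (the shared grandparents).

def collateral_relatedness(links_via_one_ancestor: int, common_ancestors: int = 2) -> float:
    """r = (number of common ancestors) * (1/2) ** (links through one of them)."""
    return common_ancestors * 0.5 ** links_via_one_ancestor

# Your own child: a single meiotic link on a single path.
print(0.5 ** 1)                     # 0.5

# First cousin: you -> parent -> grandparent -> aunt/uncle -> cousin is 4 links,
# and there are two such paths (one through each shared grandparent).
print(collateral_relatedness(4))    # 2 * (1/2)**4 = 0.125  (1/8)

# Child of a first cousin: one more link on the same paths.
print(collateral_relatedness(5))    # 2 * (1/2)**5 = 0.0625 (1/16)
```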
Where we are now versus where we might end up: people did agree we might end up being IGF maximizers eventually. I didn’t see this point made in the original article, and I thought the concern was that training can never work to create inner alignment. Apparently that wasn’t the point, haha.
Hm. I don’t agree that this is very plausible; what I agreed with was that human evolution is closer to an IGF maxer, or at least some sort of myopic (https://www.lesswrong.com/tag/myopia) IGF maxer, in the sense that it only “takes actions” according to the criterion of IGF.
It’s a little plausible. I think it would have to look like a partial Baldwinization (https://en.wikipedia.org/wiki/Baldwin_effect) of pointers to the non-genetic memeplex of explicit IGF maximization. I don’t think evolution would be able to assemble brainware that reliably does IGF in relative isolation, because that’s an abstract calculative idea whose full abstractly calculated implications are weird and not pointed to by soft, accessible-to-evolution stuff (Chomskyists notwithstanding); it’s like how evolution can’t program the algorithm to take the square of a number, and instead would program something like “be interested in playing around with moving and stacking physical objects” so that you learn on your own to have a sense of how many rocks you need to cover the floor of your hut. Like, you’d literally breed people to be into Mormonism specifically, or something like that (I mean, breed them to imprint heavily on some cues that are reliably associated with Mormonism, the way humans are already programmed to imprint heavily on what other human-faced-and-bodied things in the world are doing). Or maybe the Amish would do better if they had better “walkabout” protocols; over time they’d get high fertility and also high retention into the memeplex that gives high fertility.
I wouldn’t say “picks the best option” is the most interesting thing in the conceptual cluster around “actual optimizer”. A more interesting thing is “runs an ongoing, open-ended, creative, recursive, combinatorial search for further ways to greatly increase X”.
Like, “actual optimizer” does mean “picks the best option”. But “actual bounded optimizer” (https://en.wikipedia.org/wiki/Bounded_rationality) can’t mean that exactly, while still being interesting and more relevant to humans, while very much (goes the claim) not looking like how humans act. Humans might take a visible opportunity to have another child, and would take visible opportunities to prevent a rock from hitting their child, but they mostly don’t sit around thinking of creative new ways to increase IGF. They do some versions of this, such as sitting around worrying about things that might harm their children. One could argue that this is because the computational costs of increasing IGF in weird ways are too high. But this isn’t actually plausible (cf. sperm bank example). What’s plausible is that that was the case in the ancestral environment; so the ancestral environment didn’t (even if it could have) select for people who sat around trying to think of wild ways to increase IGF.
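To make that contrast concrete, here is a toy sketch (purely illustrative: the plan names, scores, and scoring function are hypothetical stand-ins, not a model of human cognition or of actual IGF): an agent that only picks among visibly presented opportunities versus one that keeps combinatorially generating new candidate plans and keeping the best found so far.

```python
# Toy contrast (purely illustrative; the plans and scores are made up):
# "takes visible opportunities" vs. "runs an ongoing combinatorial search".

import itertools

def igf_gain(plan: frozenset) -> float:
    # Hypothetical stand-in scoring; "X" here would really be inclusive genetic fitness.
    scores = {"protect child from rock": 1.0, "have another child": 2.0,
              "donate to a sperm bank": 5.0, "fund fertility clinics": 4.0}
    return sum(scores.get(step, 0.0) for step in plan)

# The reactive agent only ever chooses among opportunities the environment presents.
visible_opportunities = [frozenset({"protect child from rock"}),
                         frozenset({"have another child"})]
reactive_choice = max(visible_opportunities, key=igf_gain)

# The searching agent keeps generating and recombining candidate plans, including
# "weird" ones that were never presented as opportunities, and keeps the best found.
action_pool = ["protect child from rock", "have another child",
               "donate to a sperm bank", "fund fertility clinics"]
candidates = (frozenset(combo)
              for r in range(1, len(action_pool) + 1)
              for combo in itertools.combinations(action_pool, r))
searching_choice = max(candidates, key=igf_gain)

print(sorted(reactive_choice))   # ['have another child']
print(sorted(searching_choice))  # all four actions, total score 12.0
```

The point being made above is that humans mostly behave like the first agent, not the second.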