Asking “Are we being poisoned?” sets up the issue in a suboptimal way. It frames very complex interactions as if they were simple and driven by the agency of a hostile actor.
I want to know what, specifically, is driving this problem (doesn’t everyone) and
Part of being a rationalist is accepting that there are questions that the available evidence doesn’t clearly answer.
David Chapman’s Nutrition: the Emperor has no clothes is worth reading to see how irrationally the topic of nutrition is commonly approached. If you want to do better, understanding that is valuable.
what an ordinary person can do to minimize the risk of serious illness.
I don’t know what you mean by the term “ordinary person”. Solutions for someone with poor body awareness and a programmer’s salary are quite different than for someone with excellent body awareness who lives on minimum wage.
For many people, internal drives get them to eat iron if they are deficient in it. The dynamic is strong enough that Alicorn wrote Experiential Pica about it. If you pay close attention to your internal drives and understand them, you are going to eat enough iron.
For myself, I keep effervescent tablets of magnesium, calcium, and iron. With magnesium in particular, there are days when I clearly feel a magnesium tablet would be tasty and days when I don’t. Since I switched my diet, I don’t find them appealing anymore.
On the other hand, for someone who doesn’t really trust their impulses and has cash to throw around (or healthcare that provides free access to testing), doing blood testing and supplementing iron if needed, as Elizabeth lays out, is a great idea.
There are approaches that leverage high self-awareness and approaches that leverage substantial financial resources.
If you want to apply this to alignment, the next question would be: is there something in human nature that causes this, or would an AGI likely be subject to similar effects?
Gathering knowledge about what intrinsically motivates AGIs would also be valuable for alignment research, because alignment is about creating motivations for AGIs to do things.
You can reframe that question as: “Is this aspect of the Beatles’ songs aligned with the desires of the audience?”
Both of your examples are about what people, or agents in general, value. AI alignment is about how to align what humans and AGIs value. Understanding something about the nature of value seems applicable to AI alignment.