the potential difficulty of the concepts necessary to formulate the solution
As I see it, there may be considerable conceptual difficulty in formulating even the exact problem statement. For instance, given that we want a 'friendly' AI, our problem statement depends heavily on our notion of friendliness — hence the necessity of including psychology.
Going further, since SI aims to minimize AI risk, we need to be clear about which AI behaviors constitute a 'risk'. If I remember correctly, the AI in the movie "I, Robot" concludes that restraining the human race is the only way to save it from itself. Defining risk in such a scenario is a very delicate problem.
That our health and body chemistry affect our mental processes is not unreasonable to expect. More interesting would be if this goes the other way: do our belief systems and rationality have a profound impact on our body chemistry?
For instance, I wonder whether being rational and self-aware drives our digestive system to become cleverer over time. Consider that we may have a stock of gastric juices which our body tries and tests on various kinds of food, keeping track of which work best and adapting accordingly. It may also try to create new juices and see how they perform. At the extreme end, we would be leading our body to set up a gastrochemistry lab in our guts.
Another example: I would hope that studying computer science might lead one's own brain to apply those concepts to optimize its neural connections in some way — giving us a 'speedup', so to speak.