No. FAI is about figuring out how to implement precise preference, not an approximation of it appropriate for non-magical environments. That requires completely different tools.
It seems that to work on FAI, one has to become a mathematician and theoretical computer scientist (whatever the actual career).
What do you mean by “non-magical environments”?
I gave a link! A non-magical environment gives limited expressive power, so there are few surprising situations that the given heuristics don’t capture. With enough testing and debugging, you may get your weakly intelligent robot to behave. Where more possibilities are open, you have to get preference exactly right, or the decisions will be obviously wrong (see The Hidden Complexity of Wishes).
Your terminology was unclear, but this definition is not—I would tend to call it an “organic” environment.