The “cookbook with practical FAI advice” is in some sense an optimization of a general solution: you add constraints corresponding to limited resources and specific approaches, which makes the task harder, like adding friction to a “calculate where the trains will collide” problem.
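To unpack the analogy, here is a sketch of my own (the symbols d, v_1, v_2 and the constant friction deceleration a are assumptions for illustration): the frictionless version has a single closed-form answer, while the constrained version splits into cases.

```latex
% Frictionless: trains start a distance d apart, approaching at speeds v_1, v_2:
x_1(t) = v_1 t, \qquad x_2(t) = d - v_2 t
\;\Rightarrow\; t^{*} = \frac{d}{v_1 + v_2}.

% With a constant friction deceleration a, each position becomes quadratic
% and is only valid until that train stops at t_i = v_i / a:
x_1(t) = v_1 t - \tfrac{1}{2} a t^{2}, \qquad
x_2(t) = d - v_2 t + \tfrac{1}{2} a t^{2}.
```

Now you must first check whether either train halts before the meeting point, so the one formula becomes a case analysis; the constrained problem isn't conceptually deeper, just messier, which is the point about the cookbook.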
It seems like a good idea to have a solution to the general problem, something which provably has the properties we want it to have, before we deal with how that solution survives the transition to actual AGI implementations.
Skipping this step (the theoretical foundation) would be tantamount to saying “this trick seems like it could work, for ill-defined values of ‘could’”.
Also, for such a cookbook to be taken seriously, and not dismissed as just more speculation, a “this derives from a general safety theorem, see the Greek alphabet soup (sterile) in the appendix” would provide a much stronger incentive to take the actual guidelines seriously.
In Zen, you must first know how to grow the tree before you alpha-beta prune it.
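To make the metaphor concrete, a minimal sketch (mine, purely illustrative; the Node class and toy tree are hypothetical): alpha-beta pruning is only well-defined once the game tree and its leaf evaluations already exist, just as the cookbook's shortcuts are only well-defined relative to a general solution.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    value: float = 0.0                      # leaf evaluation ("growing the tree")
    children: List["Node"] = field(default_factory=list)

def alphabeta(node, depth, alpha, beta, maximizing):
    """Textbook alpha-beta pruning over an already-grown game tree."""
    if depth == 0 or not node.children:
        return node.value                   # presupposes the tree and its values
    if maximizing:
        best = float("-inf")
        for child in node.children:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:
                break                       # prune: this subtree cannot change the result
        return best
    best = float("inf")
    for child in node.children:
        best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
        beta = min(beta, best)
        if alpha >= beta:
            break                           # prune
    return best

# A toy tree: pruning skips the leaf worth 9 -- but only because the
# full tree was defined first.
tree = Node(children=[
    Node(children=[Node(value=3.0), Node(value=5.0)]),
    Node(children=[Node(value=2.0), Node(value=9.0)]),
])
print(alphabeta(tree, 2, float("-inf"), float("inf"), True))  # -> 3.0
```

The pruning step never touches the question of what the tree or the evaluation function should be; it optimizes a structure that must already be there.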
ETA: There is a case to be made that no general solution can exist (à la the Halting Problem), or that it is practically unattainable, or that it cannot be ‘dumbed down’ to work for actual approaches, and that we must therefore focus on solving only specific problems. We’re not at that point yet, though, IME.
See my later reply to Nate here:
http://lesswrong.com/lw/l7o/miri_research_guide/bl0n