One of the open problems MIRI is working on for FAI is exactly this kind of logical uncertainty. The AI should be able to modify itself if it discovers that the logic underlying its basic programming is incorrect.