How do you reconcile the obvious conflict between the rationalization for why you persist in solving the hardest problem in all the cosmos, and the probability of it not being completed in time?
To unpack that: your pursuit of a theory of provably correct, recursively self-improving seed AGI is a daunting task, to say the least.