Yes, I think this objection captures something important.
I have proven that aligned AI must exist, and that it must be practically implementable.
But a failure can still occur even when success was possible: a “near miss” on achieving the desired goal.
I will address these near misses in future posts.