That’s something like my objection to CEV—I currently believe that some fraction of important knowledge is gained by blundering around and (or?) that the universe is very much more complex than any possible theory about it.
This means that you can’t fully know what your improved (by what standard?) self is going to be like.
It’s the difference between knowing the algorithm and knowing its output—the local particulars of portions of that output can’t be predicted from the algorithm alone.