The point in the linked post doesn’t deal substantially with the limits of arbitrarily large computers. It is just an intuition pump for the idea that a fast, moderately bright intelligence could be dangerous.
Is it a good intuition pump? To me it is like using a Turing machine, with its unbounded tape, as an intuition pump for how much memory we might have in the future.
We will never have anywhere near infinite memory. We will have a lot more than we have at the moment, but the Turing machine concept is not useful for gauging the scope and magnitude of that growth.
I’m trying to find the other post that annoyed me in this fashion. Something to do with simulating universes.