>Thanks to ongoing technology changes, both of these constraints are becoming more and more slack over time—compute and information are both increasingly abundant and cheap.
>Immediate question: what happens in the limit as the prices of both compute and information go to zero?
>Essentially, we get omniscience: our software has access to a perfect, microscopically-detailed model of the real world.
Nope. A finite-sized computer cannot contain a fine-grained representation of the entire universe. Note that while the *marginal* cost of processing and storage might approach zero, that doesn't mean you can have infinite computers for free, because marginal costs rise with scale. It would be extremely *expensive* to build a planet-sized computer.
>A finite sized computer cannot contain a fine-grained representation of the entire universe.

e^−x cannot ever be zero for finite x, yet it approaches zero in the limit of large x. The OP makes exactly the same sort of claim: our software approaches omniscience in the limit.
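To put the analogy in symbols (my own restatement, not part of the original comment):

```latex
% e^{-x} is strictly positive at every finite x, yet its limit is zero:
e^{-x} > 0 \quad \text{for every finite } x,
\qquad \lim_{x \to \infty} e^{-x} = 0.
```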
It takes more than one atom to represent one atom computationally, so the limit can’t be reached. Really, the issue is going beyond human cognitive limitations.
Of course the limit can't be reached; that's the entire reason why people use the phrase "in the limit".
But it can’t be approached like e^−x either, because the marginal cost of hardware starts to rise once you get low on resources.
Edit:

Exponential decay is a curve that falls smoothly and monotonically toward zero. The marginal cost curve, by contrast, is U-shaped: it falls at first, but turns upward again as resources run low.
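A rough sketch of that contrast (the functions here are illustrative choices of my own, not taken from either comment):

```python
import math

def exponential_decay(x: float) -> float:
    """Falls monotonically toward zero as x grows."""
    return math.exp(-x)

def marginal_cost(quantity: float) -> float:
    """Hypothetical U-shaped marginal cost curve: cheap at moderate
    scale, but rising steeply as resource limits are approached."""
    # Illustrative only: an economies-of-scale term plus a scarcity term.
    return 1.0 / (1.0 + quantity) + 0.01 * quantity**2

for q in [0.1, 1, 10, 100]:
    print(q, exponential_decay(q), marginal_cost(q))
```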
That’s a marginal cost curve at a fixed time. Its shape is not directly relevant to the long-run behavior; what’s relevant is how the curve moves over time. If any fixed quantity becomes cheaper and cheaper over time, approaching (but never reaching) zero as time goes on, then the price goes to zero in the limit.
Consider Moore’s law, for example: the marginal cost curve for compute looks U-shaped at any particular time, but over time the cost of compute falls like e^−kt, with k around ln(2)/(18 months).
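A back-of-the-envelope illustration of that point (the 18-month halving time is taken from the comment above; the starting price is an arbitrary placeholder):

```python
import math

# Cost of a fixed quantity of compute under an idealised Moore's-law trend:
# the price halves every 18 months, i.e. cost(t) = cost(0) * exp(-k * t)
# with k = ln(2) / 18 and t in months.
k = math.log(2) / 18.0
initial_cost = 1000.0  # arbitrary starting price for a fixed bundle of compute

for years in [0, 3, 10, 30]:
    t = 12 * years
    cost = initial_cost * math.exp(-k * t)
    print(f"after {years:2d} years: {cost:.6f}")
# The printed cost shrinks toward zero but never reaches it, which is all
# that "the price goes to zero in the limit" asserts.
```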
Until you hit a hard limit, like lack of resources.