Can you iterate through 10^100 objects?
If you have a 1 GHz CPU you can do 1,000,000,000 operations per second. Let's assume that iterating through one object takes only one operation.
In a year you can do about 10^16 operations. That means it would take about 10^84 years to iterate through 10^100 vertices.
The Big Bang was about 1.4*10^10 years ago.
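Here's a quick back-of-the-envelope version of that arithmetic in Python, assuming a 1 GHz CPU, one operation per object, and no parallelism (the roughly 3×10^16 operations per year are rounded down to 10^16 above, which is where the 10^84 figure comes from):

```python
# Back-of-the-envelope check of the numbers above, assuming a 1 GHz CPU,
# one operation per object, and no parallelism.
ops_per_second = 1e9                              # 1 GHz
seconds_per_year = 365.25 * 24 * 3600             # ~3.2e7 s
ops_per_year = ops_per_second * seconds_per_year  # ~3.2e16 (rounded to 10^16 above)

objects = 1e100
years_needed = objects / ops_per_year             # ~3e83 years, i.e. roughly 10^84

age_of_universe_years = 1.4e10
print(f"operations per year:  {ops_per_year:.1e}")
print(f"years needed:         {years_needed:.1e}")
print(f"ages of the universe: {years_needed / age_of_universe_years:.1e}")
```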
Where do you even put the 10^100 objects you’re iterating through? What made you pick 10^100 as the scale of difficulty? I mean, you’ve ignored parallelism and the sheer number of processing pipelines that could handle things simultaneously, but that only buys you a dozen or so orders of magnitude, not 100. Exponents go up fast.
So, to answer your title, “no, I cannot”. Fortunately, I CAN abstract and model that many objects, if they’re similar in most of the ways that matter to me. The Earth, for instance, has about 10^50 atoms (note: that’s not half the size of your example, it’s 1/10^50 the size). And I can make a fair number of predictions about it. And there’s a LOT of its behavior I can’t predict.
Yes, abstraction is the right thing to think about. That is the context in which I was considering this computation. In this post I describe a sort of planning abstraction that you can do if you have an extremely regular environment. It does not yet talk about how to store this environment, but you are right that this can of course also be done similarly efficiently.
Do you need to store information about each object? If so, do you need to do so before or after the operation?
If you need to store information about each object before processing (let’s say 1 bit per object, for simplicity), the Landauer limit says you need 10^100 bits × 2.6×10^-23 J/bit × (1 kg / 9×10^16 J) ≈ 3×10^60 kg of mass to store that information (at the current cosmic microwave background temperature of 2.7 K). That’s about a factor of 10^7 more than the mass of the observable universe, so the current universe could not even store 10^100 bits of information for you to perform your operation on in the first place.
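A rough sketch of that calculation in Python, assuming 1 bit per object, the 2.7 K CMB temperature, and E = mc²; the ~1.5×10^53 kg figure for ordinary matter in the observable universe is a commonly quoted rough estimate:

```python
import math

# Rough version of the Landauer-limit calculation above: 1 bit per object,
# T = 2.7 K (CMB temperature), energy converted to mass via E = mc^2.
k_B = 1.380649e-23                       # Boltzmann constant, J/K
T = 2.7                                  # K
c = 2.998e8                              # m/s

energy_per_bit = k_B * T * math.log(2)   # ~2.6e-23 J per bit
total_energy = 1e100 * energy_per_bit    # J, to store 10^100 bits
mass_needed = total_energy / c**2        # ~3e60 kg

mass_observable_universe = 1.5e53        # kg, rough estimate of ordinary matter
print(f"energy per bit:          {energy_per_bit:.1e} J")
print(f"mass to store 1e100 bits: {mass_needed:.1e} kg")
print(f"vs. observable universe:  {mass_needed / mass_observable_universe:.0e}x")
```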
I think if you’re willing to use all the mass in the universe and wait a trillion billion years or so for the universe to cool off, you might be able to store one bit of output per operation for 10^100 operations, assuming you can do some sort of clever reversible computing thing to make the operations themselves approximately free.
Is there some specific computation you are thinking of that is useful if you can do it 10^100 times but not useful if you can only do it 10^50 times?
In this post, I describe a toy setup where I have a graph of 10^100 vertices. I would like to compute, for any two vertices A and B, how to get from A to B, i.e. a path from A to B.
The point is that if we have a very special graph structure, we can do this very efficiently: O(n), where n is the plan length.
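To make that concrete, here is a minimal sketch of the kind of computation I have in mind, under the illustrative assumption that the 10^100 vertices are the points of a 100-dimensional lattice with 10 positions per axis, and edges connect points differing by one step in a single coordinate. The encoding is only an example, but it shows how a path can be computed in O(n) without ever enumerating the graph:

```python
from typing import List, Tuple

Vertex = Tuple[int, ...]  # one of the 10^100 points of a 100-dimensional lattice

def plan_path(a: Vertex, b: Vertex) -> List[Vertex]:
    """Return a path from a to b, changing one coordinate by one step at a time.

    Runs in O(n), where n is the resulting plan length.  The 10^100 vertices
    are never enumerated or stored; only the coordinates of a and b are used.
    """
    path = [a]
    current = list(a)
    for i, target in enumerate(b):
        while current[i] != target:
            current[i] += 1 if current[i] < target else -1
            path.append(tuple(current))
    return path

# Two vertices of the 10^100-vertex lattice: 100 coordinates, each in 0..9.
a = tuple([0] * 100)
b = tuple([3] * 5 + [0] * 95)
print(len(plan_path(a, b)) - 1)  # 15 steps, found without touching the rest of the graph
```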
In that post, you say that you have a graph of 10^100 vertices with a particular structure. In that scenario, where is that structured graph of 10^100 vertices coming from? Presumably there’s some way you know the graph looks like a regular, structured lattice rather than an arbitrary tangle of edges.
If you know that your graph is a nice sparse graph that has lots of symmetries, you can take advantage of those properties to skip redundant parts of the computation. And when each of your 10^100 nodes has at most 100 inbound edges and 100 outbound edges, then you only have on the order of a trillion distinct nodes, if we consider e.g. (0, 0, 0, ..., 0, 0, 1) to be identical to (0, 0, 0, ..., 0, 1, 0, ..., 0, 0, 0).

It’s probably worth looking at the process which is generating this graph, and figuring out if we can translate the output of that process directly to a coordinate in our 100-dimensional space without going through the “translate the output to a graph, and then embed that graph” intermediate step.
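As a sanity check of that “order of a trillion” figure, under one natural reading (my interpretation, not necessarily the exact model): with 100 coordinates that each take one of 10 values you get 10^100 raw nodes, and identifying tuples that are permutations of each other leaves one node per multiset of 100 values drawn from 10 symbols, which comes out to a few trillion:

```python
import math

# Count distinct nodes up to permutation of coordinates, assuming 100
# coordinates that each take one of 10 values (10^100 raw nodes).  Identifying
# tuples that are permutations of one another (e.g. (0, ..., 0, 1) and
# (0, ..., 1, ..., 0)) leaves one node per multiset of 100 values from 10 symbols.
dimensions = 100
values_per_dimension = 10

raw_nodes = values_per_dimension ** dimensions  # 10^100
distinct_up_to_permutation = math.comb(
    dimensions + values_per_dimension - 1, values_per_dimension - 1
)  # C(109, 9)

print(f"{distinct_up_to_permutation:.3e}")  # ~4.3e12, i.e. a few trillion
```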
The point is that you are just given some graph. This graph is expected to have subgraphs which are lattice graphs, but you don’t know where they are. And the graph is so big that you can’t iterate over the entire graph to find these lattices. Therefore you need a way to embed the graph without traversing it fully.
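One hedged illustration of what a purely local approach could look like (this is my own sketch, not the method of the post): if the graph is only available through a neighbor function, you can test whether the neighborhood of a single vertex is consistent with the interior of a d-dimensional grid, touching only the 2-hop ball around that vertex rather than the full graph:

```python
from itertools import combinations
from typing import Callable, Hashable, Iterable

Vertex = Hashable

def looks_locally_like_grid(v: Vertex,
                            neighbors: Callable[[Vertex], Iterable[Vertex]]) -> bool:
    """Purely local heuristic: is v's neighborhood consistent with the interior
    of a d-dimensional grid?  In a grid, v has 2d neighbors; any two moves along
    different axes share a common neighbor besides v (they close a square), and
    exactly the d pairs of opposite moves do not.  Only the 2-hop ball around v
    is examined -- the rest of the (possibly astronomical) graph is never touched.
    """
    nbrs = list(neighbors(v))
    if len(nbrs) == 0 or len(nbrs) % 2 != 0:
        return False
    d = len(nbrs) // 2
    non_closing_pairs = 0
    for u, w in combinations(nbrs, 2):
        common = (set(neighbors(u)) & set(neighbors(w))) - {v}
        if not common:
            non_closing_pairs += 1
    return non_closing_pairs == d

# The graph is defined implicitly by a neighbor function, so its vertex set
# (modest here, astronomical in principle) is never materialized.
def grid_neighbors(p):
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def star_neighbors(p):
    return [1, 2, 3, 4] if p == 0 else [0]   # a star: vertex 0 joined to 1..4

print(looks_locally_like_grid((5, 7), grid_neighbors))  # True
print(looks_locally_like_grid(0, star_neighbors))       # False
```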