Do you need to store information about each object? If so, do you need to do so before or after the operation?
If you need to store information about each object before processing (let’s say 1 bit per object, for simplicity), the Landauer limit says you need

10^100 bits × 2.6×10^−23 J/bit × (1 kg / 9×10^16 J) ≈ 3×10^60 kg

of mass to store that information (at the current cosmic microwave background temperature of 2.7 K). That’s about a factor of 10^7 more than the mass of the observable universe, so the current universe could not even store 10^100 bits of information for you to perform your operation on in the first place.
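As a sanity check, the arithmetic above can be reproduced directly from the Landauer bound kT·ln 2 per bit and E = mc²:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T_CMB = 2.7          # current CMB temperature, K
C_SQUARED = 9e16     # c^2 in J/kg, so mass = energy / c^2

bits = 1e100
# Landauer limit: minimum energy to store/erase one bit at temperature T
energy_per_bit = K_B * T_CMB * math.log(2)   # ~2.6e-23 J/bit
total_energy = bits * energy_per_bit          # J
mass = total_energy / C_SQUARED               # kg

print(f"{energy_per_bit:.2e} J/bit -> {mass:.2e} kg")
```

This lands at roughly 3×10^60 kg, matching the estimate in the text.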
I think if you’re willing to use all the mass in the universe and wait a trillion billion years or so for the universe to cool off, you might be able to store one bit of output per operation for 10^100 operations, assuming you can do some sort of clever reversible computing thing to make the operations themselves approximately free.
Is there some specific computation you are thinking of that is useful if you can do it 10^100 times but not useful if you can only do it 10^50 times?
In this post, I describe a toy setup where I have a graph of 10^100 vertices. I would like to compute, for any two vertices A and B, how to get from A to B, i.e. compute a path from A to B.
The point is that if we have a very special graph structure we can do this very efficiently: O(n), where n is the plan length.
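To make the O(n) claim concrete, here is a minimal sketch under one possible reading of "very special graph structure": the vertices are integer coordinate tuples and each edge changes one coordinate by ±1 (a lattice graph). Then a path falls out coordinate by coordinate, without ever searching the graph:

```python
def lattice_path(a, b):
    """Yield successive vertices on a shortest path from a to b,
    assuming vertices are coordinate tuples and edges step one
    coordinate by +/-1. Runs in O(n), n = number of steps."""
    cur = list(a)
    yield tuple(cur)
    for i in range(len(cur)):
        step = 1 if b[i] > cur[i] else -1
        while cur[i] != b[i]:
            cur[i] += step
            yield tuple(cur)

path = list(lattice_path((0, 0, 0), (2, -1, 0)))
# The path length equals the Manhattan distance between the endpoints.
```

This is an illustration of the general idea, not the post's exact construction; the names and the choice of ±1 lattice edges are assumptions for the sketch.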
In that post, you say that you have a graph of 10^100 vertices with a particular structure. In that scenario, where is that structured graph of 10^100 vertices coming from? Presumably there’s some way you know the graph looks like this
rather than looking like this
If you know that your graph is a nice sparse graph that has lots of symmetries, you can take advantage of those properties to skip redundant parts of the computation. And when each of your 10^100 nodes has at most 100 inbound edges and 100 outbound edges, then you only have on the order of a trillion distinct nodes (if we consider e.g. (0, 0, 0, ..., 0, 0, 1) to be identical to (0, 0, 0, ..., 0, 1, 0, ..., 0, 0, 0)).
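A toy illustration of that collapse, under the assumption (suggested by the tuples above) that node states are 0/1 vectors and the structure is symmetric under permuting positions: the canonical representative of a state is just its sorted tuple, so only the number of 1s matters.

```python
from itertools import product

def canonical(state):
    """Canonical representative of a state under permutation symmetry:
    sorting the 0/1 vector keeps only the count of 1s."""
    return tuple(sorted(state))

# In a 12-bit toy version, 2^12 = 4096 raw states collapse to
# just 13 equivalence classes (0 through 12 ones).
classes = {canonical(s) for s in product((0, 1), repeat=12)}
```

The same idea scales: exponentially many raw states, but only polynomially many classes to actually compute over, provided the symmetry really holds.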
It’s probably worth looking at the process which is generating this graph, and figuring out if we can translate the output of that process directly to a coordinate in our 100-dimensional space without going through the “translate the output to a graph, and then embed that graph” intermediate step.
The point is that you are just given some graph. This graph is expected to have subgraphs which are lattice graphs, but you don’t know where they are. And the graph is so big that you can’t iterate over the entire graph to find these lattices. Therefore you need a way to embed the graph without traversing it fully.
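One way to make "without traversing it fully" concrete: if all you have is a neighbor oracle, you can at least probe the local structure around a vertex. In a lattice, two non-opposite steps commute, i.e. they close a square. The sketch below (my own illustrative probe, not the post's method; all names are hypothetical) checks that local property using only 2-hop queries:

```python
def closes_square(neighbors, v, a, b):
    """In a lattice, two non-opposite steps a, b from v close a square:
    a and b share a second common neighbor besides v."""
    return len((set(neighbors(a)) & set(neighbors(b))) - {v}) >= 1

def looks_lattice_like(neighbors, v):
    """Local test: in a d-dim lattice, each neighbor of v fails to close
    a square with exactly one other neighbor (its opposite direction)."""
    ns = neighbors(v)
    for a in ns:
        non_closing = sum(
            1 for b in ns if b != a and not closes_square(neighbors, v, a, b)
        )
        if non_closing != 1:
            return False
    return True

# Toy oracle for an (infinite) 2-D grid: vertices are (x, y) pairs.
def grid_neighbors(p):
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

# Counterexample oracle: a star graph, which is sparse but not a lattice.
def star_neighbors(x):
    return [1, 2, 3] if x == 0 else [0]
```

Each call touches only the 2-hop ball around v, so the cost is independent of graph size. Of course, a local test like this can only certify that a neighborhood is lattice-like; stitching such probes into a global embedding is the hard part the post is after.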