Your comment about cities is not actually true. It is (if I recall correctly) an NP-complete problem. A proposed answer is quick to check, but unless all NP problems turn out to be quick to solve, this one will take a long time [100 cities is relatively large for the problem] unless you exploit very specific parts of the problem’s structure [usually via A* search, which uses a very particular kind of heuristic], and even then you have to be willing to calculate for a long time, or accept an approximate answer if things don’t line up just right. Each extra path between them that needs to be checked adds time, and there are a lot of paths. You could not even store a lookup table for this kind of problem easily unless you were certain of getting this exact problem [and creating that lookup table would not be easy.]
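For a rough sense of scale (my own back-of-the-envelope sketch, counting only the orderings in which the 100 cities could be visited and ignoring the composite paths through the road network entirely):

```python
import math

# With the starting city fixed, there are 99! possible visiting orders.
# This is why neither brute force nor a precomputed table over all
# instances is remotely feasible.
print(math.factorial(99))  # roughly 9.3e155
```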
Your other example seems suspect as well [unless you actually checked the millionth digit of Pi]. A lookup table would work for this problem though [and lookup tables are trivial for Pi.]
It is in fact useful to think about scaling on small problems as well [mostly because some algorithms are so expensive that they become infeasible well before large sizes.] Sometimes it is also correct to use an algorithm that doesn’t scale well when the input size is small. For instance, hybrid quicksorts are superior to pure ones precisely because they switch to a poorly-scaling algorithm once the subranges get small.
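A minimal sketch of what I mean by a hybrid quicksort (illustrative only; real library sorts such as introsort or Timsort are more sophisticated, and the cutoff of 16 is an arbitrary choice):

```python
def insertion_sort(a, lo, hi):
    # O(n^2) in general, but very fast on small slices.
    for i in range(lo + 1, hi + 1):
        x = a[i]
        j = i - 1
        while j >= lo and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x

def hybrid_quicksort(a, lo=0, hi=None, cutoff=16):
    # Quicksort while ranges are large, insertion sort once they get small.
    if hi is None:
        hi = len(a) - 1
    if hi - lo + 1 <= cutoff:
        insertion_sort(a, lo, hi)
        return
    pivot = a[(lo + hi) // 2]
    i, j = lo, hi
    while i <= j:
        while a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
    hybrid_quicksort(a, lo, j, cutoff)
    hybrid_quicksort(a, i, hi, cutoff)
```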
There is a difference between solving the route for these specific 100 cities and for 100 unknown or freely changeable cities. In order to have an interesting problem at all, we need to have that scaling built in.
NP-completeness would mean that any exploitation of problem structure would transfer. You might be thinking of not bothering to solve it exactly, but settling for covering “most cases” while being allowed to miss some options, which is how problems known to be hard in their theoretical pure form can be tamed into having practically fast “solutions”.
You fail to see the issue. There are 100 cities, and an enormous number of paths to get between them [literally infinite if you include cycles, which you have to convince the algorithm to exclude.] You do not know the lengths of these composite paths between cities, so you have to calculate them.
Theoretically, you need to know the length of every path to be sure you have the shortest one. In practice, we can exclude cycles and use heuristics [such as straight-line map distance] to narrow down which paths to compute, but it is still an extremely difficult problem. (It is an equally difficult variant of the traveling salesman problem in computer science.) When I just googled it, a team in Japan was being lauded in 2020 for handling 16 ‘cities’ using a different kind of processor. (I don’t know how links work here. https://www.popularmechanics.com/science/a30655520/scientists-solve-traveling-salesman-problem/ )
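To make the contrast concrete, here is a toy sketch (my own illustration, with made-up random coordinates): exhaustive search over orderings is only feasible for a handful of cities, while a cheap greedy heuristic runs instantly but is only approximate.

```python
import itertools, math, random

def tour_length(order, dist):
    return sum(dist[a][b] for a, b in zip(order, order[1:] + order[:1]))

def exact_tsp(dist):
    # Brute force over all (n-1)! orderings: fine for ~10 cities, hopeless for 100.
    cities = list(range(1, len(dist)))
    best = min(itertools.permutations(cities),
               key=lambda p: tour_length([0, *p], dist))
    return [0, *best]

def nearest_neighbour(dist):
    # Greedy heuristic: always drive to the closest unvisited city. Fast, approximate.
    unvisited = set(range(1, len(dist)))
    tour = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist[tour[-1]][c])
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(9)]  # 9 toy "cities"
dist = [[math.dist(p, q) for q in pts] for p in pts]
print(tour_length(exact_tsp(dist), dist))          # optimal
print(tour_length(nearest_neighbour(dist), dist))  # usually somewhat longer
```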
I agree that we disagree about where the core of the issue is.
Sure, it explodes pretty heavily. But if we are using a constant set of cities, then we could tailor the solution around patterns and knowledge of the composite paths; we would essentially already know them.
It is a different task to multiply two arbitrary numbers together rather than to calculate 1234*5678. In order for a solution to the second question to be a valid solution to the first problem, it needs to be insensitive to the specific numbers used. Timothy Johnson’s main answer was about the fact that, no matter how hard the problem is, if the scope of instances to be covered is 1, then the answer will/can consist of just the value, without any actual computation being involved. For an interesting answer, the computation hinges on how the variation across the cases to be covered is handled, i.e. how the digits of the numbers provided affect what calculations need to be done to compute the product. But that has the character of scaling.
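A toy illustration of that distinction: the first function below covers exactly one instance and involves no computation to speak of; the second covers every instance, and the work it does depends on the digits it is given.

```python
def product_of_1234_and_5678():
    # Covers one instance; the "computation" happened when the constant was written down.
    return 7006652

def product(a, b):
    # Covers every instance; grade-school long multiplication, whose work
    # grows with the number of digits of b.
    result = 0
    for i, digit in enumerate(reversed(str(b))):
        result += a * int(digit) * 10 ** i
    return result

assert product(1234, 5678) == product_of_1234_and_5678()
```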
It might be useful to specify some of the effects that are specific to this instance. For instance:
Driving between a set of cities might be closer to the Euclidean case than the general Traveling Salesman Problem.
In theory, it could be more difficult if driving costs are asymmetric. In practice this might take the form of one or two roads closed for construction, which, by constraining the shape of the solution, might actually make the computation to find an optimal path faster.
Ditch the quest for the optimal route, accept one that is at most, say, 3 times the optimal length, and the problem gets easier (see the sketch below).
If you’re talking about how, in practice, the running time is a lot better than theory’s worst case, then say that.
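As an example of how relaxing optimality helps (the standard textbook idea, sketched roughly here with assumed Euclidean coordinates): for symmetric distances obeying the triangle inequality, a preorder walk of a minimum spanning tree yields a tour at most twice the optimal length, and it is cheap to compute.

```python
import math

def mst_preorder_tour(pts):
    # 2-approximation for metric TSP: build a minimum spanning tree (Prim's
    # algorithm), then visit the cities in a depth-first preorder of that tree.
    n = len(pts)
    dist = lambda i, j: math.dist(pts[i], pts[j])
    in_tree, parent = {0}, {0: None}
    best = {i: (dist(0, i), 0) for i in range(1, n)}
    while len(in_tree) < n:
        i = min(best, key=lambda k: best[k][0])
        _, p = best.pop(i)
        in_tree.add(i)
        parent[i] = p
        for j in best:
            if dist(i, j) < best[j][0]:
                best[j] = (dist(i, j), i)
    children = {i: [] for i in range(n)}
    for i, p in parent.items():
        if p is not None:
            children[p].append(i)
    tour, stack = [], [0]
    while stack:
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    return tour
```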
Note: I already responded to him directly about his reply to me.
The fact that the specific and general differ is unimportant to my point. You don’t have the answer to start with, and so you have to calculate it. The calculation is what computation is. You can’t just assume that you already know the answer, and claim that makes computing it trivial.
The cities being constant changes nothing in this discussion, since you still had to compute it before putting the answer in the lookup table, and knowing which cities it was is only a precondition, not an answer. Memoization (the technical term for keeping track of the intermediate results of your computation) is useful, but not a panacea.
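For what it’s worth, here is a minimal illustration of memoization (the generic technique, not a solution to the routing problem; the quantity being cached is just a toy):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def num_tours(n):
    # Number of tours of n cities with a fixed start, counted the slow
    # recursive way. The cache means each value is computed exactly once;
    # every later identical call is a lookup.
    return 1 if n <= 2 else (n - 1) * num_tours(n - 1)

num_tours(20)  # computed
num_tours(20)  # cache hit: no recomputation, but the first call still paid for it
```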
If I am a programmer and I can do the calculation on behalf of my program beforehand, at compile time, and avoid any runtime computation, that is still significant. We can’t have the compute time include everything about understanding the question; otherwise we would need to include the kindergarten time spent learning what the word “city” means. Thus, while “global compute” is inescapable, it is proper to focus only on the time spent after the algorithm has been designed and frozen in place.
Calculating at compile time is still obviously computation! Obviously, if you can do it, it is usually better to, but it is also irrelevant to the point. This isn’t something that merely takes a long time to calculate, such that you could run it for a few hours or days while creating the program and record the result. You cannot, in fact, calculate this beforehand, because it is computationally infeasible. (In some cases, where the heuristics mentioned earlier work well enough, it can be computed, but that relies on the structure of the problem, and it still requires a lot of computation.)
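To be concrete about why moving work to compile time doesn’t make it disappear, here is a hypothetical build script (my sketch, with made-up coordinates): it still has to run the solver once before anything can be baked in, and for 100 real cities that run would never finish.

```python
# build_step.py -- hypothetical offline ("compile time") precomputation.
import itertools, math

CITIES = {"A": (0, 0), "B": (3, 4), "C": (6, 0)}  # made-up coordinates

def tour_length(order):
    pts = [CITIES[c] for c in order] + [CITIES[order[0]]]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

# The same exhaustive search the runtime program would have needed;
# instant for 3 cities, infeasible for 100.
best = min(itertools.permutations(CITIES), key=tour_length)

# Bake the answer into a module the shipped program can import for free.
with open("precomputed_route.py", "w") as f:
    f.write(f"ROUTE = {list(best)!r}\n")
```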
Obviously, we are just talking past each other, so I’ll stop responding here.
Sorry, you misunderstood my point. Perhaps I’m being a little pedantic and unclear.
For the cities example, the point is that when the problem domain is restricted to a single example (the top 100 cities in the US), there is some program out there that outputs the list of cities in the correct order.
You can imagine the set of all possible programs, similar to The Library of Babel. Within that set, there are programs that print the 100 cities in every possible order. One of them is the correct answer. I don’t need to know which program that is to know that it exists.
This is why the OP’s suggestion to define “fundamental units of computation” doesn’t really make sense, and why the standard tools of computational complexity theory are (as far as I know) the only reasonable approach.
If that’s what you meant, it was rather unclear in the initial comment. It is, in fact, very important that we do not know what the sequence is. You could see it this way: the computation is determining which book in the Library of Babel to look at. There is only one correct book [though some are close enough], and we have to find that one [thus, it is a search problem.] How difficult this search is, is actually a well-defined question, but there are multiple ways of carrying it out [for instance, with a specialist algorithm, or a general one.]
Of course, I do agree that a lookup table can make some problems trivial, but that doesn’t work for this sort of thing [and a lookup table of literally everything is basically what the Library of Babel would be.] Pure dumb search doesn’t work that well, especially when the table is infinite.
Edit: You can consider finding it randomly to be the upper bound on computational difficulty, but lowering that bound requires an actual algorithm [or at least a good description of the kind of thing it is], not just the fact that an algorithm exists. The Library of Babel proves very little in this regard. (Note: I had to edit my edit due to writing something incorrect.)