I’m not sure what that limit is intended to represent in terms of a proposition about how easy it is to find consistent preferences nearby.
Why focus on the size of the set of graphs at minimal edit distance? The only thing I can think of is some idea that to find one you need to search for it “randomly” in some sense, and if there were a large set of target graphs then they would be easier to find in that way.
It should be very much easier to find such graphs than that. Any cycles in G must be broken, for example, and cycle-finding is very much easier than breadth-first searching for graphs without such cycles. Similarly, there are more efficient algorithms for the other required properties. I’m also not sure why an upper bound on the size of the set of graphs equal to the number of nodes is relevant at all, and this may indicate that I’m completely missing something.
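To illustrate the cycle-finding point, here is a minimal sketch (my own illustration, assuming preferences are stored as a directed graph where an edge a → b means a is preferred to b): a single depth-first traversal finds a cycle to break in O(V + E) time, with no search over nearby graphs at all.

```python
# Minimal sketch: find one directed cycle in a preference graph, if any.
# Assumes `graph` maps each node to the set of nodes it is preferred to.
# A single O(V + E) depth-first search, rather than a breadth-first search
# over the space of nearby graphs.

def find_cycle(graph):
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on current DFS path / done
    colour = {v: WHITE for v in graph}
    parent = {}

    def dfs(v):
        colour[v] = GREY
        for w in graph.get(v, ()):
            if colour.get(w, WHITE) == WHITE:
                parent[w] = v
                cycle = dfs(w)
                if cycle:
                    return cycle
            elif colour.get(w) == GREY:    # back edge: reconstruct the cycle
                cycle, u = [w], v
                while u != w:
                    cycle.append(u)
                    u = parent[u]
                return cycle[::-1]
        colour[v] = BLACK
        return None

    for v in list(graph):
        if colour[v] == WHITE:
            cycle = dfs(v)
            if cycle:
                return cycle
    return None                             # no cycle: preferences are already acyclic

# Example: the inconsistent preferences a > b, b > c, c > a.
print(find_cycle({"a": {"b"}, "b": {"c"}, "c": {"a"}}))  # -> ['b', 'c', 'a']
```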
If the target set of graphs were random, we could conclude that the size of the set of targets at minimal distance increases roughly like n^2. Each edit multiplies the set of reachable graphs by a factor of roughly n^2, and so the median edit distance that first includes a target will typically include O(n^2) of them. Path graphs as the target set are not random, however, and merely having a random starting graph does not guarantee this property.
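Spelling out the counting behind that (my own back-of-the-envelope framing, assuming an “edit” means adding or deleting a single directed edge among n labelled nodes):

```latex
\[
  \#\{\text{possible single edits}\} \;=\; n(n-1) \;\approx\; n^{2},
  \qquad
  \#\{\text{graphs at edit distance } d\} \;=\; \binom{n(n-1)}{d} \;\approx\; \frac{n^{2d}}{d!},
\]
```

so each additional unit of distance multiplies the reachable set by roughly n^2, and for a uniformly random target set the first distance shell containing any target would typically contain on the order of n^2 of them.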
Edit: The size of the set of minimal-distance target graphs actually depends upon how you take the limits. If you take some limits in one order you get quadratic behaviour, for another you get linear, and for a mixed ordering you get no convergence. Such are the perils of trying to do things like probability on countable sets.
I’m somewhat interested in this mainly for nerd-sniped reasons, so take that into account :-)
If the set of path graphs at minimal graph edit distance from our current preferences has size 1, we are lucky and we don’t have to worry about making our preferences consistent in the “wrong” way (whatever that would mean). Maybe a way to conceptualize the size of that set is that it is a measure of how confused our preferences are: if we already have consistent preferences, we have 0 confusion about our preferences; if the set of consistent preferences that could correspond to ours has size n! (for n nodes in our graph), we are maximally confused.
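For small n that confusion measure can be brute-forced directly; here is a minimal sketch (my own illustration, assuming preferences form a directed graph on labelled items and an edit adds or removes one directed edge):

```python
# Minimal sketch (my illustration): measure "confusion" as the number of
# path graphs (total orders) at minimal edge-edit distance from a given
# preference digraph.  Brute force, only feasible for small n.
from itertools import permutations

def edit_distance(edges_a, edges_b):
    """Number of directed edges to add or remove to turn one edge set into the other."""
    return len(edges_a ^ edges_b)

def confusion(nodes, preference_edges):
    """Return (minimal edit distance, number of path graphs achieving it)."""
    best, count = None, 0
    for order in permutations(nodes):
        # Path graph for this total order: each item points to its successor.
        path_edges = {(order[i], order[i + 1]) for i in range(len(order) - 1)}
        d = edit_distance(preference_edges, path_edges)
        if best is None or d < best:
            best, count = d, 1
        elif d == best:
            count += 1
    return best, count

# Example: cyclic preferences a > b, b > c, c > a.
nodes = ["a", "b", "c"]
prefs = {("a", "b"), ("b", "c"), ("c", "a")}
print(confusion(nodes, prefs))  # (1, 3): three total orders are equally close
```

With this toy definition, the cyclic example has three total orders that are all equally close, while a graph that is already a path graph gives (0, 1), i.e. zero confusion.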
I’ll have to think about your last paragraph—from the data I’ve collected it seems roughly correct, so props to you :-)