If we start with 5 and repeatedly square we get 5, 25, 625, 390625, 152587890625, … Note how some of the end digits are staying the same each time. If we continue this process we get a number …8212890625 which is a solution of x^2 = x. We get another solution by subtracting this from 1 to get …1787109376.
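Here is a quick sketch of that iteration in Python (variable names are mine); working modulo 10^10 makes the stable trailing digits visible, and the iteration converges to the last ten digits of the fixed point ending in 5:

```python
x = 5
for _ in range(5):
    print(x)            # 5, 25, 625, 390625, 152587890625
    x = x * x

# Keep only trailing digits: iterate x -> x^2 mod 10^10 until it stabilises.
x, mod = 5, 10**10
while (x * x) % mod != x:
    x = (x * x) % mod
print(x)                # 8212890625, the last ten digits of the solution ending in 5
```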
if not, then we’ll essentially need another way to define determinant for projective modules because that’s equivalent to defining an alternating map?
There are a lot of cases in mathematics where two notions can be stated in terms of each other, but that doesn’t tell us which order to define things in.
The only other thought I have is that I have to use the fact that M is projective and finitely generated. This is equivalent to M being dualisable. So the definition is likely to use the dual module M^* somewhere.
I’m curious about this. I can see a reasonable way to define Λ^M in terms of sheaves of modules over Spec(R): over each connected component, M has some constant dimension n, so we just let Λ^M be Λ^n over that component.
If we call this construction Λ_0^M then the construction I’m thinking of is Λ^M(N) = Λ_0^M(N) ⊗ Λ_0^M(M)^*. Note that Λ_0^M(M) is locally 1-dimensional, so my construction is locally isomorphic to yours but globally twisted. It depends on M via more than just its local dimension. Also note that with this definition we will get that Λ^M(M) is always isomorphic to R.
But it sounds like you might not like this definition,
Right. I’m hoping for a simple definition that captures what the determinant ‘really means’ in the most general case. So it would be nice if it could be defined with just linear algebra without having to bring in the machinery of the spectrum.
and I’d be interested to know if you had a better way of defining Λ^M (which will probably end up being equivalent to this).
I’m still looking for a nice definition. Here’s what I’ve got so far.
If we pick a basis of R^n then it induces a bijection between Hom(R^n, N) and N^n. So we could define a map out of Hom(R^n, N) to be ‘alternating’ if and only if the corresponding map out of N^n is alternating. The interesting thing I noticed about this definition is that it doesn’t depend on which basis you pick for R^n. So I have some hope that since this construction isn’t basis dependent, I might be able to write down a basis-independent definition of it. Then it would apply equally well with R^n replaced with M, whereupon we can define Λ^M(N) as the universal alternating map out of Hom(M, N).
[Edit: Perhaps something in terms of generators and relations, with the generators being linear maps M → N?]
Yeah exactly. That’s probably a simpler way to say what I was describing above. One embarrassing thing is that I don’t even know how to describe the simplest relations, i.e. what the analogues of v ∧ v = 0 should be.
I agree that ‘credence’ and ‘frequency’ are different things. But round here the word ‘probability’ does refer to credence rather than frequency. This isn’t a mistake; it’s just the way we’re using words.
Another thing to say is that if B = exp(A) then det(B) = exp(tr(A)).
My intuition for exp is that it tells you how an infinitesimal change accumulates over finite time (think compound interest). So the above expression is equivalent to det(I + εA) ≈ 1 + ε tr(A) for infinitesimal ε. Thus we should think ‘If I perturb the identity matrix, then the amount by which the unit cube grows is proportional to the extent to which each vector is being stretched in the direction it was already pointing’.
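A quick numerical sanity check of both statements (a sketch; numpy/scipy assumed, variable names mine):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))

# det(exp(A)) = exp(tr(A))
print(np.linalg.det(expm(A)), np.exp(np.trace(A)))

# First-order version: det(I + eps*A) is approximately 1 + eps*tr(A) for small eps
eps = 1e-6
print(np.linalg.det(np.eye(3) + eps * A), 1 + eps * np.trace(A))
```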
Thank you for that intuition into the trace! That also helps make sense of det(exp(A)) = exp(tr(A)).
This is close to one thing I’ve been thinking about myself. The determinant is well defined for endomorphisms on finitely-generated projective modules over any ring. But the ‘top exterior power’ definition doesn’t work there because such things do not have a dimension. There are two ways I’ve seen for nevertheless defining the determinant.
View the module as a sheaf of modules over the spectrum of the ring. Then the dimension is constant on each connected component, so you can take the top exterior power on each and then glue them back together.
Use the fact that finitely-generated projective modules are precisely those which are the direct summands of a free module. So given an endomorphism f: M → M you can write M ⊕ N ≅ R^n and then define det(f) = det(f ⊕ id_N).
These both give the same answer. However, I don’t like the first definition because it feels very piecemeal and nonuniform, and I don’t like the second because it is effectively picking a basis. So I’ve been working on my own definition where instead of defining Λ^n for natural numbers n we instead define Λ^M for finitely-generated projective modules M. Then the determinant of an endomorphism f: M → M is defined via Λ^M(f).
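For what it’s worth, here is a toy numerical illustration of the second definition, over the real numbers (where every finitely-generated projective module is free, so this is only a plausibility check; the idempotent E, the matrices, and all names below are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy "projective module": the image of an idempotent matrix E on R^4,
# where E projects onto the 2-dimensional subspace spanned by the columns of Q.
Q = rng.normal(size=(4, 2))
E = Q @ np.linalg.pinv(Q)            # idempotent with image = span(Q)

# An endomorphism f of the module, presented as a matrix F with F = E F E.
A = rng.normal(size=(4, 4))
F = E @ A @ E

# Definition 2: extend f by the identity on the complementary summand.
det_via_summand = np.linalg.det(F + np.eye(4) - E)

# Sanity check: express f in the basis Q of the image and take an ordinary det.
F_on_image = np.linalg.pinv(Q) @ F @ Q
print(det_via_summand, np.linalg.det(F_on_image))   # the two values agree
```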
I think the determinant is more mathematically fundamental than the concept of volume. It just seems the other way around because we use volumes in everyday life.
https://www.lesswrong.com/posts/AAqTP6Q5aeWnoAYr4/?commentId=WJ5hegYjp98C4hcRt
I don’t dispute what you say. I just suggest that the confusing term “in the worst case” be replaced by the more accurate phrase “supposing that the environment is an adversarial superintelligence who can perfectly read all of your mind except bits designated ‘random’”.
In this case P is the cumulative distribution function, so it has to approach 1 at infinity, rather than the area under the curve being 1. An example would be 1/(1+exp(-x)).
A simple way to do this is for ROB to output the pair of integers {n, n+1} with probability (1/K)((K-1)/(K+1))^|n|, where K is some large number. Then even if you know ROB’s strategy the best probability you have of winning is 1/2 + 1/(2K).
If you sample an event N times the variance in your estimate of its probability is about 1/N. So if we pick K >> √N then our probability of success will be statistically indistinguishable from 1/2.
The only difficulty is implementing code to sample from a geometric distribution with a parameter so close to 1.
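Here is one way around that difficulty (a sketch in Python; the function name and the discrete-Laplace trick are my own choices, not part of the original comment). The target distribution (1/K)((K-1)/(K+1))^|n| is exactly the distribution of a difference of two i.i.d. geometric variables with success probability p = 2/(K+1), and log1p keeps the inverse-CDF step accurate even though the common ratio is extremely close to 1:

```python
import math
import random

def sample_rob_pair(K):
    """Sample ROB's pair {n, n+1}, with n drawn with probability
    (1/K) * ((K-1)/(K+1))**abs(n) (a discrete Laplace distribution)."""
    p = 2.0 / (K + 1)

    def geometric(p):
        # Number of failures before the first success, by inverse-CDF sampling.
        u = 1.0 - random.random()          # uniform in (0, 1], avoids log(0)
        return math.floor(math.log(u) / math.log1p(-p))

    # The difference of two i.i.d. geometric(p) variables has exactly the
    # two-sided distribution above.
    n = geometric(p) - geometric(p)
    return n, n + 1

print(sample_rob_pair(10**9))
```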
The original problem doesn’t say that ROB has access to your algorithm, or that ROB wants you to lose.
Note that Eliezer believes the opposite: https://www.lesswrong.com/posts/GYuKqAL95eaWTDje5/worse-than-random.
Right, I should have chosen a more Bayesian way to say it, like ‘succeeds with probability greater than 1/2’.
The intended answer for this problem is the Frequentist Heresy in which ROB’s decisions are treated as nonrandom even though they are unknown, while the output of our own RNG is treated as random, even though we know exactly what it is, because it was the output of some ‘random process’.
Instead, use the Bayesian method. Let P({a,b}) be your prior for ROB’s choice of numbers. Let x be the number TABI gives you. Compute P({a,b}|x) using Bayes’ Theorem. From this you can calculate P(x = max(a,b)|x). Say that you have the highest number if this is over 1/2, and the lowest number otherwise. This is guaranteed to succeed more than 1/2 of the time, and to be optimal given your state of knowledge about ROB.
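A minimal sketch of that procedure (Python; the prior format and function name are assumptions of mine). Since TABI reveals each element of the chosen pair with probability 1/2, the likelihood of seeing x is the same for every pair containing x, so Bayes’ theorem reduces to renormalising the prior over those pairs:

```python
from fractions import Fraction

def prob_x_is_max(x, prior):
    """prior: dict mapping frozenset({a, b}) to the prior probability that
    ROB chose that pair.  Returns P(x = max(a, b) | TABI showed us x)."""
    containing = {pair: p for pair, p in prior.items() if x in pair}
    total = sum(containing.values())
    if total == 0:
        return Fraction(1, 2)   # x is outside the prior's support: no information
    return sum(p for pair, p in containing.items() if x == max(pair)) / total

# Example: a prior concentrated on two pairs.
prior = {frozenset({3, 7}): Fraction(1, 4), frozenset({7, 10}): Fraction(3, 4)}
x = 7
answer = "highest" if prob_x_is_max(x, prior) > Fraction(1, 2) else "lowest"
print(prob_x_is_max(x, prior), answer)   # 1/4, so say "lowest"
```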
Some related discussions: 1. https://www.conwaylife.com/forums/viewtopic.php?f=2&t=979 2. https://www.conwaylife.com/forums/viewtopic.php?f=7&t=2877 3. https://www.conwaylife.com/forums/viewtopic.php?p=86140#p86140
My own thoughts.
- Patterns in GoL are generally not robust. Typically changing anything will cause the whole pattern to disintegrate in a catastrophic explosion and revert to the usual ‘ash’ of randomly placed small still lifes and oscillators along with some escaping gliders.
- The pattern Eater 2 can eat gliders along 4 adjacent lanes.
- The Highway Robber can eat gliders travelling along a lane right at the edge of the pattern, such that gliders on the next lane pass by unaffected. So one can use several staggered highway robbers to make a wall which eats any gliders coming at it from a given direction along multiple adjacent lanes. The wall will be very oblique, and it will fail if two gliders come in on the same lane too close together.
- The block is robust to deleting any one of its live cells, but it is not robust to placing single live cells next to it.
- The maximum speed at which GoL patterns can propagate into empty space is 1 cell every 2 generations, measured in the L_1 norm. Spaceships which travel at this speed limit (such as the glider, the XWSS, and Sir Robin) are therefore robust to things happening behind them, in the sense that nothing can catch up with them.
- It’s long been hypothesised that it should be possible to make a pattern which can eat any single incoming glider. My method for doing this would be to design a wall around the pattern which is designed to fall apart in a predictable way whenever it is hit by a glider. This collapse would then trigger construction machinery on the interior of the pattern that rebuilds the wall. The trick would be to make sure that the collapse of the wall didn’t emit any escaping gliders, and that its debris didn’t depend on where the glider hit it. That way the construction machinery would have a reliable blank slate on which to rebuild.
- If one did have a pattern with the above property that it could eat any glider that hit it, one could then arrange several copies of this pattern in a ring around any other pattern to make it safe from any single glider. Of course such a pattern would not be safe against other perturbations, and the recovery time would be so slow that it would not even be safe against two gliders a million generations apart.
- It’s an open problem whether there exists a pattern that recovers if any single cell is toggled.
- I think the most promising approach is the recently discovered methods in this thread. These methods are designed to clear large areas of the random ash that Life tends to evolve into. One could use these methods to create a machine that clears the area around itself and then builds copies of itself into the cleared space. As this repeated, it would spread copies of itself across the grid. The replicators could build walls of random ash between themselves and their children, so that if one of them explodes the explosion does not spread to all copies. If one of these copies hit something it couldn’t deal with, it would explode (hopefully also destroying the obstruction) and then be replaced by a new child of the replicators behind it. Thus such a pattern would be truly robust. If one wanted the pattern to be robust and not spread, one could make every copy keep track of its coordinates relative to the first copy, and not replicate if it was outside a certain distance. I think this would produce what you desire: a bounded pattern that is robust to many of the contexts it could be placed in. However, there are many details still to be worked out. The main problem is that the above cleaning methods are not guaranteed to work on every arrangement of ash. So the question is whether they can clear a large enough area before they hit something that makes them explode. We only need each replicator to succeed often enough that their population growth rate is positive.
Yes, I’m a big fan of the Entropic Uncertainty Principle. One thing to note about it is that the definition of entropy only uses the measure-space structure of the reals, whereas the definition of variance also uses the metric on the reals. So Heisenberg’s principle uses more structure to say less stuff. And it’s not like the extra structure is merely redundant either. You can say useful stuff using the metric structure, like Hardy’s Uncertainty Principle. So Heisenberg’s version is taking useful information and then just throwing it away.
I’d almost support teaching the Entropic Uncertainty Principle instead of Heisenberg’s to students first learning quantum theory. But unfortunately its proof is much harder. And students are generally more familiar with variance than entropy.
With regards to Eliezer’s original point, the distributions |f|^2 and |g|^2 don’t actually describe our uncertainty about anything. We have perfect knowledge of the wavefunction; there is no uncertainty. I suppose you could say that H(|f|^2) and H(|g|^2) quantify the uncertainty you would have if you were to measure the position and momentum (in Eliezer’s point of view this would be indexical uncertainty about which world we were in), although you can’t perform both of these measurements on the same particle.
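As a concrete illustration of these entropies (a sketch only: this uses the discrete Maassen–Uffink analogue of the principle, with the unitary DFT standing in for the position-to-momentum transform and the bound log N in place of the continuous constant; code and names are mine):

```python
import numpy as np

N = 64
rng = np.random.default_rng(0)
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)                       # normalised state vector

p = np.abs(psi) ** 2                             # "position" probabilities |f|^2
q = np.abs(np.fft.fft(psi, norm="ortho")) ** 2   # "momentum" probabilities |g|^2

def entropy(dist):
    dist = dist[dist > 0]
    return -np.sum(dist * np.log(dist))

# Maassen-Uffink: H(p) + H(q) >= log(N) for any state.
print(entropy(p) + entropy(q), ">=", np.log(N))
```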
A good source for the technology available in the Game of Life is the draft of Nathaniel Johnston and Dave Greene’s new book “Conway’s Game of Life: Mathematics and Construction”.
You just have to carefully do the algebra to get an inductive argument. The fact that the last digit is 5 is used directly.
Suppose n is a number that ends in 5, and such that the last N digits stay the same when you square it. We want to prove that the last N+1 digits stay the same when you square n^2.
We can write n = m*10^N + p, where p has N digits, and so n^2 = m^2*10^(2N) + 2mp*10^N + p^2. Note that since 2p ends in 0, the term 2mp*10^N is actually divisible by 10^(N+1). Then since the two larger terms are divisible by 10^(N+1), n^2 agrees with p^2 on its last N+1 digits. Since n^2 agrees with n on its last N digits, and n agrees with p there, p^2 agrees with p on at least its last N digits. So we may write p^2 = q*10^N + p. Hence n^2 = m^2*10^(2N) + 2mp*10^N + q*10^N + p.
Squaring this yields (n^2)^2 = (terms divisible by 10^(N+1)) + 2qp*10^N + p^2. Again, 2p ends in 0, so 2qp*10^N is also divisible by 10^(N+1). So the last N+1 digits of (n^2)^2 agree with those of p^2, which we previously established agree with those of n^2. QED
A similar argument shows that the number generated in this way is the only 10-adic number that ends in 5 and squares to itself. It follows that one minus this number is the only 10-adic number ending in 6 that squares to itself. You can also prove that 0 and 1 are the only 10-adics ending in 0 and 1 that square to themselves, and a number ending in any other digit can’t square to itself. So x^2 = x has precisely four 10-adic solutions.
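A brute-force check of the finite version of this claim for small N (a sketch; it assumes nothing beyond the statement above): modulo 10^N there are exactly four solutions, and the two nontrivial ones sum to 1.

```python
for N in range(1, 7):
    mod = 10**N
    sols = [x for x in range(mod) if (x * x - x) % mod == 0]
    print(N, sols)                      # e.g. N=4 gives [0, 1, 625, 9376]
    assert len(sols) == 4
    a, b = sorted(sols)[2:]             # the solutions ending in 5 and 6
    assert (a + b) % mod == 1
```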