So the idea is that if you’ve got a linear differential equation, like dx/dt=2x, then all the solutions look like x=x0*exp(2t).
And you could imagine that there’s an operator which says “Where will this point end up in 1 second?”, call it O(1), which looks like x0 -> x0*exp(2). I.e. the point just gets multiplied by a constant.
If you know that operator, then you know the operator that represents “Where will the point end up in 2 seconds?”, because you can ask “where will it end up one second later?” and then “where will that point end up in another second?”.
So this gives you a law: O(2) is O(1) done twice, and O(3) is O(2) followed by O(1). So these operators form a group (the identity is ‘where will it be after I run the equation for no seconds?’).
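A minimal sketch of that composition law in code, for the dx/dt=2x example (the name O is mine, not standard notation):

```python
import math

def O(t):
    """Time-evolution operator for dx/dt = 2x:
    'where will a point be t seconds later?'"""
    def advance(x0):
        return x0 * math.exp(2 * t)
    return advance

x0 = 1.5
# O(2) is O(1) done twice, and O(3) is O(2) followed by O(1):
assert math.isclose(O(1)(O(1)(x0)), O(2)(x0))
assert math.isclose(O(1)(O(2)(x0)), O(3)(x0))
# The identity: run the equation for no seconds at all.
assert O(0)(x0) == x0
```

The general law is that O(s + t) is O(s) followed by O(t), for any s and t.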
In the one-dimensional ODE case, this is all so trivial that it’s not worth mentioning. Almost a pun.
But it generalises nicely to dx/dt=Ax, where x is a vector and A is a matrix, and that gives us a way of defining exp(At), i.e., how to exponentiate a matrix, by saying, “well what if we run this matrix equation for a second and see where it puts all the vectors?”
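Here’s a sketch of that definition in code, assuming nothing beyond the standard library (expm and flow are my own illustrative names): compute exp(At) by a truncated Taylor series, then check it matches what you get by actually running dx/dt = Ax with many small Euler steps.

```python
import math

def mat_mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_axpy(M, c, N):
    """M + c*N for 2x2 matrices."""
    return [[M[i][j] + c * N[i][j] for j in range(2)] for i in range(2)]

IDENTITY = [[1.0, 0.0], [0.0, 1.0]]

def expm(A, t, terms=30):
    """exp(A t) via the truncated Taylor series sum over n of (A t)^n / n!."""
    total, power = IDENTITY, IDENTITY
    for n in range(1, terms):
        power = [[t / n * x for x in row] for row in mat_mul(power, A)]
        total = mat_axpy(total, 1.0, power)
    return total

def flow(A, t, steps=100_000):
    """Run dx/dt = Ax for t seconds with Euler steps, acting on all basis
    vectors at once, i.e. build the 'where does everything end up?' matrix."""
    M, h = IDENTITY, t / steps
    for _ in range(steps):
        M = mat_axpy(M, h, mat_mul(A, M))
    return M

A = [[0.0, 1.0], [-1.0, 0.0]]  # generator of rotations: exp(At) rotates by t
E, F = expm(A, 1.0), flow(A, 1.0)
assert all(abs(E[i][j] - F[i][j]) < 1e-3 for i in range(2) for j in range(2))
```

For this particular A, both routes give the rotation matrix [[cos t, sin t], [-sin t, cos t]].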
And in fact we can generalise it further, to dx/dt=Ax where x lives in an infinite-dimensional space of functions and A is a linear operator on that space. Which is another way of looking at partial differential equations. (A function is an infinite-dimensional vector; a vector is a function on a finite set.)
So you can talk by analogy about exponentiating the diffusion equation, getting a time-evolution operator that takes wiggly functions to less wiggly functions.
But you can’t always run PDEs backwards (the diffusion equation, for instance, won’t run in reverse), so you don’t always get a full group, only the operators O(t) for t >= 0: a semigroup.
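A sketch of that point in Fourier space, using the standard fact that the heat semigroup multiplies the Fourier mode with wavenumber k by exp(-k^2 t) (heat_evolve is my own illustrative name):

```python
import math

def heat_evolve(modes, t):
    """Heat semigroup acting on Fourier coefficients {wavenumber k: amplitude}:
    mode k gets multiplied by exp(-k**2 * t)."""
    return {k: a * math.exp(-k * k * t) for k, a in modes.items()}

wiggly = {1: 1.0, 10: 1.0, 50: 1.0}

# Forwards: the high-wavenumber wiggles are crushed almost instantly,
# while the smooth k=1 part barely changes.
smooth = heat_evolve(wiggly, 0.01)
assert smooth[50] < 1e-10 < smooth[1]

# The semigroup law holds forwards: 0.01s then 0.02s equals 0.03s in one go.
assert all(math.isclose(heat_evolve(heat_evolve(wiggly, 0.01), 0.02)[k],
                        heat_evolve(wiggly, 0.03)[k]) for k in wiggly)

# Backwards (t < 0), mode k is amplified by exp(k**2 * |t|): running mode 50
# back for 0.2s multiplies it by exp(500), so any rounding noise explodes.
assert heat_evolve({50: 1.0}, -0.2)[50] > 1e100
```

That blow-up of high modes is why the reverse diffusion problem is ill-posed: arbitrarily small noise in the data swamps the answer.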
I think that’s the intuition for semigroups, I hope that’s what you were talking about! (And sorry if it’s not clear or just wrong, I haven’t been a mathematician for thirty years.)
If you think about this for long enough, you should suddenly understand why e to the i pi is minus one. In fact it’s just a really obvious thing that has to be true. At that point you’ve probably got it.
Be careful about the illusion of transparency here. This all strikes me as the sort of stuff that is unlikely to be obvious.
Sorry, obvious to a mathematician who thinks about dz/dt=iz and realises that “exponentiation is time-evolution”. At that point it’s just “if you rotate for just long enough to turn half-way round, you’ll be pointing backwards”.
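You can check that picture directly, assuming only the standard library (rotate_flow is my own name for the sketch): Euler-step dz/dt = iz from z = 1 for pi seconds and you land, approximately, at -1.

```python
import cmath
import math

def rotate_flow(t, steps=100_000):
    """Euler-integrate dz/dt = i*z starting from z = 1 for t seconds."""
    z, h = 1 + 0j, t / steps
    for _ in range(steps):
        z += h * 1j * z
    return z

# Rotating for pi seconds turns you half-way round: you end up at -1.
z = rotate_flow(math.pi)
assert abs(z - (-1)) < 1e-3

# The exact statement, via the library's complex exponential:
assert cmath.isclose(cmath.exp(1j * math.pi), -1, abs_tol=1e-12)
```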
That’s a good layman description—a semigroup is basically the exponential of some linear operator. The problem is that I’m supposed to be a bit more than a layman.
Then perhaps this was a little hyperbolic?
It was, a little. Truth is I know the basic definition but I’ve yet to build up enough knowledge and intuition around them to really use them in my research. Think analytic vs bounded semigroups, L^\infty calculus, angular sectors and so on.