The way it works normally is that you have a state ρ, and it’s acted on by some operator, a, which you can write as aρ. But this doesn’t give a number; it gives a new state like the old ρ but different. (For example, if a were the annihilation operator, the new state would be like the old state but with one fewer photon.) This is how (for example) an operator acts on the state of the system to change that state. (It’s a density-matrix-to-density-matrix map.)
In dimension terms this is: (1,1) = (1,1) * (1,1)
(Two square matrices of size N multiply to give another square matrix of size N).
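Here is that shape bookkeeping as a minimal numpy sketch (the 4-level truncation and the particular diagonal ρ are just made-up illustrations):

```python
import numpy as np

N = 4  # truncated Fock-space dimension (illustrative)

# Annihilation operator: a|n> = sqrt(n)|n-1>, truncated to N levels
a = np.diag(np.sqrt(np.arange(1, N)), k=1)

# A stand-in density matrix (diagonal, trace 1)
rho = np.diag([0.5, 0.25, 0.15, 0.1])

out = a @ rho        # (N,N) @ (N,N) -> (N,N): another matrix, not a number
print(out.shape)     # (4, 4)
```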
However, to get the expected outcome of a measurement on a particular state you take Tr(aρ), where Tr is the trace. The trace basically takes the “plug” at the left-hand side of a matrix and twists it around to plug it into the right-hand side. So overall what is happening is that the operators a and ρ each have shape (1,1), and what we do is:
Tr( (1,1) * (1,1) ) = Tr( (1,1) ) = number.
The “inward facing” dimensions of each matrix get plugged into one another because the matrices multiply, and the outward-facing dimensions get redirected by the trace operation to also plug into one another. (The trace is like matrix multiplication but on paper that has been rolled up into a cylinder, so each of the two matrices inside sees the other on both sides.) The net effect is exactly the same as if they had originally been organized into the shapes you suggest of (2,0) and (0,2) respectively.
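You can make the index plugging explicit with einsum (continuing the sketch above, and using the number operator n = a†a so the answer is a familiar quantity, the mean photon number):

```python
n_op = a.conj().T @ a                  # number operator, a Hermitian observable

ev1 = np.trace(n_op @ rho)             # Tr(n rho): multiply, then trace
ev2 = np.einsum('ij,ji->', n_op, rho)  # same contraction, indices spelled out
print(np.allclose(ev1, ev2))           # True; both are plain numbers
```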
So if the two “ports” are called A and B, your way of doing it gives:
(AB, 0) * (0, AB) = (0, 0), i.e. a number.
The traditional way:
Tr( (A, B) * (B, A) ) = Tr( (A, A) ) = (0, 0), i.e. a number.
I haven’t looked at tensors much, but I think that in tensor-land this trace operation takes the role of a really boring metric tensor that is just (1,1,1,1...) down the diagonal.
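That reading checks out numerically, at least for the boring identity metric (a minimal sketch; it needs the numpy import from above):

```python
M = np.random.rand(4, 4)

# Contracting both indices of M against the identity "metric" delta_ij
# reproduces the trace: sum_ij delta_ij M_ij = sum_i M_ii = Tr(M)
lhs = np.einsum('ij,ij->', np.eye(4), M)
print(np.allclose(lhs, np.trace(M)))   # True
```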
So (assuming I understand right) your way of doing it is cleaner and more elegant for getting the expectation value of a measurement. But the traditional system works more elegantly for applying an operator to a state to evolve it into another state.
Yes, applying a (0, 2) tensor to a (2, 0) tensor is like taking the trace of their composition if they were both regarded as linear maps.
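A quick numerical sanity check of that claim, with one particular choice of how the slots pair up:

```python
import numpy as np

w = np.random.rand(4, 4)   # components of a (0, 2) tensor: two input slots
t = np.random.rand(4, 4)   # components of a (2, 0) tensor: two output slots

paired = np.einsum('ij,ji->', w, t)    # plug t's outputs into w's inputs
composed = np.trace(w @ t)             # trace of their composition as maps
print(np.allclose(paired, composed))   # True
```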
Anyway, for operators that are supposed to modify a state, like annihilation/creation or time evolution, I would be inclined to model them as linear maps/(1, 1)-tensors like in the OP. It was specifically for observables that I meant it seemed most natural to use (0, 2) tensors.
> It’s a density-matrix-to-density-matrix map
I thought they were typically wavefunction to wavefunction maps, and they need some sort of sandwiching to apply to density matrices?
> I thought they were typically wavefunction to wavefunction maps, and they need some sort of sandwiching to apply to density matrices?
Yes, this is correct. My mistake, it does indeed need the sandwiching, like this: ρ_new = a ρ_old a†.
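In code the sandwich looks like this (a minimal sketch reusing a and rho from earlier; the division by the trace is the usual renormalization, since a ρ a† on its own does not have trace 1):

```python
rho_new = a @ rho @ a.conj().T               # the sandwich: a rho a-dagger
rho_new = rho_new / np.trace(rho_new)        # renormalize to trace 1
print(np.allclose(np.trace(rho_new), 1.0))   # True: a valid state again
```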
From your talk on tensors, I am sure it will not surprise you at all to know that the sandwich thing itself (a map from operators to operators) is often called a superoperator.
I think the reason it is the way it is is that there isn’t a clear line between operators that modify the state and those that represent measurements. For example, the Hamiltonian operator evolves the state with time. But taking the trace of the Hamiltonian operator applied to the state gives the expectation value of the energy.
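Both roles of the same operator, side by side (a minimal sketch; it reuses n_op and rho from earlier as a toy Hamiltonian and state, with ħ set to 1):

```python
from scipy.linalg import expm

H = n_op.astype(complex)          # toy Hamiltonian (any Hermitian matrix works)

# Role 1: generate time evolution (a unitary sandwich)
U = expm(-1j * H * 0.1)           # evolve for time t = 0.1
rho_t = U @ rho @ U.conj().T

# Role 2: act as an observable
energy = np.trace(H @ rho).real   # expectation value of the energy
print(energy)
```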
> From your talk on tensors, I am sure it will not surprise you at all to know that the sandwich thing itself (a map from operators to operators) is often called a superoperator.
Oh it does surprise me, superoperators are a physics term but I just know linear algebra and dabble in physics, so I didn’t know that one. Like I’d think of it as the functor over vector spaces that maps V↦V⊗V.
> I think the reason it is the way it is is that there isn’t a clear line between operators that modify the state and those that represent measurements. For example, the Hamiltonian operator evolves the state with time. But taking the trace of the Hamiltonian operator applied to the state gives the expectation value of the energy.
Hm, I guess it’s true that we’d usually think of the matrix exponential as mapping V⊸V to V⊸V, rather than as mapping V⊗V⊸C to V⊸V. I guess it’s easy enough to set up a differential equation for the latter, but it’s much less elegant than the usual form.
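For what it’s worth, that less elegant ODE can at least be written in closed form. With row-stacking vectorization, vec(AXB) = (A⊗Bᵀ)vec(X) (a standard identity), the von Neumann equation dρ/dt = −i[H, ρ] becomes a plain linear ODE on the doubled space:

d/dt vec(ρ) = −i (H⊗I − I⊗Hᵀ) vec(ρ)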
In some papers people write density operators using an enhanced “double ket” Dirac notation, where e.g. a density operator is written to look like |x>>, with two “>”s. They do this exactly because the differential equations look more elegant.
I think in this notation measurements look like <<m|, but am not sure about that. The QuTiP software (which is very common in quantum modelling) uses something like this under the hood: operators (e.g. density operators) are stored internally as 1d vectors, and superoperators (maps from operators to operators) are stored as matrices.
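Here is that vectorization trick in plain numpy (a minimal sketch reusing a and rho from earlier; with row-major flattening, the sandwich a ρ a† becomes a single matrix, a⊗conj(a), acting on the flattened ρ):

```python
vec_rho = rho.reshape(-1)            # the "double ket": rho flattened to a 1d vector

S = np.kron(a, a.conj())             # superoperator for rho -> a rho a-dagger
vec_new = S @ vec_rho                # one matrix-vector product
sandwich = a @ rho @ a.conj().T      # the same map done the ordinary way

print(np.allclose(vec_new, sandwich.reshape(-1)))  # True
```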
So structuring the notation in other ways does happen, in ways that look quite reminiscent of your tensors (maybe the same).