Some days ago, I was thinking about a problem which asks one to show that the equation AB−BA=I has no solution in Mn×n(R) or Mn×n(C). (Here Mn×n(F) denotes the set of all n×n matrices with entries from the field F, and I is the identity matrix.)
Although I couldn’t solve the problem, I came up with this problem:
Does there exist a field F for which the equation AB−BA=I has a solution in Mn×n(F)?
I’d really appreciate your help.
Answer
Let k be a field. The first Weyl algebra A1(k) is the free associative k-algebra generated by two letters x and y subject to the relation xy−yx=1, which is usually called the Heisenberg or Weyl commutation relation. This is an extremely important example of a noncommutative ring which appears in many places, from the algebraic theory of differential operators to quantum physics (the equation above is Heisenberg's indeterminacy principle, in a sense) to the pinnacles of Lie theory to combinatorics to pretty much anything else.
For us right now, this algebra shows up because
A1(k)-modules are essentially the same thing as solutions to the equation PQ−QP=I with P and Q endomorphisms of a vector space.
Indeed:

if M is a left A1(k)-module then M is in particular a k-vector space and there is a homomorphism of k-algebras ϕM:A1(k)→homk(M,M) to the endomorphism algebra of M viewed as a vector space. Since x and y generate the algebra A1(k), ϕM is completely determined by the two endomorphisms P=ϕM(x) and Q=ϕM(y); moreover, since ϕM is an algebra homomorphism, we have PQ−QP=ϕM(xy−yx)=ϕM(1A1(k))=idM. We thus see that P and Q are endomorphisms of the vector space M which satisfy our desired relation.

Conversely, if M is a vector space and P, Q:M→M are two linear endomorphisms, then one can show more or less automatically that there is a unique algebra morphism ϕM:A1(k)→homk(M,M) such that ϕM(x)=P and ϕM(y)=Q. This homomorphism turns M into a left A1(k)-module.

These two constructions, one going from an A1(k)-module to a pair (P,Q) of endomorphisms of a vector space M such that PQ−QP=idM, and the other going the other way, are mutually inverse.
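As a concrete instance of this dictionary (my own illustration, not part of the original answer), the polynomial ring k[t] carries an A1(k)-module structure: with one choice of conventions, x acts as d/dt and y acts as multiplication by t, and the relation PQ−QP=id can be checked with SymPy:

```python
import sympy as sp

t = sp.symbols('t')

# An A_1(k)-module structure on the polynomial ring k[t]:
# x acts as d/dt, y acts as multiplication by t (one choice of conventions).
P = lambda f: sp.diff(f, t)       # action of x
Q = lambda f: sp.expand(t * f)    # action of y

# Check the Heisenberg relation PQ - QP = id on a sample polynomial:
# P(Q(f)) = (t f)' = f + t f'  and  Q(P(f)) = t f', so the difference is f.
f = 3*t**4 + 2*t + 7
assert sp.expand(P(Q(f)) - Q(P(f))) == f
```

Note that k[t] is infinite dimensional; as the rest of the answer shows, in characteristic zero that is unavoidable.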
A conclusion we get from this is that your question
for what fields k do there exist n≥1 and matrices A, B∈Mn(k)
such that AB−BA=I?
is essentially equivalent to
for what fields k does A1(k) have nonzero finite dimensional modules?
Now, it is very easy to see that A1(k) is an infinite dimensional algebra, and that in fact the set {xiyj:i,j≥0} of monomials is a k-basis.
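To make arithmetic in this basis concrete, here is a small sketch of mine (the representation as a coefficient dictionary and the helper `mul` are my own, not from the answer): products are reduced to normal-ordered form with the reordering identity y^j x^k = Σ_m (−1)^m m! C(j,m) C(k,m) x^(k−m) y^(j−m), which follows by induction from yx = xy − 1.

```python
from math import comb, factorial

# An element of the Weyl algebra A_1 in the normal-ordered basis {x^i y^j}:
# a dict mapping (i, j) -> coefficient of x^i y^j.

def mul(u, v):
    """Product in A_1, reducing with
    y^j x^k = sum_m (-1)^m m! C(j,m) C(k,m) x^(k-m) y^(j-m)."""
    out = {}
    for (i, j), c in u.items():
        for (k, l), d in v.items():
            for m in range(min(j, k) + 1):
                coeff = c * d * (-1)**m * factorial(m) * comb(j, m) * comb(k, m)
                key = (i + k - m, j + l - m)
                out[key] = out.get(key, 0) + coeff
    return {key: c for key, c in out.items() if c != 0}

x, y, one = {(1, 0): 1}, {(0, 1): 1}, {(0, 0): 1}

# The defining relation xy - yx = 1 holds in this normal form:
lhs, rhs = mul(x, y), mul(y, x)
diff = {k: lhs.get(k, 0) - rhs.get(k, 0) for k in set(lhs) | set(rhs)}
assert {k: c for k, c in diff.items() if c != 0} == one
```

For example, `mul(y, x)` returns `{(1, 1): 1, (0, 0): -1}`, i.e. yx = xy − 1, as it should.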
Two of the key properties of A1(k) are the following:
Theorem. If k is a field of characteristic zero, then A1(k) is a simple algebra—that is, A1(k) does not have any nonzero proper bilateral ideals. Its center is trivial: it is simply the 1-dimensional subspace spanned by the unit element.
An immediate corollary of this is the following
Proposition. If k is a field of characteristic zero, then A1(k) does not have any nonzero finite dimensional modules. Equivalently, there do not exist n≥1 and a pair of matrices P, Q∈Mn(k) such that PQ−QP=I.
Proof. Suppose M is a finite dimensional A1(k)-module. Then we have an algebra homomorphism ϕ:A1(k)→homk(M,M) such that ϕ(a)(m)=am for all a∈A1(k) and all m∈M. Since A1(k) is infinite dimensional and homk(M,M) is finite dimensional (because M is finite dimensional!) the kernel I=kerϕ cannot be zero —in fact, it must have finite codimension. Now I is a bilateral ideal, so the theorem implies that it must be equal to A1(k). But then M must be zero dimensional, for 1∈A1(k) acts on it at the same time as the identity and as zero. ◻
This proposition can also be proved by taking traces, as everyone else has observed on this page, but the fact that A1(k) is simple is an immensely more powerful piece of knowledge (there are examples of algebras which do not have finite dimensional modules and which are not simple, by the way 🙂 )
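The trace argument is easy to see numerically (a quick illustration of mine, working over R with floating point):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# tr(AB) = tr(BA), so every commutator AB - BA has trace 0 ...
assert abs(np.trace(A @ B - B @ A)) < 1e-9
# ... while tr(I) = n, which is nonzero in characteristic zero,
# so AB - BA = I is impossible.
assert np.trace(np.eye(n)) == n
```

In characteristic p the second step fails precisely when p divides n, which is why p×p matrices will appear below.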
Now let us suppose that k is of characteristic p>0. What changes in terms of the algebra? The most significant change is
Observation. The algebra A1(k) is not simple. Its center Z is generated by the elements xp and yp, which are algebraically independent, so that Z is in fact isomorphic to a polynomial ring in two variables. We can write Z=k[xp,yp].
In fact, once we notice that xp and yp are central elements —and this is proved by a straightforward computation— it is easy to write down nontrivial bilateral ideals. For example, (xp) works; the key point in showing this is the fact that since xp is central, the left ideal which it generates coincides with the bilateral ideal, and it is very easy to see that the left ideal is proper and nonzero.
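For completeness, the straightforward computation runs as follows: expanding the commutator term by term and using xy−yx=1,

```latex
[x^k, y] \;=\; \sum_{i=0}^{k-1} x^{i}\,[x,y]\,x^{\,k-1-i} \;=\; k\,x^{k-1},
```

so in characteristic p we get [x^p, y] = p x^(p−1) = 0. Since [x^p, x] = 0 trivially and x, y generate A1(k), the element x^p is central; the argument for y^p is symmetric.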
Moreover, a little playing with this will give us the following. Not only does A1(k) have bilateral ideals: it has bilateral ideals of finite codimension. For example, the ideal (xp,yp) is easily seen to have codimension p2; more generally, we can pick two scalars a, b∈k and consider the ideal Ia,b=(xp−a,yp−b), which has the same codimension p2. Now this got rid of the obstruction to finding finite dimensional modules that we had in the characteristic zero case, so we can hope for finite dimensional modules now!
More: this actually gives us a method to produce pairs of matrices satisfying the Heisenberg relation. We can just pick a proper bilateral ideal I⊆A1(k) of finite codimension, consider the finite dimensional k-algebra B=A1(k)/I and look for finitely generated B-modules: every such module provides us with a finite dimensional A1(k)-module, and the observations above produce from it pairs of matrices which are related in the way we want.
So let us do this explicitly in the simplest case: let us suppose that k is algebraically closed, let a, b∈k and let I=Ia,b=(xp−a,yp−b). The algebra B=A1(k)/I has dimension p2, with {xiyj:0≤i,j<p} as a basis. The exact same proof that shows the Weyl algebra is simple when the ground field is of characteristic zero proves that B is simple, and in the same way the proof that the center of the Weyl algebra is trivial in characteristic zero shows that the center of B is k; going from A1(k) to B we have modded out the obstruction to carrying out these proofs. In other words, the algebra B is what's called a (finite dimensional) central simple algebra. Wedderburn's theorem now implies that in fact B≅Mp(k), as this is the only semisimple algebra of dimension p2 with trivial center. A consequence of this is that there is a unique (up to isomorphism) simple B-module S, of dimension p, and that all other finite dimensional B-modules are direct sums of copies of S.
Now, since k is algebraically closed (much less would suffice) there is an α∈k such that αp=a. Let V=kp and consider the following p×p matrices: Q is all zeroes except for 1s on the first subdiagonal and a b in the top right corner, and P has α down the diagonal and the entries 1, 2, …, p−1 on the first superdiagonal. One can show that Pp=aI, Qp=bI and that PQ−QP=I, so they provide us with a morphism of algebras B→homk(kp,kp), that is, they turn kp into a B-module. It must be isomorphic to S, because the two have the same dimension and there is only one module of that dimension; this determines all finite dimensional modules, which are direct sums of copies of S, as we said above.
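Here is a quick computational check of these identities (my own sketch; it takes k = GF(p) to be the prime field, where α = a works because a^p = a by Fermat's little theorem, and reduces entries mod p by hand rather than using a finite-field library):

```python
import numpy as np

p, a, b = 5, 2, 3    # an odd prime p and scalars a, b in GF(p)
alpha = a            # in GF(p) the Frobenius is the identity: a**p == a

# Q: 1s on the first subdiagonal, b in the top-right corner.
Q = np.diag(np.ones(p - 1, dtype=int), -1)
Q[0, p - 1] = b
# P: alpha on the diagonal, 1, 2, ..., p-1 on the first superdiagonal.
P = alpha * np.eye(p, dtype=int) + np.diag(np.arange(1, p), 1)

def matpow_mod(M, e, m):
    """M**e with entries reduced mod m at every step."""
    R = np.eye(M.shape[0], dtype=int)
    for _ in range(e):
        R = (R @ M) % m
    return R

I = np.eye(p, dtype=int)
assert np.array_equal((P @ Q - Q @ P) % p, I)            # Heisenberg relation
assert np.array_equal(matpow_mod(P, p, p), (a * I) % p)  # P^p = a I
assert np.array_equal(matpow_mod(Q, p, p), (b * I) % p)  # Q^p = b I
```

Over the integers the commutator PQ−QP is diag(1, …, 1, 1−p), and the single entry 1−p becomes 1 only after reducing mod p; this is exactly the trace obstruction from characteristic zero disappearing because p divides the matrix size.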
This generalizes the example Henning gave, and in fact one can show that every p-dimensional A1(k)-module arises by this procedure from a quotient by an ideal of the form Ia,b. Taking direct sums for various choices of a and b, this gives us lots of finite dimensional A1(k)-modules and, with them, many pairs of matrices satisfying the Heisenberg relation. I think we obtain in this way all the semisimple finite dimensional A1(k)-modules, but I would need to think a bit before claiming it for certain.
Of course, this only deals with the simplest case. The algebra A1(k) has non-semisimple finite dimensional quotients, which are rather complicated (and I think there are plenty of wild algebras among them...) so one can get many, many more examples of modules and of pairs of matrices.
Attribution
Source : Link , Question Author : Goodarz Mehr , Answer Author : KCd