In categorical terms, why is there no canonical isomorphism from a finite dimensional vector space to its dual?

I’ve read in several places that one motivation for category theory was to be able to give precise meaning to statements like, “finite dimensional vector spaces are canonically isomorphic to their double duals; they are isomorphic to their duals as well, but not canonically.”

I’ve finally sat down to work through this, and –

Okay, yes, it is easy to see that the “canonical isomorphism” from V to V^{**} is the component at V of a natural isomorphism (in the sense of category theory) from the identity functor to the double-dualization functor.

Also, I see that there is no way that the functor V \mapsto V^* could have a natural isomorphism to the identity functor, because it is contravariant whereas the identity functor is covariant. My question amounts to:

Is contravariance the whole problem?

To elaborate:

I was initially disappointed by the realization that the definition of natural isomorphism doesn’t apply to a pair of functors one of which is covariant and the other contravariant, because I was hoping that the lack of a canonical isomorphism V \to V^* would feel more like a theorem as opposed to an artifact of the inapplicability of a definition.

Then I tried to create a definition of a natural transformation from a covariant functor F:\mathscr{A}\to\mathscr{B} to a contravariant functor G:\mathscr{A}\to\mathscr{B}. It seems to me that this definition should be that every object A\in\mathscr{A} gets a morphism m_A : F(A)\to G(A) such that for all morphisms f:A\to A' of \mathscr{A}, the following diagram (in \mathscr{B}) commutes:

$$\begin{CD}
F(A) @>m_A>> G(A)\\
@VF(f)VV @AAG(f)A\\
F(A') @>>m_{A'}> G(A')
\end{CD}$$

This is a much more stringent demand on the m_A than the usual definition of a natural transformation. Indeed, it asks that m_A=G(f)\circ m_{A'}\circ F(f), regardless of how f or A' may vary. Taking \mathscr{A}=\mathscr{B}=\text{f.d.Vec}_k, F the identity functor and G the dualizing functor, it is clear that this definition can never be satisfied unless m_V is the zero map for all V\in\text{f.d.Vec}_k (just take f to be the zero map). In particular, it cannot be satisfied if m_V is required to be an isomorphism.
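In matrix terms the argument is a one-liner. Here is a small illustrative sketch (the helper names are mine): for F the identity and G the dualizing functor, G(f) is the transpose f^T, so the commuting square reads m_V = f^T · m_{V'} · f for every linear map f : V → V'. With f = 0 the right-hand side vanishes no matter what m_{V'} is:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def forced_component(M_Vp, f):
    """Dinaturality forces m_V = f^T * m_{V'} * f for every f : V -> V'."""
    return matmul(transpose(f), matmul(M_Vp, f))

M_Vp = [[1, 2], [3, 4]]   # an arbitrary candidate for m_{V'}
zero = [[0, 0], [0, 0]]   # the zero map f = 0

print(forced_component(M_Vp, zero))  # [[0, 0], [0, 0]]: m_V must be zero
```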

Is this the right way to understand (categorically) why there is no natural isomorphism V\rightarrow V^*?

As an aside, are there any interesting cases of some kind of analog (the above definition or another) of natural transformations from covariant to contravariant functors?

Note: I have read a number of math.SE answers regarding why V^* is not naturally isomorphic to V. None that I have found are addressed to what I’m asking here, which is about how categories make the question and answer precise. (This one was closest.) Hence my question here.


Congratulations, you have reinvented the notion of a dinatural transformation (see for instance Mac Lane’s Categories for the Working Mathematician, section IX.4). And your proof that every dinatural transformation from the identity functor to the dualization functor is zero is correct. I agree that this is one (and perhaps the only) way to make precise that a f.d. vector space is not canonically isomorphic to its dual. By the way, for Euclidean vector spaces there is a canonical isomorphism V \to V^*, given by v \mapsto \langle v,- \rangle.
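In orthonormal coordinates that Euclidean isomorphism is just v ↦ v^T, i.e. its matrix M is the identity, and its naturality with respect to inner-product-preserving maps is exactly the identity A^T I A = I for orthogonal A. A quick check (my own helper names, as before):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

I = [[1, 0], [0, 1]]   # matrix of v |-> <v, -> in an orthonormal basis
A = [[0, -1], [1, 0]]  # an orthogonal map (rotation by 90 degrees)

# Invariance under orthogonal maps: A^T I A == I
print(matmul(transpose(A), matmul(I, A)))  # [[1, 0], [0, 1]]
```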

1st Edit: In the comments, Mariano has suggested restricting to isomorphisms as the morphisms. This comes down to: if n \in \mathbb{N}, is there some M \in \mathrm{GL}_n(K) such that M = A^T \cdot M \cdot A for all A \in \mathrm{GL}_n(K)? Taking A diagonal shows that for |K| \geq 3 every off-diagonal entry of M must vanish, and then invertibility of M forces \lambda^2 = 1 for all \lambda \in K^\times, i.e. K = \mathbb{F}_3. So apart from the trivial case n=0 this is only possible when K=\mathbb{F}_2, or when K=\mathbb{F}_3 and n=1 (take M=(1); a unipotent A rules out n \geq 2 there).

2nd Edit: Let us look more closely at the case K=\mathbb{F}_2. For n=1 we can take M=(1). As mentioned by ACL (in Mariano’s link in the comments), for n=2 we can take M=\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.

Thus, for 2-dimensional \mathbb{F}_2-vector spaces V there is a canonical isomorphism V \cong V^* which is natural with respect to isomorphisms. It is induced by the unique(!) nonzero alternating 2-form on V.

For n=3 this is not possible: By taking \small A=\begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} it follows that M_{11}=M_{13}=0, and by taking \small A=\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} it follows that M_{12}=0, so that the first row of M vanishes and M is not invertible. A similar reasoning works for all n \geq 3.
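These small cases can also be double-checked by brute force over \mathbb{F}_2, since \mathrm{GL}_n(\mathbb{F}_2) is finite. A sketch in Python (the function names are mine, not standard):

```python
from itertools import product

def mats(n):
    """All n x n matrices over F_2, as tuples of row-tuples."""
    for e in product((0, 1), repeat=n * n):
        yield tuple(e[i * n:(i + 1) * n] for i in range(n))

def mul(A, B):
    """Matrix product over F_2."""
    return tuple(tuple(sum(a * b for a, b in zip(row, col)) % 2
                       for col in zip(*B)) for row in A)

def transpose(A):
    return tuple(zip(*A))

def invertible(A):
    """Test invertibility by Gaussian elimination over F_2."""
    n = len(A)
    M = [list(row) for row in A]
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c]), None)
        if piv is None:
            return False
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                M[r] = [(x + y) % 2 for x, y in zip(M[r], M[c])]
    return True

def invariant_forms(n):
    """Invertible M with A^T M A = M for every A in GL_n(F_2)."""
    GL = [A for A in mats(n) if invertible(A)]
    return [M for M in GL
            if all(mul(transpose(A), mul(M, A)) == M for A in GL)]

print(invariant_forms(1))  # [((1,),)]
print(invariant_forms(2))  # [((0, 1), (1, 0))] -- the alternating form
print(invariant_forms(3))  # []                 -- no such M, as argued above
```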

Source : Link , Question Author : Ben Blum-Smith , Answer Author : Martin Brandenburg
