Why are infinite-dimensional vector spaces not isomorphic to their duals?

Assuming the axiom of choice, let F be a field (we may assume it has characteristic 0).

I was told, by more than one person, that if \kappa is an infinite cardinal then the vector space V=F^{(\kappa)} (that is, an infinite-dimensional space with a basis of cardinality \kappa) is not isomorphic (as a vector space) to its algebraic dual, V^*.

I have asked several professors in my department, and this seems to be complete folklore. I was directed to some book, but could not find it there either.

The Wikipedia entry tells me that this is indeed not a cardinality issue: for example, R^{<\omega} (that is, all the eventually zero sequences of real numbers) has the same cardinality as its dual R^{\omega}, but they are not isomorphic.

Of course being of the same cardinality is necessary but far from sufficient for two vector spaces to be isomorphic.

What I am asking, really, is whether or not it is possible, given a basis and an embedding of a basis of V into V^*, to point at an element and say "This guy is not in the span of the embedding"?

Edit: I read the answers in the link given by Qiaochu. They did not quite satisfy me.

My main problem is this: suppose \kappa is our basis; then V consists of \{f\colon\kappa\to F \mid |f^{-1}[F\setminus\{0\}]|<\aleph_0\} (that is, the functions of finite support), while V^*=\{f\colon\kappa\to F\} (that is, all the functions).

In particular, the basis for V is given by the functions f_\alpha(x)=\delta_{\alpha x} (i.e., 1 at \alpha, and 0 elsewhere), while V^* needs a much larger basis. Why can't there be other linear functionals on V?

Edit II: After the discussions in the comments and the answers, I have a better understanding of my original question. I have no qualms that, under the axiom of choice, given an infinite set \kappa there are many more functions from \kappa into F than functions of finite support from \kappa into F. It is also clear to me that the basis of a vector space is actually the set of \delta functions, whereas the basis for the dual is a subset of the characteristic functions.

My problem is, if so, why is the dual space composed of all the functions from A into F?

(And, if possible, not just to show by cardinality games that the basis is much larger, but to actually exhibit the diagonalization.)

Answer

This is just Bill Dubuque's sci.math proof (see Google Groups or MathForum) mentioned in the comments, expanded.

Edit. I'm also reorganizing this so that it flows a bit better.

Let F be a field, and let V be a vector space of dimension \kappa over F.

Then V is naturally isomorphic to \mathop{\oplus}\limits_{i\in\kappa}F, the set of all functions f\colon\kappa\to F of finite support. Let \epsilon_i be the element of V that sends i to 1 and all j\neq i to 0 (that is, you can think of it as the \kappa-tuple with coefficients in F that has 1 in the ith coordinate and 0s elsewhere).
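To make the finite-support picture concrete, here is a minimal Python sketch (the helpers `epsilon`, `scale`, and `add` are my own names, purely for illustration), modeling elements of V as dictionaries from indices to nonzero scalars:

```python
from fractions import Fraction

# An element of V is a dict {index: coefficient} with finitely many
# entries -- i.e., a function kappa -> F of finite support.  F is
# modeled by Fraction (a characteristic-0 field, as in the question).

def epsilon(i):
    """The basis vector sending i to 1 and every other index to 0."""
    return {i: Fraction(1)}

def scale(c, v):
    """The scalar multiple c*v."""
    return {} if c == 0 else {i: c * x for i, x in v.items()}

def add(v, w):
    """Pointwise sum; coordinates that cancel to 0 are dropped,
    so the support stays finite and honest."""
    out = dict(v)
    for i, c in w.items():
        out[i] = out.get(i, Fraction(0)) + c
        if out[i] == 0:
            del out[i]
    return out

# 3*eps_0 - 2*eps_5: support {0, 5}, a finite subset of kappa.
v = add(scale(Fraction(3), epsilon(0)), scale(Fraction(-2), epsilon(5)))
print(v)  # {0: Fraction(3, 1), 5: Fraction(-2, 1)}
```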

Lemma 1. If \dim(V)=\kappa, and either \kappa or |F| is infinite, then |V|=\kappa|F|=\max\{\kappa,|F|\}.

Proof. If \kappa is finite, then V=F^{\kappa} and |F| must be infinite, so |V|=|F|^{\kappa}=|F|=\kappa|F|, as |F|^{n}=|F| for any finite n\geq 1 when |F| is infinite.

Assume then that \kappa is infinite. Each element of V can be represented uniquely as a linear combination of the \epsilon_i, so each is supported on some finite subset of \kappa. There are \kappa distinct finite subsets of \kappa, and for a subset with n elements there are |F|^n distinct vectors in V supported inside it.

If \kappa\leq |F|, then in particular F is infinite, so |F|^n=|F|. Hence you have |F| distinct vectors for each of the \kappa distinct subsets (even throwing away the zero vector), so there is a total of \kappa|F| vectors in V.

If |F|\lt\kappa, then |F|^n\lt\kappa since \kappa is infinite; so there are at most \kappa vectors for each subset, so there are at most \kappa^2 = \kappa vectors in V. Since the basis has \kappa elements, \kappa\leq|V|\leq\kappa, so |V|=\kappa=\max\{\kappa,|F|\}. QED
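The support-counting behind the proof can be sanity-checked in a small finite case. This toy Python snippet (my own illustration; the lemma's real content is of course the infinite case) partitions F^{\kappa} by exact support:

```python
from itertools import combinations

# Toy check of the counting in Lemma 1 for finite kappa and finite F:
# classify the vectors of F^kappa by their exact support S.  A vector
# with support exactly S, |S| = n, has a nonzero entry in each of the
# n coordinates of S, giving (|F| - 1)^n choices.
q, k = 3, 5                    # |F| = 3 (say F = F_3), kappa = 5
indices = range(k)

total = sum((q - 1) ** len(S)
            for n in range(k + 1)
            for S in combinations(indices, n))
print(total, q ** k)           # 243 243 -- the supports partition F^5
```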

Now let V^* be the dual of V. Since V^* = \mathcal{L}(V,F) (where \mathcal{L}(V,W) is the vector space of all F-linear maps from V to W) and V=\mathop{\oplus}\limits_{i\in\kappa}F, abstract nonsense again tells us that
V^*\cong \prod_{i\in\kappa}\mathcal{L}(F,F) \cong \prod_{i\in\kappa}F.
Therefore, |V^*| = |F|^{\kappa}.
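Continuing the dictionary sketch from above (again with hypothetical helper names), note that a functional carries no support restriction at all: any function whatsoever on the index set pairs with a finite-support vector through a finite sum.

```python
from fractions import Fraction

def pair(g, v):
    """Evaluate the functional g (an arbitrary function on indices!)
    at the finite-support vector v.  The sum is finite because v has
    finite support, no matter how wild g is."""
    return sum((g(i) * c for i, c in v.items()), Fraction(0))

# g is nonzero at *every* index -- far from finite support -- yet it
# is a perfectly good functional: each evaluation reads only finitely
# many of its values.
g = lambda i: Fraction(1, i + 1)

v = {0: Fraction(3), 5: Fraction(-2)}   # 3*eps_0 - 2*eps_5
print(pair(g, v))                        # 3*1 - 2*(1/6) = 8/3
```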


Added. Why is it that if A is a basis of a vector space V, then V^* can be identified with the set of all functions from A to the ground field?

A functional f\colon V\to F is completely determined by its values on a basis (just like any other linear transformation); thus, if two functionals agree on A, then they agree everywhere. Hence there is a natural injection, via restriction, from the set of all linear transformations V\to F (denoted \mathcal{L}(V,F)) to the set of all functions A\to F, F^A\cong \prod\limits_{a\in A}F.

Moreover, given any function g\colon A\to F, we can extend g linearly to all of V: given \mathbf{x}\in V, there exist a unique finite subset \mathbf{a}_1,\ldots,\mathbf{a}_n (pairwise distinct) of A and unique scalars \alpha_1,\ldots,\alpha_n, none equal to zero, such that \mathbf{x}=\alpha_1\mathbf{a}_1+\cdots+\alpha_n\mathbf{a}_n. (This is just the definition of a basis as a linearly independent spanning set: spanning ensures the existence of at least one such expression; linear independence guarantees that there is at most one.) We define g(\mathbf{x}) to be
g(\mathbf{x})=\alpha_1g(\mathbf{a}_1)+\cdots+\alpha_ng(\mathbf{a}_n).
(The image of \mathbf{0} is the empty sum, hence equal to 0.)
Now, let us show that this is linear.

First, note that if \mathbf{x}=\beta_1\mathbf{a}_{i_1}+\cdots+\beta_m\mathbf{a}_{i_m} is any expression of \mathbf{x} as a linear combination of pairwise distinct elements of the basis A, then it must be equal to the expression we already had, plus some terms with coefficient equal to 0. This follows from the linear independence of A: take
\mathbf{0}=\mathbf{x}-\mathbf{x} = (\alpha_1\mathbf{a}_1+\cdots+\alpha_n\mathbf{a}_n) - (\beta_1\mathbf{a}_{i_1}+\cdots+\beta_m\mathbf{a}_{i_m}).
After any cancellation that can be done, you are left with a linear combination of elements of the linearly independent set A equal to \mathbf{0}, so all coefficients must be equal to 0. That means that we can likewise define g as follows: given any expression of \mathbf{x} as a linear combination of elements of A, \mathbf{x}=\gamma_1\mathbf{a}_1+\cdots+\gamma_m\mathbf{a}_m, with \mathbf{a}_i\in A not necessarily distinct and \gamma_i scalars possibly equal to 0, we define
g(\mathbf{x}) = \gamma_1g(\mathbf{a}_1)+\cdots+\gamma_mg(\mathbf{a}_m).
This will be well-defined by the linear independence of A. And now it is very easy to see that g is linear on V: if \mathbf{x}=\gamma_1\mathbf{a}_1+\cdots+\gamma_m\mathbf{a}_m and \mathbf{y}=\delta_{1}\mathbf{a'}_1+\cdots+\delta_n\mathbf{a'}_n are expressions for \mathbf{x} and \mathbf{y} as linear combinations of elements of A, then
\begin{align*}
g(\mathbf{x}+\lambda\mathbf{y}) &= g\Bigl(\gamma_1\mathbf{a}_1+\cdots+\gamma_m\mathbf{a}_m+\lambda(\delta_{1}\mathbf{a'}_1+\cdots+\delta_n\mathbf{a'}_n)\Bigr)\\
&= g\Bigl(\gamma_1\mathbf{a}_1+\cdots+\gamma_m\mathbf{a}_m+ \lambda\delta_{1}\mathbf{a'}_1+\cdots+\lambda\delta_n\mathbf{a'}_n\Bigr)\\
&= \gamma_1g(\mathbf{a}_1) + \cdots + \gamma_mg(\mathbf{a}_m) + \lambda\delta_1g(\mathbf{a'}_1) + \cdots + \lambda\delta_ng(\mathbf{a'}_n)\\
&= g(\mathbf{x})+\lambda g(\mathbf{y}).
\end{align*}

Thus, the map \mathcal{L}(V,F)\to F^A is in fact onto, giving a bijection.
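Here is a small Python sketch of this extension (the helper `extend` and the sample data are my own): any function g on the basis indices extends to formal combinations, and two expressions of the same vector, differing by repeats and 0-summands as above, get the same value.

```python
from fractions import Fraction

def extend(g, terms):
    """Value of the linear extension of g on a formal combination,
    given as a list of (coefficient, basis-index) pairs; repeats and
    zero coefficients are allowed, as in the argument above."""
    return sum((c * g(a) for c, a in terms), Fraction(0))

g = lambda a: Fraction(a * a + 1)   # an arbitrary function on indices

# Two expressions of the same vector 5*eps_1 + 2*eps_7, the second
# with a split coefficient and an extra 0-summand:
expr1 = [(Fraction(5), 1), (Fraction(2), 7)]
expr2 = [(Fraction(2), 1), (Fraction(3), 1),
         (Fraction(2), 7), (Fraction(0), 9)]

print(extend(g, expr1), extend(g, expr2))   # 110 110 -- same value
```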

This is the "linear-algebra" proof. The "abstract nonsense" proof relies on the fact that if A is a basis for V, then V is isomorphic to \mathop{\bigoplus}\limits_{a\in A}F, a direct sum of |A| copies of F, and on the following universal property of the direct sum:

Definition. Let \mathcal{C} be a category, and let \{X_i\}_{i\in I} be a family of objects in \mathcal{C}. A coproduct of the X_i is an object C of \mathcal{C} together with a family of morphisms \iota_j\colon X_j\to C such that for every object X and every family of morphisms g_j\colon X_j\to X, there exists a unique morphism \mathbf{g}\colon C\to X such that for all j, g_j = \mathbf{g}\circ \iota_j.

That is, a family of maps from each element of the family is equivalent to a single map from the coproduct (just like a family of maps into the members of a family is equivalent to a single map into the product of the family). In particular, we get that:

Theorem. Let \mathcal{C} be a category in which the morphisms between any two objects form a set; let \{X_i\}_{i\in I} be a family of objects of \mathcal{C}, and let (C,\{\iota_j\}_{j\in I}) be their coproduct. Then for every object X of \mathcal{C} there is a natural bijection
\mathrm{Hom}_{\mathcal{C}}(C,X) \longleftrightarrow \prod_{j\in I}\mathrm{Hom}_{\mathcal{C}}(X_j,X).

The left hand side is the collection of morphisms from the coproduct to X; the right hand side is the collection of all families of morphisms from each element of \{X_i\}_{i\in I} into X.

In the vector space case, the fact that a linear transformation is completely determined by its value on a basis is what establishes that a vector space V with basis A is the coproduct of |A| copies of the one-dimensional vector space F. So we have that
\mathcal{L}(V,W) \leftrightarrow \mathcal{L}\left(\mathop{\oplus}\limits_{a\in A}F,W\right) \leftrightarrow \prod_{a\in A}\mathcal{L}(F,W).
But a linear transformation from F to W is equivalent to a map from the basis \{1\} of F into W, so \mathcal{L}(F,W) \cong W. Thus, we get that if V has a basis of cardinality \kappa (finite or infinite), we have:
\mathcal{L}(V,F) \leftrightarrow \mathcal{L}\left(\mathop{\oplus}_{i\in\kappa}F,F\right) \leftrightarrow \prod_{i\in\kappa}\mathcal{L}(F,F) \leftrightarrow \prod_{i\in\kappa}F = F^{\kappa}.
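In finite dimensions this bijection is just the familiar fact that a matrix is determined by its columns; a small illustration, assuming numpy is available:

```python
import numpy as np

# Finite-dimensional shadow of the bijection: a linear map
# T : F^3 -> F^2 corresponds exactly to the family of its values on
# the basis, (T(eps_0), T(eps_1), T(eps_2)) -- the columns of T.
T = np.array([[1., 2., 0.],
              [0., 1., 3.]])

family = [T[:, j] for j in range(3)]     # "restrict along each iota_j"

v = np.array([2., -1., 4.])              # 2*eps_0 - eps_1 + 4*eps_2
assert np.allclose(T @ v, sum(v[j] * family[j] for j in range(3)))
print(T @ v)                             # [ 0. 11.]
```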


Lemma 2. If \kappa is infinite, then \dim(V^*)\geq |F|.

Proof. If F is finite, then the inequality is immediate. Assume then that F is infinite. Let c\in F, c\neq 0. Define \mathbf{f}_c\colon V\to F by \mathbf{f}_c(\epsilon_n) = c^n if n\in\omega, and \mathbf{f}_c(\epsilon_i)=0 if i\geq\omega. These are linearly independent:

Suppose that c_1,\ldots,c_m are pairwise distinct nonzero elements of F, and that \alpha_1\mathbf{f}_{c_1} + \cdots + \alpha_m\mathbf{f}_{c_m} = \mathbf{0}. Then for each i\in\omega we have
\alpha_1 c_1^i + \cdots + \alpha_m c_m^i = 0.
Viewing the first m of these equations as linear equations in the \alpha_j, the corresponding coefficient matrix is the Vandermonde matrix,
\left(\begin{array}{cccc}
1 & 1 & \cdots & 1\\
c_1 & c_2 & \cdots & c_m\\
c_1^2 & c_2^2 & \cdots & c_m^2\\
\vdots & \vdots & \ddots & \vdots\\
c_1^{m-1} & c_2^{m-1} & \cdots & c_m^{m-1}
\end{array}\right),

whose determinant is \prod\limits_{1\leq i\lt j\leq m}(c_j-c_i)\neq 0. Thus, the system has a unique solution, to wit \alpha_1=\cdots=\alpha_m = 0.

Thus, the |F| linear functionals \mathbf{f}_c are linearly independent, so \dim(V^*)\geq |F|. QED
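The Vandermonde step is easy to check computationally; a small sketch using sympy (the specific values c_i are my own arbitrary choices):

```python
from math import prod
from sympy import Matrix, Rational

# Sanity check of Lemma 2: for pairwise distinct nonzero c_1,...,c_m,
# the matrix (c_j^i), i = 0,...,m-1, has nonzero determinant, so the
# system alpha_1 c_1^i + ... + alpha_m c_m^i = 0 forces all alpha = 0.
cs = [Rational(1), Rational(2), Rational(3), Rational(5)]
m = len(cs)

vand = Matrix(m, m, lambda i, j: cs[j] ** i)

det = vand.det()
formula = prod(cs[j] - cs[i] for i in range(m) for j in range(i + 1, m))
print(det, formula)             # 48 48 -- the Vandermonde determinant
assert vand.nullspace() == []   # only the trivial solution
```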

To recapitulate: Let V be a vector space of dimension \kappa over F, with \kappa infinite. Let V^* be the dual of V. Then V\cong\mathop{\bigoplus}\limits_{i\in\kappa}F and V^*\cong\prod\limits_{i\in\kappa}F.

Let \lambda be the dimension of V^*. Then by Lemma 1 we have |V^*| = \lambda|F|.

By Lemma 2, \lambda=\dim(V^*)\geq |F|, so |V^*| = \lambda. On the other hand, since V^*\cong\prod\limits_{i\in\kappa}F, then |V^*|=|F|^{\kappa}.

Therefore, \lambda= |F|^{\kappa}\geq 2^{\kappa} \gt \kappa. Thus, \dim(V^*)\gt\dim(V), so V is not isomorphic to V^*. (For a concrete instance: with F=F_2 and \kappa=\aleph_0, V is countable, while \dim(V^*)=|V^*|=2^{\aleph_0}.)


Added{}^{\mathbf{2}}. Some results on vector spaces and bases.

Let V be a vector space, and let A be a maximal linearly independent set (that is, A is linearly independent, and if B is any subset of V that properly contains A, then B is linearly dependent).

In order to guarantee that there is a maximal linearly independent set in any vector space, one needs to invoke the Axiom of Choice in some manner, since the existence of such a set is, as we will see below, equivalent to the existence of a basis; however, here we are assuming that we already have such a set given. I believe that the Axiom of Choice is not involved in any of what follows.

Proposition. \mathrm{span}(A) = V.

Proof. Since A\subseteq V, then \mathrm{span}(A)\subseteq V. Let v\in V. If v\in A, then v\in\mathrm{span}(A). If v\notin A, then B=A\cup\{v\} is linearly dependent by maximality. Therefore, there exist finitely many pairwise distinct vectors a_1,\ldots,a_m in B and scalars \alpha_1,\ldots,\alpha_m, not all zero, such that \alpha_1a_1+\cdots+\alpha_ma_m=\mathbf{0}. Since A is linearly independent, at least one of the a_i must be equal to v; say a_1=v. Moreover, v must occur with a nonzero coefficient, again by the linear independence of A. So \alpha_1\neq 0, and we can then write
v = a_1 = \frac{1}{\alpha_1}(-\alpha_2a_2 -\cdots - \alpha_ma_m)\in\mathrm{span}(A).
This proves that V\subseteq \mathrm{span}(A). \Box

Proposition. Let V be a vector space, and let X be a linearly independent subset of V. If v\in\mathrm{span}(X), then any two expressions of v as linear combinations of elements of X differ only in having extra summands of the form 0x with x\in X.

Proof. Let v = a_1x_1+\cdots+a_nx_n = b_1y_1+\cdots+b_my_m be two expressions of v as linear combinations of elements of X.

We may assume without loss of generality that n\leq m. Reordering the x_i and the y_j if necessary, we may assume that x_1=y_1, x_2=y_2,\ldots,x_{k}=y_k for some k, 0\leq k\leq n, and x_1,\ldots,x_k,x_{k+1},\ldots,x_n,y_{k+1},\ldots,y_m are pairwise distinct. Then
\begin{align*}
\mathbf{0} &= v-v\\
&=(a_1x_1+\cdots+a_nx_n)-(b_1y_1+\cdots+b_my_m)\\
&= (a_1-b_1)x_1 + \cdots + (a_k-b_k)x_k + a_{k+1}x_{k+1}+\cdots + a_nx_n - b_{k+1}y_{k+1}-\cdots - b_my_m.
\end{align*}

As this is a linear combination of pairwise distinct elements of X equal to \mathbf{0}, it follows from the linear independence of X that a_{k+1}=\cdots=a_n=0, b_{k+1}=\cdots=b_m=0, and a_1=b_1, a_2=b_2,\ldots,a_k=b_k. That is, the two expressions of v as linear combinations of elements of X differ only in that there are extra summands of the form 0x with x\in X in them. QED
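A small Python illustration of the Proposition (the helper `canonical` is my own name): collapsing a formal combination by merging repeats and dropping zero coefficients gives a canonical form, so two expressions of the same vector agree after collapsing.

```python
from fractions import Fraction

def canonical(terms):
    """Collapse a formal combination, given as (coeff, vector) pairs,
    by merging repeated vectors and dropping zero coefficients."""
    out = {}
    for c, x in terms:
        out[x] = out.get(x, Fraction(0)) + c
        if out[x] == 0:
            del out[x]
    return out

# Two expressions of the same v; they differ only by a 0-summand and
# a split coefficient, so they collapse to the same canonical form.
e1 = [(Fraction(2), 'x1'), (Fraction(3), 'x2')]
e2 = [(Fraction(1), 'x1'), (Fraction(1), 'x1'),
      (Fraction(3), 'x2'), (Fraction(0), 'x3')]
print(canonical(e1) == canonical(e2))   # True
```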

Corollary. Let V be a vector space, and let A be a maximal independent subset of V. If W is a vector space, and f\colon A\to W is any function, then there exists a unique linear transformation T\colon V\to W such that T(a)=f(a) for each a\in A.

Proof. Existence. Given v\in V, we have v\in\mathrm{span}(A). Therefore, we can express v as a linear combination of elements of A,
v = \alpha_1a_1+\cdots+\alpha_na_n. Define
T(v) = \alpha_1f(a_1)+\cdots+\alpha_nf(a_n).
Note that T is well-defined: if v = \beta_1b_1+\cdots+\beta_mb_m is any other expression of v as a linear combination of elements of A, then by the lemma above the two expressions differ only in summands of the form 0x; but these summands do not affect the value of T.

Note also that T is linear, arguing as above. Finally, since a\in A can be expressed as a=1a, then T(a) = 1f(a) = f(a), so the restriction of T to A is equal to f.

Uniqueness. If U is any linear transformation V\to W such that U(a)=f(a) for all a\in A, then for every v\in V, write v=\alpha_1a_1+\cdots+\alpha_na_n with a_i\in A. Then
\begin{align*}
U(v) &= U(\alpha_1a_1+\cdots + \alpha_na_n)\\
&= \alpha_1U(a_1) + \cdots + \alpha_n U(a_n)\\
&= \alpha_1f(a_1)+\cdots + \alpha_n f(a_n)\\
&= \alpha_1T(a_1) + \cdots + \alpha_n T(a_n)\\
&= T(\alpha_1a_1+\cdots+\alpha_na_n)\\
&= T(v).\end{align*}

Thus, U=T. QED

Attribution
Source: Link. Question author: Asaf Karagila. Answer author: 9 revs, 4 users, 99%.
