# Is there a definition of determinants that does not rely on how they are calculated?

In the few linear algebra texts I have read, the determinant is introduced in the following manner:

“Here is a formula for what we call $\det A$. Here are some other formulas. And finally, here are some nice properties of the determinant.”

For example, in very elementary textbooks it is introduced by giving the co-factor expansion formula. In Axler’s “Linear Algebra Done Right” it is defined, for $T\in L(V)$, to be $(-1)^{\dim V}$ times the constant term in the characteristic polynomial of $T$.

However, I find this somewhat unsatisfactory. It’s as if the real definition of the determinant is hidden. Ideally, wouldn’t the determinant be defined in the following manner:

“Given a matrix $A$, let $\det A$ be an element of $\mathbb{F}$ such that x, y and z.”

Then one would proceed to prove that this element is unique, and derive the familiar formulae.

So my question is: does a definition of the latter type exist? Is there some minimal set of properties sufficient to define what a determinant is? If not, can you explain why?

Let $V$ be a vector space of dimension $n$. For any $p$, the construction of the exterior power $\Lambda^p(V)$ is functorial in $V$: it is the universal object for alternating multilinear functions out of $V^p$, that is, functions

$$\phi : V^p \to W$$

where $W$ is any other vector space, such that $\phi$ satisfies $\phi(v_1, … v_i + v, … v_p) = \phi(v_1, … v_i, … v_p) + \phi(v_1, … v_{i-1}, v, v_{i+1}, … v_p)$ and $\phi(v_1, … v_i, … v_j, … v_p) = -\phi(v_1, … v_j, … v_i, … v_p)$. What this means is that there is an alternating multilinear map $\psi : V^p \to \Lambda^p(V)$ (the exterior product) which is universal with respect to this property; that is, given any other map $\phi$ as above with the same properties, $\phi$ factors uniquely as $\phi = f \circ \psi$ where $f : \Lambda^p(V) \to W$ is linear.

Intuitively, the universal map $\psi : V^p \to \Lambda^p(V)$ is the universal way to measure the oriented $p$-dimensional volumes of paralleletopes defined by $p$-tuples of vectors in $V$, the point being that for geometric reasons oriented $p$-dimensional volume is alternating and multilinear. (It is instructive to work out how this works when $n = 2, 3$ by explicitly drawing some diagrams.)
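Here is a small sketch of that intuition in coordinates, for the case $n = p = 2$ (the function name `area` and the test vectors are mine, chosen for illustration): the signed area of a parallelogram in the plane is alternating and bilinear, exactly the properties the universal map $\psi$ abstracts.

```python
# A minimal sketch (all names here are illustrative, not from the text):
# in R^2, the oriented area of the parallelogram spanned by v1 and v2
# is an alternating, bilinear function of (v1, v2).

def area(v1, v2):
    """Oriented (signed) area of the parallelogram spanned by v1 and v2."""
    return v1[0] * v2[1] - v1[1] * v2[0]

v, w, u = (2, 1), (1, 3), (4, -1)

# Alternating: swapping the arguments flips the sign ...
assert area(v, w) == -area(w, v)
# ... and a degenerate parallelogram (a repeated vector) has zero area.
assert area(v, v) == 0

# Multilinear in the first slot: area(v + u, w) = area(v, w) + area(u, w).
v_plus_u = (v[0] + u[0], v[1] + u[1])
assert area(v_plus_u, w) == area(v, w) + area(u, w)
```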

Functoriality means the following: if $T : V \to W$ is any map between two vector spaces, then there is a natural map $\Lambda^p T : \Lambda^p V \to \Lambda^p W$ between their $p^{th}$ exterior powers satisfying certain natural conditions. This natural map comes in turn from the map $V^p \to W^p$ sending $(v_1, … v_p)$ to $(Tv_1, … Tv_p)$, which is compatible with passing to the exterior powers.
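Functoriality can be checked concretely in coordinates. The sketch below (helper names `comp2`, `matmul`, `det2` are mine) builds the matrix of $\Lambda^2 T$ as the matrix of $2 \times 2$ minors of $T$ (the second compound matrix) and verifies that $\Lambda^2(S \circ T) = \Lambda^2 S \circ \Lambda^2 T$, which in this language is the Cauchy–Binet formula:

```python
# A hedged illustration (names are mine): the matrix of Lambda^2 T in the
# basis e_i ^ e_j is the matrix of 2x2 minors of T, and functoriality says
# comp2(S @ T) == comp2(S) @ comp2(T)  (the Cauchy-Binet formula).
from itertools import combinations

def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

def comp2(A):
    """Second compound matrix: rows and columns indexed by 2-subsets
    (in lexicographic order), entries the corresponding 2x2 minors."""
    idx = list(combinations(range(len(A)), 2))
    return [[det2(A[r0][c0], A[r0][c1], A[r1][c0], A[r1][c1])
             for (c0, c1) in idx] for (r0, r1) in idx]

def matmul(A, B):
    """Product of two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

S = [[1, 2, 0], [0, 1, 1], [2, 0, 1]]
T = [[1, 0, 1], [1, 1, 0], [0, 2, 1]]

# Functoriality of the second exterior power, in coordinates.
assert comp2(matmul(S, T)) == matmul(comp2(S), comp2(T))
```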

The top exterior power $\Lambda^n(V)$ turns out to be one-dimensional. We then define the determinant of a linear transformation $T : V \to V$ to be the scalar by which the induced map $\Lambda^n T : \Lambda^n(V) \to \Lambda^n(V)$ acts on the top exterior power. This is equivalent to the intuitive definition that $\det T$ is the constant by which $T$ multiplies oriented $n$-dimensional volumes. But it requires no arbitrary choices, and the standard properties of the determinant (for example that it is multiplicative, or that it is equal to the product of the eigenvalues) are extremely easy to verify.
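In coordinates this definition is easy to compute with. The sketch below (helper names `sign`, `det`, `matmul` are mine) expands $(Te_1) \wedge \cdots \wedge (Te_n)$ against $e_1 \wedge \cdots \wedge e_n$, which yields the familiar Leibniz permutation formula, and then checks multiplicativity, which from this point of view is just functoriality applied to the top exterior power:

```python
# A sketch (helper names are mine): expanding (Te_1) ^ ... ^ (Te_n) in the
# one-dimensional top exterior power recovers the Leibniz formula, and
# det(ST) = det(S) det(T) falls out of functoriality.
from itertools import permutations
from math import prod

def sign(p):
    """Sign of a permutation (given as a tuple), by counting inversions."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(A):
    """Coefficient of e_1 ^ ... ^ e_n in (Te_1) ^ ... ^ (Te_n), where
    column j of A holds the coordinates of T(e_j): the Leibniz formula."""
    n = len(A)
    return sum(sign(p) * prod(A[p[j]][j] for j in range(n))
               for p in permutations(range(n)))

def matmul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

S = [[1, 2], [3, 4]]
T = [[0, 1], [5, 2]]

# Multiplicativity: the scalar by which ST acts on Lambda^n(V) is the
# product of the scalars by which S and T act.
assert det(matmul(S, T)) == det(S) * det(T)
```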

In this definition of the determinant, all the work that would normally go into showing that the determinant is the unique function with such-and-such properties goes into showing that $\Lambda^n(V)$ is one-dimensional. If $e_1, … e_n$ is a basis, then $\Lambda^n(V)$ is in fact spanned by $e_1 \wedge e_2 \wedge … \wedge e_n$. This is not so hard to prove; it is essentially an exercise in row reduction.
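For instance, when $n = 2$ the spanning claim is a two-line computation: writing $v_1 = a e_1 + b e_2$ and $v_2 = c e_1 + d e_2$, multilinearity and the alternating property (which forces $e_i \wedge e_i = 0$ and $e_2 \wedge e_1 = -e_1 \wedge e_2$) give

$$v_1 \wedge v_2 = ac\, e_1 \wedge e_1 + ad\, e_1 \wedge e_2 + bc\, e_2 \wedge e_1 + bd\, e_2 \wedge e_2 = (ad - bc)\, e_1 \wedge e_2,$$

so every wedge of two vectors is a multiple of $e_1 \wedge e_2$, with coefficient the familiar $2 \times 2$ determinant.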

Note that this definition does not even require a definition of oriented $n$-dimensional volume as a number. Abstractly such a notion of volume is given by a choice of isomorphism $\Lambda^n(V) \to k$ where $k$ is the underlying field, but since $\Lambda^n(V)$ is one-dimensional its space of endomorphisms is already canonically isomorphic to $k$.

Note also that just as the determinant describes the action of $T$ on the top exterior power $\Lambda^n(V)$, the $p \times p$ minors of $T$ describe the action of $T$ on the $p^{th}$ exterior power $\Lambda^p(V)$. In particular, the $(n-1) \times (n-1)$ minors (which, with alternating signs attached, form the matrix of cofactors) describe the action of $T$ on the second-to-top exterior power $\Lambda^{n-1}(V)$. This exterior power has the same dimension as $V$, and with the right extra data can be identified with $V$, and this leads to a quick and natural proof of the explicit formula for the inverse of a matrix.
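The cofactor story can be checked directly. In the sketch below (helper names `minor`, `adjugate`, `matmul` are mine), attaching signs $(-1)^{i+j}$ to the $(n-1) \times (n-1)$ minors and transposing gives the adjugate, and $A \cdot \operatorname{adj}(A) = (\det A)\, I$, which is the explicit formula for the inverse when $\det A \neq 0$:

```python
# A sketch (helper names are mine) of the cofactor/adjugate identity
# A * adj(A) = det(A) * I, verified on a concrete integer matrix.
from itertools import permutations
from math import prod

def sign(p):
    """Sign of a permutation, by counting inversions."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(A):
    """Leibniz formula: sum over permutations of signed products."""
    n = len(A)
    return sum(sign(p) * prod(A[p[j]][j] for j in range(n))
               for p in permutations(range(n)))

def minor(A, i, j):
    """Submatrix of A with row i and column j deleted."""
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def adjugate(A):
    """Transposed matrix of cofactors: adj(A)[i][j] = (-1)^{i+j} det(minor(A, j, i))."""
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, j, i)) for j in range(n)]
            for i in range(n)]

def matmul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 0, 1], [1, 3, 0], [0, 1, 4]]
d = det(A)  # here d = 25
P = matmul(A, adjugate(A))

# A * adj(A) is det(A) times the identity matrix.
assert all(P[i][j] == (d if i == j else 0) for i in range(3) for j in range(3))
```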

As an advance warning, the determinant is sometimes defined as an alternating multilinear function on $n$-tuples of vectors $v_1, … v_n$ satisfying certain properties; this properly defines a linear transformation $\Lambda^n(V) \to k$, not a determinant of a linear transformation $T : V \to V$. If we fix a basis $e_1, … e_n$, then this function can be thought of as the determinant of the linear transformation sending $e_i$ to $v_i$, but this definition is basis-dependent.