From what I understand, a covector is an object that takes a vector and returns a number. So given a vector $v\in V$ and a covector $\phi\in V^*$, you can act on $v$ with $\phi$ to get a real number $\phi(v)$. Is that “it” or is there more to it?

I find this simple realization hard to reconcile with an example given in my textbook, *Geometric Measure Theory* by Frank Morgan, where he explains that given $\mathbb{R}^n$, the dual space $(\mathbb{R}^n)^*$ has basis $dx_1, dx_2, \dots, dx_n$. So let's say we are in $\mathbb{R}^2$; then $dx$ is a function on any vector that returns a scalar? I can't imagine what $dx([1,2])$ is. Can someone explain this concept of covectors, reconciled with the infinitesimal case?

Edit: It has something to do with biorthogonality? I am really stumped on these concepts, like I’m ramming my head against a wall.

**Answer**

**Addition**: in a new answer I have briefly continued the explanation below to define general tensors and not just differential forms (covariant tensors).

Many of us got very confused with the notions of tensors in differential geometry, not because of their algebraic structure or definition, but because of confusing old notation. The motivation for the notation $dx^i$ is perfectly justified once one is introduced to the exterior differentiation of differential forms, which are just antisymmetric multilinear covectors (i.e. they take a number of vectors and give a number, changing sign if we reorder the input vectors). The confusion in your case is compounded because you are using $x_i$ for your basis of vectors, while the $x_i$ are actually your local coordinate functions.

**CONSTRUCTION & NOTATION:**

Let $V$ be a finite-dimensional vector space over a field $k$ with basis $\vec e_1,\dots,\vec e_n$, and let $V^*:=\operatorname{Hom}(V,k)$ be its dual vector space, i.e. the linear space formed by linear functionals $\tilde\omega:V\to k$ which eat vectors and give scalars. Now the tensor product $\bigotimes_{i=1}^p V^*=:(V^*)^{\otimes p}$ is just the vector space of multilinear functionals $\tilde\omega^{(p)}:V^{\times p}\to k$; e.g. $\tilde\omega(\vec v_1,\dots,\vec v_p)$ is a scalar, linear in each of its input vectors. By alternating those spaces, one gets the antisymmetric multilinear functionals mentioned before, which satisfy $\omega(\vec v_1,\dots,\vec v_p)=\operatorname{sgn}(\pi)\cdot\omega(\vec v_{\pi(1)},\dots,\vec v_{\pi(p)})$ for any permutation $\pi$ of the $p$ entries. The easiest example is to think of row vectors and matrices: if your vectors are columns, think of covectors as row vectors which, by matrix product, give you a scalar (actually the usual scalar product!); these are called one-forms. Similarly, any matrix multiplied by a column vector on the right and a row vector on the left gives you a scalar; an antisymmetric matrix works the same way but with the alternating property, and it is called a two-form. This generalizes to inputs of any number of vectors.
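As a concrete numerical sketch of the row-vector picture (plain NumPy; the particular vectors and matrix are made up for illustration):

```python
import numpy as np

# A vector as a column, a covector as a row: the pairing is the matrix product.
v = np.array([1.0, 2.0])           # a vector in R^2
phi = np.array([3.0, -1.0])        # a covector, written as a row
print(phi @ v)                     # phi(v) = 3*1 + (-1)*2 = 1.0

# A bilinear functional as a matrix: omega(u, w) = u^T A w.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])        # antisymmetric, hence a 2-form on R^2
u = np.array([1.0, 0.0])
w = np.array([0.0, 1.0])
print(u @ A @ w, w @ A @ u)        # 1.0 -1.0: the sign flips when the inputs swap
```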

Now, interestingly, there is only a finite number of those alternating spaces, $$k\cong(V^*)^{\wedge 0},\quad V^*,\quad (V^*)^{\wedge 2}:=V^*\wedge V^*,\quad\dots,\quad (V^*)^{\wedge\dim V},$$ since any alternating functional of more than $\dim V$ vectors must vanish.

If you consider your $V$-basis to be $\vec e_1,\dots,\vec e_n$, by construction its dual space of covectors has a basis of the same dimension given by the linear forms $\tilde e^1,\dots,\tilde e^n$ which satisfy $\tilde e^i(\vec e_k)=\delta^i_k$; that is, they are the functionals that give $1$ only on their dual vectors and $0$ on the rest (covectors are usually indexed by superindices to use the Einstein summation convention). That is why any covector acting on a vector $\vec v=\sum_i v^i\vec e_i$ can be written $$\tilde\omega=\sum_{i=1}^n\omega_i\,\tilde e^i\;\Rightarrow\;\tilde\omega(\vec v)=\sum_{i=1}^n\omega_i\,\tilde e^i(\vec v)=\sum_{i=1}^n\omega_i\cdot v^i.$$
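A small NumPy sketch of the dual basis (the basis vectors are an arbitrary choice for illustration): if the basis vectors $\vec e_i$ are the columns of a matrix $E$, the dual covectors $\tilde e^i$ are exactly the rows of $E^{-1}$, since $E^{-1}E=I$ encodes $\tilde e^i(\vec e_k)=\delta^i_k$.

```python
import numpy as np

# Basis of R^2 as the columns of E (an arbitrary invertible choice).
E = np.array([[2.0, 1.0],
              [0.0, 1.0]])                   # e_1 = (2,0), e_2 = (1,1)

E_dual = np.linalg.inv(E)                    # rows are the dual basis covectors
print(E_dual @ E)                            # identity: e~^i(e_k) = delta^i_k

# A covector with components (1, 3) in the dual basis, acting on a vector
# with components (2, 5) in the basis e_i: omega(v) = sum_i omega_i * v^i.
omega = np.array([1.0, 3.0]) @ E_dual        # omega as a row vector
v = 2.0 * E[:, 0] + 5.0 * E[:, 1]
print(omega @ v)                             # 1*2 + 3*5 = 17.0
```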

Finally, when you are working on differential manifolds, you endow each point $p$ with a tangent space $T_pM$ of tangent vectors, and so you get a dual space $T^*_pM$ of covectors and its alternating space $\Omega^1_p(M)$ of alternating $1$-forms (which in this case coincide). Analogously you define your $k$-forms on tangent vectors, defining $\Omega^k_p(M)$. Since these spaces vary from point to point, you start talking about *fields* of tangent vectors and *fields* of differential $k$-forms, whose components (with respect to your chosen basis field) vary from point to point.

Now the interesting fact is that the constructive, intrinsic definition of tangent vectors is based on directional derivatives at every point. Think of your manifold $M$ of dimension $n$ as locally charted around a point $p$ by coordinates $x^1,\dots,x^n$. If you have a curve $\gamma:[0,1]\subset\mathbb{R}\to M$ on your manifold passing through your point $p$ of interest, you can intrinsically define tangent vectors at $p$ by looking at the directional derivatives at $p$ of any smooth function $f:M\to\mathbb{R}$, which are given by the rate of change of $f$ along any curve passing through $p$. That is, in local coordinates, your directional derivative at the point $p$ along (in the tangent direction of) a curve $\gamma$ passing through it ($\gamma(t_0)=p$) is given by the chain rule, using the local coordinates of $\gamma(t)$: $$\frac{df(\gamma(t))}{dt}\bigg|_{t_0}=\sum_{i=1}^n\frac{\partial f}{\partial x^i}\bigg|_p\,\frac{dx^i(\gamma(t))}{dt}\bigg|_{t_0}=\sum_{i=1}^n X^i\big|_p\,\frac{\partial f}{\partial x^i}\bigg|_p.$$
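That chain-rule identity can be checked symbolically; here is a SymPy sketch with an arbitrary function and curve chosen for illustration:

```python
import sympy as sp

t, x, y = sp.symbols('t x y')

f = x**2 + y                        # a smooth function on M = R^2
gamma = (sp.cos(t), sp.sin(t))      # a curve gamma(t) on M

# Left side: differentiate f(gamma(t)) directly.
lhs = sp.diff(f.subs({x: gamma[0], y: gamma[1]}), t)

# Right side: sum_i X^i * (df/dx^i) along the curve, where
# X^i = d(gamma^i)/dt are the components of the tangent vector.
X = [sp.diff(g, t) for g in gamma]
rhs = sum(Xi * sp.diff(f, xi).subs({x: gamma[0], y: gamma[1]})
          for Xi, xi in zip(X, (x, y)))

print(sp.simplify(lhs - rhs))       # 0: the identity holds
```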

This is how that equation must be understood: at every point $p$, different curves have different “tangent vectors” $X$ with local components $X^i|_p\in\mathbb{R}$ given by the parametric differentiation of the curve's local equations (so actually you only have to care about equivalence classes of curves having the same direction at each point). The directional derivative at any point of any function in the direction $X$ is thus expressible as the operator $$X=\sum_{i=1}^n X^i\frac{\partial}{\partial x^i},$$ for all possible components $X^i$ that are smooth scalar fields over the manifold. In this way you have attached a tangent space $T_pM\cong\mathbb{R}^n$ at every point, with canonical basis $\vec e_i=\frac{\partial}{\partial x^i}=:\partial_i$ for every local chart of coordinates $x^i$. This may seem far from the visual “arrow”-like notion of vectors and tangent vectors as seen on surfaces and submanifolds of $\mathbb{R}^n$, but it can be proved to be equivalent to the definition of the geometric tangent space obtained by embedding your manifold in some $\mathbb{R}^N$ (which can always be done by Whitney's embedding theorem) and restricting your “arrow”-space at every point to the “arrow”-subspace of vectors tangent to $M$ as a submanifold, in the same way that you can think of the tangent planes of a surface in space.

Besides, this is confirmed by the transformation of components: if two charts $x$ and $y$ contain the point $p$, then their coordinate vectors transform with the Jacobian of the coordinate transformation $x\mapsto y$, $$\frac{\partial}{\partial y^i}=\sum_{j=1}^n\frac{\partial x^j}{\partial y^i}\frac{\partial}{\partial x^j},$$ which gives the old-fashioned transformation rule for vectors (and tensors) as defined in theoretical physics. Now it is an easy exercise to see that, if a change of basis in $V$ from $\vec e_i$ to $\vec e'_j$ is given by the invertible matrix $\Lambda^i_{\ j'}$ (which is always the case), then the corresponding dual bases are related by the inverse transformation $\Lambda^{j'}_{\ i}:=(\Lambda^i_{\ j'})^{-1}$:

$$\vec e'_j=\sum_{i=1}^n\Lambda^i_{\ j'}\,\vec e_i\;\Rightarrow\;\tilde e'^{\,j}=\sum_{i=1}^n(\Lambda^i_{\ j'})^{-1}\,\tilde e^i=:\sum_{i=1}^n\Lambda^{j'}_{\ i}\,\tilde e^i,\qquad\text{where}\quad\sum_i\Lambda^{k'}_{\ i}\Lambda^i_{\ j'}=\delta^{k'}_{j'}.$$

Thus, on our manifold, the cotangent space is defined to be $T^*_pM$, with coordinate bases given by the dual functionals $\omega^i_x(\partial/\partial x^j)=\delta^i_j$ and $\omega^i_y(\partial/\partial y^j)=\delta^i_j$. By the transformation law of tangent vectors and the discussion above on the dual transformation, tangent covectors must transform with the inverse of the previously mentioned Jacobian:

$$\omega^i_y=\sum_{j=1}^n\frac{\partial y^i}{\partial x^j}\,\omega^j_x.$$

But this is precisely the transformation rule for differentials by the chain rule!

$$dy^i=\sum_{j=1}^n\frac{\partial y^i}{\partial x^j}\,dx^j.$$
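These two transformation laws can be checked explicitly on a standard chart change. Here is a SymPy sketch for the Cartesian-to-polar change of coordinates, verifying that the Jacobian $\partial y^i/\partial x^j$ (which acts on covector bases) and the Jacobian $\partial x^j/\partial y^i$ (which acts on vector bases) are mutually inverse:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)
r, th = sp.symbols('r theta', positive=True)

# Chart change y(x): Cartesian (x1, x2) -> polar (r, theta).
R = sp.sqrt(x1**2 + x2**2)
TH = sp.atan2(x2, x1)
J_y_x = sp.Matrix([[sp.diff(R, x1), sp.diff(R, x2)],
                   [sp.diff(TH, x1), sp.diff(TH, x2)]])   # dy^i/dx^j

# Inverse chart x(y) and its Jacobian dx^j/dy^i.
X1, X2 = r * sp.cos(th), r * sp.sin(th)
J_x_y = sp.Matrix([[sp.diff(X1, r), sp.diff(X1, th)],
                   [sp.diff(X2, r), sp.diff(X2, th)]])

# Express dy/dx in the polar variables and multiply: must be the identity.
prod = (J_y_x.subs({x1: X1, x2: X2}) * J_x_y).applyfunc(sp.simplify)
print(prod)
```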

Therefore it is conventional to use the notation $\partial/\partial x^i$ and $dx^i$ for the tangent-vector and covector coordinate bases in a chart $x:M\to\mathbb{R}^n$.

Now, from the previous point of view, the $dx^i$ are regarded as the classical differentials, just with the new perspective of being the functional duals of the differential operators $\partial/\partial x^i$. To make more sense of this, one has to turn to differential forms, which are our $k$-forms on $M$. A $1$-form $\omega\in T^*_pM=\Omega^1_p(M)$ is then just

$$\omega=\sum_{i=1}^n\omega_i\,dx^i,$$

with the components $\omega_i(x)$ varying with $p$: they are smooth scalar fields over the manifold. It is standard to take $\Omega^0(M)$ to be the space of smooth scalar fields. After defining the alternating wedge product $\wedge$, we get the $k$-form bases $$dx^i\wedge dx^j\in\Omega^2(M),\quad dx^i\wedge dx^j\wedge dx^k\in\Omega^3(M),\quad\dots,\quad dx^1\wedge\cdots\wedge dx^n\in\Omega^n(M)$$ (in fact not all combinations of indices of each order are independent, because of the antisymmetry, so the bases have fewer elements than the set of products). All these “cotensor” linear spaces are nicely put together into a ring with that alternating wedge product $\wedge$, and a nice differentiation operation can be defined for such objects: the exterior derivative $d:\Omega^k(M)\to\Omega^{k+1}(M)$ given by $$d\omega^{(k)}:=\sum_{i_1<\dots<i_k}\sum_{i_0=1}^n\frac{\partial\omega_{i_1\dots i_k}}{\partial x^{i_0}}\,dx^{i_0}\wedge dx^{i_1}\wedge\cdots\wedge dx^{i_k}.$$
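The exterior derivative of a $1$-form in $\mathbb{R}^3$ can be computed directly from this formula; a SymPy sketch (the $1$-form here is an arbitrary choice):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

# A 1-form omega = P dx + Q dy + R dz on R^3.
omega = (-y, x, sp.Integer(0))      # P = -y, Q = x, R = 0

# d(omega) has component (d_i omega_j - d_j omega_i) on dx^i ^ dx^j for i < j;
# these are, up to ordering and sign, the components of curl(P, Q, R).
domega = {(i, j): sp.diff(omega[j], coords[i]) - sp.diff(omega[i], coords[j])
          for i in range(3) for j in range(i + 1, 3)}
print(domega)                       # {(0, 1): 2, (0, 2): 0, (1, 2): 0}
```

So here $d\omega = 2\,dx\wedge dy$, the wedge-product analogue of $\operatorname{curl}(-y,x,0)=(0,0,2)$.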

This differential operation is a generalization of, and reduces to, the usual vector calculus operators $\operatorname{grad}$, $\operatorname{curl}$, $\operatorname{div}$ in $\mathbb{R}^n$. Thus, we can apply $d$ to smooth scalar fields $f\in\Omega^0(M)$ so that $df=\sum_i\partial_i f\,dx^i\in\Omega^1(M)$. (Note that not all covector fields $\omega^{(1)}$ come from some $df$: a necessary condition for this “exactness of forms” is $\partial_i\omega_j=\partial_j\omega_i$ for any pair of components, and by Poincaré's lemma this is also sufficient locally; this is the beginning of de Rham cohomology.) In particular, the coordinate functions $x^i$ of any chart satisfy $$d(x^i)=\sum_{j=1}^n\frac{\partial x^i}{\partial x^j}\,dx^j=\sum_{j=1}^n\delta^i_j\,dx^j=dx^i.$$
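The closedness condition $\partial_i\omega_j=\partial_j\omega_i$ holds automatically for any $\omega=df$, since mixed partials commute; a quick SymPy check with an arbitrary scalar field:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

f = sp.exp(x) * sp.sin(y) + x * z**2          # an arbitrary smooth scalar field
omega = [sp.diff(f, v) for v in coords]       # components of the 1-form df

# d(df) = 0: every antisymmetrized derivative of the components vanishes.
ddf = [sp.simplify(sp.diff(omega[j], coords[i]) - sp.diff(omega[i], coords[j]))
       for i in range(3) for j in range(i + 1, 3)]
print(ddf)                                    # [0, 0, 0]
```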

This final equality establishes the correspondence between the infinitesimal differentials $dx^i$ and the exterior derivatives $d(x^i)$. Since we could have written any other symbol for the basis of $\Omega^1_p(M)$, it is clear that the $dx^i$ are a basis for the dual tangent space. In practice, the notational distinction between the two is dropped, as we are working with isomorphic objects at the level of their linear algebra. All this is the reason why $dx^i(\partial_j)=\delta^i_j$: once it is proved that the $dx^i$ form a basis of $T^*_pM$, this is straightforward without resorting to the component transformation laws. On the other hand, one could start, after the definition of tangent spaces as above, with the coordinate basis $\partial/\partial x^i$ and dualize to its covector basis $\tilde e^i$ such that $\tilde e^i(\partial_j)=\delta^i_j$; after that, one defines wedge products and exterior derivatives as usual from the cotangent spaces. Then it is a theorem that for any tangent vector field $X$ and function $f$ on $M$, $$X(f)=\sum_{i=1}^n X^i\frac{\partial f}{\partial x^i}=df(X),$$

so in particular we get as a corollary that our original dual coordinate basis is in fact the exterior differential of the coordinate functions: $$(\vec e_i)^*=\tilde e^i=dx^i\in\Omega^1(M):=T^*(M).$$

(This is true at any point for any coordinate basis from the possible charts, so it is true for covector fields over the manifold.) In particular, the evaluation of the covectors $dx^i$ on small vectors $\Delta x^j\,\partial_j$ is $dx^i(\Delta x^j\,\partial_j)=\sum_{j=1}^n\delta^i_j\,\Delta x^j=\Delta x^i$, so in the limit $\Delta x^i\to 0$ we can see the infinitesimal differentials as the evaluations of the coordinate covectors on infinitesimal vectors.

**MEANING, USAGE & APPLICATIONS:**

Covectors are the essential structure for differential forms in differential topology/geometry and most of the important developments in those fields are formulated or use them in one way or another. They are central ingredients to major topics such as:

Linear dependence of vectors, determinants and hyper-volumes, orientation, integration of forms (Stokes' theorem generalizing the fundamental theorem of calculus), singular homology and de Rham cohomology groups (Poincaré's lemma, de Rham's theorem, Euler characteristics and applications as invariants for algebraic topology), exterior covariant derivatives, connections and curvature of vector and principal bundles, characteristic classes, Laplacian operators, harmonic functions & the Hodge decomposition theorem, index theorems (Chern-Gauß-Bonnet, Hirzebruch-Riemann-Roch, Atiyah-Singer...) and close relations to modern topological invariants (Donaldson-Thomas, Gromov-Witten, Seiberg-Witten, soliton equations...).

In particular, from a mathematical physics point of view, differential forms (covectors and tensors in general) are fundamental entities for the formulation of physical theories in geometric terms. For example, Maxwell's equations of electromagnetism are just two equations of differential forms in Minkowski's spacetime:

$$dF=0,\qquad\star\,d\star F=J,$$

where $F$ is a $2$-form (an antisymmetric $4\times 4$ matrix) whose components are the electric and magnetic field components $E_i, B_j$; $J$ is a spacetime vector of charge-current densities; and $\star$ is the Hodge star operator (which depends on the metric of the manifold, so it requires additional structure!). In fact the first equation is just the differential-geometric formulation of the classical Gauß law for the magnetic field together with Faraday's induction law, while the other equation comprises the dynamic Gauß law for the electric field and Ampère's circuital law. The continuity law of conservation of charge becomes just $d\star J=0$. Also, by Poincaré's lemma, $F$ can always be solved locally as $F=dA$, with $A$ a covector field on spacetime called the electromagnetic potential (in fact, in electrical engineering and telecommunications the equations are always solved via these potentials); since exterior differentiation satisfies $d\circ d=0$, the potentials are underdetermined by $A'=A+d\phi$, which is precisely the simplest “gauge invariance”, a notion that pervades the whole of physics. In theoretical physics the electromagnetic force comes from a $U(1)$ principal bundle over the spacetime; generalizing this to the Lie groups $SU(2)$ and $SU(3)$, one arrives at the mathematical models for the weak and strong nuclear forces (electroweak theory and chromodynamics). The Faraday $2$-form $F$ is actually the local curvature of the gauge connection for those fiber bundles, and similarly for the other forces. This is the framework of gauge theory, working on arbitrary manifolds for your spacetime. The only remaining force is gravity, and again general relativity can be written in the Cartan formalism as the curvature of a Lorentz spin connection on an $SO(1,3)$ gauge bundle, or equivalently as the (Riemannian) curvature of the tangent bundle.

**Attribution**
*Source: Link, Question Author: Mike Flynn, Answer Author: Daniel Pfeffer*