## Dual spaces

There is one glaring omission in our Linear Algebra curriculum – it avoids talking about the dual space of a vector space. This makes talking about the relationship between subspaces and the equations that define them exceedingly difficult. Better late than never, so here it comes.

Let $V$ be a vector space over a field $\mathbb{F}.$ Denote by $V^*$ the set of linear functions $V\to \mathbb{F}.$

### Examples
Let $V=C[a,b],$ the space of continuous functions on $[a,b].$ Then the function $\int: V\to \mathbb{R}$ given by $f\mapsto \int_a^b f(x)\,dx$ is linear on $V.$

Let $V=\mathbb{R}[x]$ be the vector space of polynomials with real coefficients.
Then the function $V\to \mathbb{R}$ given by $f\mapsto \frac{df}{dx}(0)$ is linear on $V.$

Note that as $f\in V^*$ is linear, one has $f(\alpha v)=\alpha f(v)$ for any $v\in V, \alpha\in \mathbb{F}.$ Thus we have $m_\alpha:V^*\to V^*$ defined by $m_\alpha(f)(v)=f(\alpha v),$ so that $m_\alpha(m_\beta(f))=m_{\alpha\beta}(f).$ To simplify notation, we will write $\alpha f$ instead of $m_\alpha(f).$ As well, we can define $(f+g)(v)=f(v)+g(v)$ for any $f,g\in V^*,$ and more generally, $(\alpha f+\beta g)(v)=\alpha f(v)+\beta g(v).$ And there is the zero function $0(v)=0$ for any $v\in V.$ Thus we have all the ingredients of a vector space, as can easily be checked.
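To make the pointwise operations concrete, here is a small Python sketch (illustrative only, not from the text above; the two functionals on $V=\mathbb{Q}^2$ are arbitrary choices, modelled as Python functions):

```python
# Sketch: elements of V* modelled as Python functions on V = Q^2,
# with the vector-space operations on V* defined pointwise.

f = lambda v: v[0] + 2 * v[1]       # f(v) = v_1 + 2 v_2
g = lambda v: 3 * v[0] - v[1]       # g(v) = 3 v_1 - v_2

def add(f, g):
    """(f + g)(v) = f(v) + g(v)."""
    return lambda v: f(v) + g(v)

def scale(alpha, f):
    """(alpha f)(v) = alpha * f(v)."""
    return lambda v: alpha * f(v)

zero = lambda v: 0                  # the zero functional

# Check the axioms on a sample vector: h = 2f - g, and f + 0 = f.
h = add(scale(2, f), scale(-1, g))
assert h((1, 2)) == 2 * f((1, 2)) - g((1, 2))
assert add(f, zero)((2, 3)) == f((2, 3))
```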

$V^*$ is a vector space over $\mathbb{F}.$ It is called the dual space of $V.$

So far, we haven’t used the linearity of our functions at all (we actually did not even need the fact that $\alpha f(v)=f(\alpha v)$). Indeed, any set of functions $V\to \mathbb{F}$ closed under pointwise addition and multiplication by scalars would form a vector space.
What makes the dual space so special is that to define $f\in V^*$ it suffices to specify the values $f(e_i)$ on a basis $\{e_i\}$ of $V.$ Indeed, $f(\sum_i \alpha_i e_i)=\sum_i \alpha_i f(e_i),$ so we can compute $f(v)$ for any $v=\sum_i \alpha_i e_i$ once we know the $f(e_i)$'s.
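As a quick illustration (the values $f(e_i)$ below are arbitrary choices, not from the text), a functional on $\mathbb{F}^3$ is evaluated anywhere once its values on the standard basis are fixed:

```python
# Sketch: a functional on F^3 determined by its values on the standard basis.
f_on_basis = [4, -1, 2]   # the chosen values f(e_1), f(e_2), f(e_3)

def f(v):
    """f(v) = sum_i alpha_i f(e_i) for v = sum_i alpha_i e_i."""
    return sum(a * fe for a, fe in zip(v, f_on_basis))

# Linearity gives f on every vector, e.g. v = 2 e_1 + 3 e_3:
assert f([2, 0, 3]) == 2 * 4 + 3 * 2
```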

Thus for a finite-dimensional vector space $V$ one sees a bijection (dependent upon the choice of a basis in $V$) between $V$ and $V^*.$ This bijection, which is even an isomorphism of vector spaces, is defined by the dual basis of $V^*$ given by the coordinate functions $x_i=\epsilon_i(x),$ where the $x_i$'s are the coefficients of $x\in V$ in the decomposition of $x$ in the basis $\{e_i\}.$

Finite-dimensionality is crucial here. E.g. let us consider the vector space $V=\mathbb{Z}_2[x]$ of polynomials over $\mathbb{Z}_2.$ It is a countable set: one can view it as the set of infinite 0-1 strings, with only finitely many 1’s occurring in each string. On the other hand, $V^*$ can be viewed as the set of all the infinite 0-1 strings (recording the values on the basis $\{1,x,x^2,\dots\}$), which is uncountable, so there cannot be a bijection between $V$ and $V^*.$

Given $v\in V,$ one can define a function $f_v:V^*\to \mathbb{F},$ as follows: $f_v(g):=g(v).$ It is linear, as $f_v(\alpha g+\beta h)=\alpha g(v)+\beta h(v)=\alpha f_v(g)+\beta f_v(h).$ Here we do not see any dependence on the choice of a basis in $V,$ and we have

For finite-dimensional $V,$ the vector space $V^{**}$ of linear functions on $V^*$ is (canonically) isomorphic to $V,$ via the mapping $v\mapsto f_v.$

Indeed, we see immediately that $f_{\alpha v+\beta w}=\alpha f_v+\beta f_w,$ so we only need to check that this mapping is bijective. Let $\{e_i\}$ be a basis in $V$ and $\{\epsilon_i\}$ its dual basis in $V^*.$ Then $f_{e_i}(\epsilon_j)=1$ if $i=j$ and 0 otherwise. Thus $\{ f_{e_i}\}$ is the basis of $V^{**}$ dual to the basis $\{\epsilon_i\}$ of $V^*,$ and the mapping $v\mapsto f_v$ sends the vector with coordinates $v_i$ in the basis $\{e_i\}$ to the vector with the same coordinates in the basis $\{ f_{e_i}\}$ of $V^{**}.$ Hence this mapping is bijective.

In view of the latter, we can identify $V$ with $V^{**},$ and write $v(g)$ instead of $f_v(g).$ For $v\neq 0,$ the set of $g\in V^*$ such that $v(g)=0$ is a subspace, called the annihilator of $v,$ of dimension $n-1=dim(V)-1.$ More generally, the following holds.

Let $U$ be a subspace of $V,$ and $U^0:=\{g\in V^*\mid g(u)=0\text{ for all } u\in U\}.$ Then the annihilator $U^0$ of $U$ is a subspace of $V^*$ of dimension $dim(V)-dim(U).$

Indeed, we can choose a basis $\{e_i\}$ in $V$ so that $\{e_1,\dots,e_{k}\}$ is a basis of $U,$ where $dim(U)=k.$ Then we have the dual basis $\{\epsilon_i\}$ of $V^*,$ and $U^0$ is the subspace with the basis $\{\epsilon_{k+1},\dots,\epsilon_n\}.$

In view of this, each $U$ can be obtained as the set of solutions of a system of homogeneous linear equations $g(u)=0,$ for $g\in U^0,$ of rank $dim(V)-dim(U).$
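Concretely, given a basis of $U$ written as the rows of a matrix $A,$ the annihilator is the null space $U^0=\{g\mid Ag=0\},$ of dimension $dim(V)-dim(U).$ A minimal sketch over $\mathbb{Q}$ (the `null_space` helper is mine, via Gaussian elimination):

```python
# Sketch: computing a basis of the annihilator U^0 over Q.
# Rows of A form a basis of U; g lies in U^0 iff A g = 0.
from fractions import Fraction

def null_space(A):
    """Basis of {g : A g = 0}, by Gaussian elimination over Q."""
    A = [[Fraction(x) for x in row] for row in A]
    m, n = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, m) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        A[r] = [x / A[r][c] for x in A[r]]       # normalize pivot row
        for i in range(m):
            if i != r and A[i][c] != 0:          # clear the column
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    basis = []
    for free in (c for c in range(n) if c not in pivots):
        g = [Fraction(0)] * n
        g[free] = Fraction(1)
        for row, c in zip(A, pivots):
            g[c] = -row[free]
        basis.append(g)
    return basis

# U = span{(1,1,0), (0,1,1)} in Q^3; dim(U^0) = 3 - 2 = 1.
B = null_space([[1, 1, 0], [0, 1, 1]])
assert B == [[1, -1, 1]]
```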

### Dual spaces and annihilators under a basis change
Let $X\in GL_n(\mathbb{F})$ be a linear transformation of $V,$ and $U$ a subspace of $V.$ Then $X(U)=\{ Xu\mid u\in U\}$ is a subspace. How does $X$ act on $U^0$? Writing $u=\sum_i u_i e_i$ in a basis $\{e_i\},$ for any $g=\sum_i g_i\epsilon_i\in U^0$ in the dual basis $\{\epsilon_i\}$ we get the equation $\sum_i g_i u_i =0,$ i.e. $g^T u=0.$ Thus, considering $X$ as a matrix, we need $g^T YX u=0,$ where $Y$ denotes the action of $X$ on $V^*.$ It follows that $YX=1_{GL_n},$ i.e. $Y=X^{-1}.$ Since here $Y$ acts on $V^*$ by right multiplication, and not by left multiplication, to get the usual left action we have to take the transpose, too.

$X\in GL_n(\mathbb{F})$ acts on $V^*$ as $(X^{-1})^T.$
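This can be checked numerically. Below is a small sketch over $\mathbb{Q}$ with a $2\times 2$ example (the matrix $X,$ the vector $u$ and the functional $g$ are arbitrary choices), verifying that the pairing is preserved: $\langle (X^{-1})^T g,\, Xu\rangle = \langle g, u\rangle.$

```python
# Sketch: g -> (X^{-1})^T g is the dual action, i.e. the pairing
# of functionals with vectors is preserved.
from fractions import Fraction

def inv2(X):
    """Inverse of a 2x2 matrix over Q."""
    (a, b), (c, d) = X
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

X = [[2, 1], [1, 1]]          # an invertible matrix, det = 1
u = [3, -2]                   # a vector in V
g = [5, 7]                    # a functional in V*, in the dual basis

Y = transpose(inv2(X))        # the action of X on V*
lhs = sum(a * b for a, b in zip(matvec(Y, g), matvec(X, u)))
rhs = sum(a * b for a, b in zip(g, u))
assert lhs == rhs             # <Yg, Xu> = <g, u>
```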

### An example
Let $V=\mathbb{F}^3.$ We work in the standard basis $\{e_1,e_2,e_3\}$ of $V.$ Then the dual basis of $V^*$ is $\{\epsilon_1,\epsilon_2,\epsilon_3\}$, so that $\epsilon_i((u_1,u_2,u_3)^T)=u_i.$
Let $G$ be the group of matrices $G=\left\{ \begin{pmatrix} 1&x&y\\ 0&z&u\\ 0&t&w \end{pmatrix} \mid x,y,z,u,t,w\in\mathbb{F},\ zw-ut\neq 0 \right\}$ (the condition $zw-ut\neq 0$ makes the matrices invertible). In its left action on $V$ by multiplication, it fixes the vector $e_1=(1,0,0)^T.$ Let $U$ be the 1-dimensional subspace of $V$ generated by $e_1.$ Then $U^0$ is generated by $\epsilon_2$ and $\epsilon_3.$ The group $G$ preserves $U^0$ in its action on $V^*.$ As $U^0$ is 2-dimensional, there should be a nontrivial kernel in this action, and indeed, it consists of the elements of the form $\begin{pmatrix} 1&x&y\\ 0&1&0\\ 0&0&1 \end{pmatrix}.$

A particularly simple case is $\mathbb{F} = \mathbb{Z}_2.$ Then $G$ is isomorphic to $S_4,$ the symmetric group on 4 letters, as can be seen in its action on the 4 elements of $V^*$ outside $U^0.$ On the other hand, it acts on the 3 nonzero elements of $U^0$ as $S_3.$
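Both counts can be verified by brute force. Here is a sketch (the helper names are mine) that enumerates $G$ over $\mathbb{Z}_2$ and its action, as $(X^{-1})^T,$ on the 4 functionals outside $U^0$:

```python
# Sketch: G over F_2 has order 24 and acts faithfully on the 4
# functionals in V* outside U^0, hence G is isomorphic to S_4.
from itertools import product

def matmul(A, B):
    return tuple(tuple(sum(a * b for a, b in zip(row, col)) % 2
                       for col in zip(*B)) for row in A)

I3 = ((1, 0, 0), (0, 1, 0), (0, 0, 1))

# Matrices (1 x y / 0 z u / 0 t w) over F_2 with zw - ut = 1 (mod 2).
G = [((1, x, y), (0, z, u), (0, t, w))
     for x, y, z, u, t, w in product((0, 1), repeat=6)
     if (z * w + u * t) % 2 == 1]
assert len(G) == 24                 # |G| = |S_4| = 24

def inv(X):
    """Inverse in G, found by brute force (G is tiny)."""
    return next(Y for Y in G if matmul(X, Y) == I3)

def act(X, g):
    """Action of X on g in V*: g -> (X^{-1})^T g, computed mod 2."""
    Y = tuple(zip(*inv(X)))         # (X^{-1})^T
    return tuple(sum(a * b for a, b in zip(row, g)) % 2 for row in Y)

# The 4 functionals outside U^0 = span{eps_2, eps_3}: first coordinate 1.
outside = [g for g in product((0, 1), repeat=3) if g[0] == 1]
perms = {tuple(act(X, g) for g in outside) for X in G}
assert len(perms) == 24             # faithful on 4 points, so G = S_4
```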