## Archive for April, 2009

### Prime ideals and rings of fractions

10 April, 2009

The concept of a prime ideal in a commutative ring $R$ with $1$ is one of several natural generalisations of the concept of a prime number.

An ideal $I\subset R$ is called prime if for any $u,v\in R$ the following holds:
$uv\in I$ implies that at least one of these elements, $u$ and $v,$ is in $I.$

E.g. for an integer $p>1,$ the principal ideal $(p)\subset \mathbb{Z}$ is prime if and only if $p$ is a prime number.
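
This can be sanity-checked by brute force; a small Python sketch (the helper name and search bound are ours, for illustration):

```python
# Check the defining property of the ideal (p) in Z on a small range:
# uv in (p) must imply u in (p) or v in (p).
def ideal_is_prime(p, bound=30):
    """Test (for |u|, |v| <= bound) whether (p) behaves like a prime ideal of Z."""
    for u in range(-bound, bound + 1):
        for v in range(-bound, bound + 1):
            if (u * v) % p == 0 and u % p != 0 and v % p != 0:
                return False  # counterexample: uv in (p), but neither factor is
    return True

print(ideal_is_prime(7))   # True: 7 is prime
print(ideal_is_prime(6))   # False: 2*3 is in (6), but neither 2 nor 3 is
```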

A maximal ideal $I\subset R$ is prime.

Indeed, let $uv\in I.$ Assume that $u\not\in I.$ We need to show that then $v\in I.$ If this were not the case, then $u+I$ and $v+I$ would be two non-zero elements of the ring $R/I$ whose product $(u+I)(v+I)=uv+I$ is zero in $R/I.$ But this is impossible: as $I$ is maximal, $R/I$ is a field, and a field has no zero-divisors. Thus $v\in I,$ as claimed.

Analysing this proof, one can easily see that

If $I\subset R$ is prime then $R/I$ has no zero-divisors, i.e. it is an integral domain.

A further important property of prime ideals is that they are radical, i.e.
$I=\sqrt{I},$ where the radical $\sqrt{I}$ of the ideal $I$ is $\sqrt{I}=\{x\in R\mid \exists k\geq 1 : x^k\in I\}.$ Indeed, $x^k=x\cdot x^{k-1}\in I$ implies that either $x$ or $x^{k-1}$ is in $I,$ and we derive $x\in I$ by applying this reduction repeatedly (i.e. by induction on $k$).
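
For principal ideals of $\mathbb{Z}$ this is concrete: $x^k\in(n)$ forces every prime divisor of $n$ to divide $x,$ so $\sqrt{(n)}$ is generated by the product of the distinct primes dividing $n.$ A small sketch (the function name is ours):

```python
# For I = (n) in Z, x^k in (n) forces every prime divisor of n to divide x,
# so sqrt((n)) = (rad(n)), where rad(n) is the product of the distinct
# primes dividing n.
def radical_generator(n):
    rad, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            rad *= d
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:          # leftover prime factor
        rad *= n
    return rad

print(radical_generator(12))  # 6: sqrt((12)) = (6), so (12) is not radical
print(radical_generator(7))   # 7: the prime ideal (7) is radical
```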

Yet another interesting observation is that $R\setminus I$ is multiplicatively closed: if $u,v\not\in I$ then $uv\not\in I,$ by the definition of a prime ideal.

Rings of fractions

A subset $S\subset R$ is called multiplicatively closed (or multiplicative) if $0\not\in S,$ $1\in S,$ and $uv\in S$ for any $u,v\in S.$

Given a multiplicatively closed set $S\subset R,$ one defines
a relation on $R\times S,$ as follows:

${\displaystyle (y,t)\equiv (x,s)\quad\text{iff there exists }u\in S: uys=uxt.}$

It is not hard to show that $\equiv$ is an equivalence relation.
To simplify the notation, write the equivalence class of the pair $(x,s)$ as $\frac{x}{s}.$ We then define

$S^{-1}R=(R\times S)/\equiv,$ the ring of fractions of $R$ w.r.t. $S,$ with addition and multiplication given by the rules
$\frac{x}{s}+\frac{y}{t}=\frac{xt+ys}{st},$ $\frac{x}{s}\cdot\frac{y}{t}=\frac{xy}{st}.$

It is easy to check that this is well-defined. We also have

$\phi: R\to S^{-1}R,$ given by $\phi(x)=\frac{x}{1},$ is a ring homomorphism.

Note that $\phi$ need not be injective, i.e. $\phi(R)\cong R$ need not hold. Indeed, if $x\in R$ is a zero-divisor such that $xu=0$ for some $u\in S,$ then $(x,1)\equiv (0,1),$ and so $\phi(x)=\frac{x}{1}=\frac{0}{1}=0_{S^{-1}R}.$
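
A minimal sketch of this phenomenon, assuming the toy choice $R=\mathbb{Z}/6$ and $S=\{1,3\}$ (note $3\cdot 3=3$ in $\mathbb{Z}/6,$ so $S$ is multiplicative and avoids $0$):

```python
# Toy model: R = Z/6 with multiplicative set S = {1, 3} (3*3 = 3 mod 6,
# and 0 is not in S).  The pair (y,t) is equivalent to (x,s) iff
# u*y*s = u*x*t in Z/6 for some u in S.
N = 6
S = {1, 3}

def equivalent(pair1, pair2):
    (y, t), (x, s) = pair1, pair2
    return any((u * y * s) % N == (u * x * t) % N for u in S)

# 2 is a zero-divisor killed by 3 in S, so phi(2) = 2/1 equals 0/1 = 0.
print(equivalent((2, 1), (0, 1)))   # True: phi is not injective here
```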

The best-known example is the case of $R$ being an integral domain and $S=R\setminus\{0\}.$ Then $S^{-1}R$ is a field, called the field of fractions of $R.$

Examples:

• $\mathbb{Q}$ is the field of fractions of $\mathbb{Z}$
• the field of fractions of the ring of polynomials $\mathbb{F}[T]$ over a field $\mathbb{F}$ is the field $\mathbb{F}(T)$ of rational functions over $\mathbb{F}.$
• Let $x\in R$ be non-nilpotent. Then $S=\{1,x,x^2,x^3,\dots\}$ is a multiplicative set. Moreover, $S^{-1}R\cong R[T]/(Tx-1)$ (it is not completely trivial to prove this, though). Intuitively, we make the variable $T$ behave like the inverse of $x,$ as $Tx=1$ in this ring.
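
For instance, with $R=\mathbb{Z}$ and $x=2,$ the ring $S^{-1}R$ consists of the dyadic rationals $n/2^k,$ and Python's `fractions.Fraction` lets us watch $T=\frac{1}{2}$ play the role of the inverse of $x$ (a sketch of this special case only):

```python
from fractions import Fraction

# Toy instance of S^{-1}R = R[T]/(Tx - 1): take R = Z and x = 2, so S is the
# set of powers of 2 and S^{-1}R is the ring of dyadic rationals n / 2^k.
T = Fraction(1, 2)     # the class of T acts as the inverse of x = 2
print(T * 2)           # 1: the relation Tx = 1 holds
print(3 * T**3)        # 3/8, a typical element n / 2^k
```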

Now let us look at the case $S=R\setminus I,$ for $I$ a nonzero prime ideal. In this case $S^{-1}R$ is denoted by $R_I$ and called the localisation of $R$ at $I.$

Example: Let $\mathbb{F}[T]$ be the ring of polynomials over a field $\mathbb{F},$ and $a\in \mathbb{F}.$ Then $I=(T-a)$ is prime, and $R_I=\mathbb{F}[T]_{(T-a)}$ is equal to $\{\frac{f}{g}\in \mathbb{F}(T)\mid (T-a)\nmid g \}.$

The ring $R_I$ has a unique maximal ideal, namely $IR_I.$

It suffices to show that $\frac{x}{s}$ is invertible in $R_I$ iff ${x}\not\in I.$ Indeed, if $x\not\in I,$ i.e. $x\in S,$ then $\frac{x}{s}\cdot\frac{s}{x}=1.$ On the other hand, if $\frac{x}{s}\cdot\frac{y}{t}=1$ then there exists $u\in S$ such that $uxy=ust.$ As $u,s,t\in S,$ we have $ust\not\in I,$ and thus $uxy\not\in I;$ so $x\not\in I$ (if it were, $uxy$ would be in $I,$ as $I$ is an ideal). Hence the non-invertible elements of $R_I$ are precisely those of $IR_I,$ and every proper ideal, consisting of non-invertible elements, is contained in $IR_I.$
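
A sketch for the localisation $\mathbb{Z}_{(5)}$ of $\mathbb{Z}$ at the prime ideal $(5)$ (helper names are ours): its elements are the rationals $x/s$ with $5\nmid s,$ and such a fraction is invertible exactly when $5\nmid x$ as well.

```python
from fractions import Fraction

# Z_(5): fractions x/s with denominator s not divisible by 5.  Such a
# fraction is invertible in Z_(5) iff its numerator x is not divisible
# by 5 either (then s/x lies in the ring too).
def in_localization(q):
    return Fraction(q).denominator % 5 != 0

def invertible(q):
    q = Fraction(q)
    return in_localization(q) and q != 0 and q.numerator % 5 != 0

print(invertible(Fraction(3, 7)))   # True: its inverse 7/3 is again in Z_(5)
print(invertible(Fraction(5, 2)))   # False: 5/2 lies in the maximal ideal 5 Z_(5)
```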

### Dual spaces

5 April, 2009

There is one glaring omission in our Linear Algebra curriculum – it avoids talking about the dual space of a vector space. This makes talking about the relationship between subspaces and the equations that define them exceedingly difficult. Better late than never, so here it comes.

Let $V$ be a vector space over a field $\mathbb{F}.$ Denote by $V^*$ the set of linear functions $V\to \mathbb{F}.$

Examples
Let $V=C[a,b],$ the space of continuous functions on $[a,b].$ Then the function $\int: V\to \mathbb{R}$ given by $f\mapsto \int_a^b f(x)\,dx$ is linear on $V.$

Let $V=\mathbb{R}[x]$ be the vector space of polynomials with real coefficients.
Then the function $V\to \mathbb{R}$ given by $f\mapsto \frac{df}{dx}(0)$ is linear on $V.$
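
The second example can be checked directly, representing polynomials by coefficient lists (a sketch; helper names are ours):

```python
# Polynomials represented by coefficient lists [a0, a1, a2, ...]; the
# functional f -> df/dx(0) simply reads off the coefficient of x.
def deriv_at_zero(coeffs):
    return coeffs[1] if len(coeffs) > 1 else 0

def add(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

p, q = [1, 2, 3], [0, 5]            # p = 1 + 2x + 3x^2,  q = 5x
# Linearity: the value on a sum equals the sum of the values.
print(deriv_at_zero(add(p, q)))              # 7
print(deriv_at_zero(p) + deriv_at_zero(q))   # 7 as well
```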

Note that as $f\in V^*$ is linear, one has $f(\alpha v)=\alpha f(v)$ for any $v\in V, \alpha\in \mathbb{F}.$ Thus we have $m_\alpha:V^*\to V^*$ defined by $m_\alpha(f)(v)=f(\alpha v),$ so that $m_\alpha(m_\beta(f))=m_{\alpha\beta}(f).$ To simplify notation, we will write $\alpha f$ instead of $m_\alpha(f).$ As well, we can define $(f+g)(v)=f(v)+g(v)$ for any $f,g\in V^*,$ and more generally $(\alpha f+\beta g)(v)=\alpha f(v)+\beta g(v).$ And there is the zero function $0(v)=0$ for any $v\in V.$ Thus we have all the ingredients of a vector space, as can be easily checked.

$V^*$ is a vector space over $\mathbb{F}.$ It is called the dual space of $V.$

So far, we haven’t really used the linearity of our functions (we did not actually need the identity $\alpha f(v)=f(\alpha v)$: scalar multiplication could have been defined directly by $(\alpha f)(v)=\alpha f(v)$). Indeed, any set of functions $V\to \mathbb{F}$ closed under addition and scalar multiplication would form a vector space.
What makes the dual space so special is that to define $f\in V^*$ it suffices to specify the values $f(e_i)$ on a basis $\{e_i\}$ of $V.$ Indeed, $f(\sum_i \alpha_i e_i)=\sum_i \alpha_i f(e_i),$ so we can compute $f(v)$ for any $v=\sum_i \alpha_i e_i,$ once we know the $f(e_i)$'s.
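
In coordinates this extension is just a dot product; a sketch for $V=\mathbb{F}^3$ (with $\mathbb{F}$ modelled by Python numbers, helper name ours):

```python
# A functional on F^3 is pinned down by its values c_i = f(e_i) on a basis:
# it extends linearly as f(sum a_i e_i) = sum a_i c_i, i.e. a dot product.
def functional_from_basis_values(c):
    return lambda v: sum(ci * vi for ci, vi in zip(c, v))

f = functional_from_basis_values([2, -1, 3])   # f(e1)=2, f(e2)=-1, f(e3)=3
print(f([1, 0, 0]))    # 2: recovers f(e1)
print(f([1, 2, 3]))    # 9 = 2*1 - 1*2 + 3*3
```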

Thus for a finite-dimensional vector space $V$ one obtains a bijection between $V$ and $V^*$ (dependent upon the choice of a basis in $V$). This bijection, which is even an isomorphism of vector spaces, is given by the dual basis $\{\epsilon_i\}$ of $V^*,$ consisting of the coordinate functions $\epsilon_i(x)=x_i,$ where the $x_i$'s are the coefficients of $x\in V$ in the decomposition of $x$ in the basis $\{e_i\}.$

Finite-dimensionality is crucial here. E.g. consider the vector space $V=\mathbb{F}_2[x]$ of polynomials over the two-element field. It is countable: one can view it as the set of infinite 0-1 strings with only finitely many 1's occurring in each string. On the other hand, $V^*$ can be viewed as the set of all infinite 0-1 strings (a functional is determined by its values on the basis $\{1,x,x^2,\dots\}$), which is uncountable, so there cannot be a bijection between $V$ and $V^*.$

Given $v\in V,$ one can define a function $f_v:V^*\to \mathbb{F},$ as follows: $f_v(g):=g(v).$ It is linear, as $f_v(\alpha g+\beta h)=\alpha g(v)+\beta h(v)=\alpha f_v(g)+\beta f_v(h).$ Here we do not see any dependence on the choice of a basis in $V,$ and we have

For finite-dimensional $V,$ the vector space $V^{**}$ of linear functions on $V^*$ is (canonically) isomorphic to $V,$ via the mapping $v\mapsto f_v.$

Indeed, we see immediately that $f_{\alpha v+\beta w}=\alpha f_v+\beta f_w,$ so we only need to check that this mapping is bijective. Let $\{e_i\}$ be a basis in $V$ and $\{\epsilon_i\}$ its dual basis in $V^*.$ Then $f_{e_i}(\epsilon_j)=\epsilon_j(e_i)=1$ if $i=j$ and $0$ otherwise. Thus $\{ f_{e_i}\}$ is a basis of $V^{**},$ dual to the basis $\{\epsilon_i\}$ of $V^*,$ and the mapping $v\mapsto f_v$ sends the vector with coordinates $v_i$ in the basis $\{e_i\}$ to the vector with the same coordinates in the basis $\{ f_{e_i}\}$ of $V^{**}.$ Hence this mapping is bijective.

In view of the latter, we can identify $V$ with $V^{**}$ and write $v(g)$ instead of $f_v(g).$ For $v\neq 0,$ the set of $g\in V^*$ such that $v(g)=0$ is a subspace, called the annihilator of $v,$ of dimension $n-1=dim(V)-1.$ More generally, the following holds.

Let $U$ be a subspace of $V,$ and $U^0:=\{g\in V^*\mid g(u)=0\text{ for all } u\in U\}.$ Then the annihilator $U^0$ of $U$ is a subspace of $V^*$ of dimension $dim(V)-dim(U).$

Indeed, we can choose a basis $\{e_i\}$ in $V$ so that $\{e_1,\dots,e_{k}\}$ is a basis of $U,$ where $dim(U)=k.$ Then we have the dual basis $\{\epsilon_i\}$ of $V^*,$ and $U^0$ is the subspace with the basis $\{\epsilon_{k+1},\dots,\epsilon_n\}.$

In view of this, each subspace $U$ can be obtained as the set of solutions of a system of homogeneous linear equations $g(u)=0,$ with $g$ running over a basis of $U^0,$ a system of rank $dim(V)-dim(U).$
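
Over a small field everything can be enumerated; a sketch for $V=\mathbb{F}_2^3$ and $U$ spanned by $e_1,$ identifying functionals with their coefficient vectors:

```python
from itertools import product

# V = F_2^3 and U spanned by e1; identify a functional g with its coefficient
# vector, so g(v) is a dot product mod 2.  Enumerate the annihilator U^0.
def pairing(g, v):
    return sum(a * b for a, b in zip(g, v)) % 2

U = [(0, 0, 0), (1, 0, 0)]                      # the subspace spanned by e1
U0 = [g for g in product(range(2), repeat=3)
      if all(pairing(g, u) == 0 for u in U)]
print(len(U0))   # 4 = 2^2 vectors, i.e. dim(U^0) = 2 = dim(V) - dim(U)
```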

Dual spaces and annihilators under a basis change
Let $X\in GL_n(\mathbb{F})$ be an invertible linear transformation of $V,$ and $U$ a subspace of $V.$ Then $X(U)=\{ Xu\mid u\in U\}$ is a subspace. How does $X$ act on $U^0?$ Writing $u=\sum_i u_i e_i$ in a basis $\{e_i\}$ and $g=\sum_i g_i\epsilon_i\in U^0$ in the dual basis $\{\epsilon_i\},$ the condition $g(u)=0$ becomes $\sum_i g_i u_i =g^T u=0,$ viewing $g$ and $u$ as coordinate columns. If $Y$ denotes the matrix of the action of $X$ on $V^*,$ then we need $(Yg)^T (Xu)=g^T Y^T X u$ to equal $g^T u$ for all $g$ and $u,$ i.e. $Y^T X=1_{GL_n},$ giving $Y=(X^{-1})^T.$ Note that the transpose appears because functionals pair with vectors via $g^T u,$ not by left multiplication.

$X\in GL_n(\mathbb{F})$ acts on $V^*$ as $(X^{-1})^T.$
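A minimal numerical sketch of this rule, for a hand-picked $2\times 2$ shear over the rationals (all names are ours):

```python
# Basis-change rule: if a vector v is mapped to X v, a functional with
# coordinate column g must be mapped to (X^{-1})^T g, so that the pairing
# g^T v stays unchanged.
def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

def dot(g, v):
    return sum(a * b for a, b in zip(g, v))

X      = [[1, 1], [0, 1]]      # a shear
Xinv_T = [[1, 0], [-1, 1]]     # (X^{-1})^T, computed by hand for this X

g, v = [2, 3], [1, 4]
print(dot(g, v))                                 # 14
print(dot(matvec(Xinv_T, g), matvec(X, v)))      # 14 again: pairing preserved
```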

An example.
Let $V=\mathbb{F}^3.$ We work in the standard basis $\{e_1,e_2,e_3\}$ of $V.$ Then the dual basis of $V^*$ is $\{\epsilon_1,\epsilon_2,\epsilon_3\}$, so that $\epsilon_i((u_1,u_2,u_3)^T)=u_i.$
Let $G$ be the group of matrices $G=\left\{ \begin{pmatrix} 1&x&y\\ 0&z&u\\ 0&t&w \end{pmatrix} \mid x,y,z,u,t,w\in\mathbb{F},\ zw-ut\neq 0 \right\}$ (the condition $zw-ut\neq 0$ makes the matrices invertible, so that $G$ is indeed a group). In its left action on $V$ by multiplication, $G$ fixes the vector $e_1=(1,0,0)^T.$ Let $U$ be the 1-dimensional subspace of $V$ generated by $e_1.$ Then $U^0$ is generated by $\epsilon_2$ and $\epsilon_3.$ The group $G$ preserves $U^0$ in its action on $V^*.$ As $U^0$ is 2-dimensional, there should be a nontrivial kernel in this action, and indeed, it consists of the elements of the form $\begin{pmatrix} 1&x&y\\ 0&1&0\\ 0&0&1 \end{pmatrix}.$

A particularly simple case is $\mathbb{F} = \mathbb{Z}_2.$ Then $G$ is isomorphic to $S_4,$ the symmetric group on 4 letters, as can be seen from its faithful action on the 4 elements of $V^*$ outside $U^0.$ On the other hand, it acts on the 3 nonzero elements of $U^0$ as $S_3.$
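
As a sanity check of the order, one can enumerate $G$ over $\mathbb{Z}_2$ directly (assuming, as above, that membership requires the lower $2\times 2$ block to be invertible):

```python
from itertools import product

# Over F_2, G consists of matrices [[1,x,y],[0,z,u],[0,t,w]] whose lower 2x2
# block is invertible (det = zw - ut nonzero mod 2).  Counting them gives
# 4 choices of (x,y) times |GL_2(F_2)| = 6, i.e. 24 = |S_4| elements.
count = sum(1 for x, y, z, u, t, w in product(range(2), repeat=6)
            if (z * w - u * t) % 2 == 1)
print(count)   # 24
```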