Archive for February, 2009

Nonabelian groups

25 February, 2009

Dropping the commutativity of the operation makes groups harder to work with, but allows us to capture the common properties of such an important class of structures as the sets of bijections (of various kinds) of a set to itself.

A group is a pair (G,\cdot), where G is a set and \cdot is a binary operation on G satisfying the following axioms:

  • \cdot is associative, i.e. x\cdot (y\cdot z)=(x\cdot y)\cdot z for all x,y,z\in G,
    and thus we can drop brackets x\cdot (y\cdot z)=x\cdot y\cdot z without introducing any ambiguity;
  • there exists a neutral element 1=1_G\in G, i.e. a special element satisfying 1\cdot x=x\cdot 1=x for all x\in G. It must not be confused with the integer number 1, which in general has nothing to do with 1_G, although it may share similar properties (the non-zero rationals w.r.t. multiplication are an example of a group, and there these two things are indeed the same).
  • for any x\in G there exists x'\in G such that x\cdot x'=x'\cdot x=1. We denote x' by x^{-1}.

To explain the title, note that these axioms are essentially the axioms of an abelian group, with the commutativity axiom dropped, and the operation written as \cdot instead of +. (Respectively, the notation for the neutral element and for the inverse is adjusted.) We will often simply drop the \cdot from expressions, i.e. write ab instead of a\cdot b, etc.

A number of useful rules can be easily derived from the axioms, e.g. the following:

The equation a\cdot x=b for any given a,b\in G has the unique solution x=a^{-1}\cdot b.
Thus we can cancel: x\cdot y=x\cdot z iff y=z, for all x,y,z\in G.

Indeed, by associativity and properties of the inverse and the neutral element, we have a^{-1}(ax)=(a^{-1}a)x=1\cdot x=x=a^{-1}b, thus such an x exists and is unique.

This immediately implies e.g. the following:

  1. x^{-1} is unique, for any given x\in G
  2. 1\in G is the only neutral element in G
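These derived rules are easy to verify by machine on a small example. Below is a quick Python sketch (the encoding of permutations of \{0,1,2\} as tuples is ours, not part of the text) checking, in S_3, that a\cdot x=b always has exactly one solution, that inverses are unique, and that the neutral element is unique:

```python
from itertools import permutations

# Elements of S_3 as tuples: p[i] is the image of i (0-indexed).
G = list(permutations(range(3)))

def op(p, q):
    """Composition (p.q)(i) = p(q(i)) -- the group operation."""
    return tuple(p[q[i]] for i in range(3))

e = (0, 1, 2)  # the identity permutation, the neutral element

# a.x = b has exactly one solution x for every pair (a, b).
for a in G:
    for b in G:
        sols = [x for x in G if op(a, x) == b]
        assert len(sols) == 1

# Each element has exactly one inverse ...
for a in G:
    assert len([x for x in G if op(a, x) == e and op(x, a) == e]) == 1

# ... and e is the only neutral element.
neutrals = [u for u in G if all(op(u, x) == x == op(x, u) for x in G)]
assert neutrals == [e]
```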

The next important concept is of homomorphism and isomorphism of groups. These should come as no surprise after the abelian group homomorphisms and isomorphisms, and ring homomorphisms and isomorphisms.

Given two groups (A,\cdot _A) and (B,\cdot _B), a function \phi:B\to A satisfying \phi(b\cdot _B b')=\phi(b)\cdot _A\phi(b') is called a homomorphism from B to A.

When \phi:B\to A is a bijection, it is called an isomorphism, and one writes (A,\cdot _A)\cong (B,\cdot _B), or just A\cong B.

From the point of view of abstract algebra, two isomorphic groups are hardly distinguishable, although of course they can appear in different disguises.

Groups of bijections
Quite often, nonabelian groups arise in the following context. Let \Omega be a set and G be a set of bijections f:\Omega\to\Omega satisfying the following properties:

  • if f,g\in G then f\circ g, the composition of f and g, is in G
  • id, the identity bijection, is in G
  • if f\in G then f^{-1}, the inverse of f, is in G

Then G is a group, usually denoted (G,\Omega) to emphasise the role of \Omega. When |\Omega|<\infty, we call (G,\Omega) a group of permutations (as bijections of a finite set are just permutations.)

This gives many examples of nonabelian groups:

  1. S_n, the group of all bijections (a.k.a. permutations) of the finite set \{1,\dots,n\}, usually called the symmetric group (of an n-element set). Note that S_2 is abelian, of order |S_2|=2, but S_n ceases to be abelian for n\geq 3. More generally, |S_n|=n!
  2. Let V be an n-dimensional vector space over a field \mathbb{F}. The linear bijections V\to V form a group, denoted by GL_n(\mathbb{F}) or GL(n,\mathbb{F}), and called the general linear group over \mathbb{F} (in dimension n)
  3. Dihedral group D_{2n}: the group of symmetries of the regular n-gon. It consists of 2n elements, namely n cyclic rotations and n reflections w.r.t. n symmetry axes. Sometimes also denoted by D_n. The group D_6 is isomorphic to S_3, and the group D_4 is isomorphic to \mathbb{Z}_2\times \mathbb{Z}_2.
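The claims about S_n in example 1 can be confirmed by brute-force enumeration; here is a small Python sketch (the tuple encoding of permutations is our own) checking |S_n|=n! and that S_n is abelian exactly for n\leq 2:

```python
from itertools import permutations
from math import factorial

def compose(p, q):
    """(p . q)(i) = p(q(i)) -- composition of permutations as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

for n in range(1, 6):
    G = list(permutations(range(n)))
    assert len(G) == factorial(n)          # |S_n| = n!
    abelian = all(compose(p, q) == compose(q, p) for p in G for q in G)
    assert abelian == (n <= 2)             # S_n is abelian iff n <= 2
```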

Now let us show that

every group G is isomorphic to a group of bijections.

Observe that for any g\in G the mapping L_g:G\to G defined by L_g(x)=gx, for any x\in G, is a bijection (called multiplication by g on the left). Moreover, for h\in G one has L_g\circ L_h=L_{gh}, as by associativity of the multiplication L_g(L_h(x))=L_g(hx)=ghx=(gh)x=L_{gh}(x). Similarly, (L_g)^{-1}=L_{g^{-1}}, as L_g(L_{g^{-1}}(x))=L_g(g^{-1}x)=gg^{-1}x=x. Thus L_G:=\{L_g\mid g\in G\} satisfies the group axioms, and (L_G,G) is a group of bijections. The mapping \phi : G\to L_G, \phi(g)=L_g, is a group isomorphism, as it is a bijection (check this!) and as \phi(gh)=L_{gh}=L_g\circ L_h=\phi(g)\circ\phi(h), as shown above.

Example: S_3 and L_{S_3}. Let G=S_3=\left\{ \begin{pmatrix}1&2&3\\1&2&3\end{pmatrix}, \begin{pmatrix}1&2&3\\2&1&3\end{pmatrix}, \begin{pmatrix}1&2&3\\3&2&1\end{pmatrix}\right., \left.\begin{pmatrix}1&2&3\\1&3&2\end{pmatrix}, \begin{pmatrix}1&2&3\\2&3&1\end{pmatrix}, \begin{pmatrix}1&2&3\\3&1&2\end{pmatrix}\right\}=\{\pi_1,\dots,\pi_6\}, where \begin{pmatrix}1&\dots&k\\ g_1&\dots& g_k\end{pmatrix} denotes the permutation sending \ell to g_\ell for 1\leq \ell\leq k. Then e.g. L_{\pi_2}=\begin{pmatrix}\pi_1&\pi_2&\pi_3&\pi_4&\pi_5&\pi_6\\ \pi_2&\pi_1&\pi_5&\pi_6&\pi_3&\pi_4\end{pmatrix}.
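A table like L_{\pi_2} above can be produced mechanically. The following Python sketch (with our own tuple encoding of permutations, so our enumeration is not meant to match the numbering of the \pi_i in the text) verifies the two facts used in the proof, L_g\circ L_h=L_{gh} and the injectivity of g\mapsto L_g:

```python
from itertools import permutations

S3 = list(permutations(range(3)))

def op(p, q):
    return tuple(p[q[i]] for i in range(3))

def L(g):
    """Left multiplication by g: the bijection x -> g.x of S3, as a dict."""
    return {x: op(g, x) for x in S3}

# L_g o L_h = L_{gh} for all g, h, so g -> L_g is a homomorphism ...
for g in S3:
    for h in S3:
        Lg, Lh, Lgh = L(g), L(h), L(op(g, h))
        assert all(Lg[Lh[x]] == Lgh[x] for x in S3)

# ... and it is injective, hence a bijection onto its image L_G.
tables = {tuple(sorted(L(g).items())) for g in S3}
assert len(tables) == len(S3)
```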

New groups from old: direct products (just as in the case of rings and abelian groups):

Given two groups (A,\cdot _A) and (B,\cdot _B), define the group
A\times B=(A\times B,\cdot ), the direct product of A and B, where the operation \cdot is component-wise:
(a,b)\cdot (a',b')=(a\cdot _A a',b\cdot _B b').

Similarly, one can define direct products of an arbitrary number of groups.

Subgroups

A subgroup B of A is a subset of A=(A,\cdot ) that is closed under the multiplication in A and under taking inverses. (Notation: B<A or B\leq A; the latter does not exclude B=A.) That is, for any x,y\in B also x^{-1},xy\in B, which immediately implies 1\in B.

Examples of subgroups:

  1. Let G\leq S_n, and \Sigma\subset\Omega=\{1,\dots,n\}. Let H=\{h\in G\mid (\forall s\in\Sigma) h(s)\in\Sigma\}. Then H\leq G; it is called the stabiliser of \Sigma in G.
    Note that the stabiliser of \Sigma in S_n is isomorphic to S_m\times S_{n-m}, where m=|\Sigma|.
  2. Let G=GL_n(\mathbb{F}), and H=\{h\in G\mid \det(h)=1\}. Then H\leq G, is denoted by SL_n(\mathbb{F}), and is called the special linear group.
  3. Some subgroups of the direct product G=A\times B: let H\leq A, F\leq B. Then H\times F\leq G.
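Example 1 is easy to check by brute force; e.g. with n=4 and a 2-element \Sigma, the stabiliser is indeed a subgroup, of order 2!\cdot 2!=4. A Python sketch (our own 0-indexed encoding):

```python
from itertools import permutations

n = 4
Sn = list(permutations(range(n)))
Sigma = {0, 1}                   # a 2-element subset of Omega = {0, 1, 2, 3}

def op(p, q):
    return tuple(p[q[i]] for i in range(n))

def inv(p):
    return tuple(p.index(i) for i in range(n))

# H = stabiliser of Sigma: permutations mapping Sigma into itself.
H = [h for h in Sn if all(h[s] in Sigma for s in Sigma)]

assert tuple(range(n)) in H                       # contains the identity
assert all(op(h, k) in H for h in H for k in H)   # closed under composition
assert all(inv(h) in H for h in H)                # closed under inverses
assert len(H) == 2 * 2                            # |S_2 x S_2| = 2! * 2!
```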

Generating sets, etc.
First, a bit of terminology: for a group A and any a\in A, n\in\mathbb{Z}, the power a^n is well-defined, as follows:

  1. if n=0 then a^n=1_A,
  2. if n<0 then a^n=(a^{-1})^{-n},
  3. if n>0 then a^n=a\cdot \dots\cdot a, the n-fold product of a.

It can well happen that a^n=1_A while neither n=0 nor a=1_A; e.g. a^{n!}=1_A for any a\in A=S_n.

Let S\subseteq A. Define \langle S\rangle, the subgroup generated by S, to be \langle S\rangle=\cap_{S\subseteq B\leq A} B, i.e. the intersection of all the subgroups of A containing S.
One can show that \langle S\rangle consists of the words in S\cup S^{-1}, i.e. the finite products of the form
g_{i_1} g_{i_2}\dots g_{i_k}, where g_\ell\in S\cup S^{-1}, and we denoted S^{-1}:=\{s^{-1}\mid s\in S\}.
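The description of \langle S\rangle as words in S\cup S^{-1} also gives an algorithm: close S\cup S^{-1} under products. A Python sketch (our own tuple encoding, working inside S_4):

```python
n = 4

def op(p, q):
    return tuple(p[q[i]] for i in range(n))

def inv(p):
    return tuple(p.index(i) for i in range(n))

def generated(S):
    """The subgroup <S>: close S and S^{-1} under products (all words)."""
    gens = set(S) | {inv(s) for s in S}
    H = {tuple(range(n))} | gens
    frontier = set(H)
    while frontier:
        new = {op(g, h) for g in gens for h in frontier} - H
        H |= new
        frontier = new
    return H

# The transposition (0 1) and the 4-cycle (0 1 2 3) generate all of S_4 ...
t = (1, 0, 2, 3)
c = (1, 2, 3, 0)
assert len(generated({t, c})) == 24
# ... while one element generates a cyclic subgroup: <c> = {c^k}, of order 4.
assert len(generated({c})) == 4
```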

Cyclic subgroups. These are subgroups generated by just one element, s\in A, i.e. one should take S=\{s\} and denote \langle S\rangle=\langle s\rangle. We recall (just the notation changes from the additive to the multiplicative) that \langle s\rangle=\{s^n\mid n\in \mathbb{Z}\}.
More generally,

A group A is called finitely generated if there exists S\subset A such that \langle S\rangle=A and |S|<\infty.

For example, any finite group, such as S_n, is finitely generated. On the other hand, GL_n(\mathbb{R}) is not finitely generated.

Ring homomorphisms

17 February, 2009

Maps between commutative rings are just as important as those between abelian groups. As you should anticipate, our maps should preserve the ring operations.

A map \phi: R\to S between commutative rings R and S is called a homomorphism if \phi: (R,+)\to (S,+) is a group homomorphism and \phi(a\cdot b)=\phi(a)\cdot\phi(b) for any a,b\in R.

Here, as usual, we use + and \cdot to denote addition and multiplication in the respective rings. You should notice that the second property is essentially the same as for the addition in the group homomorphism definition. Analogously, we define the kernel and the image of a homomorphism: Ker(\phi)=\{a\in R\mid \phi(a)=0\}, Im(\phi)=\{s\in S\mid \exists r\in R: \phi(r)=s\}. As well, a ring isomorphism is a homomorphism that is a bijection. Again, we will be interested in the properties of rings that do not change under an isomorphism.
The following is why the ideals of R are so important.

The kernel of a homomorphism \phi: R\to S is an ideal in R.

To see this, recall first that the kernel I=Ker(\phi) of the group homomorphism \phi: (R,+)\to (S,+) is a subgroup of (R,+). Let x\in I. We have to show that r\cdot x\in I for any r\in R. By the homomorphism property, \phi(r\cdot x)=\phi(r)\cdot\phi(x)=\phi(r)\cdot 0=0.

Given an ideal I\subset R, we can define a multiplication on the abelian group A=R/I, by setting (x+I)(y+I)=xy+I. To see that this is well-defined, we take s,t\in I and compute (x+s+I)(y+t+I)=xy+xt+ys+st+I=xy+I, i.e. the product does not depend upon a particular choice of coset representatives of I in R. We should also check the distributivity, certainly.
Unlike for quotient groups, the equation (x+I)(y+I)=xy+I does not mean an equality of sets. Indeed, it can already happen that I\cdot I=\{0\}, while I\neq\{0\} (Such an example can already be obtained by taking I=\{0,2\}\subset \mathbb{Z}_4.)
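The example I=\{0,2\}\subset\mathbb{Z}_4 is small enough to check exhaustively. A Python sketch confirming that I is an ideal, that I\cdot I=\{0\}, and that the element-wise product of two cosets can be strictly smaller than the coset xy+I:

```python
R = range(4)      # Z_4, arithmetic mod 4
I = {0, 2}        # the ideal generated by 2

# I is an ideal: an additive subgroup with r.I inside I for every r in R.
assert all((x + y) % 4 in I for x in I for y in I)
assert all((-x) % 4 in I for x in I)
assert all((r * x) % 4 in I for r in R for x in I)

# I.I = {0} although I != {0}: a nonzero nilpotent ideal.
assert {(x * y) % 4 for x in I for y in I} == {0}

# Hence the set of products from (2+I)(2+I) is smaller than the coset 4+I.
x, y = 2, 2
products = {((x + s) * (y + t)) % 4 for s in I for t in I}
coset = {(x * y + s) % 4 for s in I}
assert products == {0} and coset == {0, 2}
```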

It is a good point to discuss this phenomenon a bit more.

Nilpotent elements and ideals
A ring element r\in R is called nilpotent if there exists a natural number k such that r^k=0, and the minimal such k is called the degree of nilpotency. For instance 2\in \mathbb{Z}_4 is nilpotent, and has nilpotency degree 2. More generally, we can talk about a subset S\subset R satisfying S^k=S\cdot S\cdot\dots\cdot S=\{0\}. Such subsets, and in particular, ideals, are also called nilpotent.

Exercise. Prove that the sum of two nilpotent elements of a commutative ring R is nilpotent. Derive that the set of all nilpotent elements of R is an ideal.

Here I wrote more about various other operations on ideals.

Just as for abelian groups, we have an isomorphism theorem:

Let \phi: R\to S be a ring homomorphism, then \pi: Im(\phi)\to R/Ker(\phi) s.t. \phi(r)\mapsto r+Ker(\phi) is a ring isomorphism.

By the analogous theorem for groups, we already know that \pi is a bijection. To prove the rest, we only need to check that \pi(st)=\pi(s)\pi(t) for any s,t\in Im(\phi). Let \phi(x)=s and \phi(y)=t. Then, as \phi is a homomorphism, \phi(xy)=st and so \pi(st)=xy+Ker(\phi)=(x+Ker(\phi))(y+Ker(\phi))=\pi(s)\pi(t).

Examples.

  1. Let \pi:\mathbb{Z}[t]\to\mathbb{Z}_p[t] be given by \pi(\sum_k a_k t^k)=\sum_k (a_k \mod p) t^k, for a fixed prime p. This is a ring homomorphism, and thus \mathbb{Z}_p[t]\cong \mathbb{Z}[t]/p\mathbb{Z}[t].
  2. Let c\in \mathbb{R} and \mathbb{R}[t]\to \mathbb{R} be given by f(t)\mapsto f(c). Then this is a ring homomorphism, and thus \mathbb{R} \cong \mathbb{R}[t]/(t-c)\mathbb{R}[t].
  3. Let c\in \mathbb{C} and f(t)=(t-c)(t-\overline{c})\in \mathbb{R}[t]. Then \mathbb{R}[t]/f(t) \mathbb{R}[t]\cong \mathbb{C}.

Multiplicative inverses. (A bit of number theory.)
Let R be a commutative ring with identity. How far is it from a field? In a field \mathbb{F}, all the non-zero elements form an abelian (or better said, a commutative) group w.r.t. the multiplication, denoted \mathbb{F}^*.
For instance, for \mathbb{Z}_p one has \mathbb{Z}_p^* isomorphic to the cyclic group of order p-1. This follows from a more general result, that we state here.

Exercise. Prove that any finite subgroup of \mathbb{F}^* is cyclic, using the observation that the polynomial equation X^m=1 has at most m solutions in \mathbb{F}.

More generally, for \mathbb{Z}_n one has |\mathbb{Z}^*_n|=\phi(n), the number of integers between 1 and n-1 that are relatively prime to n. Thus in particular we have Euler's Theorem: a^{\phi(n)}\equiv 1\pmod n for any a such that (a,n)=1.
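Euler's theorem is easy to test numerically; a Python sketch with a brute-force \phi:

```python
from math import gcd

def phi(n):
    """Euler's totient, by direct count."""
    return sum(1 for a in range(1, n) if gcd(a, n) == 1)

for n in range(2, 60):
    for a in range(1, n):
        if gcd(a, n) == 1:
            assert pow(a, phi(n), n) == 1     # Euler's theorem
assert phi(12) == 4 and phi(7) == 6
```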

For coprime m and n, the isomorphism of the abelian groups \mathbb{Z}_{mn}\cong\mathbb{Z}_m\oplus \mathbb{Z}_n is in fact also a ring isomorphism: \mathbb{Z}_{mn}\cong\mathbb{Z}_m\times \mathbb{Z}_n. In particular, this implies \mathbb{Z}^*_{mn}\cong\mathbb{Z}^*_m\times \mathbb{Z}^*_n.

Exercise. Show that \phi(n)=n(1-1/p_1)\dots (1-1/p_\ell), where the p_k's are the distinct prime divisors of n.
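The product formula can at least be checked numerically against the brute-force count. A Python sketch (the integer-exact update result -= result // p implements multiplication by (1-1/p)):

```python
from math import gcd

def phi(n):
    """phi(n) by direct count."""
    return sum(1 for a in range(1, n) if gcd(a, n) == 1)

def phi_formula(n):
    """phi(n) = n * prod(1 - 1/p) over the distinct primes p dividing n."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p      # multiply by (1 - 1/p), exactly
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:                          # leftover prime factor
        result -= result // m
    return result

assert all(phi(n) == phi_formula(n) for n in range(2, 200))
```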

Commutative rings

14 February, 2009

One operation alone often does not capture the whole mathematical picture. Two, or even more, operations are often used, and you have in fact seen examples of such objects in previous courses: fields, polynomials, power series, square matrices. What unites these examples is that they have an abelian group lurking in the background, and a multiplication operation that need not be too nice (with the exception of fields). In fact, matrix multiplication is not commutative in general; we postpone investigating the latter to another course.

A commutative ring is a set R with two binary operations, addition and multiplication, denoted by + and \cdot, respectively, (for multiplication the dot is often omitted all together) that satisfy the following axioms:

  1. (R,+) is an abelian group
  2. multiplication is associative, i.e. (a\cdot b)\cdot c=a\cdot (b\cdot c) for all a,b,c\in R
  3. multiplication is commutative, i.e. a\cdot b=b\cdot a for all a,b\in R
  4. multiplication is distributive over addition, i.e. a\cdot (b+c)=a\cdot b+a\cdot c for all a,b,c\in R

We can derive more useful rules from the above, e.g. that 0\in R (remember, (R,+) is an abelian group, and every abelian group has a neutral element 0) satisfies a\cdot 0=0. This follows by computing a\cdot 0=a(b-b)=ab-ab=0 using distributivity.

Examples

  1. Integers: \mathbb{Z} and integers \mod m: \mathbb{Z}_m\cong \mathbb{Z}/m \mathbb{Z}
  2. Fields: e.g. \mathbb{R}, \mathbb{C}, \mathbb{Q}, \mathbb{Z}_p for a prime p\geq 2
  3. (Univariate) polynomials over a field \mathbb{F}: \mathbb{F}[t]=\{\sum_{i\geq 0} a_i t^i\mid a_i\in\mathbb{F},\ a_k=0\text{ for all }k\geq d_a,\text{ for some }d_a\}.
  4. Formal power series over a field \mathbb{F}: \mathbb{F}[[t]]=\{\sum_{i\geq 0} a_i t^i\mid a_i\in\mathbb{F} \}.
  5. Zero ring: take any abelian group A and define the trivial multiplication on it, i.e. a\cdot b=0 for all a,b\in A.
  6. Ring R of functions f: X\to \mathbb{R}, with addition and multiplication defined as (f+g)(x)=f(x)+g(x), (f\cdot g)(x)=f(x)\cdot g(x), for any f,g\in R.
  7. Functions f: \mathbb{R}\to \mathbb{R} that possess a property preserved under addition and multiplication form a ring (e.g. the ring of continuous functions, the ring of differentiable functions).

Note that we talk about polynomials and power series not as functions, but as formal (that’s why “formal” in formal power series!) expressions we know how to add and multiply.

So we see that commutative rings are like fields, but with fewer demands placed on the multiplication: a non-0 element need not have a multiplicative inverse; this is the case, for instance, in \mathbb{Z}. Moreover, there need not be any multiplicative identity element: e.g. consider 2\mathbb{Z}, the set of all even integers with the usual addition and multiplication; it does not have a multiplicative identity. So we single out the case when there is a multiplicative identity:

A commutative ring with identity is a commutative ring R that has a multiplicative identity element, denoted 1, so that a\cdot 1=a for all a\in R.

Another difference is that it can happen in a ring that ab=0 for two non-0 elements a, b. Such elements have a special name:

0\neq a\in R is called a zero divisor if there exists 0\neq b\in R such that ab=0.

In a zero ring every non-0 element is a zero divisor. Can you find a zero divisor in \mathbb{Z}_4?
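The question is easily settled by brute force; here is a Python sketch listing the zero divisors of \mathbb{Z}_n (we use \mathbb{Z}_6 and \mathbb{Z}_7 as test cases, leaving \mathbb{Z}_4 to the reader):

```python
def zero_divisors(n):
    """Nonzero a in Z_n with a*b = 0 mod n for some nonzero b."""
    return sorted(a for a in range(1, n)
                  if any(a * b % n == 0 for b in range(1, n)))

assert zero_divisors(6) == [2, 3, 4]     # e.g. 2 * 3 = 0 in Z_6
assert zero_divisors(7) == []            # Z_7 is a field: no zero divisors
```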

The fields are certainly the nicest kind of rings. The second best do not have zero divisors and have a “real” 1:

A commutative ring with identity 1\neq 0 is an integral domain if it has no zero divisors.

Subrings and ideals
After seeing the definition of a subgroup, the following should not be a surprise:

A subring S of a commutative ring R is an abelian subgroup of (R,+) that is also closed under multiplication.

For instance, 2 \mathbb{Z} is a subring of \mathbb{Z}, the polynomials \mathbb{F}[t] are a subring of \mathbb{F}[[t]].
A very important class of subrings are ideals:

An ideal S of a commutative ring R is a subring of R that in addition satisfies the following: rS\subseteq S for any r\in R (here, as usual, the product rS means \{rs\mid s\in S\}).

For instance, 2 \mathbb{Z} is an ideal of \mathbb{Z}, but \mathbb{F}[t] is not an ideal in \mathbb{F}[[t]].
R itself and \{0\} are ideals in R, for any commutative ring R. They are called improper ideals, and all the other ideals are called proper.

A commutative ring R with 1 is a field if and only if all its ideals are improper.

Indeed, if I is an ideal in R and 0\neq r\in I then, as R is a field, there exists r^{-1}\in R such that rr^{-1}=1. As I is an ideal, r^{-1}r=1\in I, implying I=R. On the other hand, if 0\neq r\in R does not have a multiplicative inverse, then Rr is a proper ideal in R.

Again, as in the case of abelian groups, we can talk about generating sets of subrings and ideals. (Certainly, we shall use both operations to express elements in terms of generators.) In the case of ideals, the natural definition is as follows:

For an ideal I\subset R in a commutative ring R, a set S\subseteq I is called a generating set if every a\in I can be written as a=\sum_{s\in S} r_s s, with r_s \in R and only finitely many of them non-0. In other words, I=\sum_{s\in S} Rs.

The analog of a cyclic subgroup, an ideal with a generating set of size 1, is called principal:

an ideal I\subset R in a commutative ring R is called principal if I=Rs for some s\in I.
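In \mathbb{Z} every ideal is principal: the ideal \mathbb{Z}a+\mathbb{Z}b equals \mathbb{Z}\gcd(a,b). A Python sketch via the extended Euclidean algorithm (the helper ext_gcd is our own):

```python
def ext_gcd(a, b):
    """Return (g, u, v) with u*a + v*b = g = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, u, v = ext_gcd(b, a % b)
    return (g, v, u - (a // b) * v)

a, b = 12, 18
g, u, v = ext_gcd(a, b)
assert (g, u * a + v * b) == (6, 6)   # gcd(12,18) = 6 lies in Za + Zb
# Conversely, every x*a + y*b is a multiple of g, so Za + Zb = Zg.
assert all((x * a + y * b) % g == 0 for x in range(-5, 6) for y in range(-5, 6))
```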

Root systems

6 February, 2009

In order to study Lie algebras, one needs a reasonable command of root systems. Root systems are closely associated with groups generated by reflections in \mathbb{R}^n. We equip \mathbb{R}^n with the Euclidean scalar product (u,v) (w.l.o.g. (u,v)=\sum_{i=1}^n u_i v_i). Then for

a non-zero a\in \mathbb{R}^n we have the linear transformation \sigma_a of \mathbb{R}^n, called a reflection, defined by \sigma_a(v)=v-2\frac{(v,a)}{(a,a)}a. Note \sigma_a(a)=-a and \sigma_a(u)=u for any u such that (u,a)=0.

In the basis consisting of n-1 linearly independent vectors u_i satisfying (u_i,a)=0, together with the vector a, the matrix of \sigma_a is the diagonal matrix \mathrm{diag}(1,\dots,1,-1). This shows in particular that \sigma_a^2=1.
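The reflection formula is easy to implement and test directly. A Python sketch checking \sigma_a(a)=-a, \sigma_a(u)=u for (u,a)=0, and \sigma_a^2=1 on a sample vector (the test vectors are our own):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def reflect(a, v):
    """sigma_a(v) = v - 2 (v,a)/(a,a) a."""
    c = 2 * dot(v, a) / dot(a, a)
    return tuple(x - c * y for x, y in zip(v, a))

a = (1.0, 2.0, 0.0)
u = (2.0, -1.0, 7.0)                        # (u, a) = 0
v = (3.0, -1.0, 5.0)

assert reflect(a, a) == (-1.0, -2.0, 0.0)   # sigma_a(a) = -a
assert reflect(a, u) == u                   # fixes the hyperplane (., a) = 0
w = reflect(a, reflect(a, v))               # sigma_a is an involution
assert all(abs(x - y) < 1e-12 for x, y in zip(w, v))
```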

A subset \Phi of \mathbb{R}^n equipped with the Euclidean scalar product (u,v) is called
a root system if

  1. |\Phi|<\infty, 0\not\in\Phi, and \Phi spans \mathbb{R}^n
  2. if v\in\Phi then \{\alpha v\mid \alpha\in \mathbb{R}\}\cap\Phi=\{v,-v\}.
  3. for any a,b\in\Phi one has \sigma_a(b)\in\Phi, i.e. \sigma_a leaves \Phi invariant.
  4. for any a,b\in\Phi one has 2\frac{(b,a)}{(a,a)}\in \mathbb{Z}.

The elements of \Phi are called roots, and n is called the rank.

An orthogonal transformation of \mathbb{R}^n, i.e. a linear transformation g satisfying gg^T=1, and thus (gu,gv)=(u,v) for all u,v\in \mathbb{R}^n, maps \Phi to an isomorphic root system. However, there is an even coarser equivalence relation: we say that \Phi is isomorphic to \Phi' if there is an invertible linear transformation g such that g(\Phi)=\Phi' and (u,v)/(v,v)=(g(u),g(v))/(g(v),g(v)) for all u,v\in\Phi. In particular, multiplying every u\in\Phi by a scalar does not change the isomorphism class of \Phi.

Let us consider a few examples first.

  • n=1. Then there is just one example, denoted A_1, so that \Phi=\{a,-a\}, with, say, (a,a)=2.
  • n=2, \Phi=\{a,-a,b,-b\} with (a,b)=0. This is denoted A_1\times A_1, and we can choose (a,a)=(b,b)=2.
  • n=2, \Phi=\{a,-a,b,-b,a+b,-a-b\}, where a=(\sqrt{2},0), b=(\frac{-\sqrt{2}}{2},\frac{\sqrt{6}}{2}), so (a,b)=-1. This is denoted by A_2.

A system like A_1\times A_1 is called reducible, as it consists of two completely independent root systems living in mutually orthogonal subspaces. It is obvious that in this way it is always possible to construct the root system \Phi\times\Psi\subset\mathbb{R}^{n+m}, given a root system \Phi\subset \mathbb{R}^n and a root system \Psi\subset \mathbb{R}^m. On the other hand, A_2 is an example of an irreducible, i.e. not reducible, root system.
It is more convenient to represent A_2 in the plane x_1+x_2+x_3=0 in \mathbb{R}^3. We take a=(1,-1,0) and b=(0,1,-1). A generalisation of this construction is easy, and gives one of the examples of an infinite series A_n of irreducible root systems.

the root system A_n, living in the hyperplane x_1+\dots+x_{n+1}=0 of \mathbb{R}^{n+1}, with the roots e_i-e_j for all i\neq j, i.e. all the vectors obtained from \pm (1,-1,0,\dots,0) by permuting coordinates. It is not so hard to see that |\Phi|=n(n+1).
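The claims about A_n are easy to verify by machine; the Python sketch below generates the roots e_i-e_j, counts them, and checks axioms 3) and 4) in integer arithmetic (since (a,a)=2 for every root here, \sigma_a(b)=b-(b,a)a):

```python
def A_n_roots(n):
    """The roots e_i - e_j (i != j) of A_n, inside R^{n+1}."""
    roots = []
    for i in range(n + 1):
        for j in range(n + 1):
            if i != j:
                v = [0] * (n + 1)
                v[i], v[j] = 1, -1
                roots.append(tuple(v))
    return roots

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

for n in range(1, 5):
    roots = A_n_roots(n)
    assert len(roots) == n * (n + 1)            # |Phi| = n(n+1)
    assert all(sum(r) == 0 for r in roots)      # all lie in x_1+...+x_{n+1}=0
    root_set = set(roots)
    for a in roots:
        for b in roots:
            assert 2 * dot(b, a) % dot(a, a) == 0               # axiom 4
            # axiom 3: sigma_a(b) = b - (b,a) a is again a root
            assert tuple(x - dot(b, a) * y
                         for x, y in zip(b, a)) in root_set
```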

It turns out that a complete classification of root systems is possible. Certainly, it suffices to classify irreducible ones. The idea is that one settles the case of rank n=2 and then uses it to rule out almost all possibilities for pairs of roots to appear in a root system of a bigger rank. This results in 4 infinite families, A_n, B_n, C_n, and D_n, and a few exceptions.

We present here a straightforward, albeit somewhat messy, treatment for root systems of rank at most 3.

Rank 2 case
We can assume that in a rank 2 system \Phi there are roots a=(\sqrt{2},0) and b=(b_1,b_2)\in\Phi-\{a,-a\} such that (b,a)\neq 0 (otherwise we have the reducible system A_1\times A_1). As \sigma_a(b)=(-b_1,b_2)\in\Phi by 3), w.l.o.g. b_1>0. Using 4) twice, we have b_1\sqrt{2}, \frac{2b_1\sqrt{2}}{b_1^2+b_2^2}\in \mathbb{Z}_{+}. Thus b_1=\sqrt{2}k, with 2k\in \mathbb{Z}, and 4k\geq 2k^2 +b_2^2. Thus 0<b_2^2\leq 2k(2-k), and so k\in\{1/2,1,3/2\}.

  1. k=1. Then \frac{4}{2+b_2^2}\in \mathbb{Z}_{+}. As 0<b_2^2\leq 2, the only possibility is \frac{4}{2+b_2^2}=1, i.e. b_2=\pm\sqrt{2}. Then \sigma_b(a)=(0,\mp\sqrt{2})=\mp\gamma, where \gamma=(0,\sqrt{2}), and so we obtain the root system B_2=\{\pm a,\pm\gamma, \pm(a+\gamma),\pm(a-\gamma)\}, with squared norms 2 and 4. Note that there are 2 copies of A_1\times A_1 sitting inside.
  2. k=1/2. Then \frac{4}{1+2b_2^2}\in \mathbb{Z}_{+}. As 0<b_2^2\leq 3/2, we have \frac{4}{1+2b_2^2}\in\{1,2,3\}, i.e. b_2^2\in\{3/2,1/2,1/6\}.
    1. if b_2^2=3/2 then one obtains (a system isomorphic to) A_2.
    2. if b_2^2=1/2 then (b,b)=1 and one obtains (a system isomorphic to) B_2 again, this time with b a short root.
    3. if b_2^2=1/6 then (b,b)=2/3 and one obtains G_2, a root system with 12 roots of two different squared norms, with ratio 3 (here 2/3 and 2). Note that one can find two copies of A_2 inside, one for each norm.
  3. k=3/2. Then \frac{6}{9/2+b_2^2}\in \mathbb{Z}_{+}, forcing b_2^2=3/2, which gives G_2 again, this time with b a long root.

We are not quite done yet; it remains to see whether we can add more roots to each of the irreducible systems we constructed. One can certainly add more roots to A_2 and obtain G_2, but is there any other option we missed? (The answer is no.)

Rank 3 case

If an irreducible root system \Phi contains G_2, then \Phi=G_2.

In other words, G_2\cong\Delta=\{\pm a,\pm b, \pm (a+b), \pm (2a+b), \pm(a+2b),\pm(a-b)\}, where a is as above and b=\frac{1}{2}(-\sqrt{2},\sqrt{6}), cannot be extended. Observe that the squared lengths of the roots in G_2 are 2=(a,a) and 6=(2a+b,2a+b)=4(a,a)+4(a,b)+(b,b)=8-4+2.
Let r\in\Phi-\Delta be such that (r,r)=2 and (a,r)\neq 0. Replacing, if necessary, r by -r, and considering the possibilities for a, r to generate a root system, we can assume (a,r)=1.
Similarly, we conclude that \gamma=(b,r)\in \{0,\pm 1\}. We compute (r,2a+b)=2+\gamma\in\{1,2,3\}. On the other hand, r and 2a+b generate a G_2 (their squared length ratio is 3), so \frac{2(r,2a+b)}{(2a+b,2a+b)}=\frac{2+\gamma}{3}\in\mathbb{Z}, and so (r,2a+b)=3, implying \gamma=1. But then (r,a+b)=(a,r)+(b,r)=2, and as r and a+b both have squared length 2, this forces r=a+b\in\Delta, a contradiction.
Thus every r\in\Phi-\Delta with (r,r)=2 is orthogonal to both a and b. By irreducibility, either the claim holds, or there exists r\in\Phi-\Delta with (r,r)\neq 2 such that (a,r)\neq 0. (Again, we can assume (a,r)>0.) So there are 2 possibilities: either a,r generate a G_2, or a B_2. In the 1st case (r,r)=6, (a,r)=3, and \gamma=(b,r)\in\{0,\pm 3\}. As |(r,a+b)|=|3+\gamma|\leq\sqrt{(r,r)(a+b,a+b)}=\sqrt{12}, one has \gamma\in\{-3,0\}. As r\neq 2a+b (both have squared length 6), one has (r,2a+b)=6+\gamma<6, implying \gamma=-3. But then (r,a-b)=(a,r)-(b,r)=6, and as r and a-b both have squared length 6, this forces r=a-b\in\Delta, a contradiction.
In the 2nd case (r,r)=4, (a,r)=2, (b,r)\in\{0,\pm 2\}. Thus (r,2a+b)=4+(b,r)>0. But the squared length ratio of two non-orthogonal roots in a rank 2 irreducible root system is either 1, or 2, or 3, while for 2a+b and r it is 6/4=3/2, the final contradiction.

Next, we shall prove the following.

Let a,b,c\in\Phi, a root system, be linearly independent, and (a,b)\neq 0. Then there is a root r in the rank 2 system generated by a,b such that (r,c)=0. Moreover, the systems generated by c and roots that are not proportional to r cannot both be equal to B_2.

Consider first the case when a,b generate a subsystem B_2\cong\Delta\subset\Phi. Say, 2(a,a)=(b,b)=4. Then, without loss of generality, we may assume that (a,b)=2. Applying an orthogonal transformation to the space, we may assume that
a=(1,1,0,\dots,0) and b=(0,2,0,\dots,0).

Suppose first (c,c)=4. W.l.o.g. (a,c)\neq 0 (for otherwise, if (a,c)=(b-a,c)=0 then we have a reducible system). Then a,c generate B_2, so, changing, if necessary, the sign of c, we have (a,c)=2.
Thus c=(x,2-x,z_1,\dots) and so 2>x>0, as follows from x^2+(2-x)^2\leq 4=(c,c). If (b,c)\neq 0 then (b,c)=4-2x=\pm 2, and the inequality above implies x=1, i.e. c=(1,1,z_1,\dots). Now we can take r=b-a.
(We constructed the system known as C_3. It has 6 long roots and 12 short ones.)

Now, suppose (c,c)=2. Then a,c generate, w.l.o.g., A_2
(for if neither a,c nor b-a,c generate A_2 then we have a reducible system again). Thus w.l.o.g. (a,c)=1 and so c=(x,1-x,z_1,\dots), and x^2+(1-x)^2\leq 2=(c,c). So 2x^2-2x\leq 1.
Assume that b, c generate B_2. Then \pm 2=(b,c)=2(1-x), i.e. either x=0, or x=2. The latter case is ruled out by the inequality above. So c=(0,1,z_1,\dots), and we can take r=2a-b=(2,0,\dots,0). Otherwise, (b,c)=0, and we can take r=b.
(In both cases, we constructed the system known as B_3. It has 12 long roots and 6 short ones.)

Now, by changing the roles and signs of a,b,c if necessary, we may assume that a,b and a,c generate A_2's and that a=(1,-1,0,\dots,0), b=(0,1,-1,0,\dots,0), c=(x,x+1,z,\dots). If (b,c)\neq 0 then (b,c)=x+1-z=\pm 1. In the first case x=z, and so r=a+b, and we obtain the root system A_3. In the second case z=x+2, and (a+b,c)=-2, a contradiction.

It is not so hard to see that we actually showed that

An irreducible root system \Phi of rank 3 is isomorphic to one of the following systems: A_3, B_3, C_3.

Classifying finitely generated abelian groups

1 February, 2009

First, we need a result refining the theorem that a subgroup of a f.g. free abelian group is free.
In a way, it is a generalisation of a well-known property of a subspace U of a vector space V: we can change the basis of V to, say, e_1,\dots,e_n so that U is generated by e_1,\dots,e_{\dim U}. As we cannot, generally speaking, invert integer coefficients in the case of abelian groups, the e_j's will get extra multipliers. More precisely, the following, known as the Smith Normal Form Theorem for abelian groups, holds.

Let H\leq B be a subgroup of a f.g. free abelian group B. Then in B there exists a free generating set e_1,\dots,e_n such that H is freely generated by u_1 e_1,\dots,u_m e_m, with m\leq n, positive integers u_1,\dots u_m, and u_i dividing u_{i+1} for i=1,\dots,m-1.

We know that H is free of rank rk(H)=m\leq n. So H is freely generated by h=(h_1,\dots,h_m), and there is an integer m\times n matrix Q such that Qs=h, where s=(s_1,\dots,s_n) are free generators for B. We apply to Q the Smith Normal Form algorithm (in this reference one can ignore the PID etc. bits, just assuming that the matrix entries are integers), something akin to Gaussian elimination, although without division.

More precisely, given a free abelian group and an (ordered) basis for it, we are allowed to (check this!)

  1. multiply basis elements by -1;
  2. swap two basis elements;
  3. add to a basis element another one, multiplied by an integer

without “breaking” the basis property (the resulting set of elements will remain a basis for the group.)
These are the operations we will perform on h and on s. Respectively, they affect, in the way just described, rows and columns of Q. Thus, after applying Smith Normal Form algorithm to Q, we obtain a diagonal matrix Q'=diag(u_1,\dots,u_m) and bases h' and s' of H and B, respectively, so that (Q'|\overline{0})s'=h'. (We denoted here by \overline{0} the all-0 matrix of size m\times (n-m).) This completes the proof.
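The algorithm in the proof can be implemented directly. Below is a small Python sketch of Smith normal form for integer matrices, using only the three elementary operations (swap, negate, add an integer multiple of one row/column to another), plus a final gcd/lcm pass on the diagonal to enforce the divisibility chain (that pass is also realisable by elementary operations). It reproduces, e.g., the normal form (2 0) of the matrix (2 4) from the example below.

```python
from math import gcd

def smith_normal_form(M):
    """Return the Smith normal form of an integer matrix M, computed with
    row/column swaps, negations, and additions of integer multiples only."""
    A = [row[:] for row in M]
    rows, cols = len(A), len(A[0])

    def clear(t):
        """Make A[t][t] the only nonzero entry in row t and column t."""
        while True:
            # pivot: entry of minimal nonzero absolute value in the block
            piv = None
            for i in range(t, rows):
                for j in range(t, cols):
                    if A[i][j] and (piv is None
                                    or abs(A[i][j]) < abs(A[piv[0]][piv[1]])):
                        piv = (i, j)
            if piv is None:
                return False                     # block is all zero
            i, j = piv
            A[t], A[i] = A[i], A[t]              # row swap
            for r in range(rows):                # column swap
                A[r][t], A[r][j] = A[r][j], A[r][t]
            done = True
            for i in range(rows):                # row operations
                if i != t and A[i][t]:
                    q = A[i][t] // A[t][t]
                    A[i] = [x - q * y for x, y in zip(A[i], A[t])]
                    done = done and A[i][t] == 0
            for j in range(cols):                # column operations
                if j != t and A[t][j]:
                    q = A[t][j] // A[t][t]
                    for r in range(rows):
                        A[r][j] -= q * A[r][t]
                    done = done and A[t][j] == 0
            if done:
                return True

    t = 0
    while t < min(rows, cols) and clear(t):
        if A[t][t] < 0:
            A[t] = [-x for x in A[t]]            # negate the row
        t += 1

    changed = True                               # enforce u_i | u_{i+1}
    while changed:
        changed = False
        for i in range(t - 1):
            a, b = A[i][i], A[i + 1][i + 1]
            if b % a:
                g = gcd(a, b)
                A[i][i], A[i + 1][i + 1] = g, a * b // g
                changed = True
    return A

assert smith_normal_form([[2, 4]]) == [[2, 0]]
assert smith_normal_form([[2, 5]]) == [[1, 0]]
assert smith_normal_form([[2, 0], [0, 3]]) == [[1, 0], [0, 6]]
```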

Our work will bear fruit here. We will prove the following Classification theorem for finitely generated abelian groups. One more piece of notation: B^k=B\oplus\dots\oplus B, with k direct summands on the right-hand side.

Let A be an abelian group generated by s_1,\dots,s_n. Then A\cong \mathbb{Z}_{n_1}\oplus\dots\oplus\mathbb{Z}_{n_t}\oplus\mathbb{Z}^{n-t}. Moreover, n_i|n_{i+1}, for 1\leq i\leq t-1, and n_1,\dots,n_p, for some p\geq 0, are allowed to be 1.

Consider the homomorphism \phi:\mathbb{Z}^n\to A given by (k_1,\dots,k_n)\mapsto \sum_{j=1}^n k_j s_j. By the 1st Isomorphism Theorem one has A\cong \mathbb{Z}^n/Ker(\phi). Changing, if necessary, the generating set in \mathbb{Z}^n, we can assume by Smith Normal Form Theorem that Ker(\phi) is freely generated by u_1 e_1,\dots,u_m e_m, with m\leq n and e_1,\dots, e_n freely generating \mathbb{Z}^n.

Now consider the homomorphism \psi:\mathbb{Z}^n\to \mathbb{Z}_{u_1}\oplus\dots\oplus\mathbb{Z}_{u_m}\oplus\mathbb{Z}^{n-m} given by \sum_{j=1}^n \ell_j e_j\mapsto (\ell_1\mod u_1,\dots,\ell_{m}\mod u_m,\ell_{m+1},\dots,\ell_n). It can happen that u_1=\dots = u_p=1, then the corresponding first p direct summands must be dropped – they correspond to trivial subgroups. Then Ker(\psi)=Ker(\phi), and so A\cong \mathbb{Z}^n/Ker(\psi)= \mathbb{Z}_{u_{p+1}}\oplus\dots\oplus\mathbb{Z}_{u_m}\oplus\mathbb{Z}^{n-m}. This completes the proof (we will have t=m-p).

As a 1st example, consider an abelian group B=\langle t_1,t_2\rangle, where the generators satisfy just one irredundant relation 2t_1+4t_2=0. The latter just means that the kernel of the homomorphism \phi:\mathbb{Z}^2\to B is Ker(\phi)=\langle 2t_1+4t_2\rangle. Then, according to the procedure outlined in the proof, we are to compute the Smith normal form of the matrix \begin{pmatrix}2& 4\end{pmatrix}. Applying the column operation of adding -2 times the 1st column to the 2nd column, we obtain the normal form Q'=\begin{pmatrix}2& 0\end{pmatrix}. Thus we see that B\cong \mathbb{Z}_2\oplus\mathbb{Z}.
It is instructive to see what happens to the basis x=(x_1,x_2) of \mathbb{Z}^2 and to the relation r^T x=2x_1+4x_2\in Ker(\phi) as this basis change is carried out. Note that the column operation we used is given by the matrix C=\begin{pmatrix}1&-2\\ 0&1 \end{pmatrix}. (To see this, perform this operation on the matrix \begin{pmatrix} 2&4\\ 1&0\\ 0&1 \end{pmatrix}, and read off C from the 2nd and 3rd rows of the result.) So the new basis is equal to (x'_1,x'_2)=C^{-1}x=(x_1+2x_2,x_2), and r^T x=r^T CC^{-1}x\in Ker(\phi). The new relation will be written as (r^T C)x'=2x'_1, so in the new basis Ker(\phi)=\langle 2x'_1\rangle.

As an exercise, do the corresponding computation with 5 in the place of 4 in the definition of B. Note that you will need more than one column transformation in this case!

While Smith Normal Form for a given group A is unique, often (e.g. for purposes of ring theory) we are more interested in decomposing A into a direct sum of as many cyclic subgroups as possible.
We can do a bit more work to arrive at a direct sum decomposition that is, in fact, unique in the latter sense, up to permuting summands. Here we just note that the number n-t, that counts \mathbb{Z}-summands, does not depend upon a particular decomposition. To see that, we introduce the notion of torsion subgroup Tor(A) of A.

Tor(A)=\{a\in A\mid ma=0\text{ for some } m\in\mathbb{Z}\}.

More exercises

  1. Let A=\langle x,y,z\rangle be an abelian group generated by x,y,z that satisfy the relations 0=2x=2y=2z=3x+3y=5y+5z. Determine A using the Theorem above and the Smith normal form computation.
  2. Let A=\mathbb{Z}_n\oplus\mathbb{Z}_m, and GCD(m,n)=1, i.e. m and n relatively prime. Use the Theorem above and the Smith normal form computation to show that A\cong\mathbb{Z}_{nm}. (As usual, GCD=Greatest Common Divisor.)
  3. More generally, show, using the method from the previous problem, that \mathbb{Z}_n\oplus\mathbb{Z}_m\cong \mathbb{Z}_g\oplus\mathbb{Z}_{mn/g}, where g=GCD(m,n).
  4. Show that any n\times n integer matrix X satisfying |\det X|=1 can be transformed into the identity matrix by elementary row operations. What about elementary column operations?
  5. Find the n\times n matrices C such that for an n\times m matrix Q the elementary row operations on Q can be described as computing the products CQ. That is,
    1. C=C(i) that corresponds to the multiplication of the i-th row by -1;
    2. C=C(i,j) that corresponds to swapping of the i-th and the j-th rows;
    3. C=C(i,j,\alpha) that corresponds to adding to the i-th row the j-th row multiplied by \alpha;
  6. Show that the operation of swapping the rows in the Smith normal form computation is redundant, i.e. the same can be accomplished using the other two operations only.
  7. Let Q be an n\times m integer matrix and U its Smith normal form. Investigate what happens if we consider the (n+m)\times (n+m) “block” matrix \begin{pmatrix}Q & I_n \\ I_m & 0 \end{pmatrix} and apply the row operations on the topmost n rows and the column operations on the leftmost m columns to obtain a matrix \begin{pmatrix}U & A \\ B & 0 \end{pmatrix}.
    Find out how to use the matrix product of A, B,Q to obtain U.
  8. Investigate how A and B from the previous problem change the generating sets of the corresponding free abelian groups;
  9. Check that Tor(A)\leq A.
    (As the definition of Tor(A) does not depend on a particular decomposition as in the theorem, and A/Tor(A)\cong Z^{n-t}, we see that n-t is invariant, as claimed.)