## Posts Tagged ‘algebra’

### Schönhardt polyhedron and IMS-2013/4 program

28 May, 2012

One cannot always triangulate a non-convex polyhedron using only its vertices; sometimes one needs to add more of them. A simple example of this phenomenon is the Schönhardt polyhedron. Here is a picture, drawn for a forthcoming paper using the TikZ LaTeX package (awesome, but totally overwhelming), illustrating how one builds it.

In fact, it’s easy to see that the 6 vertices and 12 edges it has are not enough. Indeed, each pair of non-intersecting edges determines a simplex, but it’s easy to observe that any such selection will include one of the forbidden pairs of vertices AC, A’B, or B’C’. (The LaTeX source of the picture is here.)

The paper I mention is related to a topic of the program on inverse moment problems at IMS (NUS/Singapore) in late 2013-early 2014 which I co-organize.

### Prime ideals and rings of fractions

10 April, 2009

The concept of prime ideal in a commutative ring $R$ with $1$ is one of several natural generalisations of the concept of prime integer number.

An ideal $I\subset R$ is called prime if for any $u,v\in R$ the following holds:
$uv\in I$ implies that at least one of these elements, $u$ and $v,$ is in $I.$

E.g. for $p>0$ the principal ideal $(p)\subset \mathbb{Z}$ is prime if and only if $p$ is a prime number.
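This equivalence can be checked by brute force over a finite window of integers; a minimal sketch (the window bound of 200 is an arbitrary choice, and of course not a proof):

```python
# Check the defining property of a prime ideal for (m) in Z on a finite window:
# u*v in (m) must imply u in (m) or v in (m).
def looks_prime_ideal(m, bound=200):
    return all(u % m == 0 or v % m == 0
               for u in range(1, bound)
               for v in range(1, bound)
               if (u * v) % m == 0)

print(looks_prime_ideal(7))   # True: 7 is prime
print(looks_prime_ideal(6))   # False: 2 * 3 lies in (6), but neither factor does
```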

A maximal ideal $I\subset R$ is prime.

Indeed, let $uv\in I.$ Assume that $u\not\in I.$ We need to show that then $v\in I.$ If this were not the case then $u+I$ and $v+I$ would be two non-zero elements of the ring $R/I$ with $(u+I)(v+I)=uv+I=I,$ i.e. with zero product. But this is not possible, as $R/I$ is a field, and a field has no zero-divisors. Thus $v\in I,$ as claimed.

Analysing this proof, one can easily see that

If $I\subset R$ is prime then $R/I$ has no zero-divisors, i.e. it is an integral domain.

Further important property of prime ideals is that they are radical, i.e.
$I=\sqrt{I},$ where the radical $\sqrt{I}$ of the ideal $I$ is $\sqrt{I}=\{x\in R\mid \exists k\geq 1 : x^k\in I\}.$ Indeed, $x^k=x\cdot x^{k-1}\in I$ implies that either $x$ or $x^{k-1}$ is in $I,$ and we derive $x\in I$ by induction on $k.$

Yet another interesting observation is that for a proper prime ideal $I$ the complement $R\setminus I$ is multiplicatively closed.

Rings of fractions

A subset $S\subset R$ is called multiplicatively closed (or multiplicative) if $0\not\in S,$ $1\in S,$ and $uv\in S$ for any $u,v\in S.$

Given a multiplicatively closed set $S\subset R,$ one defines
a relation on $R\times S,$ as follows:

${\displaystyle (y,t)\equiv (x,s)\quad\text{iff there exists }u\in S: uys=uxt.}$

It is not hard to show that $\equiv$ is an equivalence relation.
To simplify the notation, write its equivalence class with representative $(x,s)$ as $\frac{x}{s}.$ We define

The ring of fractions $S^{-1}R=(R\times S)/\equiv$ of $R$ w.r.t. $S$ has addition and multiplication given by the rules
$\frac{x}{s}+\frac{y}{t}=\frac{xt+ys}{st},$ $\frac{x}{s}\cdot\frac{y}{t}=\frac{xy}{st}.$

It is easy to check that this is well-defined. We also have

$\phi: R\to S^{-1}R,$ so that $\phi(x)=\frac{x}{1},$ is a ring homomorphism.

Note that $\phi$ need not be injective, i.e. $\phi(R)\cong R$ need not hold. Indeed, if $x\in R$ is a zero-divisor such that $xu=0$ for some $u\in S$ then $(x,1)\equiv (0,1),$ and so $\phi(x)=\frac{x}{1}=\frac{0}{1}=0_{S^{-1}R}.$
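A finite toy example makes this concrete. Below is a sketch with the (arbitrarily chosen) ring $R=\mathbb{Z}_6$ and multiplicative set $S=\{1,2,4\}$: since $3\cdot 2=0$ in $\mathbb{Z}_6,$ the element $3$ maps to zero in $S^{-1}R$:

```python
from itertools import product

R = range(6)       # the ring Z_6
S = [1, 2, 4]      # multiplicatively closed: 2*2=4, 2*4=2, 4*4=4 (mod 6); 0 not in S

def equiv(p1, p2):  # (y,t) == (x,s) iff u*y*s = u*x*t (mod 6) for some u in S
    (y, t), (x, s) = p1, p2
    return any((u * y * s) % 6 == (u * x * t) % 6 for u in S)

classes = []       # partition R x S into equivalence classes
for pair in product(R, S):
    for c in classes:
        if equiv(pair, c[0]):
            c.append(pair)
            break
    else:
        classes.append([pair])

print(len(classes))  # 3: here S^{-1}(Z_6) is isomorphic to Z_3

phi = {x: next(i for i, c in enumerate(classes) if (x, 1) in c) for x in R}
print(phi[3] == phi[0])  # True: phi is not injective, since 3 * 2 = 0 in Z_6
```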

The best-known example is the case of $R$ an integral domain and $S=R\setminus\{0\}.$ Then $S^{-1}R$ is a field, called the field of fractions of $R.$

Examples:

• $\mathbb{Q}$ is the field of fractions of $\mathbb{Z};$
• for the ring of polynomials $\mathbb{F}[T]$ over a field $\mathbb{F},$ the field of fractions is the field $\mathbb{F}(T)$ of rational functions over $\mathbb{F};$
• let $x\in R$ be non-nilpotent. Then $S=\{1,x,x^2,x^3,\dots\}$ is a multiplicative set. Moreover, $S^{-1}R\cong R[T]/(Tx-1)$ (it is not completely trivial to prove this, though). Intuitively, we make the variable $T$ behave like the inverse of $x,$ as $Tx=1$ in this ring.

Now let us look at the case $S=R\setminus I,$ for $I$ a nonzero prime ideal. In this case $S^{-1}R$ is denoted by $R_I$ and called the localisation of $R$ at $I.$

Example: Let $\mathbb{F}[T]$ be the ring of polynomials over a field $\mathbb{F},$ and $a\in \mathbb{F}.$ Then $I=(T-a)$ is prime, and $R_I=\mathbb{F}[T]_{(T-a)}$ is equal to $\{\frac{f}{g}\in \mathbb{F}(T)\mid (T-a)\not|\, g \}.$

The ring $R_I$ has unique maximal ideal $IR_I.$

It suffices to show that $\frac{x}{s}\in R_I$ is invertible iff ${x}\not\in I.$ Indeed, if $x\not\in I,$ i.e. $x\in S,$ then $\frac{x}{s}\cdot\frac{s}{x}=1.$ On the other hand, if $\frac{x}{s}\cdot\frac{y}{t}=1$ then there exists $u\in S$ such that $uxy=ust.$ Thus $uxy\not\in I,$ and so $x\not\in I$ (if it were, $uxy$ would be in $I,$ as $I$ is an ideal). Hence the non-invertible elements of $R_I$ are exactly those of $IR_I,$ which is therefore the unique maximal ideal.

### Dual spaces

5 April, 2009

There is one glaring omission in our Linear Algebra curriculum – it avoids talking about the dual space of a vector space. This makes talking about the relationship between subspaces and the equations that define them exceedingly difficult. Better late than never, so here it comes.

Let $V$ be a vector space over a field $\mathbb{F}.$ Denote by $V^*$ the set of linear functions $V\to \mathbb{F}.$

Examples
Let $V=C[a,b],$ the space of continuous functions on $[a,b].$ Then the function $\int: V\to \mathbb{R}$ given by $f\mapsto \int_a^b f(x)\, dx$ is linear on $V.$

Let $V=\mathbb{R}[x]$ be the vector space of polynomials with real coefficients.
Then the function $V\to \mathbb{R}$ given by $f\mapsto \frac{df}{dx}(0)$ is linear on $V.$

Note that as $f\in V^*$ is linear, one has $f(\alpha v)=\alpha f(v)$ for any $v\in V, \alpha\in \mathbb{F}.$ Thus we have $m_\alpha:V^*\to V^*$ defined by $m_\alpha(f)(v)=f(\alpha v)$ so that $m_\alpha(m_\beta(f))=(m_\alpha m_\beta)(f).$ To simplify notation, we will write $\alpha f$ instead $m_\alpha(f).$ As well, we can define $(f+g)(v)=f(v)+g(v)$ for any $f,g\in V^*,$ and more generally, $(\alpha f+\beta g)(v)=\alpha f(v)+\beta g(v).$ And there is the zero function $0(v)=0$ for any $v\in V.$ Thus we have all the ingredients of a vector space, as can be easily checked.

$V^*$ is a vector space over $\mathbb{F}.$ It is called the dual space of $V.$

So far, we haven’t used the linearity of our functions at all (we actually did not even need the fact that $f(\alpha v)=\alpha f(v)$). Indeed, any set of functions $V\to \mathbb{F}$ closed under addition and scalar multiplication would form a vector space.
What makes the dual space so special is that to define $f\in V^*$ it suffices to define $f(e_i)$ on a basis $\{e_i\}$ of $V.$ Indeed, $f(\sum_i \alpha_i e_i)=\sum_i \alpha_i f(e_i),$ so we can compute $f(v)$ for any $v=\sum_i \alpha_i e_i,$ once we know the $f(e_i)$‘s.

Thus for a finite-dimensional vector space $V$ one gets a (dependent upon the choice of a basis in $V$) bijection between $V$ and $V^*.$ This bijection, which is even an isomorphism of vector spaces, is given by the dual basis $\{\epsilon_i\}$ of $V^*$ consisting of the coordinate functions $\epsilon_i(x)=x_i,$ where the $x_i$’s are the coefficients in the decomposition $x=\sum_i x_i e_i$ of $x\in V$ in the basis $\{ e_i\}.$

Finite-dimensionality is crucial here. E.g. consider the vector space $V=\mathbb{Z}_2[x]$ of polynomials over the 2-element field. It is countable: one can view it as the set of infinite 0-1 strings, with only finitely many 1’s occurring in each string. On the other hand, $V^*$ can be viewed as the set of all infinite 0-1 strings (a linear function is freely determined by its values on the basis $\{1,x,x^2,\dots\}$), which is uncountable, so there cannot even be a bijection between $V$ and $V^*.$

Given $v\in V,$ one can define a function $f_v:V^*\to \mathbb{F},$ as follows: $f_v(g):=g(v).$ It is linear, as $f_v(\alpha g+\beta h)=\alpha g(v)+\beta h(v)=\alpha f_v(g)+\beta f_v(h).$ Here we do not see any dependence on the choice of a basis in $V,$ and we have

For finite-dimensional $V,$ the vector space $V^{**}$ of linear functions on $V^*$ is (canonically) isomorphic to $V,$ via the mapping $v\mapsto f_v.$

Indeed, we see immediately that $f_{\alpha v+\beta w}=\alpha f_v+\beta f_w,$ and so we need only to check that this mapping is bijective. Let $\{e_i\}$ be a basis in $V$ and $\{\epsilon_i\}$ its dual basis in $V^*.$ Then $f_{e_i}(\epsilon_j)=1$ if $i=j$ and 0 otherwise. Thus $\{ f_{e_i}\}$ is the basis of $V^{**},$ which is dual to the basis $\{\epsilon_i\}$ of $V^*,$ and the mapping $v\mapsto f_v$ sends the vector with coordinates $v_i$ to the vector with the same coordinates in
the basis $\{ f_{e_i}\}$ of $V^{**}.$ Hence the latter is bijective.

In view of the latter, we can identify $V$ with $V^{**},$ and write $v(g)$ instead of $f_v(g).$ For $v\neq 0,$ the set of $g\in V^*$ such that $v(g)=0$ is a subspace, called the annihilator of $v,$ of dimension $n-1=\dim(V)-1.$ More generally, the following holds.

Let $U$ be a subspace of $V,$ and $U^0:=\{g\in V^*\mid g(u)=0\text{ for all } u\in U\}.$ Then the annihilator $U^0$ of $U$ is a subspace of $V^*$ of dimension $dim(V)-dim(U).$

Indeed, we can choose a basis $\{e_i\}$ of $V$ so that $\{e_1,\dots,e_{k}\}$ is a basis of $U,$ where $\dim(U)=k.$ Then, in the dual basis $\{\epsilon_i\}$ of $V^*,$ the annihilator $U^0$ is the subspace with the basis $\{\epsilon_{k+1},\dots,\epsilon_n\}.$

In view of this, each $U$ can be obtained as the set of solutions of a system of homogeneous linear equations $g(u)=0,$ for $g\in U^0,$ of rank $dim(V)-dim(U).$
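The dimension formula admits a quick finite sanity check over $\mathbb{F}=\mathbb{Z}_2,$ with the arbitrarily chosen $n=4$ and a 2-dimensional $U$ (a subspace of dimension $d$ over $\mathbb{Z}_2$ has $2^d$ elements, so we just count):

```python
from itertools import product

n = 4
g1, g2 = (1, 0, 1, 0), (0, 1, 1, 1)  # two linearly independent generators of U

# all of U: the span of g1, g2 over F_2
U = {tuple((c1 * a + c2 * b) % 2 for a, b in zip(g1, g2))
     for c1 in range(2) for c2 in range(2)}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b)) % 2

# the annihilator: covectors vanishing on all of U
ann = [g for g in product(range(2), repeat=n) if all(dot(g, u) == 0 for u in U)]
print(len(U), len(ann))  # 4 4 : both U and U^0 are 2-dimensional, 2 + 2 = n
```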

Dual spaces and annihilators under a basis change
Let $X\in GL_n(\mathbb{F})$ be a linear transformation of $V,$ and $U$ a subspace of $V.$ Then $X(U)=\{ Xu\mid u\in U\}$ is a subspace. How does $U^0$ transform? Writing $u=\sum_i u_i e_i$ in a basis $\{e_i\},$ and $g=\sum_i g_i\epsilon_i\in U^0$ in the dual basis $\{\epsilon_i\},$ the condition $g(u)=0$ reads $\sum_i g_i u_i=g^T u=0.$ If $Y$ denotes the matrix of the action of $X$ on $V^*,$ then we need $(Yg)^T(Xu)=g^T Y^T X u=0$ for all such $g$ and $u,$ which forces $Y^T X=1_{GL_n},$ i.e. $Y=(X^{-1})^T.$

$X\in GL_n(\mathbb{F})$ acts on $V^*$ as $(X^{-1})^T.$

An example.
Let $V=\mathbb{F}^3.$ We work in the standard basis $\{e_1,e_2,e_3\}$ of $V.$ Then the dual basis of $V^*$ is $\{\epsilon_1,\epsilon_2,\epsilon_3\}$, so that $\epsilon_i((u_1,u_2,u_3)^T)=u_i.$
Let $G$ be the group of matrices $G=\left\{ \begin{pmatrix} 1&x&y\\ 0&z&u\\ 0&t&w \end{pmatrix} \mid x,y,z,u,t,w\in\mathbb{F},\ zw-ut\neq 0 \right\}.$ It fixes, in its left action on $V$ by multiplication, the vector $e_1=(1,0,0)^T.$ Let $U$ be the 1-dimensional subspace of $V$ generated by $e_1.$ Then $U^0$ is generated by $\epsilon_2$ and $\epsilon_3.$ The group $G$ preserves $U^0$ in its action on $V^*.$ As $U^0$ is 2-dimensional, there should be a nontrivial kernel in this action, and indeed, it consists of the elements of the form $\begin{pmatrix} 1&x&y\\ 0&1&0\\ 0&0&1 \end{pmatrix}.$

A particularly simple case is $\mathbb{F} = \mathbb{Z}_2.$ Then $G$ is isomorphic to $S_4,$ the symmetric group on 4 letters, as can be seen in its action on the 4 elements of $V^*$ outside $U^0.$ On the other hand, it acts on the 3 nonzero elements of $U^0$ as $S_3.$
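The $\mathbb{Z}_2$ claim can be verified by brute force; a sketch (covectors are row vectors, and right multiplication $r\mapsto rg$ is used for simplicity — it induces the same set of 24 permutations as the contragredient action $r\mapsto rg^{-1}$):

```python
from itertools import product

# row vector times 3x3 matrix, over F_2
def row_times_matrix(r, g):
    return tuple(sum(r[i] * g[i][j] for i in range(3)) % 2 for j in range(3))

# all matrices of the given shape over F_2; the lower-right 2x2 block
# must be invertible, i.e. have determinant 1 mod 2
G = [((1, x, y), (0, z, u), (0, t, w))
     for x, y, z, u, t, w in product(range(2), repeat=6)
     if (z * w - u * t) % 2 == 1]
print(len(G))  # 24

outside = [r for r in product(range(2), repeat=3) if r[0] == 1]  # covectors not in U^0
perms = {tuple(row_times_matrix(r, g) for r in outside) for g in G}
print(len(perms))  # 24 distinct permutations of 4 points, i.e. all of S_4
```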

### Orbit-counting Lemma

31 March, 2009

Exercises in algebra

One application of actions of finite groups is in combinatorics, more precisely, counting objects that possess particular symmetries.

A classical example is given by $n$-necklaces and $n$-bracelets. These are words of length $n$ in a $k$-letter alphabet, with beginnings and ends of the words glued together and made invisible (so one can visualise them as regular $n$-gons with letters from the alphabet placed at the vertices). Two necklaces are identical when one is obtained from another by a rotation; two bracelets are identical when one can be obtained from another by a rotation and a “flip”. One can view counting of necklaces and bracelets as counting of orbits of certain groups on the sets of “labelled” necklaces (resp. bracelets). For the necklace case, the group is $\mathbb{Z}_n,$ and for the bracelet case it is the dihedral group of order $2n.$
The set $\Omega$ of labelled necklaces/bracelets is just the set of words of length $n$ in a $k$-letter alphabet; so one has $|\Omega|=k^n.$

Orbit counting is often simplified by the following result, commonly attributed to Burnside, although it was certainly known to Frobenius.

Let a finite group $G$ act on a finite set $\Omega.$ Denote by $\Omega^g,$ for $g\in G,$ the set of elements of $\Omega$ fixed by $g.$ Then the number $N$ of orbits of $G$ on $\Omega$ is the average, over $G,$ of $|\Omega^g|,$ i.e.
$\displaystyle N=\frac{1}{|G|}\sum_{g\in G} |\Omega^g|.$

The proof involves counting of edges in the bipartite graph, with one part being $\Omega,$ and the other being $G;$ a pair $(\omega,g),$ for $\omega\in\Omega$ and $g\in G$ is an edge iff $\omega\in\Omega^g.$ Thus the total number of edges in this graph is $\sum_{g\in G} |\Omega^g|.$
On the other hand, a pair $(\omega,g)$ is an edge iff $g\in G_{\omega},$ the stabiliser of $\omega$ in $G.$ Thus, for a $G$-orbit $\Omega_\omega,$ one has the total number of edges leaving $\Omega_\omega$ equal to $|\Omega_\omega||G_\omega|=|G|.$ Summing over all the $G$-orbits, one obtains the total number of edges $N|G|,$ completing the proof.

Often it is easier to sum over distinct representatives $R=\{g_1,\dots,g_t\}$ of the conjugacy classes $g^G=\{xgx^{-1}\mid x\in G\},$ as $|\Omega^g|=|\Omega^{xgx^{-1}}|.$ So one has
$\displaystyle N=\frac{1}{|G|}\sum_{g\in R } |\Omega^g||g^G|=\sum_{g\in R } \frac{|\Omega^g|}{|C_G(g)|},$
where $C_G(g):=\{x\in G\mid xg=gx\}$ denotes the centraliser of $g$ in $G$ (recall that $|g^G|=|G|/|C_G(g)|$).

Let us see how this can be applied to counting bracelets. For simplicity, let us assume that $n$ is an odd prime. We consider the natural action of the dihedral group $G=D_{2n}$ on the set of $k^n$ words in a $k$-letter alphabet.

We know that $G$ is generated by the cyclic permutation $c=(1,2,\dots,n)$ and the order two “flip” permutation $f=(2,n)(3,n-1)\cdots \left(\tfrac{n+1}{2},\tfrac{n+3}{2}\right).$ The group can be decomposed into the union of cosets $G=\langle c\rangle\cup f\langle c\rangle.$
The only words fixed by a nonidentity element $x=c^m$ of the subgroup $\langle c\rangle$ are the “constant” ones, i.e. words consisting of a single letter, and so $|\Omega^x|=k.$ The coset $f\langle c\rangle$ equals $f^G$ (an unusual coincidence), and so it suffices to find
$\Omega^f.$ For a word $w$ to be in the latter set, it must have $w_2=w_n,$ $w_3=w_{n-1},$ $\dots,$ $w_{(n+1)/2}=w_{(n+3)/2},$ so there are $k^{(n+1)/2}$ possibilities.
Thus we count $N|G|=2nN=|\Omega|+(n-1)|\Omega^c|+n|\Omega^f|=k^n+(n-1)k+n k^{(n+1)/2}.$
E.g. for $n=5,$ $k=3$ one obtains $N=39.$
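The count $N=39$ can be double-checked by brute force over all $3^5$ words, taking one canonical representative (the minimum) per dihedral orbit, alongside the formula just derived:

```python
from itertools import product

n, k = 5, 3  # bracelets of 5 beads in 3 colours, as in the example above

def dihedral_images(w):
    rots = [w[i:] + w[:i] for i in range(n)]
    return rots + [r[::-1] for r in rots]  # all 2n elements of the dihedral group

# brute force: number of orbits = number of distinct canonical representatives
orbits = {min(dihedral_images(w)) for w in product(range(k), repeat=n)}
print(len(orbits))  # 39

# the orbit-counting formula from the text: (k^n + (n-1)k + n k^((n+1)/2)) / (2n)
print((k**n + (n - 1) * k + n * k**((n + 1) // 2)) // (2 * n))  # 39
```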

### Group actions

25 March, 2009

We have already seen abstract groups as permutation groups, or, more generally, as groups of bijections, in a number of contexts:

• $G$ acting on itself by (left) multiplication, when we associated with each $g\in G$ a bijection $L_g:G\to G$ by $L_g(x)=gx$
• $G$ acting on itself by conjugation, when we associated with each $g\in G$ a bijection $C_g:G\to G$ by $C_g(x)=gxg^{-1}$
• $G$ acting on the set of (left) cosets $G/H=\{gH\mid g\in G\}$ of a subgroup $H\leq G,$ when we associated with each $g\in G$ a bijection $M_g:G/H\to G/H$ by $M_g(xH)=gxH$

We have also seen matrix groups at work as permutation groups (e.g. on the set of subspaces), permutation groups becoming matrix groups (e.g. $S_n$ represented by $n\times n$ permutation matrices), etc.
What unites all these examples is the concept of a group action.

Let $G$ be a group and $\Omega$ a set, with $S(\Omega)$ the group of all bijections $\Omega\to\Omega.$ An action of $G$ on $\Omega$ is a homomorphism $\phi: G\to S(\Omega).$

There are many different types of actions, e.g. permutation actions, when $\Omega$ is a finite set and $S(\Omega)$ the symmetric group (of all permutations of $\Omega$),
linear actions, when $\Omega$ is a vector space and $S(\Omega)$ a group of its linear transformations (i.e. a matrix group), etc.

By the homomorphism theorem, $Im(\phi)\cong G/Ker(\phi).$ An action satisfying $Ker(\phi)=\{id\}$ is called faithful (or effective).
For instance, the action of $G$ on itself by (left) multiplication is always faithful, but the action on cosets of an $H\leq G$ need not be faithful. E.g. it will not be faithful if $G$ is commutative and $H\neq\{id\}.$

Orbits and point stabilisers
An action $\phi$ of $G$ on $\Omega$ defines an equivalence relation $\sim$ on $\Omega,$ as follows: $a\sim b$ iff there exists $g\in G$ such that $\phi(g)a=b.$ The equivalence classes of $\sim$ are called orbits (more precisely, orbits of $Im(\phi)$ on $\Omega$).
Let $\omega\in\Omega$ and $G\geq H=\{h\in G\mid \phi(h)\omega=\omega\}.$ Then $\phi(H)$ is called the stabiliser of $\omega$ in $\phi(G),$ and denoted $\phi(G)_\omega.$

Let $\omega'=g\omega$ for some $g\in \phi(G).$ What can be said about $\phi(G)_{\omega'}?$

$\phi(G)_{\omega'}=g\phi(G)_\omega g^{-1}.$

Indeed, the coset $g\phi(G)_\omega$ consists of the elements of $\phi(G)$ that map $\omega$ to $\omega'.$ Thus any $x\in H:=g\phi(G)_\omega g^{-1}$ fixes $\omega',$ and any $y$ that fixes $\omega'$ belongs to $H,$ as $g^{-1}yg\in \phi(G)_\omega.$

When we consider the action $C: G\to S(G)$ of $G$ on itself by conjugation, the orbits are called conjugacy classes.
For instance, the conjugacy classes of $S_n,$ the symmetric group on $n$ letters, can be shown to be in 1-to-1 correspondence with the unordered partitions of $n$ into parts $1\leq p_1\leq p_2\leq\dots\leq p_k,$ where $n=p_1+\dots +p_k,$ as follows:
Let a permutation $\sigma=(i_{1,1},\dots,i_{1,p_1})\dots (i_{k,1},\dots,i_{k,p_k})$ be written in the cyclic notation. Then for any $\tau\in S_n$ one has

$\displaystyle \tau\sigma\tau^{-1}=(\tau(i_{1,1}),\dots,\tau(i_{1,p_1}))\dots (\tau(i_{k,1}),\dots,\tau(i_{k,p_k})),$

and so we can transform $\sigma$ to any permutation with cycles of the same lengths $p_1,\dots,p_k.$
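This correspondence can be verified by brute force in $S_4$; a sketch (permutations are tuples acting on $\{0,1,2,3\}$):

```python
from itertools import permutations

def cycle_type(p):  # sorted cycle lengths, i.e. the partition of n
    seen, t = set(), []
    for i in range(len(p)):
        if i not in seen:
            length, j = 0, i
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            t.append(length)
    return tuple(sorted(t))

def conj(t, s):  # the permutation t s t^{-1}: it sends t(i) to t(s(i))
    r = [0] * len(s)
    for i in range(len(s)):
        r[t[i]] = t[s[i]]
    return tuple(r)

S4 = list(permutations(range(4)))
classes = {frozenset(conj(t, s) for t in S4) for s in S4}
print(len(classes))                     # 5 classes, one per partition of 4
print(sorted(len(c) for c in classes))  # [1, 3, 6, 6, 8]
# each class consists of permutations of a single cycle type
assert all(len({cycle_type(x) for x in c}) == 1 for c in classes)
```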

Equivalent actions
When can two actions $\phi$ and $\psi$ of a group $G$ on, respectively, sets $\Omega$ and $\Sigma$ be viewed as “the same”? Obviously we need a bijection $f:\Omega\to\Sigma$ to exist that also “respects” $G,$ in the sense that $f(\phi(g)\omega)=\psi(g)f(\omega)$ for all $\omega\in\Omega$ and $g\in G.$ Such an $f$ is called a $G$-equivariant bijection, and is said to define an isomorphism of actions.

An action that has just one orbit is called transitive.

Let $\phi$ be a transitive action of $G$ on $\Omega.$ Then $\phi$ is isomorphic to the action on $G/G_\omega=\{gG_\omega\mid g\in G\},$ for any $\omega\in\Omega,$ where we denoted by $G_\omega$ the preimage of $\phi(G)_\omega$ in $G.$

This isomorphism of actions is given by $f:\Omega\to G/G_\omega$ such that $f(\mu)=gG_\omega,$ where $\phi(g)(\omega)=\mu.$ Indeed,

1. $\phi(h)\omega=\mu$ iff $\phi(g^{-1}h)\omega=\omega$ iff $g^{-1}h\in G_\omega$ iff $h\in gG_\omega;$
2. by 1., the coset $gG_\omega$ does not depend on the choice of $g$ with $\phi(g)(\omega)=\mu,$ so $f$ is well-defined, and it is a bijection (surjective by transitivity, injective as distinct cosets move $\omega$ to distinct points);
3. $f$ is $G$-equivariant. Indeed, let $\phi(h)(\omega)=\mu.$ Then for any $g\in G$ one has $f(\phi(g)\mu)=f(\phi(gh)\omega)=(gh)G_\omega=g(hG_\omega)=\phi(g)f(\mu),$ as claimed.

Finally, for a transitive action $\phi$ of $G$ on $\Omega$ we can characterise $Ker(\phi)$ as

$\displaystyle Ker(\phi)=\bigcap_{g\in G} gG_\omega g^{-1},$ for any $\omega\in\Omega.$

Actions of matrix groups (a.k.a. linear actions)
Subgroups of $GL_n(\mathbb{F}),$ the group, with respect to matrix multiplication, of invertible $n\times n$ matrices with entries in a field $\mathbb{F},$ have a number of natural actions that all come one way or another from the vector space $V\cong \mathbb{F}^n.$ Indeed, each $g\in GL_n( \mathbb{F} )$ provides a bijection $V\to V$ by $v\mapsto gv$ for any $v\in V.$ Moreover, such a bijection is even an automorphism of the vector space $V.$ (Automorphisms of vector spaces are defined just as those for groups and rings: these are bijections preserving the corresponding algebraic structure (i.e. operations, etc.).)

Apart from the action $v\mapsto gv$ on the vectors of $V$ (called the natural action (by left multiplication)), there is another action on the vectors, namely $v\mapsto vg^{-1}$ on row vectors (which one can equivalently write as $v\mapsto (g^{-1})^T v$ on column vectors). It is called the contragredient action.
(Note that the map $f$ given by $f(g): v\mapsto vg$ is not always an action, as $f(gg')=f(g')f(g),$ and so $f$ is not a homomorphism, unless the group is commutative.)

Natural and contragredient actions give a simple example of two non-equivalent faithful (i.e. with trivial kernel) actions of one group, say, $G=GL_n(\mathbb{F} )$, with $n\geq 2$ and $|\mathbb{F}|>2,$ on the same set $V.$ Indeed, if $f:V\to V$ were an equivalence of these actions, then $f(gv)=(g^{-1})^T (f(v))$ for all $v\in V$ and $g\in G.$
Let $n=2$ and $f(v)=u$ so that $u$ and $v$ are linearly independent. In the basis $(v,u)$, take $g=\begin{pmatrix} 1&0\\ 0& d\end{pmatrix}$ with $d\neq 0,1.$ Then $gv=v$ and $(g^{-1})^T=\begin{pmatrix} 1&0\\ 0& d^{-1}\end{pmatrix}.$ We see that $f(gv)=f(v)=u\neq d^{-1}u=(g^{-1})^T(u),$ a contradiction.
If, on the other hand, $f(v)$ and $v$ are always linearly dependent,
we take $g=\begin{pmatrix} 1&1\\ 0& 1\end{pmatrix}$ in the standard basis $(v,w)$ and see that $f(gv)=f(v)=\alpha v,$ while $(g^{-1})^T (f(v))=\alpha (g^{-1})^T (v)=\alpha(v-w),$ for some $0\neq \alpha\in\mathbb{F},$ again a contradiction. The argument showing non-equivalence in the case $n\geq 3$ is similar. For $n=1$ these two actions are isomorphic, as $GL_1(\mathbb{F})$ is commutative.

Another important class of actions is the action of $G\leq GL_n(\mathbb{F} )$ on subspaces of $V.$ For simplicity, let us consider the action on the set $\Gamma_k(V)$ of subspaces of dimension $k.$ Given a $k$-dimensional subspace $U\subset V$ and $g\in G,$ we define $g(U)=\{gu\mid u\in U\}.$ Then $g(U)$ is a $k$-dimensional subspace, as can be seen by choosing a basis $u_1,\dots, u_k$ of $U$ and observing that $gu_1,\dots,gu_k$ is a basis of $g(U).$ Thus $g$ induces a bijection on $\Gamma_k(V).$
When $G$ fixes a $U\in\Gamma_k(V),$ we can choose a basis in such a way that $G$ becomes a group of $2\times 2$ block matrices, discussed earlier here. In this case $G$ has a normal subgroup acting trivially on $U,$ that is the kernel of the action of $G$ on the quotient space $V/U.$

The centre $Z=Z(GL_n(\mathbb{F} ))$ of $GL_n(\mathbb{F} )$ acts trivially on $\Gamma_k(V),$ as it preserves every subspace (indeed, any $g\in Z$ equals $\lambda I,$ with $0\neq\lambda\in\mathbb{F}$).

Projective line.
The simplest nontrivial $\Gamma_k(V)$ is $\Gamma_1(V)$ when $n=2.$ This set is called projective line and denoted $\mathbb{F}P^1,$ and can be viewed as the set of pairs $\{(1:\alpha)\mid \alpha\in \mathbb{F}\}\cup\{(0:1)\},$ as follows: every 1-dimensional subspace $(x:y)$ of $V$ can be described by equation $y=\alpha x$ or by equation $x=0.$
To understand the action of an element $g$ of $G=GL_2(\mathbb{F})$ on $\mathbb{F}P^1,$ consider $g=\begin{pmatrix} a& b\\ c& d\end{pmatrix}$ and $g((x:y))=(ax+by:cx+dy),$ just as if we multiplied a vector by $g,$ and then normalise either to $\left(1:\frac{cx+dy}{ax+by}\right)$ when $ax+by\neq 0,$ or to $(0:1)$ when $ax+by=0.$
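This normalisation is easy to code; a sketch over the arbitrarily chosen field $\mathbb{Z}_5$ (so $\mathbb{F}P^1$ has $5+1=6$ points):

```python
p = 5  # work over the field Z_5

points = [(1, a) for a in range(p)] + [(0, 1)]  # normalised representatives (x : y)

def act(g, pt):
    a, b, c, d = g
    x, y = pt
    xx, yy = (a * x + b * y) % p, (c * x + d * y) % p
    if xx != 0:                              # normalise to (1 : yy/xx) ...
        return (1, yy * pow(xx, -1, p) % p)  # pow(xx, -1, p): inverse mod p
    return (0, 1)                            # ... or to (0 : 1) when xx = 0

g = (1, 1, 0, 1)  # the matrix [[1, 1], [0, 1]]; need ad - bc nonzero mod p
image = [act(g, pt) for pt in points]
print(image)
print(sorted(image) == sorted(points))  # True: g permutes the projective line
```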

### Strongly regular graphs and generalised quadrangles

16 March, 2009

A generalised quadrangle with $s+1$ points on each line and $t+1$ lines through each point (a $GQ(s,t)$, for short) has a lot of “regularity” – and as we learned from a previous post, any finite “non-degenerate” GQ is a $GQ(s,t)$ for some $s,t\geq 2.$ This regularity allows for the use of quite a bit of linear algebra and algebraic graph theory in investigating them.
One considers the collinearity graph of the GQ $(P,L).$

The collinearity graph of a GQ $(P,L)$ is the graph with the vertex set $P,$ two vertices $u,v$ being adjacent iff they are distinct and collinear, i.e. if there exists $\ell\in L$ such that $u,v\in \ell.$

For a $GQ(s,t),$ such a graph is strongly regular, see wikipedia for a good short intro.
Basically, a graph $\Gamma=(V,E)$ is strongly regular if it is

• regular, i.e. each vertex is adjacent to $k$ other vertices;
• for each edge $(u,v)\in E,$ there are exactly $\lambda$ vertices $w\in V$ such that $(u,w),(v,w)\in E;$
• for each non-edge $(u,v)\in \binom{V}{2}-E,$ there are exactly $\mu$ vertices $w\in V$ such that $(u,w),(v,w)\in E.$

One says that when $\Gamma$ is as above, one has an $srg(|V|,k,\lambda,\mu).$ One famous example of an srg is the Petersen graph. It is an $srg(10,3,0,1).$

Exercise.

1. Prove that the collinearity graph of a $GQ(s,t)$ $(P,L)$ is an $srg(|P|,s(t+1),s-1,t+1).$
2. Compute $|P|$ as a function of $s$ and $t.$ (namely, it is $(s+1)(st+1)$)

An important tool in studying srg’s, and graphs in general, is the

Adjacency matrix of a graph $\Gamma=(V,E)$ is the $|V|\times |V|$ matrix $A=A(\Gamma)$ that has $A_{uv}=1$ for each $(u,v)\in E,$ and $A_{uv}=0$ otherwise.

In particular $A=A^T,$ and the $v$-th row sum is the number of vertices adjacent to $v.$ This means that for a $k$-regular graph the all-1 vector is an eigenvector of $A$ with eigenvalue $k.$ In fact, this is the largest eigenvalue, and for connected graphs its eigenspace is 1-dimensional. For srgs, much more can be said.

For a connected $\Gamma$ an $srg(v,k,\lambda,\mu),$ the adjacency matrix $A$ has at most 3 distinct eigenvalues, namely $k$ and $(\lambda-\mu\pm\sqrt{(\lambda-\mu)^2+4(k-\mu)})/2.$

This follows from the fact that multiplication of 0-1 matrices boils down to computing certain combinatorial parameters of underlying graphs. E.g. $(A^2)_{uv}$ is the number of length 2 walks from $u$ to $v.$ In our case we have

$\displaystyle A^2=kI+\lambda A+\mu (J-I-A),$

where $I$ is the identity matrix and $J$ the all-1 matrix. Any eigenvector $z$ with eigenvalue $\theta\neq k$ must be orthogonal to the $k$-eigenvector, in particular $Jz=0.$ Thus

$\displaystyle A^2z=kIz+\lambda Az-\mu (I+A)z,$

and this translates into $\theta^2=k+\lambda\theta-\mu-\mu\theta,$

a quadratic equation! It remains to compute its solutions to establish the claim.
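The matrix identity above can be checked numerically on the Petersen graph, an $srg(10,3,0,1)$ realised as the Kneser graph on 2-subsets of a 5-set; a sketch:

```python
from itertools import combinations

# Petersen graph as the Kneser graph K(5,2): vertices are 2-subsets of {0..4},
# adjacent iff disjoint; this is an srg(10, 3, 0, 1).
verts = list(combinations(range(5), 2))
n = len(verts)
A = [[1 if set(u).isdisjoint(v) else 0 for v in verts] for u in verts]
k, lam, mu = 3, 0, 1

# check A^2 = k I + lambda A + mu (J - I - A) entrywise
A2 = [[sum(A[i][l] * A[l][j] for l in range(n)) for j in range(n)] for i in range(n)]
rhs = [[k * (i == j) + lam * A[i][j] + mu * (1 - (i == j) - A[i][j])
        for j in range(n)] for i in range(n)]
print(A2 == rhs)  # True
```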

Moreover, one can compute the multiplicities of the eigenvalues of $A.$
Indeed, we know that $k$ has multiplicity $1,$ and so the other two eigenvalues have multiplicities $m_1$ and $m_2$ that satisfy

• $m_1+m_2=v-1$ (by just discussed) and
• $m_1\theta_1+m_2\theta_2+k=0$ (as the trace of $A$ is 0.)

We can solve these two linear equations to obtain

$m_{1,2}=\left(v-1\mp\frac{2k+(v-1)(\lambda-\mu)}{\sqrt{(\lambda-\mu)^2+4(k-\mu)}}\right)/2.$

These are powerful conditions that rule out large sets of tuples $(v,k,\lambda,\mu),$ as $m_1,m_2$ must be nonnegative integers!

For an srg arising from a $GQ(s,t),$ one can compute that the eigenvalues of $A$ are $s(t+1),$ $s-1,$ and $-t-1.$

This gives one, by computing the multiplicities, that for $s=2$ one has $t\in\{1,2,4,10\}.$ With more work (one needs e.g. the Krein bound) the case $t=10$ can be ruled out.
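This feasibility check is easy to automate; the sketch below reproduces $t\in\{1,2,4,10\}$ for $s=2$ from the multiplicity conditions alone (the Krein bound, which then removes $t=10,$ is not checked here):

```python
from math import isqrt

s = 2
feasible = []
for t in range(1, 200):
    # srg parameters of the collinearity graph of a GQ(s, t)
    v, k = (s + 1) * (s * t + 1), s * (t + 1)
    lam, mu = s - 1, t + 1
    disc = (lam - mu) ** 2 + 4 * (k - mu)
    r = isqrt(disc)
    if r * r != disc:   # the square root must be an integer here
        continue
    num = 2 * k + (v - 1) * (lam - mu)
    if num % r:         # num / sqrt(disc) must be an integer
        continue
    m1, m2 = v - 1 - num // r, v - 1 + num // r  # twice the multiplicities
    if m1 >= 0 and m2 >= 0 and m1 % 2 == 0 and m2 % 2 == 0:
        feasible.append(t)
print(feasible)  # [1, 2, 4, 10]
```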

### Cosets and normal subgroups

11 March, 2009

Just as in the case of abelian groups, each subgroup $H\leq G$ gives rise to a coset decomposition of $G,$ or, more precisely, to two decompositions, into left cosets i.e.

$\displaystyle \{gH\mid g\in G\},$

and right cosets, i.e.

$\displaystyle \{Hg\mid g\in G\}.$

It is proved by the same argument as in the case of abelian groups that one indeed has decompositions of $G$ in this way.
Unlike in the case of abelian groups, these two decompositions need not be the same: an example is given by $S_3\geq H\cong S_2.$
The following is an important and elementary implication of the fact that all the cosets of $H$ have the same cardinality:

Let $|G|<\infty,$ and $H\leq G.$ Then $|H|$ divides $|G|.$

An extremely important case (and, in fact, the only case) when these two decompositions are the same is when $H=Ker(\phi),$ where $\phi: G\to F$ is a group homomorphism. (Just as in the case of abelian groups, one can show that $Ker (\phi)\leq G$ and $Im(\phi)\leq F.$)
We need to show that for any $g\in G$ and $k\in Ker(\phi)$ one has $gk=k'g$ for some $k'\in Ker(\phi).$ Set $k'=gkg^{-1};$ then $\phi(k')=\phi(g)\cdot 1_F\cdot \phi(g)^{-1}=1_F,$ so $k'\in Ker(\phi),$ and $gk=k'g.$ Hence $gK=Kg$ for $K=Ker(\phi).$

A subgroup $H\leq G$ satisfying $gH=Hg$ for all $g\in G$ is called normal (notation: $H\unlhd G,$ or $H\lhd G,$ when $H\neq G.$)

Often it is more convenient to work with the equivalent condition for normality, namely that $gHg^{-1}=H$ for all $g\in G.$

Examples of normal subgroups.

• The subgroup generated by $\begin{pmatrix}1&2&3\\3&1&2\end{pmatrix}$ in $S_3$ is normal.
• $H\leq G=GL(n,\mathbb{F})$ defined by $H=\{x\in G\mid\det x=1\}$ is normal.
• $H\leq G$ of index 2 (i.e. the one that has exactly 2 left cosets in $G$) is normal.

The 1st is easy to establish directly (it also follows from the 3rd). The 2nd follows from the identity $\det x=\det (gxg^{-1}).$
The 3rd follows from the equality $G=H\sqcup gH=H\sqcup Hg',$ and an observation that $g\in Hg',$ so $Hg'=Hg.$

Cosets play an important role in constructing “easier” groups from “complicated” ones, by the quotient group construction. The difference with the case of abelian groups is that only normal subgroups can be kernels of homomorphisms.

Let $H\unlhd G.$ Then $G/H:=\{gH\mid g\in G\}$ is a group with operation $(gH)(g'H)=(gg')H,$ and $g\mapsto gH$ a group homomorphism $\phi: G\to G/H$ with the kernel $Ker\phi=H.$

We need to use $gH=Hg$ to establish that the multiplication in $G/H$ is well-defined: $(gH)(g'H)=(gH)(Hg')=g(HH)g'=g(Hg')=(gg')H.$
Then $G/H$ is a group with the identity element $H,$ as associativity in $G/H$ follows from the associativity in $G,$ and $(gH)^{-1}=(g^{-1})H.$
It can be readily checked that $\phi$ is a homomorphism, and that $H= Ker\phi.$ Indeed, $\phi(gg')=(gg')H=(gH)(g'H).$ Also, $\phi(h)=hH=H$ for $h\in H,$ and so $H\leq Ker\phi.$ On the other hand if $g\in G-H$ then $\phi(g)=gH\neq H,$ and so $H\geq Ker\phi.$
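A tiny computational illustration with $G=S_3$ and the normal subgroup $H=A_3$ of even permutations; the well-definedness of the coset multiplication is the key point being checked:

```python
from itertools import permutations

def compose(a, b):  # (a b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(len(a)))

def is_even(p):     # parity via counting inversions
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return inv % 2 == 0

G = list(permutations(range(3)))
H = frozenset(p for p in G if is_even(p))   # A_3, normal in S_3

cosets = {frozenset(compose(g, h) for h in H) for g in G}
print(len(cosets))  # 2 = |G| / |H| = 6 / 3

# (gH)(g'H) = (gg')H must not depend on the chosen representatives
for c1 in cosets:
    for c2 in cosets:
        products = {frozenset(compose(compose(a, b), h) for h in H)
                    for a in c1 for b in c2}
        assert len(products) == 1
print("multiplication is well-defined")
```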

Matrix groups
Subgroups of $GL_n(\mathbb{F}),$ the group, with respect to matrix multiplication, of invertible $n\times n$ matrices with entries in a field $\mathbb{F},$ provide a rich playground for various examples of normal subgroups and quotient groups. To this end, we will work with block matrices, i.e. we will partition our matrices into rectangular blocks of appropriate size, so that this block decomposition is preserved under the matrix multiplication. The simplest case is the following partition into 4 blocks (i.e. $2\times 2$ block matrices).

Given an $n\times n$ matrix $A$ and a positive integer $p<n,$ we can view $A$ as $A=\begin{pmatrix} A_{11}&A_{12}\\A_{21} & A_{22}\end{pmatrix},$ where $A_{11}$ is a $p\times p$ matrix, $A_{22}$ an $(n-p)\times (n-p)$ matrix (and accordingly $A_{12}$ and $A_{21}^T$ are of size $p\times (n-p)$).

Let us fix $n$ and $p.$
First, we need to convince ourselves (an easy exercise in matrix multiplication) that the following multiplication law works for two block matrices $A$ and $B$:
$\displaystyle AB=C=\begin{pmatrix} A_{11} B_{11}+A_{12}B_{21}& A_{11}B_{12}+A_{12}B_{22}\\ A_{21}B_{11}+A_{22}B_{21}&A_{21}B_{12}+A_{22}B_{22}\end{pmatrix}$
It should of course look very familiar, as this is exactly how two $2\times 2$ matrices are multiplied.
When $A_{21}=B_{21}=0,$ we see that the shape is preserved, i.e. $C_{21}=0.$ More precisely
$\displaystyle AB=C=\begin{pmatrix} A_{11} B_{11}& A_{11}B_{12}+A_{12}B_{22}\\ 0 &A_{22}B_{22}\end{pmatrix}$
We see that the blocks with indices 11 and 22 behave as if “nothing around them matters”, as if they sit in their own groups $GL_p(\mathbb{F})$ and $GL_{n-p}(\mathbb{F})$ (that is, assuming the matrices we talk about are invertible). We can formalise this observation as follows.

Let $H$ be a subgroup of $GL_n(\mathbb{F})$ consisting of $2\times 2$ block matrices, with diagonal blocks of sizes $p\times p$ and $(n-p)\times (n-p),$ and such that $A_{21}=0$ for all $A\in H.$ Then

1. the maps
$\phi_1 : H\to GL_p(\mathbb{F}):\quad A\mapsto A_{11}$ and
$\phi_2 : H\to GL_{n-p}(\mathbb{F}):\quad A\mapsto A_{22}$
are group homomorphisms.
2. $Ker(\phi_k)=\{A\in H\mid A_{kk}=I\}$ for $k=1,2.$

The reader is encouraged to provide a proof.
More generally, this result can be interpreted in terms of the actions of $H$ on the $n$-dimensional vector space $V$ over $\mathbb{F}.$

Exercise. Prove that the intersection of two normal subgroups of a group is a normal subgroup. Based on this, describe $Ker(\phi_1)\cap Ker(\phi_2),$ and derive that $\{A\in H\mid A_{11}=I, A_{22}=I\}$ is a normal subgroup of $H.$

Note that $Im(\phi_k)$ (where $k=1$ or $k=2$) cannot be obtained by simply setting the remaining off-diagonal block of $H$ to 0, and the remaining diagonal block to the identity matrix. For instance, let $H$ be generated by the matrix $X=\begin{pmatrix} \mathbf{i}&0\\ 0& -1\end{pmatrix},$ where $\mathbf{i}=\sqrt{-1}.$ Then $H$ is a cyclic group of order 4, and it satisfies the conditions of the above lemma with $n=2,$ $p=1.$ It is easy to see that $\phi_1$ is an isomorphism, and $\phi_2$ is a homomorphism with the kernel generated by $X^2.$

Let us denote $K_k=Ker(\phi_k),$ and $K_{12}=K_1\cap K_2.$
As $|H|=|K_k||Im(\phi_k)|,$ by the usual argument involving decomposition of $H$ into the cosets of $K_k,$ computing $|H|$ for $\mathbb{F}= \mathbb{Z}_q,$ for $q$ prime, can be accomplished by computing $|K_k|$ and $|Im(\phi_k)|$ separately. In turn, $|K_k|=|K_{12}||Im(\phi_{3-k})|,$ so we can simplify the task of computing $|K_k|,$ too.

To illustrate this principle, let us compute the order of
$\displaystyle H=\{A\in GL_n( \mathbb{Z}_q)\mid A_{21}=0\}.\qquad\qquad (1)$
First we derive the following:

$|GL_m( \mathbb{Z}_q)|=(q^m-1)(q^m-q)(q^m-q^2)\dots (q^m-q^{m-1}),$ where $q$ is prime.

Indeed, we can fill in the 1st row of an $m\times m$ matrix in $q^m-1$ ways (everything goes, except all zeros). Fixing this 1st row vector $v_1,$ we can fill in the 2nd row with the entries of any vector $v_2$ which is linearly independent of $v_1$ (to get an invertible matrix), i.e. lies outside of the subspace $V_1$ spanned by $v_1.$ As $|V_1|=q,$ we get $q^m-q$ ways to choose $v_2.$ Now we fix $v_2$ as well, and fill in the 3rd row with the entries of any vector $v_3$ which is linearly independent of $v_1$ and $v_2,$ i.e. lies outside of the subspace $V_2$ spanned by $v_1$ and $v_2.$ As $|V_2|=q^2,$ we get $q^m-q^2$ ways to choose $v_3.$
In general, when filling the $(\ell+1)$-th row, we have $q^m-q^\ell$ choices, giving us the formula above.

Now, we claim that $|K_{12}|=q^{p(n-p)}.$ Indeed, $K_{12}$ consists of $A$ such that $A_{21}=0,$ $A_{11}=I,$ $A_{22}=I,$ whereas we can fill in the entries of $A_{12}$ without any strings attached – the result will be invertible. Thus we have
$|H|=(q^p-1)(q^p-q)\dots (q^p-q^{p-1})\, q^{p(n-p)}\, (q^{n-p}-1)(q^{n-p}-q)\cdots (q^{n-p}-q^{n-p-1}).$
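The formula can be sanity-checked by brute force for tiny parameters; below $n=3,$ $p=1,$ $q=2$ are arbitrary small choices (both printed counts should be 24):

```python
from itertools import product

n, p, q = 3, 1, 2  # block upper-triangular matrices in GL_3(Z_2)

def det_mod(A, q):  # Laplace expansion along the first row; fine for tiny A
    if len(A) == 1:
        return A[0][0] % q
    return sum((-1) ** j * A[0][j] *
               det_mod([row[:j] + row[j + 1:] for row in A[1:]], q)
               for j in range(len(A))) % q

count = 0
for entries in product(range(q), repeat=n * n):
    A = [list(entries[i * n:(i + 1) * n]) for i in range(n)]
    if all(A[i][j] == 0 for i in range(p, n) for j in range(p)) \
            and det_mod(A, q) != 0:  # A_21 = 0 and A invertible mod q
        count += 1

def gl_order(m, q):  # |GL_m(Z_q)| = (q^m - 1)(q^m - q)...(q^m - q^(m-1))
    result = 1
    for i in range(m):
        result *= q ** m - q ** i
    return result

formula = gl_order(p, q) * q ** (p * (n - p)) * gl_order(n - p, q)
print(count, formula)  # 24 24
```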

Exercise. Compute $|H'|$ for $H'=\{A\in H\mid \det A=1\}$ and $H$ as in (1).
(Hint: use the fact that $\det$ is a group homomorphism.)