Archive for the ‘undergrad maths’ Category

Experimental Sage-based Mathematics course

7 August, 2011

We are about to embark upon teaching a 2nd year undergraduate course Experimental Mathematics, which will cover computer algebra basics and refresh concepts from 1st year linear algebra, calculus, and combinatorics, with a few more advanced things thrown in. The course will be based on Sage, with the actual software running on dedicated servers and accessible via the Sage web notebook interface (i.e. basically nothing but a web browser running on the student’s computer/laptop/ipad, etc).

Given the enrollment of about 120, this will be interesting…

PS. Most students did not appreciate the freedom given, and complained, complained, complained…

Integer flows and graph theory applications

14 October, 2010

Here we are continuing the story of the Ford-Fulkerson algorithm. Assume that {c:D\rightarrow {\mathbb Z}_+}, i.e. the capacities are all integers. Then, starting the Ford-Fulkerson algorithm from the zero flow, we see that at each iteration {\tau(P)\in {\mathbb Z}_+}. Thus the optimal flow will have integer values on each arc (such flows are called integral). We have

Corollary 3 A maxflow problem with integer capacities has an integral optimal solution.

This has various graph-theoretic applications. We can convert an undirected graph {\Gamma=(V,E)} into a digraph by replacing each edge {\{x,y\}} by a pair of opposite arcs {xy} and {yx}, set all arc capacities to 1, and see what this implies for a pair of non-adjacent vertices {s\neq t\in V}. Running the Ford-Fulkerson algorithm, we obtain an integral maximum {s-t} flow {f}, of value {k:={\mathrm val}(f)}.

We also know that there is, by the maxflow-mincut Theorem, an {s-t} cut {(S,V\setminus S)} s.t. {{\mathrm cap}(S,V\setminus S)={\mathrm val}(f)}. Instead of capacity, it is more appropriate to talk here about the size {|(S,V\setminus S)|} of the cut, i.e. the number of edges crossing from one part of the cut to another. Obviously, by definition of the capacity, we have {{\mathrm cap}(S,V\setminus S)=|(S,V\setminus S)|}.

The flow {f} can be viewed as a collection {D_f} of {k} edge-disjoint {\Gamma}-paths from {s} to {t}, as can be seen by induction on {{\mathrm val}(f)}: indeed, if {{\mathrm val}(f)=1} then by construction (via the Ford-Fulkerson algorithm) {D_f} is a path. Suppose now this statement holds for all {k<{\mathrm val}(f)}. As {D_f} induces a connected subgraph, it contains a shortest path {P}. Then, {P} is cut by a minimum cut {(S,V\setminus S)}—otherwise the cut does not cut! Removing all the edges of {P} from {\Gamma}, we thus obtain a graph {\Gamma'} in which the cut {(S,V\setminus S)} has size {{\mathrm val}(f)-\ell}, where {\ell>0} is the number of edges of {P} crossing the cut. The flow {f'} corresponding to {D_f\setminus P} has value {{\mathrm val}(f)-1} and is feasible in {\Gamma'}, so {{\mathrm val}(f)-1\leq {\mathrm val}(f)-\ell}, i.e. {\ell=1}: {(S,V\setminus S)} cuts just 1 edge in {P}. By induction {\Gamma'} contains {k-1} edge-disjoint {\Gamma}-paths; together with {P} this gives {k} such paths, and we obtain

Corollary 4 (Menger Theorem (for edge-disjoint paths)) The minimum size of an {s-t} cut in {\Gamma} equals the maximal number of edge-disjoint {s-t} paths in {\Gamma}.
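This maxflow view of edge-disjoint paths is easy to try on a computer. Below is a minimal sketch in plain Python (rather than Sage; the function name and the encoding of the graph as an edge list are mine): it builds the pair-of-opposite-arcs digraph with unit capacities, as above, and runs a breadth-first Ford-Fulkerson.

```python
from collections import deque

def max_edge_disjoint_paths(edges, s, t):
    """Maximum number of edge-disjoint s-t paths in an undirected graph,
    computed as a maxflow with all capacities 1 (cf. Corollary 4)."""
    # replace each edge {x,y} by the pair of opposite arcs xy and yx
    cap, adj = {}, {}
    for x, y in edges:
        for u, v in ((x, y), (y, x)):
            cap[(u, v)] = 1
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    flow = dict.fromkeys(cap, 0)  # net flow; flow(v,u) = -flow(u,v)

    def bfs():  # breadth-first search for an augmenting path
        parent = {s: None}
        Q = deque([s])
        while Q:
            u = Q.popleft()
            for v in adj.get(u, ()):
                if v not in parent and cap[(u, v)] - flow[(u, v)] > 0:
                    parent[v] = u
                    if v == t:
                        return parent
                    Q.append(v)
        return None

    value = 0
    while True:
        parent = bfs()
        if parent is None:
            return value
        v = t
        while v != s:           # push one unit of flow along the path
            u = parent[v]
            flow[(u, v)] += 1
            flow[(v, u)] -= 1   # cancellation on the opposite arc
            v = u
        value += 1
```

For instance, on the graph with edges {(0,1),(1,3),(0,2),(2,3),(0,3)} and {s=0}, {t=3} it finds 3 edge-disjoint paths, matching the minimum cut (the three edges at vertex 0).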

There is a graph transformation that allows one to prove Menger Theorem for internally disjoint {s-t} paths (two paths between {s} and {t} are called internally disjoint if their only common vertices are {s} and {t}). Our goal is to prove the following.

Theorem 5 (Menger Theorem (for internally disjoint paths)) The minimum size of an {s-t} cut in {\Gamma} equals the maximal number of internally disjoint {s-t} paths in {\Gamma}.

Proof: Assume {s} and {t} are not adjacent. Replace each {v\in V\setminus\{s,t\}} by a pair of vertices {v_+}, {v_-}, joined by an arc {v_+ v_-}. Then replace each edge {\{x,y\}} of {\Gamma} by a pair of opposite arcs {xy} and {yx}. Replace every arc {xy} such that {s,t\not\in \{x,y\}} by the arc {x_{-}y_{+}}, each arc {xy} with {x=s} or {x=t} by {xy_+}, and each arc {xy} with {y=s} or {y=t} by {x_-y}. Now each {s-t} path {s,x^1,\dots,x^{k-1},t} corresponds to the path {s,x^1_+,x^1_-,x^2_+,\dots,x^{k-1}_+,x^{k-1}_-,t} in the new graph, and the opposite holds, too: each {s-t} path in the new graph has this form.

Assign capacities 1 to the arcs {v_+ v_-}, and infinite capacities (it suffices to take this “infinity” to be bigger than, say, {10|E\Gamma|}) to the rest of the arcs. Due to the choice of capacities, a minimum {s-t} cut {(S,V\setminus S)} will satisfy {{\mathrm cap}(S,V\setminus S)\leq |E\Gamma|}, and thus the only arcs from {S} to {V\setminus S} will be of type {v_+v_-}. Say, there will be {k} such arcs. By (a straightforward generalisation of) Menger Theorem for edge-disjoint paths (to directed graphs), there will be exactly {k} arc-disjoint {s-t} paths in this digraph. By its construction, they cannot share a vertex {v_-} (resp. {v_+}), as there is only one outgoing (resp. incoming) arc on it. Thus these paths are internally disjoint, and the corresponding {k} paths in {\Gamma} are internally disjoint, too. \Box
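The vertex-splitting construction of this proof can be coded directly. Here is a sketch in plain Python (my own encoding: {v_+}, {v_-} are represented by the pairs (v,'+'), (v,'-'); it assumes, as in the theorem, that {s} and {t} are non-adjacent):

```python
from collections import deque

def max_internally_disjoint_paths(edges, s, t):
    """Maximum number of internally disjoint s-t paths (Theorem 5),
    via vertex splitting and a maxflow with capacities 1 on v_+ v_-."""
    INF = 10 * len(edges) + 10          # plays the role of "infinity"
    cap = {}
    def add_arc(u, v, c):
        cap[(u, v)] = cap.get((u, v), 0) + c
        cap.setdefault((v, u), 0)       # reverse arc, for the residual graph
    inner = {x for e in edges for x in e} - {s, t}
    for v in inner:                     # split v into the arc v_+ -> v_-
        add_arc((v, '+'), (v, '-'), 1)
    for x, y in edges:
        for u, v in ((x, y), (y, x)):   # arc u -> v of the doubled digraph
            uu = u if u in (s, t) else (u, '-')
            vv = v if v in (s, t) else (v, '+')
            add_arc(uu, vv, INF)
    adj = {}
    for (u, v) in cap:
        adj.setdefault(u, []).append(v)
    flow = dict.fromkeys(cap, 0)
    value = 0
    while True:                         # breadth-first Ford-Fulkerson
        parent = {s: None}
        Q = deque([s])
        while Q and t not in parent:
            u = Q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] - flow[(u, v)] > 0:
                    parent[v] = u
                    Q.append(v)
        if t not in parent:
            return value
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        tau = min(cap[a] - flow[a] for a in path)   # tolerance of the path
        for (u, v) in path:
            flow[(u, v)] += tau
            flow[(v, u)] -= tau
        value += tau
```

On the graph with edges {(0,1),(1,2),(0,3),(3,2),(2,4)} and {s=0}, {t=4} the answer is 1: both {0-t} routes pass through vertex 2, whose split arc has capacity 1.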

One more classical application is a maxflow-based algorithm to construct a maximum matching in a bipartite graph {\Gamma=(V,E)}, with the bipartition {V=V_s\cup V_t}. We add two more vertices {s} and {t} to it, and connect {x\in\{s,t\}} to each {v\in V_x} by an edge. In the resulting new graph {\Gamma'} we solve the {s-t} maxflow problem with all capacities set to 1. More precisely, we orient each edge so that each {\{v_s,v_t\}}, {\{s,v_s\}}, {\{v_t,t\}} becomes an arc {v_sv_t}, respectively, {sv_s}, respectively {v_tt}.

The Ford-Fulkerson algorithm, when started from the zero flow, will produce an integral maximum flow {f}, of value {k:={\mathrm val}(f)}. As we know from Menger’s theorem, we will have {k} internally disjoint {s-t} paths. They will induce a matching {M_f} on {\Gamma} (just look at the {\{v_s,v_t\}} edges only). On the other hand, any matching {M} on {\Gamma} can be used to construct an {s-t} flow of value {|M|} in {\Gamma'} with all capacities 1. Hence {M_f} is a maximum matching in {\Gamma}.
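This matching-via-maxflow reduction is a few lines of Python. A sketch (my own encoding: edges are pairs (x, y) with x in {V_s} and y in {V_t}, and the labels 's', 't' are assumed not to occur among the vertices):

```python
from collections import deque

def max_bipartite_matching(V1, V2, edges):
    """Maximum matching in a bipartite graph, as an integral maxflow
    in the auxiliary network with all capacities 1, as described above."""
    S, T = 's', 't'                 # the two extra vertices
    cap = {}
    for v in V1:
        cap[(S, v)] = 1
    for v in V2:
        cap[(v, T)] = 1
    for x, y in edges:              # orient each edge from V1 to V2
        cap[(x, y)] = 1
    flow = dict.fromkeys(cap, 0)
    adj = {}
    for (u, v) in cap:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)   # backward (residual) direction

    def residual(u, v):
        if (u, v) in cap:
            return cap[(u, v)] - flow[(u, v)]
        return flow.get((v, u), 0)        # undo flow on the opposite arc

    while True:                     # breadth-first search for an augmenting path
        parent = {S: None}
        Q = deque([S])
        while Q and T not in parent:
            u = Q.popleft()
            for v in adj.get(u, ()):
                if v not in parent and residual(u, v) > 0:
                    parent[v] = u
                    Q.append(v)
        if T not in parent:
            break
        v = T
        while v != S:               # augment by one unit along the path
            u = parent[v]
            if (u, v) in cap:
                flow[(u, v)] += 1
            else:
                flow[(v, u)] -= 1
            v = u
    return [(x, y) for (x, y) in edges if flow[(x, y)] == 1]
```

Note that the backward ("undo") arcs are exactly what lets an augmenting path re-route an earlier matching edge, as in the classical alternating-path picture.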

Augmenting paths for Maxflow and Mincut

11 October, 2010

We discussed how to formulate the MAXFLOW problem for networks, i.e. arc-weighted digraphs {(V,D,c)} with two specific vertices, source {s} and sink {t}, and arc capacities {c:D\rightarrow{\mathbb R}_+}, and treat it via linear programming. An alternative approach is via augmenting paths. The paths we talk about will disregard the arc directions in {D}, i.e. they will be {s}–{t} paths in the underlying undirected graph. (When {e\in P} has the direction opposite to the arc in {D}, we write {e\in P\setminus D}.)

Definition 1 Given an {s}–{t} flow {f:D\rightarrow {\mathbb R}_+}, an {f}-augmenting path is an {s}–{t} path {P} in the underlying graph, such that for each {e\in P}

  • if {e\in D} then {f(e)<c(e)};
  • if {e\not\in D} then {f(e)>0}.

The tolerance of {P} is

\displaystyle \tau(P):=\min(\min_{e\in P\setminus D} f(e), \min_{e\in P\cap D}c(e)-f(e)).

Note that the definition of tolerance extends unchanged to any path starting at {s}.

We can use {P} to improve {f}, as follows.

Lemma 2 For each {e\in D}, let

\displaystyle f'(e)=\begin{cases} f(e) & e\not\in P\\ f(e)-\tau(P) & e\in P\setminus D\\ f(e)+\tau(P) & e\in P\cap D \end{cases}.

Then {f'} is an {s}–{t} flow in {(V,D,c)}, of value greater than that of {f} by {\tau(P)}.

Proof: It is immediate from the definition of {\tau(P)} that {0\leq f'(e)\leq c(e)} for any {e\in D}. We still need to check the flow conservation constraints

\displaystyle \sum_{x: xv\in D} f'((xv))=\sum_{x: vx\in D} f'((vx))\quad\forall v\in V\setminus\{s,t\}.

If {P} visits {v\in V\setminus\{s,t\}} then there are two edges of {P} at {v}, say {xv} and {vy}, that are affected when we change {f} to {f'}, and the changes in flow values cancel each other in each of the four cases (recall that {P} is a path in the underlying undirected graph). Finally, the value of {f'} is the net outflow from {s}, and the latter is greater than that of {f} by {\tau(P)}. \Box
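Lemma 2 translates into just a few lines of code. In this Python sketch (my own encoding) an augmenting path is given as a list of pairs (e, forward), where forward is True exactly when {e\in P\cap D}:

```python
def augment(f, c, path):
    """Improve a flow along an augmenting path, as in Lemma 2.
    `path` lists the edges of P in order, each as (e, forward): forward=True
    means e is traversed in its arc direction (e in P∩D), False means e in P\\D."""
    # the tolerance tau(P): residual capacity forward, current flow backward
    tau = min(c[e] - f[e] if forward else f[e] for e, forward in path)
    assert tau > 0, "P is not f-augmenting"
    g = dict(f)                      # the improved flow f'
    for e, forward in path:
        g[e] += tau if forward else -tau
    return g, tau
```

For instance, with unit capacities, a flow of value 1 along {s\to b\to a\to t} and the augmenting path {s\to a}, backward along {ba}, then {b\to t}, one augmentation yields the value-2 flow, exactly as in the proof.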

This suggests finding a maximum flow by repeatedly finding an augmenting path and improving the current flow with it, till no augmenting path is available (this is, for instance, how the Ford-Fulkerson algorithm works). For this to succeed, we obviously need to prove that {f} is maximum if and only if there is no {f}-augmenting path.

Augmenting paths are found by a breadth-first search from {s}. At some point, if we did not succeed in reaching {t}, we end up with the set {S} of vertices reachable, in the underlying non-oriented graph, by paths with positive tolerance. Each {e\in D} crossing over from {S} to {T=V\setminus S} satisfies {c(e)=f(e)}, and each arc {e} crossing from {T} to {S} satisfies {f(e)=0} — otherwise we would have added its endpoint in {T} to {S}. Therefore,

\displaystyle {\mathrm val} (f)=\sum_{xy\in D: x\in S, y\in T} c((xy))=:{\mathrm cap} (S,T),

where on the right we have the definition of the capacity of an arbitrary {s}–{t} cut {(S,T)}.

It is easy to show that for an arbitrary {s}–{t} cut {(S',T')}, one has {{\mathrm cap} (S',T')\geq {\mathrm val} (f)}. Thus we have obtained the Maxflow-Mincut Theorem, and also proved that our algorithm, to be described in more detail shortly, finds a maximum flow {f} and a corresponding minimum cut {(S,T)}.

Breadth-first search: Ford-Fulkerson algorithm

As usual in breadth-first search, we use a queue, i.e. a data structure {(Q,p,q)} that has two indices: {q}, pointing to the end of the queue, for new arrivals, and {p}, pointing to the beginning of the queue, i.e. to the first element to be processed. There are two elementary operations:

  • {{\mathrm Insert} (Q,x)} – adds a new element, {x}, to the queue, and increments {q}. Here we also do not allow an element to enter the queue twice, so {{\mathrm Insert} (Q,x)} has no effect if {x} has already been there once.
  • {x:={\mathrm First} (Q)} – takes the (currently) first element, {x}, from the queue, and increments {p}—this will fail if {Q} is empty, i.e. if {p=q}.
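For concreteness, here is a toy Python version of this queue (the no-repeat rule is enforced with an auxiliary set; the class and method names are mine):

```python
class Queue:
    """The queue (Q, p, q) described above: Insert ignores elements that
    have already been in the queue once; First fails when p == q."""
    def __init__(self):
        self.Q, self.p, self.q = [], 0, 0
        self.seen = set()           # elements that have ever entered

    def insert(self, x):
        if x not in self.seen:      # never admit an element twice
            self.seen.add(x)
            self.Q.append(x)
            self.q += 1

    def first(self):
        if self.p == self.q:        # the queue is empty
            raise IndexError("queue is empty")
        x = self.Q[self.p]
        self.p += 1
        return x
```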

Now the algorithm is as follows:

  1. Input: a feasible flow {f} in {(V,D,c)} with source {s} and sink {t}.
  2. Initialization: let {(Q,p,q)} be an empty queue; {{\mathrm Insert} (Q,s)}, and set {S:=\{(s,\infty)\}}.
  3. Loop: take {x:={\mathrm First} (Q)}. If this fails, return the first coordinates of the pairs in {S}—a minimum cut.
  4. for {v\in V} s.t. {(xv)\in D} and {f((xv))<c((xv))}, or s.t. {(vx)\in D} and {f((vx))>0}:
    • {{\mathrm Insert} (Q,v)}, {S:=S\cup\{(v,x)\}}.
    • if {v=t} then return the augmenting {s}–{t} path by tracing back the pairs {(t,v_t)}, {(v_t,v_{t-1})},\dots, {(v_1,s)} in {S}.
  5. go to Loop.
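One round of this loop might look as follows in Python (a sketch: the set {S} of pairs {(v,x)} is kept as a predecessor dictionary pred, which carries the same information):

```python
from collections import deque

def ford_fulkerson_step(V, D, c, f, s, t):
    """One breadth-first search of the algorithm above. Returns
    ('path', P) with an f-augmenting s-t path P as a vertex list,
    or ('cut', S) with S the source side of a minimum cut."""
    pred = {s: None}                # the pairs (v, x) of the set S above
    Q = deque([s])
    while Q:
        x = Q.popleft()
        for v in V:
            usable = (((x, v) in D and f[(x, v)] < c[(x, v)])
                      or ((v, x) in D and f[(v, x)] > 0))
            if usable and v not in pred:
                pred[v] = x
                if v == t:          # trace the augmenting path back to s
                    P = [t]
                    while P[-1] != s:
                        P.append(pred[P[-1]])
                    return 'path', P[::-1]
                Q.append(v)
    return 'cut', set(pred)         # no augmenting path: pred is a min cut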

So if {f} is not maximum we get an {f}-augmenting path, which can be used to improve {f}, and repeat. However, the speed of this procedure becomes unacceptable when the capacities are large. If {c\in\mathbb{Q}^{|D|}} then we are at least assured that it will converge after finitely many steps: indeed, we can multiply {c} by the least common multiple of its denominators, reducing to the case {c\in\mathbb{Z}^{|D|}}. Each iteration in this case improves the flow by at least {1}. However, the running time would not be polynomial in the input size of the problem. E.g. for the following graph the number of iterations can be as large as {2M}, where {M} is a large integer.

For irrational {c}, convergence need not even take place!

If one chooses a shortest {f}-augmenting path to improve {f} with, then the running time becomes bounded by {O(|V||D|^2)}—this was shown by E. Dinitz in 1970 and independently by J. Edmonds and R. Karp in 1972. Details, and further improvements by A. Karzanov et al., can be found e.g. in A. Schrijver’s lecture notes, Sect. 4.4.

Edmonds algorithm for maximum matchings

5 October, 2010

We continue with matchings in simple graphs, following the exposition in A. Schrijver’s 3-volume work and his lecture notes available online, filling in a few more details.

While in a bipartite graph {\Gamma=(V,E)} with a matching {M\subset E} a shortest path from the set {X\subset V} of {M}-unsaturated vertices to itself (i.e. an {X}–{X} path) is always {M}-augmenting, this is no longer true in general graphs:

here {X=\{a,b\}}, {M} is indicated by thicker edges, and the shortest path (of length 4) between them is not augmenting. This precludes the algorithmic idea of augmenting a matching {M} using an augmenting path (found by a shortest path procedure in a digraph constructed from {\Gamma} and {M}) from working in the general case. Edmonds came up with the idea that one can allow for repeated vertices, and shrink the cycles arising in the process into (new) vertices.

Definition 1 An {M}-alternating walk {P=(v_0,\dots,v_t)} is a sequence of vertices (some of which can repeat!) that is a walk (i.e. {v_i v_{i+1}\in E} for all {0\leq i\leq t-1}) such that for each {0<i<t} exactly one of {v_{i-1} v_i} and {v_i v_{i+1}} is in {M}.

Definition 2 An {M}-alternating walk {P=(v_0,\dots,v_t)} is an {M}-flower if {P} has an even number of vertices (as a sequence, i.e. {t} is odd), {v_0\in X}, all of {v_0,\dots,v_{t-1}} are distinct, and {v_t=v_{2k}} for some {2k<t.} (Removing the vertex {b} and the edge {bs} on the picture above gives an example of an {M}-flower).
The cycle {(v_{2k},v_{2k+1},\dots, v_t)} in an {M}-flower is an {M}-blossom. ({(qvuts)} on the picture above is an example of an {M}-blossom.)

Once we have an {M}-blossom {B}, we can shrink it by replacing all of its vertices {V(B)} with just one vertex (call it {B}, too), replacing each edge {uv}, {v\in V(B)} with the edge {uB} (and ignoring any loops that arise on {B}). The resulting graph is denoted by {\Gamma/B=(V/B, E/B)}. (E.g. shrinking the {M}-blossom {B} on the picture above produces the graph that is the path {apBb}). In {\Gamma/B} one will have the matching {M/B}, where {B} will become just a saturated vertex.
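Shrinking is a purely mechanical operation, and a small Python sketch may help (my own encoding: edges are 2-tuples, and the new vertex {B} is represented by the frozenset of the blossom's vertices):

```python
def shrink(edges, matching, blossom):
    """Shrink an M-blossom B: all of its vertices become a single new
    vertex (the frozenset of B's vertices); loops on B are dropped."""
    B = frozenset(blossom)
    def image(v):                       # where a vertex goes in Γ/B
        return B if v in B else v
    E = {frozenset((image(u), image(v))) for u, v in edges
         if image(u) != image(v)}       # drop loops arising inside B
    M = {frozenset((image(u), image(v))) for u, v in matching
         if image(u) != image(v)}
    return E, M
```

On the picture's example (vertices a, p, q, v, u, t, s, b with the blossom {(qvuts)}), shrinking indeed leaves the path {apBb}, with the single matching edge {pB}.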

Theorem 3 {M} is a maximum matching in {\Gamma} if and only if {M/B} is a maximum matching in {\Gamma/B}, for an {M}-blossom {B}.

Proof: If {M/B} is not a maximum matching in {\Gamma/B} then there will be an {M/B}-augmenting path {P} in {\Gamma/B.} If {P} does not meet {B=(v_{2k},\dots,v_t)} then it corresponds to an {M}-augmenting path {P'} in {\Gamma}. So we can assume that {uB\not\in M} is an edge of {P}, and {Bu'\in M} is also an edge of {P}. As {uB\in E/B}, there exists {2k\leq j<t} so that {uv_j\in E.} Now, depending upon the parity of {j}, we replace {B} in {P} by the part of the blossom rim {v_j,v_{j\pm 1},v_{j\pm 2},\dots,v_t}, where the sign in the indices is {+} for {j} odd and {-} for {j} even, obtaining an {M}-augmenting path in {\Gamma}.

Conversely, if {M'} is not a maximum matching in {\Gamma}, then

\displaystyle M=M'\Delta \{v_0v_1,\dots,v_{2k-1}v_{2k}\}

is also a matching of the same size, and {|M/B|=|M'/B|}. Thus it suffices to show that {M/B} is not maximum in {\Gamma/B}, and we will assume w.l.o.g. that {k=0}, i.e. {v_{2k}=v_0\in X} (our flower has no stem). There exists an {M}-augmenting path {P=(u_0,\dots,u_s)}. W.l.o.g. {P} does not start in {B} (we can reverse it if it does). If {P} does not contain a vertex in {B}, it will continue to be an {M/B}-augmenting path in {\Gamma/B}. Otherwise let the first vertex of {P} in {B} be {u_j}. Then {(u_0,\dots,u_{j-1},B)} is an augmenting path in {\Gamma/B}. \Box

For the algorithm we will need

Theorem 4 A shortest {X} to {X} {M}-alternating walk is either a path or its initial segment is an {M}-flower.

Proof: Assume {P=(v_0,\dots,v_t)} is not a path, and let {i<j} be such that {v_i=v_j} and {j} as small as possible. We will show that {(v_0,\dots,v_j)} is an {M}-flower, and we immediately have it if {i} is even and {j} is odd.

If {j-i} were even, as shown above, where the walk might have been {(apqstpqb)}, and so {j=5}, {i=1}, then the path {(v_i,\dots,v_{j-1})} is {M}-alternating, so we would simply delete it from {P} and obtain a shorter {M}-alternating walk.

If {i} were odd and {j} even, we would have {v_iv_{i+1}} and {v_{j-1}v_j} in {M}, so {v_{i+1}=v_{j-1}}, contradicting the choice of {j}. \Box

Combining these two observations, we obtain the following augmenting path algorithm:

  1. Input: a graph {\Gamma} and a matching {M}.
  2. Let {X} be the set of vertices unsaturated by {M}. Construct, if possible, a shortest positive length {M}-alternating walk {P} from {X} to {X}.
  3. If no {P} was constructed, return {\emptyset}—{M} is maximum.
  4. If {P} is a path then return it.
  5. Otherwise take the first blossom {B} from {P}, call the algorithm recursively on {\Gamma/B} and {M/B}, expand the path it returns in {\Gamma/B} to an augmenting path in {\Gamma}, and return the latter.

Applying the algorithm starting from {M=\emptyset} and augmenting {M} with the path it returns, we obtain an efficient algorithm, running in fact in {O(|V|^2|E|)} operations.

We still need to discuss here how to do Step 2. To this end we construct a directed graph {D_M=(V,A\cup A')} with

\displaystyle A:=\{(u,v)\mid u\neq v\in V, \exists ux\in E, xv\in M\}

\displaystyle A':=\{(u,v)\mid u\neq v\in X, \exists uv\in E\}

Lemma 5 Each {X} to {X} {M}-alternating path, resp. each {M}-flower, corresponds to a directed path in {D_M} from {X} to {\Gamma(X)}—the set of neighbours of {X} in {\Gamma.} Conversely, each {X} to {\Gamma(X)} directed path in {D_M} using arcs of {A} corresponds either to an {M}-augmenting path or to an {M}-flower.

Proof: (Sketch) Each length 1 {M}-augmenting path {uv} obviously corresponds to the pair of arcs {uv} and {vu} in {A'}.

If {u_0,u_1,\dots,u_s} is an {A}-path in {D_M}, so that {u_0\in X}, {u_s\in\Gamma(X)}, then there is a unique, up to the choice of {u_{s+1}\in X}, lifting of it to an {M}-alternating walk {P=u_0,v_1,u_1,v_2,u_2,\dots,v_s,u_s,u_{s+1}}, with {v_iu_i\in M} for {1\leq i\leq s}. If {P} is a path then it is {M}-augmenting.

If {P} has repeated vertices, this repetition occurs in a regular way, corresponding to an {M}-flower. As we move along {P} from {u_0} to {u_s}, let {u_t=v_i}, {t>i}, be the first repetition. Then {u_i} is the beginning of the blossom, and {u_{t-1}} the last vertex of the blossom. \Box

Is C++ the worst 1st programming language for maths majors?

7 May, 2010

In our department we have a 1-semester course called “Introduction to scientific programming”, taught to 1st year undergrads. It is C++-based. I have been arguing for years that it is a real waste, only good for getting people dazed and confused, and that instead something like Python should be used.

The choice of Python is basically due to it being interpreted, dynamically typed, and higher level than C/C++.
(And last but not least, free, unlike Maple, Matlab, and Mathematica, which might be marginally more useful to a maths major).

Here is a ranking of programming languages in industry. Inevitably biased in some ways, I suppose, it still demonstrates that the claimed “superiority” of C++ is nowhere to be seen…

Total unimodularity and networks

12 October, 2009

Total unimodularity is a tool to show that a linear programming problem (LP) has integer optimal solutions. In our discussion of bipartite matchings and linear programming we saw how this can be done by hand in a particular case, but as the structure of the LP gets more complicated, the details can get very messy. Fortunately, there is a general tool that sometimes gives what is needed.

Definition. A matrix A is called totally unimodular if each of its subdeterminants equals 0, 1, or -1. (In particular, each entry of A is 0, 1, or -1.)
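For small matrices the definition can be checked by brute force. A Python sketch (my own helper; exponential in the size of A, so it is only good for experiments):

```python
from itertools import combinations

def is_totally_unimodular(A):
    """Brute-force check of the definition: every square submatrix of A
    must have determinant 0, 1, or -1 (A given as a list of rows)."""
    m, n = len(A), len(A[0])

    def det(rows, cols):   # cofactor expansion, exact integer arithmetic
        if len(rows) == 1:
            return A[rows[0]][cols[0]]
        return sum((-1) ** j * A[rows[0]][c]
                   * det(rows[1:], cols[:j] + cols[j + 1:])
                   for j, c in enumerate(cols))

    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if det(list(rows), list(cols)) not in (0, 1, -1):
                    return False
    return True
```

For example, the incidence matrix of a triangle fails the test (it has a subdeterminant of absolute value 2), while that of a path or of a 4-cycle passes, in line with the bipartite criterion below.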

Proposition. Let A be totally unimodular and b\in\mathbb{Z}^n. Then the polyhedron P=\{x\mid Ax\leq b\} is integral (i.e. each face has an integer point.)

Proof. For a minimal (w.r.t. inclusion) face F of P one has F=\{x\mid A'x=b'\}, where A', resp. b' is a submatrix, resp. subvector, of A, resp. of b, and A' has full row rank. Then, possibly after permuting coordinates, we have A'=(U; V), with |U|\neq 0. By total unimodularity, |U|=\pm 1. Then x:=(U^{-1}b'\ 0)\in \mathbb{Z}^m and satisfies A'x=b'.

To complete the proof we need to check that x\in P. This is where the minimality of F is needed. In the simplest case F is a vertex, and \{x\}=F, implying the claim. In general, F=v+Null(A), i.e. a translate of the nullspace of A. So one can reduce to the vertex case by adding equations of the form x_j=0 for each column j of the submatrix V; this does not destroy the feasibility of Ax\leq b. QED.

As a corollary, the LP \max_{x\in P} c^T x has an integer optimum (and optimiser) whenever c\in \mathbb{Z}^m. Moreover, the same holds for its LP dual \min b^T y, y\in P^*=\{z\geq 0\mid A^Tz=c\}, by rewriting P^*=\{z\mid (-I;A^T;-A^T)z\leq (\overline{0},c,-c)\} and applying the proposition above.

Hoffman-Kruskal Theorem. It turns out that this property characterises total unimodularity:

  1. A is totally unimodular iff for any b\in\mathbb{Z}^n the polyhedron P=\{x\mid Ax\leq b\} is integral.
  2. A is totally unimodular iff for any b\in\mathbb{Z}^n, c\in\mathbb{Z}^m the equality in the LP duality equation \max_{x\in P} c^T x=\min_{y\in P^*}b^T y is achieved by integral vectors (assuming these min and max are finite).

The second obviously follows from the first.
The first will follow from the following:

A full rank matrix A' is totally unimodular iff for any b'\in\mathbb{Z}^{n'} the polyhedron \{x\mid x\geq 0, A'x=b'\} is integral.

We omit the proof here.

Examples of totally unimodular matrices.

The (vertex-edge) incidence matrix A of a graph \Gamma=(V,E) is totally unimodular iff \Gamma is bipartite.

Sketch of the proof.
1) Prove that the incidence matrix of an odd cycle is not totally unimodular, e.g. by considering the vertex numbering such that the edges are (1,2), (2,3),\dots,(n-1,n),(1,n), arranging the columns of the matrix in this order, and arranging the rows in the order 2,3,\dots,n-1,n,1. Then the usual decomposition of the determinant w.r.t. all the permutations of n symbols will have only 2 nonzero summands, which will not cancel each other (whereas in the case of n even they do cancel each other).
This shows that A cannot be totally unimodular if \Gamma is not bipartite.
2) Let A' be a square submatrix of A. If it has a 0 column, its determinant is 0. If it has a column with just one 1, we can decompose the determinant w.r.t. this column and apply induction on the size of A'. This leaves us with the case of A' having exactly two 1s per column. The same argument can be applied to argue that A' has, w.l.o.g., at least two 1s per row, and thus exactly two 1s, by counting the 1s of A' in two ways. It follows that A' is the incidence matrix of a disjoint union of (even, as \Gamma is bipartite) cycles. By reordering the rows and columns we can make A' block-diagonal, with each block the incidence matrix of an even cycle. The determinant of a block-diagonal matrix equals the product of the determinants of the blocks, and so it remains to show that the incidence matrix of an even cycle is totally unimodular; this can be done as above. Q.E.D.
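To see step 1) in action, one can compute the determinant of the cycle's incidence matrix directly; a small Python sketch (vertices and edges numbered from 0; the helpers are mine). The two surviving permutation terms (the identity and the n-cycle) have signs 1 and (-1)^{n-1}, so the determinant is 2 for n odd and 0 for n even:

```python
def det(M):
    """Exact integer determinant, by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def cycle_incidence(n):
    """Incidence matrix of the n-cycle with edges (1,2),...,(n-1,n),(1,n)."""
    A = [[0] * n for _ in range(n)]
    for j in range(n - 1):          # edge (j, j+1) is column j
        A[j][j] = A[j + 1][j] = 1
    A[0][n - 1] = A[n - 1][n - 1] = 1   # the closing edge
    return A
```

Indeed, det(cycle_incidence(n)) equals 2 for n = 3, 5 and 0 for n = 4, confirming that odd cycles (and hence non-bipartite graphs) have non-totally-unimodular incidence matrices.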

Exercise. Combine this with the result on total unimodularity above to show that the maximum matching LP for bipartite graphs has an integer solution. Construct the LP dual for this problem and interpret its integer solutions (show that they exist first!) in graph-theoretic terms.

Exercise. The incidence matrix M of a digraph \Gamma=(V,A) is the 0,\pm 1 matrix of size |V|\times |A| with M_{v,d}=0,1,-1 if, respectively, d\in A misses v\in V, or d arrives at v, or d leaves v. Show that M is totally unimodular.

Network flows and max-flow min-cut theorem.
Armed with the above, we can easily prove the max-flow min-cut theorem for networks. Recall that a network is a directed graph \Gamma=(V,A) with two marked nodes s,t\in V, and arc capacities c: A\to \mathbb{R}_+. An s-t-flow on the network (\Gamma,s,t,c) is a function f: A\to \mathbb{R}_+ satisfying f(a)\leq c(a) for all a\in A and f_+(v)=f_-(v) for any v\in V\setminus\{s,t\}, where f_+(v)=\sum_{u\in V\mid (uv)\in A} f(uv), and f_-(v)=\sum_{u\in V\mid (vu)\in A} f(vu). (The latter condition is called the conservation law.) The value of f is f_+(s)-f_-(s), i.e. the net “amount” of f entering s.
An s-t cut is a partition of V into two parts S and T so that s\in S, t\in T. The capacity of the cut is the sum of the capacities of the arcs leaving T.

Max-flow min-cut theorem. Given the network (\Gamma,s,t,c), the maximum value of an s-t-flow is equal to the minimum capacity of an s-t-cut.

In order to formulate this as an LP, we introduce the function w:A\to \{0,1,-1\} with w(a)=0,1,\text{ or }-1 if, respectively, a misses s, or a enters s, or a leaves s. Now we can formulate the max-flow problem as follows:

\max \sum_{a\in A} w(a)f(a)\quad\text{subject to}\\ 0\leq f(a)\leq c(a), \text{ for } a\in A,\\ f_+(v)=f_-(v)\text{ for } v\in V\setminus\{s,t\}.

Let M' be the submatrix of the incidence matrix M of \Gamma obtained by removing the rows with indices s and t. Then the above can be rewritten as

\max w^T f\mid 0\leq f\leq c,\ M'f=0.

The matrix of this LP is totally unimodular. If c:A\to \mathbb{Z}_+ then we can apply the integrality result directly, and establish along the way that there exists an optimal integral flow. In general, we cannot apply the unimodularity directly. But we can take the LP dual. It is as follows:

\min y^T c \mid y\geq 0, y^T+z^T M'\geq w^T.

Here the minimum is attained by integer vectors, as the polytope describing the feasible set is integral. We can extend z by defining z_s:=-1, z_t:=0. Then y^T+z^T M\geq 0. The set U:=\{v\in V\mid z_v\geq 0\} satisfies t\in U, s\not\in U. So it is a cut. To show the claim of the theorem, it suffices to show that the total capacity of the arcs leaving U is bounded from above (the opposite direction is easy) by y^T c=\text{maxflow}. Let a=(u,v) be such an arc. Then z_u\geq 0 and z_v\leq -1, and y^T+z^T M\geq 0 implies y_a+z_v-z_u\geq 0, so y_a\geq z_u-z_v\geq 1. Q.E.D.

Hungarian algorithm, take 1

23 September, 2009

The Hungarian algorithm is an efficient procedure to find a maximum weight matching in a bipartite graph \Gamma=(V_1\cup V_2,E) with parts V_1 and V_2 and a weight function w:E\to\mathbb{R}_+. Usually it is described in terms of potential functions on V=V_1\cup V_2, and looks quite mysterious, as this hides the origin of the procedure, which is a variation of the augmenting path method for finding a maximum (unweighted) matching in \Gamma. Here we describe this point of view.

Let M be a matching of maximum weight w(M) among all matchings of size |M|. If M is a maximum (unweighted) matching then we are done.

Otherwise, we construct a bipartite digraph D=(V,A) by orienting all the edges in M from V_2 to V_1, and the remaining edges E\setminus M in the opposite direction. We also construct the length function \ell: A\to\mathbb{R} by setting \ell(a)=w(a) for all a\in M, and \ell(a)=-w(a) for all a\in A\setminus M. Observe that there are no negative length (directed) cycles in D. Indeed, any directed cycle C in D contains equally many edges from M and from E\setminus M. Now \ell(C)<0 would mean that w(C\cap M)< w(C\setminus M), and so w(M) would not be maximum among all matchings of size |M|, as w(C\triangle M)> w(M), whereas |C\triangle M|=|M|.

Let V_1^M and V_2^M be subsets of V_1, resp. V_2 that are unsaturated by M. We find (say, using the Bellman-Ford algorithm) a minimum length directed path P between V_1^M and V_2^M. By construction of D, P is an augmenting path for M. So M'=M\triangle P is a matching satisfying |M'|=|M|+1.

Claim: w(M') is maximum among all matchings of size |M'|.

We’ll prove this claim below. For now, we remark that it is obvious that this procedure, repeated with M' taking the place of M, will find matchings of maximum weight among all matchings of size 1,2,3,\dots In the end it remains to select among these at most \min(|V_1|,|V_2|) matchings one of maximum weight.

Now let us prove the claim. It certainly holds for |M'|=1, as the procedure will select a maximum weight edge in E, and M' will consist of this edge. So we can assume |M'|\geq 2. Let N be a matching of size |M'| of maximum weight. We need to show that w(N)\leq w(M'). As |N|=|M|+1, the subgraph N\cup M has a connected component Q (a path or a cycle alternating between N and M) containing more N-edges than M-edges, and such a Q must be an M-augmenting path. But P is a shortest M-augmenting path, so \ell(Q)\geq\ell(P). On the other hand, N\triangle Q is a matching of size |M|, and so w(N\triangle Q)\leq w(M), by induction. Now w(N)=w(N\triangle Q)-\ell(Q)\leq w(M)-\ell(Q)\leq w(M)-\ell(P)=w(M'), as claimed. QED.

As described, we need fewer than n=|V| loops of this procedure. In each loop the Bellman-Ford step dominates the complexity, so in total we need O(n^2|E|) operations. In fact, we can stop as soon as we reach the situation where w(M')\leq w(M).
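The whole procedure fits in a few dozen lines of Python. Below is a sketch of exactly the loop described above (not the potential-function formulation); my own encoding: edges are pairs (u,v) with u\in V_1, v\in V_2, and w is a dictionary on those pairs. For simplicity it runs all the way to a maximum matching and keeps the best weight seen, rather than using the early-stopping criterion:

```python
def max_weight_matching(V1, V2, edges, w):
    """Maximum weight matching in a bipartite graph via successive
    minimum length (Bellman-Ford) augmenting paths, as described above."""
    M = set()
    best, best_w = set(), 0.0
    while True:
        matched = {x for e in M for x in e}
        X1 = [v for v in V1 if v not in matched]   # unsaturated in V_1
        X2 = [v for v in V2 if v not in matched]   # unsaturated in V_2
        # orient: matching edges V2->V1 with length +w, others V1->V2 with -w
        arcs = []
        for (u, v) in edges:
            if (u, v) in M:
                arcs.append((v, u, w[(u, v)]))
            else:
                arcs.append((u, v, -w[(u, v)]))
        # Bellman-Ford from all unsaturated V1-vertices simultaneously
        dist = {v: (0.0 if v in X1 else float('inf')) for v in list(V1) + list(V2)}
        pred = {}
        for _ in range(len(dist)):
            for (u, v, l) in arcs:
                if dist[u] + l < dist[v]:
                    dist[v] = dist[u] + l
                    pred[v] = u
        end = min(X2, key=lambda v: dist[v], default=None)
        if end is None or dist[end] == float('inf'):
            break                      # M is already a maximum matching
        path = [end]                   # trace the augmenting path back
        while path[-1] not in X1:
            path.append(pred[path[-1]])
        for a, b in zip(path, path[1:]):   # flip the path: M := M Δ P
            e = (b, a) if (b, a) in w else (a, b)
            M ^= {e}
        if sum(w[e] for e in M) > best_w:
            best, best_w = set(M), sum(w[e] for e in M)
    return best
```

Note how the absence of negative cycles (proved above) is exactly what licenses the use of Bellman-Ford here.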

Let a matching M have maximum w(M) among matchings of size |M|, and any matching M' of size |M'|=|M|+1 satisfy w(M')\leq w(M). Then w(N)\leq w(M) for any matching N of size bigger than |M|.

Proof. Suppose N is a matching contradicting the claim, minimal w.r.t. |N\triangle M|. Let D be the digraph constructed from \Gamma, w, and M as above. The connected components of N\triangle M induce oriented cycles and directed paths in D. Each such component has nonnegative total length, as there can be neither a negative length cycle in D (see above) nor a negative length M-augmenting path (otherwise we could construct M'). But this implies that w(M)\geq w(N), a contradiction. QED.

In particular, the complexity above can be improved to O(n|M||E|).