Introduction to l^{2}-invariants (part I)

Łukasz Grabowski

These are the lecture notes for the four lectures which I delivered in the first week. I don’t intend to improve them much beyond fixing typos and grave mathematical mistakes (if someone finds any), so if in doubt ask me in person or see the references, in particular [ECK00]. I make no attempt at providing references to the original papers, so please treat the “who proved what” parts only as a very vague first approximation.


Corrections are welcome, preferably via sending me a corrected latex file. If you do send corrections please make your changes as small as possible, so I can easily proofread the changes using meld.

1 Basics

The rings of integers, rational numbers, real numbers and complex numbers are denoted, respectively, by \mathbb{Z}, \mathbb{Q}, \mathbb{R}, and \mathbb{C}. The set of natural numbers is \mathbb{N}=\{0,1,2,3,\ldots\}. The ring of integers modulo a natural number m\in\mathbb{N} is denoted by \mathbb{Z}/m. The set of positive integers is \mathbb{Z}_{+}. If R is a ring then R[x] is the ring of polynomials over R in one variable x. The complex conjugate of a\in\mathbb{C} is denoted with \overline{a}. The set of all k\times l matrices over a ring R is denoted with \operatorname{Mat}(k\times l,R), and furthermore we let \operatorname{Mat}(k,R):=\operatorname{Mat}(k\times k,R).

1.a Examples of groups

Let us explicitly mention a few examples of groups to have in mind. The neutral element will be denoted by 1 or e (if the group operation is written multiplicatively), or by 0 (if the group operation is written additively).

  1.

    Infinite cyclic group: the underlying additive group of the ring \mathbb{Z}, frequently denoted by the same symbol. If we want to use the multiplicative notation, we will use the symbol C to denote the set of all integer powers of an indeterminate t, with the obvious group law.

  2.

    Finite groups. Particular examples are cyclic groups; the cyclic group of order k is denoted with C_{k}=\{e,t,t^{2},\ldots,t^{k-1}\}, and when using additive notation it is identified with the additive group of the ring \mathbb{Z}/k of integers modulo k.

  3.

    Free groups. The free group on two different symbols x and y is denoted by F_{2}. As a set it consists of all reduced words in the letters x,y,x^{-1},y^{-1}. The group operation is “concatenate two words and reduce the result”. The group F_{k}, where k is either in \mathbb{Z}_{+} or k=\infty, is defined similarly (in the case k=\infty we consider the free group on countably many symbols).

  4.

    Various matrix groups. Whenever R is a ring (associative with identity, commutative or not) and k is a positive integer we can define \operatorname{GL}(k,R) to be the group of invertible square k\times k matrices with entries in R. If R is commutative then we can also define the subgroup \operatorname{SL}(k,R) of \operatorname{GL}(k,R) of matrices whose determinant is equal to 1.

  5.

    In particular it is sometimes useful to have in mind some more “concrete” models for the free group F_{2}. For example, the subgroup of \operatorname{SL}(2,\mathbb{Z}) generated by the matrices \begin{pmatrix}1&2\\
0&1\end{pmatrix} and \begin{pmatrix}1&0\\
2&1\end{pmatrix} is free. In order to show this, one needs to invoke the ping-pong lemma, which we don’t cover here.

  6.

    The subgroup of \operatorname{SL}(2,\mathbb{Z}[x]) generated by the matrices \begin{pmatrix}1&x\\
0&1\end{pmatrix} and \begin{pmatrix}1&0\\
x&1\end{pmatrix} is also free. Of course the fact that this group is free follows from the fact that the group generated by \begin{pmatrix}1&2\\
0&1\end{pmatrix} and \begin{pmatrix}1&0\\
2&1\end{pmatrix} is free. However, it is also easy to show it directly.

  7.

    The discrete Heisenberg group is the subgroup of \operatorname{SL}(3,\mathbb{Z}) of all the matrices of the form \begin{pmatrix}1&a&c\\
0&1&b\\
0&0&1\end{pmatrix}, where a,b,c\in\mathbb{Z}. It is an example of a nilpotent group; we will talk more about such groups in the later lectures.
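The freeness claim in example 5 can at least be sanity-checked by a direct computation: no short reduced word in the two matrices and their inverses multiplies out to the identity. Below is a small sketch of such a check (the encoding and the length bound 6 are my own arbitrary choices, and of course no finite check replaces the ping-pong argument):

```python
from functools import reduce
from itertools import product
import numpy as np

# The two generators from example 5, together with their inverses
mats = {'a': np.array([[1, 2], [0, 1]]), 'A': np.array([[1, -2], [0, 1]]),
        'b': np.array([[1, 0], [2, 1]]), 'B': np.array([[1, 0], [-2, 1]])}
inverse = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

def reduced_words(max_len):
    """Yield all nonempty reduced words of length <= max_len."""
    for length in range(1, max_len + 1):
        for w in product('aAbB', repeat=length):
            if all(inverse[x] != y for x, y in zip(w, w[1:])):
                yield w

identity = np.eye(2, dtype=int)
nontrivial = all(
    not np.array_equal(reduce(np.matmul, (mats[x] for x in w)), identity)
    for w in reduced_words(6)
)
print(nontrivial)  # True: no reduced word of length <= 6 is trivial
```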

1.b Group ring and the \ell^{2} space of a countable group

Group Ring

Given a countable group {\Gamma} and a commutative ring R we let R[{\Gamma}] be the group ring of {\Gamma} over R, which is defined as follows. As a set, we have that R[{\Gamma}] consists of all formal finite R-linear combinations of the elements of {\Gamma}.

The addition in the ring R[{\Gamma}] is the obvious one, and the multiplication is induced by the multiplication of elements of {\Gamma}.

Remark 1.1.

The above definition is hopefully clear, but it is somewhat informal, because usually the notion of a “formal R-linear combination” is an informal one (i.e. it does not appear in any of Bourbaki’s texts). If we wanted to be more prudent we would say that R[{\Gamma}] consists of finitely supported R-valued functions defined on {\Gamma}. The addition of elements of R[{\Gamma}] is then defined as the addition of functions, and the multiplication in R[{\Gamma}] is defined as a convolution product.

Example 1.2.

If {\Gamma}=C then \mathbb{C}[C] consists of all the expressions of the form a_{i}t^{i}+a_{i+1}t^{i+1}+\ldots+a_{j}t^{j}, where i,j\in\mathbb{Z}, i\leqslant j, and for all k we have a_{k}\in\mathbb{C} - in other words, the ring \mathbb{C}[C] can be identified with the ring of Laurent polynomials with coefficients in \mathbb{C}.
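The convolution product from Remark 1.1 is easy to make concrete for {\Gamma}=C: an element of \mathbb{C}[C] is a finitely supported map from exponents of t to coefficients, and the ring product is the convolution of such maps. A small sketch (the dict encoding is my own choice, not notation from the text):

```python
def conv(f, g):
    """Product in C[C]: convolve finitely supported coefficient maps."""
    h = {}
    for i, a in f.items():
        for j, b in g.items():
            h[i + j] = h.get(i + j, 0) + a * b
    return {k: c for k, c in h.items() if c != 0}  # drop cancelled terms

p = {1: 1, 0: -1}     # t - 1
q = {-1: 1, 0: 1}     # t^{-1} + 1
print(conv(p, q))     # (t - 1)(t^{-1} + 1) = t - t^{-1}
```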

Hilbert spaces and bounded operators

Let us recall some basic definitions about Hilbert spaces. A Hilbert space is a complex vector space {\mathcal{H}} together with a Hermitian inner product \langle\cdot,\cdot\rangle\colon{\mathcal{H}}\times{\mathcal{H}}\to\mathbb{C} which is linear in the first variable and antilinear in the second variable, and such that {\mathcal{H}} is complete with respect to the norm defined by \|v\|:=\langle v,v\rangle^{1/2}.

Remark 1.3.

There are three basic examples of Hilbert spaces which we need to consider:

  1.

    finite dimensional spaces \mathbb{C}^{k}, where k\in\mathbb{Z}_{+} with the standard Hermitian inner product

  2.

    \ell^{2}(S), where S is a (typically infinite) set, is the Hilbert space of functions f\colon S\to\mathbb{C} such that \sum_{s\in S}|f(s)|^{2}<\infty, with Hermitian inner product \langle f,g\rangle:=\sum_{s\in S}f(s)\overline{g(s)}. The indicator function of s\in S will be denoted by \zeta_{s}.

  3.

    L^{2}(X,\mu) where (X,\mu) is a space with a measure (typically an interval with the Lebesgue measure, or the set S^{1} of all complex numbers of modulus one, also with the Lebesgue measure), whose elements are measurable functions f\colon X\to\mathbb{C} such that \int_{X}|f(x)|^{2}d\mu(x)<\infty. The inner product is \langle f,g\rangle=\int_{X}f(x)\overline{g(x)}d\mu(x).

A linear map T\colon{\mathcal{K}}\to{\mathcal{L}} between Hilbert spaces is bounded if for some c<\infty and for all v\in{\mathcal{K}} with \|v\|_{{\mathcal{K}}}=1 we have \|Tv\|_{{\mathcal{L}}}\leqslant c. If T is bounded then the smallest c which is a witness of it is called the norm (or the operator norm) of T, and is denoted by \|T\|.

The adjoint of an operator T\colon{\mathcal{K}}\to{\mathcal{L}} is the unique bounded operator T^{\ast}\colon{\mathcal{L}}\to{\mathcal{K}} such that for all v\in{\mathcal{L}} and w\in{\mathcal{K}} we have

\langle T^{\ast}v,w\rangle=\langle v,Tw\rangle.

It is easy to check that T^{\ast\ast}=T and (TS)^{\ast}=S^{\ast}T^{\ast}.

If T\colon{\mathcal{K}}\to{\mathcal{K}} then we say that T is self-adjoint if T=T^{\ast}.

Example 1.4.

If {\mathcal{K}}=\mathbb{C}^{k} with the standard inner product and T\colon{\mathcal{K}}\to{\mathcal{K}} is represented in the standard basis by a matrix M then T^{\ast} is the operator represented by the matrix \overline{M^{T}}, where M^{T} denotes the transpose of M. In particular the condition of being self-adjoint is equivalent to M^{T}=\overline{M}. Thus T is self-adjoint if it is represented by a Hermitian matrix in the standard basis.
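The adjoint relation can be checked numerically; the sketch below uses numpy and the text's convention that the inner product is linear in the first variable (note that numpy's `vdot` conjugates its *first* argument, so \langle x,y\rangle is `np.vdot(y, x)`):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Mstar = M.conj().T                      # adjoint = conjugate transpose

v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# Check <Mv, w> = <v, M* w> with <x, y> = sum_i x_i * conj(y_i)
assert np.isclose(np.vdot(w, M @ v), np.vdot(Mstar @ w, v))
```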

Example 1.5.

If T\colon{\mathcal{K}}\to{\mathcal{K}} is any bounded operator then T+T^{\ast}, T^{\ast}T and TT^{\ast} are all self-adjoint.

We say that a bounded self-adjoint operator T\colon{\mathcal{K}}\to{\mathcal{K}} is positive if for all v\in{\mathcal{K}} we have \langle Tv,v\rangle\geqslant 0. Note that for any operator T\colon{\mathcal{K}}\to{\mathcal{L}} we have that T^{\ast}T (and hence also TT^{\ast}) is positive: we have \langle T^{\ast}Tv,v\rangle=\langle Tv,Tv\rangle=\|Tv\|^{2}\geqslant 0.

\ell^{2}-space of a group

Given a countable group {\Gamma} we define \ell^{2}({\Gamma}) to be the Hilbert space of all those functions f\colon{\Gamma}\to\mathbb{C} such that \sum_{{\gamma}\in{\Gamma}}|f({\gamma})|^{2}<\infty (i.e. \ell^{2}({\Gamma}) is the Hilbert space of all \ell^{2}-summable functions on {\Gamma}). Given {\gamma}\in{\Gamma}, the function \zeta_{\gamma} is defined by demanding that \zeta_{\gamma}({\gamma})=1 and \zeta_{\gamma}({\delta})=0 when {\delta}\neq{\gamma}, i.e. \zeta_{\gamma} is the indicator function of {\gamma}.

The scalar product on \ell^{2}({\Gamma}) is defined by demanding that the functions \zeta_{\gamma}, {\gamma}\in{\Gamma}, form an orthonormal basis, i.e. \langle\zeta_{\gamma},\zeta_{\gamma}\rangle=1 for all {\gamma}\in{\Gamma} and \langle\zeta_{\gamma},\zeta_{\delta}\rangle=0 for {\delta}\neq{\gamma}. Thus every element of \ell^{2}({\Gamma}) can be written as an \ell^{2}-convergent (possibly infinite) linear combination of the vectors \zeta_{\gamma}, {\gamma}\in{\Gamma}.

We have a natural left action {\lambda}\colon{\Gamma}\curvearrowright\ell^{2}({\Gamma}), which is called the left regular representation, defined on the basis vectors by the formula

{\lambda}({\gamma})\zeta_{\delta}=\zeta_{{\gamma}{\delta}}.
Similarly we have the right action \rho\colon{\Gamma}\curvearrowright\ell^{2}({\Gamma}) defined as \rho({\gamma})\zeta_{\delta}=\zeta_{{\delta}{\gamma}}.

Both {\lambda} and \rho extend to actions of the group ring \mathbb{C}[{\Gamma}] by linearity, i.e. if T\in\mathbb{C}[{\Gamma}] is equal to \sum_{{\gamma}\in{\Gamma}}a_{\gamma}{\gamma}, then

{\lambda}(T)=\sum_{{\gamma}\in{\Gamma}}a_{\gamma}{\lambda}({\gamma}),
and similarly for \rho(T). In this way {\lambda}(T) and \rho(T) become bounded linear operators on \ell^{2}({\Gamma}). Sometimes we simply say that T\in\mathbb{C}[{\Gamma}] is an operator on \ell^{2}({\Gamma}) - in that case we will always mean the left regular representation.


The operation of taking the inverse in {\Gamma} extends to an involutive operation on \mathbb{C}[{\Gamma}] which we will denote with an asterisk:

\Big(\sum_{{\gamma}\in{\Gamma}}a_{\gamma}{\gamma}\Big)^{\ast}:=\sum_{{\gamma}\in{\Gamma}}\overline{a_{\gamma}}\,{\gamma}^{-1}.
On the other hand we have the operation of taking the adjoint operator, defined on all bounded operators on \ell^{2}({\Gamma}), which we also denote by \ast, i.e. if T\colon\ell^{2}({\Gamma})\to\ell^{2}({\Gamma}) is a bounded operator then T^{\ast} is the adjoint of T.

The following lemma justifies the choice of notation.

Lemma 1.6.

For any T\in\mathbb{C}[{\Gamma}] we have

{\lambda}(T)^{\ast}={\lambda}(T^{\ast}).
To prove the claim we need to check that {\lambda}(T) and {\lambda}(T^{\ast}) are adjoints of each other, i.e. for any v,w\in\ell^{2}({\Gamma}) we have

\langle{\lambda}(T)v,w\rangle=\langle v,{\lambda}(T^{\ast})w\rangle.

By linearity we can just as well assume that for some {\alpha},{\beta},{\gamma}\in{\Gamma} and a\in\mathbb{C} we have T=a{\alpha}, v=\zeta_{\beta} and w=\zeta_{\gamma}. Then we need to show that

\langle a\zeta_{{\alpha}{\beta}},\zeta_{\gamma}\rangle=\langle\zeta_{\beta},\overline{a}\,\zeta_{{\alpha}^{-1}{\gamma}}\rangle.

Clearly LHS is equal to a if {\alpha}{\beta}={\gamma} and to 0 otherwise, and RHS is equal to a if {\beta}={\alpha}^{-1}{\gamma} and to 0 otherwise, which shows the desired equality. ∎

Recall that a bounded operator T on a Hilbert space is self-adjoint iff T=T^{\ast}: note that if T\in\mathbb{C}[{\Gamma}] then the above lemma gives a handy “visual condition” to recognize if T is a self-adjoint operator. Namely, for every {\gamma}\in{\Gamma} we need to compare the coefficients of {\gamma} and {\gamma}^{-1} in T and check if they are conjugate to each other.
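For a finite cyclic group both Lemma 1.6 and this condition can be verified by direct matrix computation, since {\lambda}(T) on \ell^{2}(C_{k}) is a circulant matrix. A sketch (the dict encoding of elements of \mathbb{C}[C_{k}] is my own):

```python
import numpy as np

k = 5
def lam(T):
    """Matrix of lambda(T) on l^2(C_k) in the basis zeta_{t^0}, ..., zeta_{t^{k-1}};
    T maps exponents j (mod k) to coefficients, so T represents sum_j T[j] t^j."""
    M = np.zeros((k, k), dtype=complex)
    for j, a in T.items():
        for m in range(k):
            M[(j + m) % k, m] += a   # lambda(t^j) zeta_{t^m} = zeta_{t^{j+m}}
    return M

T = {0: 2, 1: 1 + 1j, 3: -2j}
# (sum a_j t^j)* = sum conj(a_j) t^{-j}
Tstar = {(-j) % k: np.conj(a) for j, a in T.items()}
assert np.allclose(lam(T).conj().T, lam(Tstar))   # lambda(T)* = lambda(T*)
```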

Example 1.7.

If {\Gamma} is a finite group then \mathbb{C}[{\Gamma}] and \ell^{2}({\Gamma}) are both finite-dimensional and isometric to each other as Hilbert spaces, via the map sending the linear combination 1\cdot{\gamma} to \zeta_{\gamma} (if {\Gamma} is not finite then this map still gives a natural embedding of \mathbb{C}[{\Gamma}] into \ell^{2}({\Gamma})).

If {\Gamma} is finite then the ring \mathbb{C}[{\Gamma}] can be described as the direct sum of matrix rings

\mathbb{C}[{\Gamma}]\cong\bigoplus_{\pi}\operatorname{Mat}(\dim\pi,\mathbb{C}),
where the sum is over all iso-classes \pi of irreducible linear representations of {\Gamma}. On the other hand the space \ell^{2}({\Gamma}) can be conveniently described as the direct sum \oplus_{\pi}V_{\pi}^{\dim\pi}, i.e. \ell^{2}({\Gamma}) decomposes as the sum of irreducible representations of {\Gamma}, and an iso-class of dimension k appears exactly k times.

Example 1.8.

A particular case of the previous example is when {\Gamma} is a finite abelian group. In that case the space \ell^{2}({\Gamma}) has a particularly nice orthogonal basis: its elements are the characters, i.e. the homomorphisms {\Gamma}\to S^{1}, where S^{1} denotes the set of complex numbers of norm 1. Given such a character \chi\in\ell^{2}({\Gamma}), it can be checked that the action of {\Gamma} is as follows: for {\gamma}\in{\Gamma} we have {\lambda}({\gamma})(\chi)=\overline{\chi({\gamma})}\chi, i.e. \chi spans a one-dimensional {\Gamma}-invariant subspace of \ell^{2}({\Gamma}).

If we denote by \widetilde{\Gamma} the set of all characters of {\Gamma}, then \mathbb{C}[{\Gamma}] can be identified with the set of all complex-valued functions on \widetilde{\Gamma}, via the map which sends {\gamma}\in{\Gamma} to the function \chi\mapsto\overline{\chi({\gamma})}. The above remark implies that under this identification the operator {\lambda}(T), T\in\mathbb{C}[{\Gamma}], corresponds to an operator of pointwise multiplication on \ell^{2}(\widetilde{\Gamma}).

The self-adjoint elements of \mathbb{C}[{\Gamma}] correspond exactly to those functions on \widetilde{{\Gamma}} which only take real values.

Example 1.9.

We have already seen that \mathbb{C}[C] can be identified with the ring of Laurent polynomials with complex coefficients. On the other hand the Fourier transform gives us an isomorphism of the Hilbert spaces \ell^{2}(C) and L^{2}(S^{1}) (recall that the latter space is the space of all measurable functions f\colon S^{1}\to\mathbb{C} such that \int_{S^{1}}|f(x)|^{2}d\mu(x)<\infty; we normalize the measure \mu on S^{1} so that \mu(S^{1})=1). Under this isomorphism the action of \mathbb{C}[C] on L^{2}(S^{1}) corresponds simply to pointwise multiplication of functions on S^{1}. As in the previous example self-adjoint elements of \mathbb{C}[C] correspond to real-valued functions.

This and the previous example can be generalized to arbitrary countable abelian groups, using so-called Pontryagin transform in place of the Fourier transform.
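A finite-dimensional shadow of Example 1.9: on \ell^{2}(C_{n}) the element t+t^{-1} acts by a circulant matrix which the discrete Fourier transform diagonalizes, and its eigenvalues are the values of the Laurent polynomial z+z^{-1} at the n-th roots of unity. A numerical sketch (the choice n=8 is arbitrary):

```python
import numpy as np

n = 8
shift = np.roll(np.eye(n), 1, axis=0)   # lambda(t) on l^2(C_n)
M = shift + shift.T                     # lambda(t + t^{-1}), a self-adjoint element

# Values of p(z) = z + z^{-1} at the n-th roots of unity
z = np.exp(2j * np.pi * np.arange(n) / n)
values = np.sort((z + 1 / z).real)      # = 2 cos(2 pi k / n)

assert np.allclose(np.sort(np.linalg.eigvalsh(M)), values)
```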

1.c Lemma about operators on Hilbert spaces

Given a bounded operator T on a Hilbert space {\mathcal{H}}, we define \ker(T):=\{v\in{\mathcal{H}}\colon T(v)=0\}.

Lemma 1.10.
  1.

    For any bounded operator T\colon{\mathcal{L}}\to{\mathcal{M}} between Hilbert spaces we have \ker(T)=\ker(T^{\ast}T)

  2.

    For any bounded operator S\colon{\mathcal{K}}\to{\mathcal{L}} between Hilbert spaces we have \operatorname{im}(S)^{\perp}=\ker(SS^{\ast})

  3.

    If S\colon{\mathcal{K}}\to{\mathcal{L}} and T\colon{\mathcal{L}}\to{\mathcal{M}} are bounded operators between Hilbert spaces and \operatorname{im}(S)\subset\ker(T) then the orthogonal complement of \overline{\operatorname{im}S} in \ker(T) is equal to \ker(SS^{\ast}+T^{\ast}T)

  1.

    Clearly we have \ker(T)\subset\ker(T^{\ast}T), so let v\in\ker(T^{\ast}T). Then \langle T^{\ast}Tv,v\rangle=0, and hence \langle Tv,Tv\rangle=0, and so Tv=0 as needed.

  2.

    We have v\in\operatorname{im}(S)^{\perp} iff for any w\in{\mathcal{K}} we have \langle v,S(w)\rangle=0. This is equivalent to \langle S^{\ast}v,w\rangle=0 for all w\in{\mathcal{K}} which is equivalent to v\in\ker S^{\ast}, which by previous point is equivalent to v\in\ker SS^{\ast}.

  3.

    We need to show that \ker(T)\cap\overline{\operatorname{im}(S)}^{\perp} is equal to \ker(SS^{\ast}+T^{\ast}T). By the previous points it is enough to show that

\ker(SS^{\ast})\cap\ker(T^{\ast}T)=\ker(SS^{\ast}+T^{\ast}T).

    The inclusion \subset is obvious. For the other inclusion let v\in\ker(SS^{\ast}+T^{\ast}T). Since SS^{\ast} and T^{\ast}T are both positive and \langle(SS^{\ast}+T^{\ast}T)v,v\rangle=0, we must have \langle SS^{\ast}v,v\rangle=0, and hence \langle S^{\ast}v,S^{\ast}v\rangle=\|S^{\ast}v\|^{2}=0, i.e. v\in\ker(S^{\ast}). Similarly v\in\ker(T), which finishes the proof. ∎
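The identities of Lemma 1.10 hold in finite dimensions as well; here is a numerical sketch of part 1, \ker(T)=\ker(T^{\ast}T), with the kernel dimension read off from singular values (the test matrix and the tolerance are my own arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
# A random map C^6 -> C^4 with a 2-dimensional kernel forced by duplicating columns
T = rng.standard_normal((4, 6))
T[:, 4] = T[:, 0]
T[:, 5] = T[:, 1]

def kernel_dim(M, tol=1e-10):
    """dim ker(M), computed from the singular values of M."""
    s = np.linalg.svd(M, compute_uv=False)
    return M.shape[1] - int(np.sum(s > tol))

# ker(T) and ker(T*T) have the same dimension (here 2)
assert kernel_dim(T) == kernel_dim(T.T @ T) == 2
```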

1.d von Neumann dimension

The final bit in this section is the definition of the von Neumann dimension. Let k\in\mathbb{Z}_{+} and for i=1,\ldots,k let \zeta_{i}\in(\ell^{2}({\Gamma}))^{k} be the vector whose i-th coordinate is \zeta_{e} and all other coordinates are equal to 0.

Let V\subset{\ell^{2}({\Gamma})}^{k} be a closed subspace which is \rho({\Gamma})-invariant, and let P_{V}\colon{\ell^{2}({\Gamma})}^{k}\to V be the orthogonal projection onto V. We define the von Neumann dimension \dim_{\text{vN}}(V) of V to be equal to

\sum_{i=1}^{k}\langle P_{V}\zeta_{i},\zeta_{i}\rangle
Example 1.11.

The way V arises in basic cases concerning \ell^{2}-invariants is as follows. Every element M\in\operatorname{Mat}(k\times l,\mathbb{C}[{\Gamma}]) gives rise (via the left multiplication) to a bounded operator \ell^{2}({\Gamma})^{k}\to\ell^{2}({\Gamma})^{l}, and as such we have \ker(M)\subset\ell^{2}({\Gamma})^{k} and similarly \overline{\operatorname{im}(M)}\subset\ell^{2}({\Gamma})^{l}. It is easy to check that both \ker(M) and \overline{\operatorname{im}(M)} are right-invariant closed subspaces of \ell^{2}({\Gamma})^{k} and \ell^{2}({\Gamma})^{l}, respectively.

Example 1.12.

The formula simplifies when M\in\mathbb{C}[{\Gamma}] (i.e. M is a 1\times 1 matrix), in which case we have \dim_{\text{vN}}\ker(M)=\langle P_{\ker(M)}\zeta_{e},\zeta_{e}\rangle

Example 1.13.

It is very instructive to digest the definition of the von Neumann dimension in the case when {\Gamma} is a finite group. For example, let V\subset\ell^{2}({\Gamma}) be a \rho({\Gamma})-invariant subspace. From elementary linear algebra we know that

\dim_{\mathbb{C}}(V)=\operatorname{tr}(P_{V}).
On the other hand we have that \zeta_{\gamma}, {\gamma}\in{\Gamma}, is an orthonormal basis of \ell^{2}({\Gamma}), so we can use it to compute the trace of P_{V}=P in that basis. Thus we have

\operatorname{tr}(P_{V})=\sum_{{\gamma}\in{\Gamma}}\langle P\zeta_{\gamma},\zeta_{\gamma}\rangle.

But now since V is \rho({\Gamma})-invariant, and \rho({\gamma}) is an isometry for every {\gamma}\in{\Gamma}, we have that P commutes with \rho({\gamma}) for every {\gamma}\in{\Gamma}. Therefore for every {\gamma}\in{\Gamma} we have

\langle P\zeta_{\gamma},\zeta_{\gamma}\rangle=\langle P\rho({\gamma})\zeta_{e},\rho({\gamma})\zeta_{e}\rangle=\langle\rho({\gamma})P\zeta_{e},\rho({\gamma})\zeta_{e}\rangle.

Since {\Gamma} acts by isometries, the above is equal to \langle P\zeta_{e},\zeta_{e}\rangle=\dim_{\text{vN}}(V). In other words, we have

\dim_{\text{vN}}(V)=\langle P\zeta_{e},\zeta_{e}\rangle=\frac{1}{|{\Gamma}|}\sum_{{\gamma}\in{\Gamma}}\langle P\zeta_{\gamma},\zeta_{\gamma}\rangle=\frac{\operatorname{tr}(P_{V})}{|{\Gamma}|}=\frac{\dim_{\mathbb{C}}(V)}{|{\Gamma}|}.
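The formula \dim_{\text{vN}}(V)=\dim_{\mathbb{C}}(V)/|{\Gamma}| can be tested numerically: for {\Gamma}=C_{k} take for V the line spanned by a character, which spans an invariant subspace as in Example 1.8. A sketch (the choices k=6, m=2 are arbitrary):

```python
import numpy as np

k, m = 6, 2
n = np.arange(k)
chi = np.exp(2j * np.pi * m * n / k)   # a character of C_k; spans an invariant line V
P = np.outer(chi, chi.conj()) / k      # orthogonal projection onto V  (||chi||^2 = k)
dim_vN = P[0, 0].real                  # <P zeta_e, zeta_e>, with zeta_e = basis vector 0
assert np.isclose(dim_vN, 1 / k)       # dim_C(V) / |Gamma| = 1/6
```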
Example 1.14.

Recall that \ell^{2}(C) is isomorphic to L^{2}(S^{1}) and that the action of C on L^{2}(S^{1}) is by pointwise multiplication (since C is abelian the left action is equal to the right action). All closed C-invariant subspaces of L^{2}(S^{1}) are of the form L^{2}(U), where U is a measurable subset of S^{1} (it is clear that these subspaces are C-invariant, but it takes more effort to show that all C-invariant subspaces are of this form).

What is the von Neumann dimension of L^{2}(U)\subset L^{2}(S^{1})? The projection P_{L^{2}(U)} is simply multiplication by the indicator function 1_{U} of U, and \zeta_{e} is the constant function 1_{S^{1}}\colon S^{1}\to\mathbb{C}, and so

\dim_{\text{vN}}(L^{2}(U))=\langle P_{L^{2}(U)}\zeta_{e},\zeta_{e}\rangle=\langle 1_{U}\cdot 1_{S^{1}},1_{S^{1}}\rangle=\int_{S^{1}}1_{U}(x)\,d\mu(x)=\mu(U),

i.e. the von Neumann dimension recovers the Lebesgue measure on S^{1} in this case.

Example 1.15.

If T\in\mathbb{C}[C] and T\neq 0 then \ker(T)=\{0\}\subset L^{2}(S^{1})\cong\ell^{2}(C), because there is no non-zero L^{2}-function f on S^{1} with the property f\cdot g=0 when g is a non-zero Laurent polynomial (such a polynomial has only finitely many zeros on S^{1}), and consequently \dim_{\text{vN}}\ker(T)=0.

On the other hand, let M\in\operatorname{Mat}(k\times l,\mathbb{C}[C]). In this case \overline{\operatorname{im}(M)}\subset\ell^{2}(C)^{l} can easily fail to be equal to \ell^{2}(C)^{l}. However, we will shortly see that \dim_{\text{vN}}\overline{\operatorname{im}(M)} is an integer. Let us state the reason for this informally for now: the ring \mathbb{C}[C] lies in the field {\mathcal{R}}(C) of rational functions in one variable, and in fact {\mathcal{R}}(C) can be identified with the field of fractions of \mathbb{C}[C]. Using Gaussian elimination, we can find a matrix A\in\operatorname{GL}(k,{\mathcal{R}}(C)) such that AM is in the row-echelon form.

Similarly to the case of the standard dimension, we have that \dim_{\text{vN}}\overline{\operatorname{im}(M)}=\dim_{\text{vN}}\overline{\operatorname{im}(AM)}, and the latter is equal to the number of non-zero rows. This statement is best proved using affiliated operators, so we will return to it later.
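The Gaussian elimination over {\mathcal{R}}(C) can be carried out by a computer algebra system; below is a sketch using sympy on a made-up 2\times 3 matrix whose rows are dependent over {\mathcal{R}}(C), so the echelon form has a single non-zero row. (sympy's `rref` inverts symbolic pivots, i.e. it effectively works over the fraction field.)

```python
import sympy as sp

t = sp.symbols('t')
M = sp.Matrix([[t - 1, t**2 - 1, 0],
               [1,     t + 1,    0]])    # row 1 = (t - 1) * row 2

# Gaussian elimination over R(C); pivots lists the pivot columns
R, pivots = M.rref(simplify=True)
print(len(pivots))   # number of non-zero rows of the echelon form: 1
```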

2 Properties of the von Neumann dimension and its applications

A closed subspace of {\ell^{2}({\Gamma})}^{k} which is \rho({\Gamma})-invariant is called a Hilbert {\Gamma}-module.

Lemma 2.1.

For a Hilbert {\Gamma}-module A\subset{\ell^{2}({\Gamma})}^{k} we have \dim_{\text{vN}}(A)=0 iff A=\{0\}.


The direction \Leftarrow is obvious. Conversely, if \dim_{\text{vN}}(A)=0 then \langle P\zeta_{i},\zeta_{i}\rangle=0 for i=1,\ldots,k, where P is the projection onto A. Since the action is by unitaries and P commutes with it, it follows that \langle P\rho({\gamma})\zeta_{i},\rho({\gamma})\zeta_{i}\rangle=0 for all i and {\gamma}\in{\Gamma}, and hence (as the vectors \rho({\gamma})\zeta_{i} form an orthonormal basis and P is positive) \langle Pv,v\rangle=0 for all v\in{\ell^{2}({\Gamma})}^{k}. This shows that \langle P^{2}v,v\rangle=\langle Pv,Pv\rangle=0, and so P=0 and hence A=\{0\}. ∎

Recall from Jesse Peterson’s lecture that the bounded operators on \ell^{2}({\Gamma}) which commute with \rho({\Gamma}) form, by the bicommutant theorem, the group von Neumann algebra of {\Gamma}, which we will denote by L({\Gamma}).

We will say that two Hilbert {\Gamma}-modules A\subset{\ell^{2}({\Gamma})}^{k} and B\subset{\ell^{2}({\Gamma})}^{l} are isomorphic if there exists a \rho({\Gamma})-invariant isometry between them, and that they are weakly isomorphic if there exists T\in\operatorname{Mat}(k\times l,L({\Gamma})) which is injective on A and such that \overline{T(A)}=B.

Lemma 2.2.

For T\in\operatorname{Mat}(k,L({\Gamma})) we have \sum_{i}\langle T\zeta_{i},T\zeta_{i}\rangle=\sum_{i}\langle T^{\ast}\zeta_{i},T^{\ast}\zeta_{i}\rangle.


For every {\gamma}\in{\Gamma} and i=1,\ldots,k let \zeta_{i,{\gamma}}\in\ell^{2}({\Gamma})^{k} be the vector whose i-th coordinate is \zeta_{\gamma} and all other coordinates are 0.

We now have

\langle T^{\ast}\zeta_{i,e},\zeta_{j,{\gamma}}\rangle=\langle\zeta_{i,e},T\zeta_{j,{\gamma}}\rangle=\langle\zeta_{i,{\gamma}^{-1}},T\zeta_{j,e}\rangle=\overline{\langle T\zeta_{j,e},\zeta_{i,{\gamma}^{-1}}\rangle},

where the middle equality uses that T commutes with \rho({\gamma}).

In other words, if we write

T\zeta_{i,e}=\sum_{j,{\gamma}}a^{i}_{j,{\gamma}}\zeta_{j,{\gamma}},\qquad T^{\ast}\zeta_{i,e}=\sum_{j,{\gamma}}b^{i}_{j,{\gamma}}\zeta_{j,{\gamma}}

for some complex numbers a^{i}_{j,{\gamma}} and b^{i}_{j,{\gamma}}, then we have a^{i}_{j,{\gamma}}=\overline{b^{j}_{i,{\gamma}^{-1}}}. Hence

\sum_{i}\langle T\zeta_{i},T\zeta_{i}\rangle=\sum_{i,j,{\gamma}}|a^{i}_{j,{\gamma}}|^{2}=\sum_{i,j,{\gamma}}|b^{i}_{j,{\gamma}}|^{2}=\sum_{i}\langle T^{\ast}\zeta_{i},T^{\ast}\zeta_{i}\rangle,
which proves the claim. ∎

Lemma 2.3.

If A and B are isomorphic then \dim_{\text{vN}}(A)=\dim_{\text{vN}}(B).


By taking a suitable direct sum we can just as well assume that A,B\subset{\ell^{2}({\Gamma})}^{k} for some k. Let f\colon A\to B be a {\Gamma}-invariant isometry, let P be the projection onto A, let Q be projection onto B, and let H=fP.

Then H\in\operatorname{Mat}(k,L({\Gamma})) and one can check that H^{\ast}=f^{-1}Q. It follows that HH^{\ast}=Q and H^{\ast}H=P, and therefore

\dim_{\text{vN}}A=\sum_{i}\langle H^{\ast}H\zeta_{i},\zeta_{i}\rangle=\sum_{i}\langle H\zeta_{i},H\zeta_{i}\rangle

and similarly

\dim_{\text{vN}}B=\sum_{i}\langle H^{\ast}\zeta_{i},H^{\ast}\zeta_{i}\rangle.

Now the claim follows from the previous lemma. ∎

Lemma 2.4.

If A and B are weakly isomorphic then they are isomorphic.


By passing to a suitable direct sum we may assume that A,B\subset{\ell^{2}({\Gamma})}^{k} for some k, and that we have an element T\in\operatorname{Mat}(k,L({\Gamma})) such that \overline{TA}=B which is injective on A, and which is equal to 0 on the orthogonal complement of A. In particular we have \overline{\operatorname{im}(T^{\ast}T)}=A.

On the other hand T^{\ast}T is positive self-adjoint, so by the spectral theorem we can find a positive self-adjoint f\in\operatorname{Mat}(k,L({\Gamma})) with f^{2}=T^{\ast}T. This f is injective on A. Now consider the (unbounded) operator g\colon\operatorname{im}f\to A which is defined as g(f(v))=v for v\in A, and finally let H=T\circ g\colon\operatorname{im}(f)\to B.

For v\in\operatorname{im}(f) we have \langle H(v),H(v)\rangle=\langle Tg(v),Tg(v)\rangle=\langle T^{\ast}Tg(v),g(v)\rangle=\langle f^{2}g(v),g(v)\rangle, which is equal to \langle f(v),g(v)\rangle=\langle v,fg(v)\rangle=\langle v,v\rangle, so H is isometric on \operatorname{im}(f).

Finally we note that \operatorname{im}(f) is dense in A since \operatorname{im}(T^{\ast}T)=\operatorname{im}(f^{2})\subset\operatorname{im}(f), and hence we can extend H to an isometry defined on all of A. ∎

Corollary 2.5.

For f\in\operatorname{Mat}(k\times l,L({\Gamma})) we have \dim_{\text{vN}}\ker(f)+\dim_{\text{vN}}\overline{\operatorname{im}(f)}=k


Indeed, f induces a weak isomorphism from \ker(f)^{\perp} to \overline{\operatorname{im}(f)}, and it is easy to check from the definitions that \dim_{\text{vN}}(V)+\dim_{\text{vN}}(V^{\perp})=k for any Hilbert {\Gamma}-module V\subset{\ell^{2}({\Gamma})}^{k}. ∎

2.a Kaplansky’s conjecture on direct finiteness

Conjecture 2.6 (Kaplansky).

If k is a field, {\Gamma} is a group, and S,T\in k[{\Gamma}] are such that ST=1 then we also have TS=1 (i.e. k[{\Gamma}] is directly finite).

Proposition 2.7 (Kaplansky).

\mathbb{C}[{\Gamma}] is directly finite for any group.


If ST=1 then {\lambda}(ST)(v)=v for any v\in\ell^{2}({\Gamma}), and hence \operatorname{im}(S)=\ell^{2}({\Gamma}). Hence \dim_{\text{vN}}\operatorname{im}(S)=1, so \dim_{\text{vN}}\ker S=0 and therefore \ker S=\{0\}, i.e. S is an injection.

The rest is routine; for example we can argue as follows: since S(TS\zeta_{e})=(ST)(S\zeta_{e})=S\zeta_{e} and S is injective, we get TS\zeta_{e}=\zeta_{e}. The last equality implies that TS=1 in \mathbb{C}[{\Gamma}]. ∎

We will return to Kaplansky’s conjecture for arbitrary fields later.

2.b \ell^{2}-homology and \ell^{2}-Betti numbers of a simplicial complex

For simplicity of notation we consider only simplicial complexes, although all the definitions can be easily generalized to the context of CW-complexes.

Let X be a simplicial complex, let X_{i} be the set of i-dimensional cells and let k be a field. Let k[X_{i}] be the set of formal k-linear combinations of elements of X_{i}.

After we choose some arbitrary orientations on all cells, we get the boundary maps D_{i}\colon k[X_{i}]\to k[X_{i-1}] defined on the canonical basis as D_{i}(c):=\sum_{d\in\partial c}\pm d, where the sign depends on whether the chosen orientation of d in X agrees with the orientation of d induced from c.

Then the homology groups of X with coefficients in k are the k-vector spaces

H_{i}(X,k):=\ker(D_{i})/\operatorname{im}(D_{i+1}).
To define l^{2}-homology, we assume that X has bounded geometry, i.e. there exists C\in\mathbb{N} such that each i-dimensional cell is contained in at most C cells of dimension i+1. Recall that l^{2}(X_{i}) is the Hilbert space of l^{2}-summable functions on X_{i}. In particular it is spanned by the indicator functions \zeta_{c} for c\in X_{i}.

We define

D_{i}(\zeta_{c})=\sum_{d\in\partial c}\pm\zeta_{d}

The l^{2}-homology groups of X are defined as

H^{(2)}_{i}(X):=\ker(D_{i})/\overline{\operatorname{im}(D_{i+1})}.
Remark 2.8.

In particular, if X is a finite complex then l^{2}-homology is the same as the standard homology with coefficients in \mathbb{C}.

However, now let us take a finite simplicial complex X and consider a normal covering Y of X and denote with {\Gamma} the deck transformation group of Y. We consider the group {\Gamma} as acting from the right.

For each cell c of X let us choose a lift \hat{c} of it in Y. Note that {\Gamma} acts on Y_{i} and the chosen lifts provide an identification of Y_{i} with a disjoint union of copies of {\Gamma}. Thus we also get an isometry \ell^{2}(Y_{i})\cong{\ell^{2}({\Gamma})}^{X_{i}}, which sends \zeta_{c} to the vector which is equal to \zeta_{e} on the coordinate corresponding to c, and which is equal to 0 on all other coordinates.

The boundary maps D_{i} are {\Gamma}-equivariant and under the identification above they induce maps {\ell^{2}({\Gamma})}^{X_{i}}\to{\ell^{2}({\Gamma})}^{X_{i-1}} given by certain matrices in \operatorname{Mat}(|X_{i}|\times|X_{i-1}|,\mathbb{Z}[{\Gamma}]). As such the l^{2}-homology of Y can be naturally seen as a Hilbert {\Gamma}-module, namely

H^{(2)}_{i}(Y):=\ker(D_{i})\cap\overline{\operatorname{im}(D_{i+1})}^{\perp},

and so we can define the \ell^{2}-Betti numbers of Y with respect to {\Gamma} as

{\beta}^{(2)}_{i}(Y,{\Gamma}):=\dim_{\text{vN}}H^{(2)}_{i}(Y).
3 Examples, basic properties and approximation of \ell^{2}-Betti numbers

Example 3.1.

Consider the standard square tessellation of \mathbb{R}^{2} as a square complex Y. There is one C^{2}-orbit of 2-cells, two orbits of 1-cells and one orbit of 0-cells (see Figure 1). So the complex from which we compute l^{2}-homology is

0\longrightarrow\ell^{2}(C^{2})\xrightarrow{D_{2}}\ell^{2}(C^{2})^{2}\xrightarrow{D_{1}}\ell^{2}(C^{2})\longrightarrow 0,

where D_{3}=0, D_{2}=\begin{pmatrix}1-s&t-1\end{pmatrix}, D_{1}=\begin{pmatrix}t-1\\
s-1\end{pmatrix}, D_{0}=0.

We easily see (using the Fourier transform to identify \ell^{2}(C^{2}) with L^{2}((S^{1})^{2}), or directly) that \operatorname{im}(D_{1}) is dense in \ell^{2}(C^{2}) and that \ker(D_{2})=\{0\}, which implies that {\beta}^{(2)}_{i}(Y,C^{2})=0 for all i.

Figure 1: Schematics of identifying the cells with group elements; e,f,g,h are chosen representatives of orbits, and all other cells are translations of these cells by group elements.
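The vanishing in Example 3.1 is consistent with a finite approximation: replacing C^{2} by the finite quotient C_{n}\times C_{n} and computing the kernel of the combinatorial Laplacian {\Delta}_{1}=D_{1}^{\ast}D_{1}+D_{2}D_{2}^{\ast} gives normalized kernel dimension 2/n^{2}, which tends to 0 as n grows. A numerical sketch of this (my own illustration; the matrices are real, so the transpose serves as the adjoint):

```python
import numpy as np

n = 6
I = np.eye(n * n)
shift = np.roll(np.eye(n), 1, axis=0)   # cyclic shift on Z/n
T = np.kron(shift, np.eye(n))           # action of t on l^2(Z/n x Z/n)
S = np.kron(np.eye(n), shift)           # action of s

D1 = np.hstack([T - I, S - I])          # D_1 : l^2(G)^2 -> l^2(G)
D2 = np.vstack([I - S, T - I])          # D_2 : l^2(G)   -> l^2(G)^2
assert np.allclose(D1 @ D2, 0)          # chain complex: D_1 D_2 = 0

Delta1 = D1.T @ D1 + D2 @ D2.T          # Laplacian on 1-chains
kernel_dim = int(np.sum(np.linalg.eigvalsh(Delta1) < 1e-8))
print(kernel_dim, kernel_dim / n**2)    # 2, a fraction tending to 0 with n
```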
Example 3.2.

Consider the Cayley graph Y of the free group F_{2} (see Figure 2). The complex from which we compute l^{2}-homology is now

0\longrightarrow\ell^{2}(F_{2})^{2}\xrightarrow{D_{1}}\ell^{2}(F_{2})\longrightarrow 0,

where D_{2}=0, D_{1}=\begin{pmatrix}t-1\\
s-1\end{pmatrix}, D_{0}=0. As before we can check that \operatorname{im}(D_{1}) is dense (see the next example), from which it follows that {\beta}^{(2)}_{0}=0 and {\beta}^{(2)}_{1}=1.

Figure 2: Schematics of identifying the cells with group elements; e,f,g are chosen representatives of orbits, and all other cells are translations of these cells by group elements.

More generally, if Y_{k} is the Cayley graph of F_{k}, k\in\mathbb{Z}_{+}, then we can show that {\beta}^{(2)}_{0}=0 and {\beta}^{(2)}_{1}=k-1.

Example 3.3.

If Y is a connected simplicial complex with infinite Y_{0} then H^{(2)}_{0}(Y)=\{0\}. Indeed, we just need to argue that \operatorname{im}(D_{1}) is dense. For this we note that for any v,w\in Y_{0} we have \zeta_{v}-\zeta_{w}\in\operatorname{im}(D_{1}). Therefore, any f\in\overline{\operatorname{im}(D_{1})}^{\perp} must be a constant function, and since Y_{0} is infinite, the only constant function in l^{2}(Y_{0}) is the 0 function.

Definition 3.4.

Suppose that {\Gamma} is a group and X is a model of B{\Gamma}, i.e. we have that \pi_{1}(X)={\Gamma}, and \pi_{i}(X)=\{0\} for i\geqslant 2. Then we define

{\beta}^{(2)}_{i}({\Gamma}):={\beta}^{(2)}_{i}(Y,{\Gamma}),

where Y is the universal cover of X.

Remark 3.5.
  1.

    This definition is sensible because in fact l^{2}-homology is a homotopy invariant, in the sense that a homotopy equivalence f between finite complexes X and X^{\prime} induces a weak isomorphism of H^{(2)}_{i}(Y) and H^{(2)}_{i}(Y^{\prime}), where Y and Y^{\prime} are the covers corresponding to the same subgroup of \pi_{1}(X)\cong_{f}\pi_{1}(X^{\prime}). For details see [ECK00] (or Thomas Schick’s lectures next week). It follows that no matter what model for B{\Gamma} we take, we get the same \ell^{2}-Betti numbers.

  2.

    Note that we have defined {\beta}^{(2)}_{i}({\Gamma}) only if there is a model for B{\Gamma} with bounded geometry. However {\beta}^{(2)}_{i}({\Gamma}) can be defined for an arbitrary group {\Gamma}, either via the theory of \ell^{2}-homology developed by W. Lück, or simply by taking an exhaustion of B{\Gamma} by finite dimensional complexes (see Gaboriau’s papers for the latter approach). We will not cover this in these notes.

    Also it is worth mentioning that by Gaboriau’s work, many groups (for example all amenable groups) admit a finite dimensional “measurable B{\Gamma}”, and this is enough to define their \ell^{2}-Betti numbers in the way which we do it in these notes.

In particular we have shown that {\beta}^{(2)}_{i}(C^{2})=0 for all i, {\beta}^{(2)}_{1}(F_{k})=k-1, and {\beta}^{(2)}_{0}({\Gamma})=0 for all infinite groups (we have shown this last statement only for groups with a model for B{\Gamma} of bounded geometry, but it is true in general).

Definition 3.6.

The l^{2}-Euler characteristic of Y with respect to {\Gamma} is

\chi^{(2)}(Y,{\Gamma}):=\sum_{i}(-1)^{i}{\beta}^{(2)}_{i}(Y,{\Gamma}).
Using what we already know about the von Neumann dimension, we can show the following proposition.

Proposition 3.7.

If X is a finite simplicial complex, and Y is a normal cover with deck transformation group {\Gamma}, then \chi(X)=\chi^{(2)}(Y,{\Gamma}).


Recall that \chi(X)=|X_{0}|-|X_{1}|+|X_{2}|-\ldots. Since \dim_{\text{vN}}\ell^{2}(Y_{i})=|X_{i}|, by additivity of the von Neumann dimension we have

\chi(X)=\sum_{i}(-1)^{i}\left(\dim_{\text{vN}}\overline{\operatorname{im}(D_{i})}+\dim_{\text{vN}}\ker(D_{i})\right)=0+\dim_{\text{vN}}\ker(D_{0})-(\dim_{\text{vN}}\overline{\operatorname{im}(D_{1})}+\dim_{\text{vN}}\ker(D_{1}))+\ldots,

and again by additivity of the von Neumann dimension we have {\beta}^{(2)}_{i}=\dim_{\text{vN}}\ker(D_{i})-\dim_{\text{vN}}\overline{\operatorname{im}(D_{i+1})}. Regrouping the terms in the alternating sum above gives \chi(X)=\sum_{i}(-1)^{i}{\beta}^{(2)}_{i}, and thus the claim follows. ∎
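As a sanity check (not in the original notes) we can combine Proposition 3.7 with the values computed above for the free group: take X to be a wedge of two circles, so that {\Gamma}=F_{2} and Y is its Cayley graph. Then

```latex
\chi(X)=|X_{0}|-|X_{1}|=1-2=-1,
\qquad
\chi^{(2)}(Y,F_{2})={\beta}^{(2)}_{0}-{\beta}^{(2)}_{1}=0-1=-1,
```

in agreement with the proposition.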

Finally let us mention that by Lemma 1.10 we can define the Laplacian {\Delta}_{i}=D_{i}^{\ast}D_{i}+D_{i+1}D_{i+1}^{\ast} and we have H^{(2)}_{i}(Y)=\ker({\Delta}_{i}), and so {\beta}^{(2)}_{i}(Y,{\Gamma})=\dim_{\text{vN}}\ker({\Delta}_{i}).

3.a Approximation of the \ell^{2}-Betti numbers

It is natural to ask the following question. Suppose that X is a finite simplicial complex, {\Gamma}=\pi_{1}(X), and Y is the universal cover of X. Suppose that {\Gamma}\supset{\Gamma}_{1}\supset{\Gamma}_{2}\supset\ldots is a chain of normal subgroups of {\Gamma} such that \bigcap_{i}{\Gamma}_{i}=\{e\}, and let Y_{i} be the cover corresponding to {\Gamma}_{i}, i.e. Y_{i} is the quotient of Y by the action of {\Gamma}_{i}. Then we can ask the following question.

Question 3.8.

Is it true that for every j we have {\beta}^{(2)}_{j}(Y_{i},{\Gamma}_{i})\xrightarrow{i\to\infty}{\beta}^{(2)}_{j}(Y,{\Gamma})?

The general answer is not known; however, the following is a classical theorem of Wolfgang Lueck.

Theorem 3.9.

The answer is “yes” if all the groups {\Gamma}/{\Gamma}_{i} are residually finite.

Lueck proved it when {\Gamma}/{\Gamma}_{i} are finite. The above version was proved in a paper of Jozef Dodziuk, Peter Linnell, Varghese Mathai, Thomas Schick and Stuart Yates. In fact it is a folklore theorem (or Elek’s theorem? I couldn’t find a reference immediately) that it is enough to assume that {\Gamma}/{\Gamma}_{i} are sofic (and the proof is essentially the same).

Later it was noticed by other researchers (as mentioned by Andrei in his course) that in fact the answer is positive when {\Gamma} is sofic, and with a very similar proof.

Algebraically this corresponds to the following theorem.

Theorem 3.10.

Let T\in\operatorname{Mat}(k,\mathbb{Z}[{\Gamma}]), and let T_{i}\in\operatorname{Mat}(k,\mathbb{Z}[{\Gamma}/{\Gamma}_{i}]) be defined by T_{i}=\pi_{i}(T), where \pi_{i}\colon\mathbb{Z}[{\Gamma}]\to\mathbb{Z}[{\Gamma}/{\Gamma}_{i}] is the natural projection map. If all the groups {\Gamma}/{\Gamma}_{i} are residually finite then we have \dim_{\text{vN}}\ker(T)=\lim_{i}\dim_{\text{vN}}\ker(T_{i}).

Remark 3.11.

Theorem 3.9 follows by taking T to be a Laplacian on Y.
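Theorem 3.10 can be observed numerically in the simplest case (a hedged illustration, not from the notes; the function name is ours): take {\Gamma}=\mathbb{Z}, the quotients \mathbb{Z}/n, and T=t+t^{-1}-2\in\mathbb{Z}[\mathbb{Z}], the Laplacian of the Cayley graph of \mathbb{Z}. Its image in \mathbb{Z}[\mathbb{Z}/n] acts on \ell^{2}(\mathbb{Z}/n) as a circulant matrix whose kernel is spanned by the constants, so the normalized kernel dimension is 1/n, converging to \dim_{\text{vN}}\ker(T)=0.

```python
import numpy as np

# Hedged numerical illustration of Lueck approximation for Gamma = Z with
# the finite quotients Z/n and T = t + t^{-1} - 2.
def normalized_kernel_dim(n, tol=1e-9):
    # circulant matrix of t + t^{-1} - 2 acting on l^2(Z/n)
    M = np.zeros((n, n))
    for j in range(n):
        M[j, j] = -2.0
        M[j, (j + 1) % n] += 1.0
        M[j, (j - 1) % n] += 1.0
    eigenvalues = np.linalg.eigvalsh(M)
    # normalized count of (numerically) zero eigenvalues
    return np.sum(np.abs(eigenvalues) < tol) / n

# The kernel is spanned by the constant vector, so the value is 1/n -> 0.
```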

Proof of Theorem 3.10.

Since \ker T^{\ast}T=\ker T and \ker T_{i}^{\ast}T_{i}=\ker T_{i}, we can assume that T is positive and self-adjoint. As such we can consider the spectral measure \mu of T and the spectral measures \mu_{i} of T_{i} (here we mean the scalar-valued spectral measure, i.e. the projection-valued spectral measure composed with the trace \tau, as introduced in Jessie Peterson’s course).

Let c=\|T\|_{1}^{2}, i.e. \sqrt{c} is the sum of absolute values of the coefficients of T. If {\sigma}\colon{\Gamma}\to{\Delta} is any group homomorphism then using Cauchy–Schwarz we can check that the operator norm of {\sigma}(T) is bounded by c.

The next two claims are general statements about the so-called weak convergence of measures.

Claim: Let {\Lambda} be any countable group, let {\Lambda}\supset{\Lambda}_{1}\supset{\Lambda}_{2}\supset\ldots be any sequence of normal subgroups of {\Lambda} with \bigcap{\Lambda}_{i}=\{e\}. Let S\in\mathbb{C}[{\Lambda}] and let {\sigma}_{i}\colon\mathbb{C}[{\Lambda}]\to\mathbb{C}[{\Lambda}/{\Lambda}_{i}] be the natural projections. Let {\lambda}_{i} be the spectral measure of {\sigma}_{i}(S). Then the measures {\lambda}_{i} converge weakly to the spectral measure {\lambda} of S, i.e. for all continuous functions f on [0,c] we have \int fd{\lambda}_{i}\to\int fd{\lambda} as i\to\infty.

Indeed, by Weierstrass approximation and linearity it is enough to check this when f=x^{k} for some k. But for almost all i we have that {\sigma}_{i} is injective on the support of S^{k}, and hence the coefficient of the neutral element in S^{k} and in {\sigma}_{i}(S)^{k} is the same, which by definition means that \tau(S^{k})=\tau({\sigma}_{i}(S)^{k}). Since \int x^{k}d{\lambda}=\tau(S^{k}) and \int x^{k}d{\lambda}_{i}=\tau({\sigma}_{i}(S)^{k}), this finishes the proof.

Claim: Let {\lambda}_{i}, i\in\mathbb{Z}_{+}, and {\lambda} be probability measures on some interval [-c,c]. If the {\lambda}_{i} weakly converge to {\lambda} then for any open interval I\subset\mathbb{R} we have \liminf_{i}{\lambda}_{i}(I)\geqslant{\lambda}(I).

Indeed, let {\varepsilon}>0 and let f be a non-negative continuous function which is bounded by 1, equal to 0 outside of I, and such that \int fd{\lambda}>{\lambda}(I)-{\varepsilon}. Since f is bounded by 1 on I and 0 outside we have {\lambda}_{i}(I)\geqslant\int fd{\lambda}_{i} for all i, so

\liminf{\lambda}_{i}(I)\geqslant\lim\int fd{\lambda}_{i}=\int fd{\lambda}>{\lambda}(I)-{\varepsilon},

and so \liminf{\lambda}_{i}(I)\geqslant{\lambda}(I).

In particular we have that the spectral measures \mu_{i} weakly converge to \mu. In the following claim we will crucially use both that {\Gamma}/{\Gamma}_{i} are residually finite and that T\in\mathbb{Z}[{\Gamma}].

Claim: For all 1>{\varepsilon}>0 we have \mu_{i}((0,{\varepsilon}))\leqslant\frac{\log(c)}{|\log({\varepsilon})|}.

Indeed, since {\Gamma}/{\Gamma}_{i} is residually finite, by the previous claim it is enough to take a finite quotient {\sigma}\colon{\Gamma}\to{\Lambda} and show that

\mu_{{\sigma}(T)}((0,{\varepsilon}))\leqslant\frac{\log(c)}{|\log({\varepsilon})|},

where \mu_{{\sigma}(T)} is the spectral measure of {\sigma}(T).

Let n=|{\Lambda}|, and let {\alpha}_{1},\ldots,{\alpha}_{k} be the non-zero eigenvalues of {\sigma}(T) (which are positive real numbers). Note that \prod_{i=1}^{k}{\alpha}_{i} is, up to sign, a non-zero coefficient of the characteristic polynomial of {\sigma}(T); since T has integer coefficients this is a non-zero integer, so in particular \prod{\alpha}_{i}\geqslant 1.

Bounding the eigenvalues which are less than {\varepsilon} by {\varepsilon}, and the other ones by c, we therefore get

{\varepsilon}^{n\mu_{{\sigma}(T)}((0,{\varepsilon}))}c^{n}\geqslant 1,

which after taking logarithms shows n\mu_{{\sigma}(T)}((0,{\varepsilon}))\log({\varepsilon})+n\log(c)\geqslant 0. Since \log({\varepsilon})<0, we see that

\mu_{{\sigma}(T)}((0,{\varepsilon}))\leqslant\frac{\log(c)}{|\log({\varepsilon})|},

which finishes the proof of the claim.

Now we can finish the proof of the theorem: Because of the previous claim, we can find a non-negative continuous function f (supported on some small interval around 0) such that \int fd\mu\approx\mu(\{0\}) and also for all i we have \int fd\mu_{i}\approx\mu_{i}(\{0\}). Therefore the statement follows from the weak convergence. ∎

Remark 3.12.

In the proof above with a little bit more care we could have obtained that \int\log(x)d\mu_{i}(x) converges to \int\log(x)d\mu(x) and that the latter number is finite. This shows in particular that \mu((0,{\varepsilon}))=o(\frac{1}{|\log({\varepsilon})|}). In [GRA15] it is shown that this is not far from optimal: for every {\delta}>0 there exists some group {\Gamma} and T\in\mathbb{Z}[{\Gamma}] such that \mu((0,{\varepsilon}))\approx\frac{1}{|\log({\varepsilon})|^{1+{\delta}}}.

On the other hand, suppose {\Gamma} is a group, T\in\mathbb{C}[{\Gamma}], and there is some function f\colon\mathbb{R}_{+}\to\mathbb{R}_{+} such that f({\varepsilon})\xrightarrow{{\varepsilon}\to 0}0. If we have \mu_{{\sigma}(T)}((0,{\varepsilon}))<f({\varepsilon}) for all finite quotients {\sigma}\colon{\Gamma}\to{\Lambda}, then Lueck approximation holds for T (with respect to residually finite quotients), by virtue of repeating the proof. Unfortunately, no one so far has been able to prove the Lueck approximation over \mathbb{C} using this strategy.

3.b Approximation over \mathbb{C} in the case of amenable groups

Recall that a countable group {\Gamma} is amenable if there exists a sequence F_{1},F_{2},\ldots of finite subsets of {\Gamma} such that for every {\gamma}\in{\Gamma} we have \frac{|{\gamma}F_{i}\setminus F_{i}|}{|F_{i}|}\xrightarrow{i\to\infty}0. If {\Gamma} is amenable then a sequence witnessing the amenability of {\Gamma} is called a Foelner sequence and its elements are referred to as Foelner sets.
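For concreteness (a hedged sketch, not part of the notes; the function name is ours): in {\Gamma}=\mathbb{Z} the intervals F_{i}=\{0,\ldots,i-1\} form a Foelner sequence, since for a fixed {\gamma}\in\mathbb{Z} the set {\gamma}F_{i}\setminus F_{i} has at most |{\gamma}| elements.

```python
# Hedged illustration: the Foelner defect |gamma F_i \ F_i| / |F_i| for the
# intervals F_i = {0, ..., i-1} in Z.
def foelner_defect(gamma, i):
    F = set(range(i))
    translated = {gamma + x for x in F}
    return len(translated - F) / len(F)

# For a fixed gamma the defect is min(|gamma|, i)/i, which tends to 0 as i grows.
```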

The following is a particular case of a theorem of Gabor Elek (the presentation follows a simplified proof due to Daniel Pape). Closely related statements were proved before by Cheeger and Gromov in a more geometric setting.

Theorem 3.13.

Let {\Gamma} be amenable, let F_{i} be a Foelner sequence and let T\in\mathbb{C}[{\Gamma}]. Let T_{i}\colon\ell^{2}(F_{i})\to\ell^{2}({\Gamma}) be the restriction of T to \ell^{2}(F_{i}). Then

\dim_{\text{vN}}\ker(T)=\lim_{i\to\infty}\frac{\dim\ker(T_{i})}{|F_{i}|}.
Let {\Sigma}\subset{\Gamma} be the support of T. For A\subset{\Gamma} let us define

\partial A=\{{\gamma}\in{\Gamma}\setminus A\colon\exists{\sigma}\in{\Sigma},a\in A\text{ such that }{\gamma}={\sigma}a\}

and

\overline{A}=A\cup\partial A.

Note that T_{i}\colon\ell^{2}(F_{i})\to\ell^{2}(\overline{F}_{i}), and we extend it to \overline{T}_{i}\colon\ell^{2}(\overline{F}_{i})\to\ell^{2}(\overline{F}_{i}) by declaring \overline{T}_{i}(\zeta_{\gamma})=0 for {\gamma}\in\partial F_{i}. Note that |\dim\ker T_{i}-\dim\ker\overline{T}_{i}|=o(|F_{i}|), so it is enough to show

\dim_{\text{vN}}\ker(T)=\lim_{i\to\infty}\frac{\dim\ker(\overline{T}_{i})}{|F_{i}|}.
Let S=T^{\ast}T and let S_{i}={\overline{T}_{i}^{\ast}}\cdot\overline{T}_{i}, let \mu be the spectral measure of S and let \mu_{i} be the spectral measure of S_{i} (i.e. \mu_{i} is “the set of eigenvalues of S_{i} with multiplicities”, normalized by \frac{1}{|\overline{F}_{i}|} so that it is a probability measure).

Claim 1 The measures \mu_{i} weakly converge to the spectral measure \mu.

Indeed the proof is very similar to the proof of the analogous claim in Theorem 3.10: we need to show that for a fixed k\in\mathbb{N} we have

\frac{1}{|F_{i}|}\tau_{i}(S_{i}^{k})\xrightarrow{i\to\infty}\tau(S^{k}),

where \tau_{i} is the standard (non-normalized) trace of a finite dimensional matrix.

Let {\varepsilon}>0. By definition of a Foelner sequence, for almost all i we have

|{\Sigma}^{2k}F_{i}\setminus F_{i}|<{\varepsilon}|F_{i}|.

Let us define

G_{i}:=\{{\gamma}\in F_{i}\colon{\Sigma}^{2k}{\gamma}\subset F_{i}\}.

It follows that |G_{i}|>(1-{\varepsilon})|F_{i}|. But if {\gamma}\in G_{i} then S_{i}^{k}(\zeta_{\gamma})=S^{k}(\zeta_{\gamma}) so

\langle S_{i}^{k}(\zeta_{\gamma}),\zeta_{\gamma}\rangle=\langle S^{k}(\zeta_{\gamma}),\zeta_{\gamma}\rangle=\tau(S^{k}).

Therefore we have

\tau_{i}(S_{i}^{k})=\sum_{{\gamma}\in\overline{F}_{i}}\langle S_{i}^{k}\zeta_{\gamma},\zeta_{\gamma}\rangle=\sum_{{\gamma}\in G_{i}}\langle S_{i}^{k}\zeta_{\gamma},\zeta_{\gamma}\rangle+\sum_{{\gamma}\in\overline{F}_{i}\setminus G_{i}}\langle S_{i}^{k}\zeta_{\gamma},\zeta_{\gamma}\rangle,

which is equal to

|G_{i}|\tau(S^{k})+\sum_{{\gamma}\in\overline{F}_{i}\setminus G_{i}}\langle S_{i}^{k}\zeta_{\gamma},\zeta_{\gamma}\rangle,

and so

\left|\frac{1}{|F_{i}|}\tau_{i}(S_{i}^{k})-\tau(S^{k})\right|\leqslant\frac{|\overline{F}_{i}\setminus G_{i}|}{|F_{i}|}\cdot\|S_{i}^{k}\|+\left(1-\frac{|G_{i}|}{|F_{i}|}\right)\cdot|\tau(S^{k})|,

where \|S_{i}^{k}\| is the operator norm of S_{i}^{k}. Since the latter is bounded by the norm of S^{k}, it is in particular independent of i, which implies the claim.
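Claim 1 can be tested numerically in the simplest amenable example (a hedged check, not from the notes; the function name is ours): for {\Gamma}=\mathbb{Z}, T=t-1 and F_{n}=\{0,\ldots,n-1\} we have S=T^{\ast}T=2-t-t^{-1}=-(t-1)^{2}t^{-1}, so \tau(S^{k})=\binom{2k}{k}, and the normalized traces of S_{n}^{k} approach this value up to boundary terms of size O(1/n).

```python
import numpy as np

# Hedged numerical check of Claim 1 for Gamma = Z, T = t - 1, F_n = {0,...,n-1}.
# Here tau(S^k) = C(2k, k), e.g. 6 for k = 2 and 20 for k = 3.
def normalized_moment(n, k):
    Tbar = np.zeros((n + 1, n + 1))    # extension of T_n by 0 on the boundary
    for j in range(n):
        Tbar[j, j] = -1.0              # (t - 1) zeta_j = zeta_{j+1} - zeta_j
        Tbar[j + 1, j] = 1.0
    S = Tbar.T @ Tbar
    return np.trace(np.linalg.matrix_power(S, k)) / n

# normalized_moment(n, k) differs from C(2k, k) only by boundary contributions,
# which are O(1/n).
```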

Weak convergence by itself shows that

\mu(\{0\})\geqslant\limsup_{i}\mu_{i}(\{0\}),

by taking a non-negative continuous function f supported on some interval around 0, equal to 1 at 0, and such that \int fd\mu\approx\mu(\{0\}).

Thus we have

\dim_{\text{vN}}\ker(S)=\mu(\{0\})\geqslant\limsup_{i}\frac{1}{|\overline{F}_{i}|}\dim\ker(S_{i})=\limsup_{i}\frac{1}{|F_{i}|}\dim\ker(S_{i}),

and hence, since \ker(S)=\ker(T) and |\dim\ker(S_{i})-\dim\ker(T_{i})|=o(|F_{i}|), we also have

\dim_{\text{vN}}\ker(T)\geqslant\limsup_{i}\frac{1}{|F_{i}|}\dim\ker(T_{i}).
Claim 2 We have \dim_{\text{vN}}\overline{\operatorname{im}(T)}\geqslant\limsup_{i}\frac{1}{|F_{i}|}\dim\operatorname{im}(T_{i}).

Indeed, let P\colon\ell^{2}({\Gamma})\to\ell^{2}({\Gamma}) be the orthogonal projection onto \overline{\operatorname{im}(T)}, and let P_{i}\colon\ell^{2}({\Gamma})\to\ell^{2}(\overline{F}_{i}) be the orthogonal projection onto \operatorname{im}(T_{i}). Note that \operatorname{im}(T_{i})\subset\overline{\operatorname{im}(T)} and so for any v\in\ell^{2}({\Gamma}) we have \|Pv\|\geqslant\|P_{i}v\|, which shows that for all {\gamma}\in{\Gamma} we have \tau(P)=\langle P\zeta_{\gamma},\zeta_{\gamma}\rangle\geqslant\langle P_{i}\zeta_{\gamma},\zeta_{\gamma}\rangle. Now the result easily follows, because

\dim\operatorname{im}(T_{i})=\sum_{{\gamma}\in{\Gamma}}\langle P_{i}\zeta_{\gamma},\zeta_{\gamma}\rangle,
and there are at most |\overline{F}_{i}| non-zero summands in this sum.

By the additivity of dimensions, claim 2 implies that \dim_{\text{vN}}\ker(T)\leqslant\liminf_{i}\frac{1}{|F_{i}|}\dim\ker(T_{i}), which together with the previous inequality finishes the proof. ∎

Remark 3.14.

Using the same proof and slightly more involved notation we can show the analogous statement for T\in\operatorname{Mat}(k,L({\Gamma})). Note that this in particular shows that if T\in\operatorname{Mat}(k,L({\Gamma})) and \ker(T)\neq 0 then there exists v\in\ker(T) which is a finite sum of the vectors \zeta_{i,{\gamma}} (see the proof of Lemma 2.2 for the definition of \zeta_{i,{\gamma}}). Such kernel elements are often referred to as finitely supported.

Corollary 3.15 (Cheeger-Gromov).

If {\Gamma} is an amenable group then for all i we have {\beta}^{(2)}_{i}({\Gamma})=0. In particular \chi({\Gamma})=0.


We show it only when there exists a model for B{\Gamma} with bounded geometry. If Y\to X=B{\Gamma} is the universal cover, then Y is contractible, and hence H_{i}(Y)=0. In other words there are no finitely supported elements in \ker{\Delta}_{i}. But then by the previous remark we have that \ker{\Delta}_{i}=\{0\}, which shows that H^{(2)}_{i}({\Gamma})=\{0\}. ∎

3.c Remark about approximation in positive characteristic

Elek and Szabo proved Kaplansky’s conjecture on direct finiteness for sofic groups, i.e. they showed the following theorem.

Theorem 3.16.

Let {\Gamma} be a sofic group and k be a field. Then k[{\Gamma}] is directly finite, i.e. for all a,b\in k[{\Gamma}] we have that ab=1 implies ba=1.

For lack of time we will not define what sofic groups are. However we note that they include residually finite and amenable groups.


We present the argument only in the residually finite case (the sofic case is done in a very similar way). By passing to the subfield generated by coefficients of a and b we can just as well assume that k is countable.

Consider a chain {\Gamma}_{1}\supset{\Gamma}_{2}\supset\ldots of finite-index normal subgroups of {\Gamma} with \bigcap_{i}{\Gamma}_{i}=\{e\}. By passing to a subchain (here we use that k is countable) we may assume that for all T\in k[{\Gamma}] the limit

\operatorname{rank}(T):=\lim_{i\to\infty}\frac{\operatorname{rank}({\lambda}(T_{i}))}{|{\Gamma}/{\Gamma}_{i}|}

exists, where T_{i} is the image of T in k[{\Gamma}/{\Gamma}_{i}] and {\lambda}(T_{i})\colon k[{\Gamma}/{\Gamma}_{i}]\to k[{\Gamma}/{\Gamma}_{i}] is the k-linear map given by left multiplication by T_{i}.
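As an illustration of the normalized rank (hedged, not from the notes; the function name is ours): for {\Gamma}=\mathbb{Z}, T=t-1 and the quotients \mathbb{Z}/n, the map {\lambda}(T_{n}) is the circulant matrix of t-1, whose kernel is the constants, so its rank is n-1 and \operatorname{rank}(T)=\lim_{n}\frac{n-1}{n}=1.

```python
import numpy as np

# Hedged illustration of the normalized rank for Gamma = Z, T = t - 1,
# computed in the quotients Z/n.
def normalized_rank(n):
    # matrix of left multiplication by t - 1 on k[Z/n]: zeta_j -> zeta_{j+1} - zeta_j
    M = -np.eye(n)
    for j in range(n):
        M[(j + 1) % n, j] += 1.0
    return np.linalg.matrix_rank(M) / n

# normalized_rank(n) = (n - 1)/n -> 1 > 0, consistently with the claim
# rank(T) >= 1/|Sigma|^2 for non-zero T.
```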

Claim: If T\in k[{\Gamma}] and T\neq 0 then \operatorname{rank}(T)\neq 0.

Indeed, note that \operatorname{supp}(T) is a finite set and let {\Sigma}:=\operatorname{supp}(T)\cup\operatorname{supp}(T)^{-1}. Fix i large enough so that the projection {\Gamma}\to{\Gamma}/{\Gamma}_{i} is injective on {\Sigma}^{2}. Let A be a maximal subset of {\Gamma}/{\Gamma}_{i} such that for distinct {\alpha},{\beta}\in A we have \operatorname{supp}(T_{i})\cdot{\alpha}\cap\operatorname{supp}(T_{i})\cdot{\beta}=\emptyset. By maximality we have that \bigcup_{a\in A}{\Sigma}^{2}\cdot a={\Gamma}/{\Gamma}_{i} and hence |A|\geqslant\frac{1}{|{\Sigma}|^{2}}\cdot|{\Gamma}/{\Gamma}_{i}|. It follows that the vectors {\lambda}(T_{i})\zeta_{\alpha}, {\alpha}\in A, are non-zero and have pair-wise disjoint supports, and so \operatorname{rank}(T)\geqslant\frac{1}{|{\Sigma}|^{2}}.

But if ab=1 then clearly a_{i}b_{i}=1 for all i. Since \operatorname{Mat}(n,k) is directly finite for any field k and n\in\mathbb{Z}_{+}, we deduce that b_{i}a_{i}=1 for all i. Thus \operatorname{rank}(ba-1)=0, and so ba-1=0. This finishes the proof. ∎

4 Atiyah conjecture for some torsion-free groups

4.a Statement of the Atiyah conjecture for torsion-free groups

Let us recall a convenient characterisation of affiliated operators from Jessie Peterson’s course: A closed partially defined operator T\colon\ell^{2}({\Gamma})\to\ell^{2}({\Gamma}) is an affiliated operator iff there exists a sequence {\mathcal{H}}_{1}\subset{\mathcal{H}}_{2}\subset\ldots of Hilbert {\Gamma}-modules such that \dim_{\text{vN}}{\mathcal{H}}_{i}\to 1, \operatorname{dom}(T)=\bigcup_{i}{\mathcal{H}}_{i}, and for each i we have that T restricted to {\mathcal{H}}_{i} is equal to an element of L{\Gamma} restricted to {\mathcal{H}}_{i} (note that it follows from this description that T is densely defined).

The set of all affiliated operators will be denoted by {\mathcal{U}}({\Gamma}). Elements of \operatorname{Mat}(k\times l,{\mathcal{U}}({\Gamma})) will also be called affiliated operators.

The following is Peter Linnell’s formulation of the Atiyah conjecture over \mathbb{C} for torsion-free groups:

Conjecture 4.1.

If {\Gamma} is a torsion-free group then there is a skew field {\mathcal{R}}({\Gamma})\subset{\mathcal{U}}({\Gamma}) which contains \mathbb{C}[{\Gamma}].

The more classical formulation is the following:

Conjecture 4.2.

If {\Gamma} is torsion-free and M\in\operatorname{Mat}(k\times l,\mathbb{C}[{\Gamma}]) then \dim_{\text{vN}}\ker(M)\in\mathbb{N}.

Remark 4.3.

The equivalence of these two formulations is a theorem of Peter Linnell. Using the above characterisation of the affiliated operators it is easy to argue that if M\in\operatorname{Mat}(k\times l,\mathbb{Z}[{\Gamma}]) and U\in GL(l,{\mathcal{U}}({\Gamma})) then \dim_{\text{vN}}\overline{\operatorname{im}(UM)}=\dim_{\text{vN}}\overline{\operatorname{im}(M)}. This can be used to formalize example 1.15, and in a similar way via Gaussian elimination this easily shows the implication “Linnell’s formulation \Rightarrow classical formulation”. The other direction relies on Cohn’s theory (see [LIN93]).

4.b Biorderable groups

Recall that a group {\Gamma} is biorderable if there exists a linear order < on {\Gamma} such that for all a,b,c\in{\Gamma} we have that a<b implies that ac<bc and ca<cb.

Lemma 4.4.

If {\Gamma} is biorderable then there are no non-zero zero-divisors in k[{\Gamma}].


Let S,T\in k[{\Gamma}] be non-zero, and let a,b\in{\Gamma} be the largest elements in \operatorname{supp}(S) and \operatorname{supp}(T) respectively. Then the coefficient of ab in ST is non-zero because the only pair (x,y)\in\operatorname{supp}(S)\times\operatorname{supp}(T) such that xy=ab is (a,b).

Indeed, for every c\in\operatorname{supp}(S) with c<a and every d\in\operatorname{supp}(T) we have cd\leqslant cb<ab, and similarly for every c\in\operatorname{supp}(S) and every d\in\operatorname{supp}(T) with d<b we have cd\leqslant ad<ab. This finishes the proof. ∎
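Over the simplest biorderable group {\Gamma}=\mathbb{Z} the argument can be made concrete (a hedged sketch, not from the notes; representing group ring elements as exponent-to-coefficient dictionaries is our choice): the largest exponents a,b of S,T contribute a non-zero coefficient of a+b to ST.

```python
# Hedged illustration of Lemma 4.4 for Gamma = Z: elements of k[Z] are
# Laurent polynomials, stored as {exponent: coefficient} dictionaries.
def group_ring_product(S, T):
    product = {}
    for a, s in S.items():
        for b, t in T.items():
            product[a + b] = product.get(a + b, 0) + s * t
    # drop the coefficients that cancelled
    return {e: c for e, c in product.items() if c != 0}

# (1 - t) * (1 + t + t^2) = 1 - t^3: the leading exponents 1 and 2 of the
# factors survive in the product as the exponent 3, so the product of two
# non-zero elements is non-zero.
```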

Remark 4.5.

In fact for the lemma to hold it is enough if {\Gamma} is one-sided orderable (see the excellent monograph [DNR14] for this and more information about orderable groups).

Let us recall what a nilpotent group is: if {\Gamma} is a group then we let {\Gamma}_{1}=[{\Gamma},{\Gamma}], and inductively {\Gamma}_{i}=[{\Gamma},{\Gamma}_{i-1}]. Then {\Gamma} is said to be nilpotent if for some k we have {\Gamma}_{k}=\{e\}.

Proposition 4.6.

If {\Gamma} is torsion-free nilpotent then {\Gamma} is biorderable.


(Sketch from [DNR14]) Claim: If A\subset B are groups, A is a central biorderable subgroup of B, and B/A is biorderable, then B is biorderable.

Indeed, let \pi\colon B\to B/A be the natural projection and let us define the order on B by declaring x>y iff \pi(x)>_{B/A}\pi(y), or, in the case \pi(x)=\pi(y), iff xy^{-1}>_{A}e. It is straightforward to check that this is a biorder on B.

Now let us define {\Lambda}_{i}\subset{\Gamma} as {\Lambda}_{i}:=\{{\gamma}\in{\Gamma}\colon{\gamma}^{k}\in{\Gamma}_{i}\text{ for some }k\}. If {\Gamma}_{l}=\{e\} then \{e\}={\Lambda}_{l}\subset{\Lambda}_{l-1}\subset\ldots\subset{\Lambda}_{1}\subset{\Gamma} is a chain of normal subgroups such that [{\Gamma},{\Lambda}_{i}]\subset{\Lambda}_{i+1} and such that {\Gamma}/{\Lambda}_{i} is torsion-free (in particular the smallest non-trivial {\Lambda}_{i} is a central subgroup of {\Gamma}).

Since torsion-free abelian groups (in particular C) are biorderable, the claim follows by induction on the length of a minimal chain of subgroups {\Lambda}_{i} such that [{\Gamma},{\Lambda}_{i}]\subset{\Lambda}_{i+1} and such that {\Gamma}/{\Lambda}_{i} is torsion-free. ∎
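The claim in the proof can be sanity-checked on the discrete Heisenberg group, the basic example of a torsion-free nilpotent group (a hedged randomized check, not from the notes; the function names are ours). Realizing it as triples (a,b,c) with (a,b,c)\cdot(a^{\prime},b^{\prime},c^{\prime})=(a+a^{\prime},b+b^{\prime},c+c^{\prime}+ab^{\prime}), the order produced by the claim (compare images in the abelianization lexicographically, break ties with the central coordinate) is exactly the lexicographic order on triples, and it is bi-invariant:

```python
import random

def mul(x, y):
    # multiplication in the discrete Heisenberg group of triples (a, b, c)
    return (x[0] + y[0], x[1] + y[1], x[2] + y[2] + x[0] * y[1])

def biorder_violations(trials, seed=0):
    # count random triples (x, y, g) with x < y lexicographically for which
    # a left or right translation by g reverses the order
    rng = random.Random(seed)
    bad = 0
    for _ in range(trials):
        x, y, g = (tuple(rng.randint(-5, 5) for _ in range(3)) for _ in range(3))
        if x < y and not (mul(g, x) < mul(g, y) and mul(x, g) < mul(y, g)):
            bad += 1
    return bad
```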

Proposition 4.7.

For a torsion-free amenable group {\Gamma} the Atiyah conjecture is equivalent to the statement that if S\in\mathbb{C}[{\Gamma}]\setminus\{0\} then for all T\in\mathbb{C}[{\Gamma}]\setminus\{0\} we have ST\neq 0 (i.e. to the statement that \mathbb{C}[{\Gamma}] is a domain).


(Sketch) Clearly the Atiyah conjecture implies that \mathbb{C}[{\Gamma}] is a domain, so assume conversely that \mathbb{C}[{\Gamma}] is a domain.

Claim: We have that \mathbb{C}[{\Gamma}] fulfils the Ore condition: for all non-zero S,T\in\mathbb{C}[{\Gamma}] we can find A,B\in\mathbb{C}[{\Gamma}], not both zero, such that AS=TB.

Indeed, let F_{i} be a two-sided Foelner sequence, i.e. for all {\gamma}\in{\Gamma} we have \frac{|{\gamma}F_{i}\setminus F_{i}|}{|F_{i}|}\xrightarrow{i\to\infty}0 and \frac{|F_{i}{\gamma}\setminus F_{i}|}{|F_{i}|}\xrightarrow{i\to\infty}0. Such a sequence always exists and it is not very hard to construct it.

Let {\Sigma}=\operatorname{supp}(S)\cup\operatorname{supp}(T). Consider \rho(S) restricted to \ell^{2}(F_{i}). Then by assumption \ker\rho(S)=\{0\} and so \dim\operatorname{im}\rho(S)=|F_{i}|; similarly \dim\operatorname{im}{\lambda}(T)=|F_{i}|. But we have that \operatorname{im}\rho(S) and \operatorname{im}{\lambda}(T) are both subspaces of \ell^{2}(\overline{F}_{i}), where \overline{F}_{i}:=F_{i}\cup{\Sigma}\cdot F_{i}\cup F_{i}\cdot{\Sigma}. In particular we have \dim(\ell^{2}(\overline{F}_{i}))=|F_{i}|+o(|F_{i}|) and hence, for i large enough, \operatorname{im}\rho(S)\cap\operatorname{im}{\lambda}(T)\neq\{0\}. This shows the claim.

In particular \mathbb{C}[{\Gamma}] can be embedded in its classical field of fractions, which we will denote by Q.

On the other hand Elek’s approximation along a Foelner sequence implies that if \mathbb{C}[{\Gamma}] is a domain and {\Gamma} is amenable then for each T\in\mathbb{C}[{\Gamma}]\setminus\{0\} we have \ker{\lambda}(T)=\{0\}. As such T has an inverse in {\mathcal{U}}({\Gamma}). Therefore, by the universal property of Q we have that Q embeds in {\mathcal{U}}({\Gamma}). This finishes the proof. ∎

Corollary 4.8.

Any group which is residually torsion-free nilpotent fulfils the Atiyah conjecture.

Remark 4.9.

Note that we need Andrei’s result that Lueck’s approximation holds over \mathbb{C} to actually obtain this corollary. From the results in these notes we can deduce only the Atiyah conjecture over \mathbb{Q}, i.e. that for T\in\operatorname{Mat}(k\times l,\mathbb{Q}[{\Gamma}]) we have \dim_{\text{vN}}\ker(T)\in\mathbb{N} (or equivalently that there is a skew field between \mathbb{Q}[{\Gamma}] and {\mathcal{U}}({\Gamma})).

Corollary 4.10.

Atiyah conjecture holds for the free groups.


Indeed, it’s enough to argue that F_{2} is residually torsion-free nilpotent. One way to quickly check it is to consider the quotients of


of the form


These are easily checked to be torsion-free nilpotent. ∎


  • [DNR14] B. Deroin, A. Navas and C. Rivas (2014) Groups, Orders, and Dynamics. ArXiv e-prints, arXiv:1408.5805.
  • [ECK00] B. Eckmann (2000) Introduction to l_{2}-methods in topology: reduced l_{2}-homology, harmonic chains, l_{2}-Betti numbers. Israel J. Math. 117, pp. 183–219. Notes prepared by Guido Mislin.
  • [GRA15] Ł. Grabowski (2015) Group ring elements with large spectral density. Math. Ann. 363 (1), pp. 637–656.
  • [LIN93] P. A. Linnell (1993) Division rings and group von Neumann algebras. Forum Math. 5 (6), pp. 561–576.
  • [LÜC02] W. Lück (2002) L^{2}-invariants: theory and applications to geometry and K-theory. Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge, Vol. 44, Springer-Verlag, Berlin.