These are the lecture notes for the four lectures which I delivered in the first week. I don’t intend to improve them much beyond fixing typos and grave mathematical mistakes (if someone finds such), so if in doubt ask me in person or see the references, in particular [ECK00]. I make no attempt at providing references to original papers, so please treat the “who proved what” parts only as a very vague first approximation.
A PDF file is also available.
Corrections are welcome, preferably via sending me a corrected latex file. If you do send corrections please make your changes as small as possible, so I can easily proofread the changes using meld.
The rings of integers, rational numbers, real numbers and complex numbers are denoted, respectively, by $\mathbb{Z}$, $\mathbb{Q}$, $\mathbb{R}$, and $\mathbb{C}$. The set of natural numbers is $\mathbb{N} = \{0, 1, 2, \dots\}$. The ring of integers modulo a natural number $n$ is denoted by $\mathbb{Z}/n$. The set of positive integers is $\mathbb{Z}_+ = \{1, 2, \dots\}$. If $R$ is a ring then $R[x]$ is the ring of polynomials over $R$ in one variable $x$. The complex conjugate of $z \in \mathbb{C}$ is denoted with $\overline{z}$. The set of all $n \times n$ matrices over a ring $R$ is denoted with $M_n(R)$, and furthermore we let
Let us explicitly mention a few examples of groups to have in mind. The neutral element will be denoted by $e$ or $1$ (if the group operation is written multiplicatively), or by $0$ (if the group operation is written additively).
Infinite cyclic group: the underlying additive group of the ring , frequently denoted by the same symbol. If we want to use the multiplicative notation, we will use the symbol to denote the set
of all integer powers of an indeterminate , with the obvious group law.
Finite groups. Particular examples are cyclic groups; the cyclic group of order $n$ is denoted with $\mathbb{Z}/n$, and when using additive notation it is identified with the additive group of the ring of integers modulo $n$.
Free groups. The free group on two different symbols and is denoted by . As a set it consists of all reduced words in the letters . The group operation is “concatenate two words and reduce the result”. The group , where is either in or , is defined similarly (in the case we consider the free group on countably many symbols)
Various matrix groups. Whenever $R$ is a ring (associative with identity, commutative or not) and $n$ is a positive integer we can define $\mathrm{GL}_n(R)$ to be the group of invertible $n \times n$ matrices with entries in $R$. If $R$ is commutative then we can also define the subgroup $\mathrm{SL}_n(R)$ of $\mathrm{GL}_n(R)$ of matrices whose determinant is equal to $1$.
In particular it is sometimes useful to have in mind some more “concrete” models for the free group . For example, the subgroup of generated by the matrices and is free. In order to show this, one needs to invoke the ping-pong lemma, which we don’t cover here.
The subgroup of generated by the matrices and is also free. Of course the fact that this group is free follows from the fact that the group generated by and is free. However, it is also easy to show it directly.
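The specific matrices are not shown in this copy of the notes; a standard choice here is the pair of elementary matrices generating the so-called Sanov subgroup of $\mathrm{SL}_2(\mathbb{Z})$. The following sketch, assuming those generators, multiplies out one nontrivial reduced word and confirms it is not the identity (of course this certifies only that single word, not freeness).

```python
import numpy as np

# Hypothetical concrete generators (the Sanov subgroup of SL(2, Z));
# the matrices used in the notes are elided in this copy.
A = np.array([[1, 2], [0, 1]])
B = np.array([[1, 0], [2, 1]])
A_inv = np.array([[1, -2], [0, 1]])
B_inv = np.array([[1, 0], [-2, 1]])

def evaluate(word):
    """Multiply out a word given as a list of 2x2 integer matrices."""
    result = np.eye(2, dtype=int)
    for m in word:
        result = result @ m
    return result

# A nontrivial reduced word; freeness predicts it is never the identity.
w = evaluate([A, B, A_inv, B_inv])  # the commutator of A and B
print(w)  # [[21 -8], [8 -3]], not the identity, consistent with freeness
```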
The discrete Heisenberg group is the subgroup of of all the matrices of the form , where . It is an example of a nilpotent group - we will talk more about such groups in the later lectures.
Given a countable group and a commutative ring we let be the group ring of over , which is defined as follows. As a set, we have that consists of all formal finite -linear combinations of the elements of .
The addition in the ring is the obvious one, and the multiplication is induced by the multiplication of elements of .
The above definition is hopefully clear, but it is somewhat informal, because usually the notion of a “formal -linear combination” is an informal one (i.e. it does not appear in any of the Bourbaki’s texts). If we wanted to be more prudent we would say that consists of finitely supported -valued functions defined on . The addition of elements of is then defined as the addition of functions, and the multiplication in is defined as a convolution product.
If then consists of all the expressions of the form , where , , and for all we have - in other words, the ring can be identified with the ring of Laurent polynomials with coefficients in .
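The convolution definition is easy to play with in the case $G = \mathbb{Z}$. A minimal sketch, storing finitely supported functions as Python dicts from exponents to coefficients; the product below is exactly the Laurent-polynomial product mentioned above.

```python
def convolve(a, b):
    """Product in the group ring R[Z]: elements are finitely supported
    functions Z -> R, stored as {exponent: coefficient}, multiplied by
    the convolution (= Laurent polynomial) product."""
    c = {}
    for m, x in a.items():
        for n, y in b.items():
            c[m + n] = c.get(m + n, 0) + x * y
    return {k: v for k, v in c.items() if v != 0}

# (1 + z)(1 - z) = 1 - z^2
p = {0: 1, 1: 1}
q = {0: 1, 1: -1}
print(convolve(p, q))  # {0: 1, 2: -1}
```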
Let us recall some basic definitions about Hilbert spaces. A Hilbert space is a complex vector space $H$ together with a Hermitian inner product $\langle \cdot, \cdot \rangle$ which is linear in the first variable and antilinear in the second variable, and such that $H$ is complete with respect to the norm defined by $\|x\| := \sqrt{\langle x, x \rangle}$.
There are three basic examples of Hilbert spaces which we need to consider:
finite dimensional spaces , where with the standard Hermitian inner product
$\ell^2(X)$, where $X$ is a (typically infinite) set, is the Hilbert space of functions $f \colon X \to \mathbb{C}$ such that $\sum_{x \in X} |f(x)|^2 < \infty$, with Hermitian inner product $\langle f, g \rangle = \sum_{x \in X} f(x)\overline{g(x)}$. The indicator function of $x \in X$ will be denoted by $\delta_x$.
where is a space with a measure (typically interval with the Lebesgue measure, or the set of all complex number of modulus one, also with the Lebesgue measure), whose elements are measurable functions such that . The inner product is .
A linear map $T \colon H_1 \to H_2$ between Hilbert spaces is bounded if for some $C > 0$ and for all $x \in H_1$ with $\|x\| \le 1$ we have $\|Tx\| \le C$. If $T$ is bounded then the smallest $C$ which is a witness of it is called the norm (or the operator norm) of $T$, and is denoted by $\|T\|$.
The adjoint of an operator is the unique bounded operator such that for all and we have
It is easy to check that and .
If $T \colon H \to H$ then we say that $T$ is self-adjoint if $T = T^*$.
If with the standard inner product and is represented in the standard basis by a matrix then is the operator represented by the matrix , where denotes the transpose of . In particular the condition of being self-adjoint is equivalent to . Thus is self-adjoint if it is represented by a Hermitian matrix in the standard basis.
If is any bounded operator then , and are all self-adjoint.
We say that a bounded self-adjoint operator $T$ is positive if for all $x$ we have $\langle Tx, x \rangle \ge 0$. Note that for any bounded operator $T$ we have that $T^*T$ (and hence also $TT^*$) is positive: we have $\langle T^*Tx, x \rangle = \langle Tx, Tx \rangle = \|Tx\|^2 \ge 0$.
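These identities are easy to test numerically in the finite-dimensional model, where the adjoint is the conjugate transpose. A minimal check with a random matrix, using an inner product linear in the first variable as in the definition above:

```python
import numpy as np

# In the finite-dimensional model H = C^3 the adjoint is the conjugate
# transpose; we check <Tx, y> = <x, T*y> and positivity of T*T.
rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
T_star = T.conj().T

inner = lambda u, v: np.dot(u, v.conj())  # linear in u, antilinear in v

x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)
assert np.allclose(inner(T @ x, y), inner(x, T_star @ y))

# T*T is self-adjoint and positive: <T*T x, x> = <Tx, Tx> = ||Tx||^2 >= 0
P = T_star @ T
assert np.allclose(P, P.conj().T)
val = inner(P @ x, x)
assert val.real >= 0 and abs(val.imag) < 1e-9
```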
Given a countable group $G$ we define $\ell^2(G)$ to be the Hilbert space of all those functions $f \colon G \to \mathbb{C}$ such that $\sum_{g \in G} |f(g)|^2 < \infty$ (i.e. $\ell^2(G)$ is the Hilbert space of all square-summable functions on $G$). Given $g \in G$, the function $\delta_g \in \ell^2(G)$ is defined by demanding that $\delta_g(g) = 1$ and $\delta_g(h) = 0$ when $h \neq g$, i.e. $\delta_g$ is the indicator function of $\{g\}$.
The scalar product on is defined by demanding that the functions , , form an orthonormal basis, i.e. for all and for . Thus every element of is a linear combination of the vectors , .
We have a natural left action , which is called the left regular representation, defined on the basis vector by the formula
Similarly we have the right action defined as .
Both and extend to actions of the group ring by linearity, i.e. if is equal to , then
and similarly for . In this way and become bounded linear operators on . Sometimes we simply say that is an operator on - in that case we will always mean the left regular representation.
The operation of taking the inverse in extends to an involutive operation on which we will denote with an asterisk:
On the other hand we have the operation of taking the adjoint operator, defined on all bounded operators on , which we also denote by , i.e. if is a bounded operator then is the adjoint of .
The following lemma justifies the choice of notation.
For any we have
To prove the claim we need to check that and are adjoints of each other, i.e. for any we have
By linearity we can just as well assume that for some and we have , and . Then we need to show that
Clearly LHS is equal to if and to otherwise, and RHS is equal to if and to otherwise, which shows the desired equality. ∎
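For a finite group the lemma can be verified by brute force: on $\ell^2(\mathbb{Z}/n)$ the operators $\lambda(g)$ are permutation (circulant) matrices, and the adjoint of $\lambda(a)$ is a matrix transpose. A minimal sketch in additive notation for $\mathbb{Z}/5$ (the group and the element $a$ below are our choices for illustration):

```python
import numpy as np

n = 5  # work in G = Z/5; delta_h is the h-th standard basis vector

def lam(g):
    """Matrix of the left regular representation: lambda(g) delta_h = delta_{g+h}."""
    M = np.zeros((n, n))
    for h in range(n):
        M[(g + h) % n, h] = 1.0
    return M

def lam_ring(a):
    """Extend lambda by linearity to a group-ring element a = {g: a_g}."""
    return sum(c * lam(g) for g, c in a.items())

# lambda is a homomorphism: lambda(g) lambda(h) = lambda(g + h)
assert np.allclose(lam(2) @ lam(4), lam((2 + 4) % n))

# the lemma: lambda(a)* = lambda(a*), where a* has coefficients
# conj(a_{g^{-1}}) (here inverses are negatives and coefficients are real)
a = {0: 1.0, 1: 2.0, 2: -1.0}
a_star = {(-g) % n: c for g, c in a.items()}
assert np.allclose(lam_ring(a).T, lam_ring(a_star))
```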
Recall that a bounded operator on a Hilbert space is self-adjoint iff : note that if then the above lemma gives a handy “visual condition” to recognize if is a self-adjoint operator. Namely, for every we need to compare the coefficients of and in and check if they are conjugate to each other.
If is a finite group then and are both finite-dimensional and isometric to each other as Hilbert spaces, by sending the linear combination to (If is not finite then this still gives a natural embedding of into ).
If is finite then the ring can be described as the direct sum of matrix rings
where the sum is over all iso-classes of irreducible linear representations of . On the other hand the space can be conveniently described as the direct sum , i.e. decomposes as the sum of irreducible representations of , and an iso-class of dimension appears exactly times.
Particular case of the previous example is when is a finite abelian group. In that case the space has a particularly nice orthogonal basis: the elements of it are characters, i.e. the homomorphisms , where denotes the set of complex numbers of norm . Given such a character , it can be checked that the action of is as follows: for we have , i.e. spans a one-dimensional -invariant subspace of .
If we denote by the set of all characters of , then can be identified with the set of all complex-valued functions on , via the map which sends to the function . The above remark implies that under this map the operator corresponds to an operator on of pointwise multiplication.
The self-adjoint elements of correspond exactly to those functions on which only take real values.
We have already seen that can be identified with the ring of Laurent polynomials with complex coefficients. On the other hand the Fourier transform gives us an isomorphism of Hilbert spaces and (recall that the latter space is the space of all measurable functions such that ; we normalize the measure on so that ). Under this isomorphism the action of on corresponds simply to pointwise multiplication of functions on . As in the previous example self-adjoint elements of correspond to real-valued functions.
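A finite-dimensional analogue of this isomorphism is easy to check: on $\mathbb{Z}/n$ the discrete Fourier transform turns the convolution action of a group-ring element into pointwise multiplication. A minimal sketch with numpy's FFT (random data, our own indexing conventions):

```python
import numpy as np

n = 8
rng = np.random.default_rng(1)
a = rng.standard_normal(n)  # group-ring element, coefficients indexed by Z/n
f = rng.standard_normal(n)  # a vector in l^2(Z/n)

# the action of a on f is the (circular) convolution...
conv = np.array([sum(a[g] * f[(h - g) % n] for g in range(n)) for h in range(n)])

# ...and on the Fourier side it becomes pointwise multiplication
assert np.allclose(np.fft.fft(conv), np.fft.fft(a) * np.fft.fft(f))
```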
This and the previous example can be generalized to arbitrary countable abelian groups, using the so-called Pontryagin duality in place of the Fourier transform.
Given a bounded operator on a Hilbert space , we define .
For any bounded operator between Hilbert spaces we have
For any bounded operator between Hilbert spaces we have
If and are bounded operators between Hilbert spaces and then the orthogonal complement of in is equal to
Clearly we have , so let . Then , and hence , and so as needed.
We have iff for any we have . This is equivalent to for all which is equivalent to , which by previous point is equivalent to .
We need to show that is equal to . By the previous points it is enough to show that
The inclusion is obvious. For the other inclusion let . Since is positive, we must have , and hence , i.e. . Similarly , which finishes the proof.
∎
The final bit in this section is the definition of the von Neumann dimension. Let and for let be the vector whose -th coordinate is and all other coordinates are equal to .
Let be a closed subspace which is -invariant, and let be the orthogonal projection onto . We define the von Neumann dimension of to be equal to
The way arises in basic cases concerning -invariants is as follows. Every element gives rise (via the left multiplication) to a bounded operator , and as such we have and similarly . It is easy to check that both and are right-invariant closed subspaces of and , respectively.
The formula simplifies when (i.e. is a matrix), in which case we have
It is very instructive to digest the definition of the von Neumann dimension in the case when is a finite group. For example, let be a -invariant subspace. From elementary linear algebra we know that
On the other hand we have that , , is an orthonormal basis of , so we can use it to compute the trace of in that basis. Thus we have
But now since is -invariant, and is an isometry for every , we have that commutes with for every . Therefore for every we have
Since acts by isometries, the above is equal to . In other words, we have
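The resulting formula $\dim_G V = \dim_{\mathbb{C}} V / |G|$ for finite $G$ can be checked directly in a small example. Below $G = \mathbb{Z}/6$ and $V$ is the line spanned by a character (a one-dimensional invariant subspace, as in the abelian example above); the specific group and character index are our choices for illustration.

```python
import numpy as np

# Von Neumann dimension over a finite group G = Z/6: for a G-invariant
# subspace V of l^2(G), dim_G V = <P delta_e, delta_e> = dim_C(V) / |G|.
n, k = 6, 2
chi = np.exp(2j * np.pi * k * np.arange(n) / n)  # a character of Z/6
v = chi / np.linalg.norm(chi)                    # unit vector spanning V
P = np.outer(v, v.conj())                        # orthogonal projection onto V

# V is invariant under the regular representation: shifting chi only
# multiplies it by a scalar of modulus one.
assert np.allclose(np.roll(chi, 1), np.exp(-2j * np.pi * k / n) * chi)

dim_G = P[0, 0].real  # <P delta_e, delta_e>, with e at coordinate 0
print(dim_G)          # approximately 1/6 = dim_C(V) / |G|
```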
Recall that is isomorphic to and the action of on is by point-wise multiplication (since is abelian the left action is equal to the right action). All closed invariant subspaces of are of the form , where is a measurable subset of (it is clear that these subspaces are -invariant, but it takes more effort to show that all -invariant subspaces are of this form).
What is the von Neumann dimension of ? The projection is simply multiplication by the indicator function of , and is the constant function , and so
i.e. the von Neumann dimension recovers the Lebesgue measure on in this case.
If and then because there is no non-zero -function on with the property , when is a non-zero Laurent polynomial (because such a polynomial has only finitely many zeros on ), and consequently .
On the other hand, let . In this case can easily be not equal to . However, we will shortly see that is an integer. Let us state the reason for this informally for now: the ring lies in the field of rational functions in one variable, and in fact can be identified with the field of fractions of . Using Gaussian elimination, we can find a matrix such that is in the row-echelon form.
Similarly to the case of the standard dimension, we have that , and the latter is equal to the number of non-zero rows. This statement is best proved using affiliated operators, so we will return to it later.
A closed subspace of which is -invariant is called a Hilbert -module.
For a Hilbert -module we have iff .
The direction is obvious, conversely if then , where is the projection onto . Since the action is by unitaries, it follows that for all , and hence also for all , which shows that , and so and hence . ∎
Recall from Jessie Peterson’s lecture that the bounded -invariant operators on form, by the bicommutant theorem, the group von Neumann algebra of which we will denote by .
We will say that two Hilbert -modules and are isomorphic if there exists a -invariant isometry between them, and that they are weakly isomorphic if there exists which is injective on and such that .
For we have .
For every and let be be the vector whose -th coordinate is and all other coordinates are .
We now have .
In other words, if we write
and
for some complex numbers and then we have . Hence
which proves the claim. ∎
If and are isomorphic then .
By taking a suitable direct sum we can just as well assume that for some . Let be a -invariant isometry, let be the projection onto , let be projection onto , and let .
Then and one can check that . It follows that and . It follows that
and similarly
Now the claim follows from the previous lemma. ∎
If and are weakly isomorphic then they are isomorphic.
By passing to a suitable direct sum we may assume that for some , and that we have an element such that which is injective on , and which is equal to on the orthogonal complement of . In particular we have .
On the other hand is positive self-adjoint, so by the spectral theorem we can find a positive self-adjoint with . This is injective on . Now consider the (unbounded) operator which is defined as for , and finally let .
For we have which is equal to .
Finally we note that is dense in since , and hence we can extend to an isometry defined on all of . ∎
For we have
Indeed, induces a weak isomorphism from to , and it is easy to check from definitions that for any Hilbert -module , ∎
If is a field, is a group, and are such that then we also have (i.e. is directly finite).
is directly finite for any group.
If then for any , and hence . Hence , so and therefore , i.e. is an injection.
The rest is routine, for example we can argue as follows: since and is injective we have for some , and hence , so . The last equality implies that in . ∎
We will return to Kaplansky’s conjecture for arbitrary fields later.
For simplicity of notation we consider only simplicial complexes, although all the definitions can be easily generalized to the context of CW-complexes.
Let be a simplicial complex, let be the set of -dimensional cells and let be a field. Let be the set of formal -linear combinations of elements of .
After we choose some arbitrary orientations on all cells, we get the boundary maps defined on the canonical basis as , where the sign depends on whether the chosen orientation of in agrees with the orientation of induced from .
Then the homology groups of with coefficients in are the -vector spaces
To define -homology, we assume that has bounded geometry, i.e. there exists such that each -dimensional cell is contained in at most cells of dimension . Recall that is the Hilbert space of -summable functions on . In particular it is spanned by the indicator functions for .
We define
The -homology groups of are defined as
In particular, if is a finite complex then -homology is the same as the standard homology.
However, now let us take a finite simplicial complex and consider a normal covering of and denote with the deck transformation group of . We consider the group as acting from the right.
For each cell of let us choose a lift of it in . Note that acts on and the chosen lifts provide an identification of with a disjoint union of copies of . Thus we also get an isometry , which sends to the vector which is equal to on the coordinate corresponding to , and which is equal to on all other coordinates.
The boundary maps are -equivariant and under the identification above they induce maps given by certain matrices in . As such the -homology of can be naturally seen as a Hilbert module, namely
and so we can define the -Betti numbers of with respect to as
Consider the standard square tessellation of as a square complex . There is one orbit of 2-cells, two orbits of 1-cells and one orbit of 0-cells (See Figure 1). So the complex from which we compute -homology is
where , , , .
We easily (using Fourier transform to identify with , or directly), see that is dense in and that , which implies that for all .
Consider the Cayley graph of the free group (see Figure 2). The complex from which we compute homology is now
where , , . As before we can check that is dense (see the next example), from which it follows that and .
More generally, if is the Cayley graph of , then we can show that and .
If is a connected simplicial complex with infinite then . Indeed, we just need to argue that is dense. For this we note that for any we have . Therefore, any must be a constant function, and since is infinite, the only constant function in is the function.
Suppose that is a group and is a model of , i.e. we have that , and for . Then we define
where is the universal cover of
This definition is sensible because in fact -homology is a homotopy invariant, in the sense that a homotopy equivalence of finite complexes and induces a weak isomorphism of and , where and are covers corresponding to the same subgroup of . For details see [ECK00] (or Thomas Schick’s lectures next week). It follows that no matter what model for we take, we get the same -Betti numbers.
Note that we have defined only if there is a model for with bounded geometry. However can be defined for an arbitrary group , either via the theory of -homology developed by W. Lueck, or simply by taking exhaustion by finite dimensional complexes of (see Gaboriau’s papers for the latter approach). We will not cover this in these notes.
Also it is worth mentioning that by Gaboriau’s work, many groups (for example all amenable groups) admit a finite dimensional “measurable ”, and this is enough to define their -Betti numbers in the way which we do it in these notes.
In particular we have shown that for all , , and for all infinite groups (we have shown this last statement only for groups with a model for of bounded geometry, but it is true in general).
The -Euler characteristic of with respect to is
Using what we already know about the von Neumann dimension, we can show the following proposition
If is a finite simplicial complex, and is a normal cover with deck transformation group then
Recall that . By additivity of von Neumann dimension we have
so
and by additivity of von Neumann dimension we have . Thus the claim follows. ∎
Finally let us mention that by Lemma 1.10 we can define the Laplacian and we have , and so .
Suppose that is a finite simplicial complex, , and is the universal cover of . Suppose that is a chain of normal subgroups of such that , and let be the cover corresponding to , i.e. is the quotient of by the action of . Then it is natural to ask the following question.
Is it true that for every we have ?
The general answer is not known, however the following is a classical theorem of Wolfgang Lueck.
The answer is “yes” if all the groups are residually finite.
Lueck proved it when are finite. The above version was proved in a paper of Jozef Dodziuk, Peter Linnell, Varghese Mathai, Thomas Schick and Stuart Yates. In fact it is a folklore theorem (or Elek’s theorem? I couldn’t find a reference immediately) that it is enough to assume that are sofic (and the proof is essentially the same).
Later it was noticed by other researchers (as mentioned by Andrei in his course) that in fact the answer is positive when is sofic, and with a very similar proof.
Algebraically this corresponds to the following theorem.
Let , and let be defined by , where is the natural projection map. If all the groups are residually finite then we have .
Theorem 3.9 follows by taking to be a Laplacian on .
Since and , we can assume that is positive and self-adjoint. As such we can consider the spectral measure of and the spectral measures of (here we mean the scalar-valued spectral measure, i.e. the projection-valued spectral measure composed with the trace , as introduced in Jessie Peterson’s course).
Let , i.e. is the sum of absolute values of the coefficients of . If is any group homomorphism then using the Cauchy-Schwarz inequality we can check that the operator norm of is bounded by .
The next two claims are general statements about the so-called weak convergence of measures.
Claim: Let be any countable groups, let be any sequence of normal subgroups of with . Let and let . Let be the spectral measure of . Then the measures converge weakly to the spectral measure of , i.e. for all continuous functions on we have as .
Indeed, by Weierstrass approximation and linearity it is enough to check this when for some . But for almost all we have that is injective on the support of , and hence the coefficient of the neutral element of and of is the same, which by definition means that . Since and , this finishes the proof.
Claim: Let , and be probability measures on some interval . If weakly converge to then for any open interval we have as
Indeed, let and let be a non-negative function which is bounded by , equal to outside of , and such that . Since is bounded by on and outside we have for all , so
and so .
In particular we have that the spectral measures weakly converge to . In the following claim we will crucially use both that are residually finite and that .
Claim: For all we have
Indeed, since is residually finite, by the previous claim it is enough to take a finite quotient and show that
where is the spectral measure of .
Let , and let be the non-zero eigenvalues of (which are positive real numbers). Note that is a coefficient of the characteristic polynomial of , so in particular .
Bounding eigenvalues which are less than by , and the other ones by , we therefore get
which after taking the logarithms shows . Since , we see that
which finishes the proof of the claim.
Now we can finish the proof of the theorem: Because of the previous claim, we can find a non-negative continuous function (supported on some small interval around ) such that and also for all we have . Therefore the statement follows from the weak convergence. ∎
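The theorem can be watched numerically in the simplest case $G = \mathbb{Z}$ with $G_n = n\mathbb{Z}$. The operator below is our own choice for illustration: the Laplacian $2 - z - z^{-1}$, whose quotient to $\mathbb{Z}/n$ is a circulant matrix, and whose kernel on $\ell^2(\mathbb{Z})$ has von Neumann dimension $0$ (its Fourier symbol $2 - 2\cos t$ vanishes only at one point).

```python
import numpy as np

# T = 2 - z - z^{-1} in C[Z] acts on l^2(Z/n) as a circulant matrix;
# Lueck approximation predicts dim ker(T_n)/n -> dim_G ker T = 0.
def circulant_laplacian(n):
    M = 2.0 * np.eye(n)
    for h in range(n):
        M[h, (h + 1) % n] -= 1.0
        M[h, (h - 1) % n] -= 1.0
    return M

for n in [10, 100, 1000]:
    eigs = np.linalg.eigvalsh(circulant_laplacian(n))
    print(n, np.sum(np.isclose(eigs, 0.0, atol=1e-8)) / n)
# the kernel in each quotient is spanned by the constant vector, so the
# printed ratios are 1/n, tending to 0
```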
In the proof above with a little bit more care we could have obtained that converges to and that the latter number is finite. This shows in particular that . In [GRA15] it is shown that this is not far from optimal: for every there exists some group and such that .
On the other hand, suppose is a group, and there is some function such that . If we have for all finite quotients of , then Lueck approximation holds for (with respect to residually finite quotients), by virtue of repeating the proof. Unfortunately, no one so far has been able to prove the Lueck approximation over using this strategy.
Recall that a countable group is amenable if there exists a sequence of finite subsets of such that for every we have . If is amenable then a sequence witnessing the amenability of is called a Foelner sequence and its elements are referred to as Foelner sets.
The following is a particular case of a theorem of Gabor Elek (the presentation follows the simplified proof of Daniel Pape). Closely related statements were proved before by Cheeger and Gromov in a more geometric setting.
Let be amenable, let be a Foelner sequence and let . Let be the restriction of to . Then
Let be the support of . For let us define
and
Note that , and we extend it to by declaring for . Note that , so it is enough to show
Let and let , let be the spectral measure of and let be the spectral measure of (i.e. is “the set of eigenvalues of with multiplicities”).
Claim 1 The measures weakly converge to the spectral measure .
Indeed the proof is very similar to the proof of the analogous claim in Theorem 3.10: we need to show that for a fixed we have
where is the standard trace of a finite dimensional matrix.
Let . By definition of a Foelner sequence, for almost all we have
Let
It follows that . But if then so
Therefore we have
which is equal to
and so
where is the operator norm of . Since the latter is bounded by the norm of , it is in particular independent of , which implies the claim.
Weak convergence by itself shows that
by taking a non-negative continuous function supported on some interval around and such that .
Thus we have
and hence we also have
Claim 2 We have .
Indeed, let be the orthogonal projection onto , and let be the projection onto . Note that and so for any we have , which shows that for all we have . Now the result easily follows, because
and there are at most non-zero summands in this sum.
By the additivity of dimensions, claim 2 implies that , which finishes the proof. ∎
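The same illustrative operator $T = 2 - z - z^{-1}$ (our choice, as in the earlier numerical sketch) also illustrates this theorem: instead of passing to quotients, we compress $T$ to the Foelner sets $F_n = \{0, \dots, n-1\}$ of $\mathbb{Z}$ and compare normalized kernel dimensions.

```python
import numpy as np

# Foelner-set version (Elek's theorem) for G = Z with F_n = {0, ..., n-1}:
# restrict the Laplacian T = 2 - z - z^{-1} to l^2(F_n), i.e. take the
# tridiagonal compression, and compare dim ker(T_n)/|F_n| with dim_G ker T = 0.
def truncated_laplacian(n):
    M = 2.0 * np.eye(n)
    for h in range(n - 1):
        M[h, h + 1] = -1.0
        M[h + 1, h] = -1.0
    return M

for n in [10, 100, 1000]:
    eigs = np.linalg.eigvalsh(truncated_laplacian(n))
    print(n, np.sum(np.isclose(eigs, 0.0, atol=1e-10)) / n)
# every compression is invertible (its eigenvalues 2 - 2cos(k*pi/(n+1))
# are strictly positive), so all printed ratios are 0, matching dim_G ker T
```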
Using the same proof and slightly more involved notation we can show the analogous statement for . Note that this in particular shows that if , and then there exists which is a finite sum of the vectors (see the proof of Lemma 2.2 for the definition of ). Such kernel elements are often referred to as finitely supported.
If is an amenable group then for all we have . In particular .
We show it only when there exists a model for with bounded geometry. If is the universal cover, then is contractible, and hence . In other words there are no finitely supported elements in . But then by the previous remark we have that , which shows that . ∎
Elek and Szabo proved Kaplansky’s conjecture on direct finiteness for sofic groups, i.e. they showed the following theorem.
Let be a sofic group and be a field. Then is directly finite, i.e. for all we have that implies .
For lack of time we will not define what sofic groups are. However we note that they include residually finite and amenable groups.
We present the argument only in the residually finite case (the sofic case is done in a very similar way). By passing to the subfield generated by coefficients of and we can just as well assume that is countable.
Consider a chain of finite-index normal subgroups of . By passing to a subchain (here we use that is countable) we may assume that for all the limit
exists, where is the image of in and is the -linear map given by left multiplication by .
Claim: If and then .
Indeed, note that is a finite set and let . Let be a maximal subset of such that for we have . By maximality we have that and hence . It follows that the vectors , have pair-wise disjoint supports, and so .
But if then clearly for all . Since is directly finite for any field and , we deduce that for all . Thus , and so . This finishes the proof. ∎
Let us recall a convenient characterisation of affiliated operators from Jessie Peterson’s course: A closed partially defined operator is an affiliated operator iff there exists a sequence of Hilbert -modules such that , , and for each we have that restricted to is equal to an element of restricted to (note that it follows from this description that is densely defined).
The set of all affiliated operators will be denoted by . Elements of will also be called affiliated operators.
The following is the Peter Linnell’s formulation of the Atiyah conjecture over for torsion-free groups:
If is a torsion-free group then there is a skew field which contains .
The more classical formulation is the following:
If is torsion free and then .
The equivalence of these two formulations is a theorem of Peter Linnell. Using the above characterisation of the affiliated operators it is easy to argue that if and then . This can be used to formalize Example 1.15, and in a similar way via Gaussian elimination this easily shows the implication “Linnell’s formulation implies the classical formulation”. The other direction relies on Cohn’s theory (see [LIN93]).
Recall that a group is biorderable if there exists a linear order on such that for all we have that implies that and .
If is biorderable then there are no non-zero zero-divisors in .
Let , let be the largest elements in and respectively. Then the coefficient of in is non-zero because the only pair such that is .
Indeed, for every with and every we have , and similarly for every and every with we have . This finishes the proof. ∎
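For $G = \mathbb{Z}$ with its usual biorder the leading-term argument can be checked mechanically: writing group-ring elements as Laurent polynomials, the coefficient at the top degree of a product is the product of the top coefficients, so a product of two non-zero elements is non-zero. A small sketch:

```python
# Leading-term argument in the simplest biorderable case G = Z: in K[Z]
# (Laurent polynomials) the top degree of a product is the sum of the top
# degrees, and the top coefficients multiply.
def convolve(a, b):
    """Group-ring product for Z, elements stored as {exponent: coefficient}."""
    c = {}
    for m, x in a.items():
        for n, y in b.items():
            c[m + n] = c.get(m + n, 0) + x * y
    return {k: v for k, v in c.items() if v != 0}

a = {-1: 3, 0: -2, 4: 5}
b = {2: 7, 3: 1}
prod = convolve(a, b)
assert max(prod) == max(a) + max(b)              # degrees add: 4 + 3 = 7
assert prod[max(prod)] == a[max(a)] * b[max(b)]  # top coefficients: 5 * 1 = 5
```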
In fact for the lemma to hold it is enough if is one-sided orderable (see the excellent monograph [DNR14] for this and more information about orderable groups).
Let us recall what a nilpotent group is: if is a group then we let , and inductively . Then is said to be nilpotent if for some we have .
If is torsion-free nilpotent then is biorderable.
(Sketch from [DNR14]) Claim: If are groups, is a central biorderable subgroup of , and is biorderable, then is biorderable.
Indeed, let be the natural projection and let us define the order on by declaring iff , or, in case of a tie, iff . It is straightforward to check that this is a biorder on .
Now let us define as . If then is a chain of normal subgroups such that , and such that is torsion-free (in particular the smallest non-trivial is a central subgroup of ).
Since is orderable, the claim follows by induction on the length of a minimal chain of subgroups such that and such that is torsion-free. ∎
For a torsion free amenable group the Atiyah conjecture is equivalent to the statement that if then for all we have (i.e. to the statement that is a domain).
(Sketch) Clearly Atiyah conjecture implies that is a domain, so assume conversely that is a domain.
Claim: We have that fulfils the Ore condition: for all we can find such that .
Indeed, let be a two-sided Foelner sequence, i.e. for all we have and . Such a sequence always exists and it is not very hard to construct it.
Let . Consider restricted to . Then by assumption and so , similarly . But we have that and are both subspaces in , where . In particular we have and hence . This shows the claim.
In particular can be embedded in its classical field of fractions, which we will denote by .
On the other hand Elek’s approximation along a Foelner sequence implies that if is a domain and is amenable then for each we have . As such has an inverse in . Therefore, by the universal property of we have that embeds in . This finishes the proof. ∎
Any group which is residually torsion-free nilpotent fulfils the Atiyah conjecture.
Note that we need Andrei’s result that Lueck’s approximation holds over to actually obtain this corollary. From the results in these notes we can deduce only the Atiyah conjecture over , i.e. that for we have (or equivalently that there is a skew field between and ).
Atiyah conjecture holds for the free groups.
Indeed, it’s enough to argue that is residually torsion-free nilpotent. One way to quickly check it is to consider the quotients of
of the form
These are easily checked to be torsion-free nilpotent. ∎