Lewko's blog

The sharp large sieve inequality

Posted in expository, Fourier Analysis, math.NT, Uncategorized by Mark Lewko on August 31, 2009

This is the first post in a short sequence related to the large sieve inequality and its applications. In this post I will give a proof of the sharp (analytic) large sieve inequality. While the main result of this post is a purely analytic statement, one can quickly obtain a large number of arithmetic consequences from it. The most famous of these is probably a theorem of Brun, which states that the sum of the reciprocals of the twin primes converges. I will present this and other arithmetic applications in a subsequent post; this post will focus on proving the sharp (analytic) large sieve inequality. Much of the work below could be simplified if we were willing to settle for a less-than-optimal constant in the inequality. This would have little impact on the arithmetic applications, but we'll stick to proving the sharp form of the inequality (mostly for my own benefit). I should point out that there is an alternate approach to this inequality, via Beurling-Selberg extremal functions, which gives an even sharper result: using that method one can replace the N+\delta^{-1} in Theorem 1 below by N-1+\delta^{-1}. We will not pursue this further here, however. Much of this post follows this paper of Montgomery, although I have made a few attempts at simplifying the presentation; most notably, I have eliminated the need to use properties of skew-Hermitian forms in the proof of the sharp discrete Hilbert transform inequality.

For {x \in \mathbb{R}} we will denote the distance of {x} to the nearest integer as {||x||}. Let \{a_n\} be a sequence of complex numbers for {M+1 \leq n \leq M+N}, and define

\displaystyle S(\alpha)= \sum_{n=M+1}^{M+N} a_n e^{2\pi i n\alpha}.

Clearly {S(\alpha)} is a periodic function of {\alpha}. The main result of this post is the following inequality.

Theorem 1 (large sieve inequality) Let {\{\alpha_r\}_{r=1}^{R}} be a sequence of {\delta}-spaced points (this is to say {||\alpha_i - \alpha_j||\geq\delta} for every {i \neq j}). Then

\displaystyle \sum_{r=1}^{R} | S(\alpha_r)|^2 \leq \left( N+\delta^{-1} \right) \sum_{n=M+1}^{M+N} |a_n|^2,\ \ \ \ \ (1)

and, dually, for any complex numbers {b_1,\ldots,b_R},

\displaystyle \sum_{n=1}^{N} \left| \sum_{r=1}^{R} b_{r} e^{2 \pi i n \alpha_{r}} \right|^2 \leq \left( N+ \delta^{-1} \right) \sum_{r=1}^{R} |b_{r}|^2. \ \ \ \ \ (2)

While (1) is what is typically referred to as the analytic large sieve inequality, (2) is what is typically needed in applications. Moreover, most proofs proceed by establishing (2) and deducing (1) by a duality argument (in fact, it isn't hard to show that (1) and (2) are equivalent by duality). In any event, since (2) is easier to prove and is all we will need for the applications, we will focus on this inequality alone.
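Before diving into the proof, here is a quick numerical sanity check of an instance of (2) (this is no part of the proof; the values of {N}, {R}, {\delta} and the random coefficients below are arbitrary choices for illustration):

```python
import cmath
import random

random.seed(0)

N, R = 50, 8
delta = 0.05
# points spaced 2*delta apart in [0,1); any two are at least delta apart mod 1
alphas = [2 * delta * r for r in range(R)]
bs = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(R)]

# Left-hand side of (2): sum over n of |sum_r b_r e^{2 pi i n alpha_r}|^2
lhs = sum(
    abs(sum(b * cmath.exp(2j * cmath.pi * n * a) for b, a in zip(bs, alphas))) ** 2
    for n in range(1, N + 1)
)
# Right-hand side of (2): (N + 1/delta) * sum_r |b_r|^2
rhs = (N + 1 / delta) * sum(abs(b) ** 2 for b in bs)

print(lhs <= rhs)
```

Any other {\delta}-spaced configuration and coefficient sequence should work just as well; the bound typically holds with plenty of room to spare.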

We now proceed to the proof of (2). Expanding out the square on the left-hand side of (2) gives us

\displaystyle\sum_{n=1}^{N}\sum_{r=1}^{R}\sum_{s=1}^{R}b_{r}\overline{b_{s}} e^{2\pi i n (\alpha_{r}-\alpha_{s})}=N\sum_{r=1}^{R}|b_r|^2+\sum_{r=1}^{R}\sum_{\substack{s=1 \\ s\neq r}}^{R} b_{r}\overline{b_{s}}\left(\sum_{n=1}^{N}e^{2 \pi i n (\alpha_{r}-\alpha_{s})}\right).
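The inner exponential sum here is a finite geometric series; its closed form, which is used in the next step, is easy to check numerically (the values of {N} and {\theta} below are arbitrary):

```python
import cmath

N, theta = 17, 0.123  # arbitrary test values; theta is not an integer
z = cmath.exp(2j * cmath.pi * theta)

# sum_{n=1}^{N} z^n versus its closed form (z - z^{N+1}) / (1 - z)
direct = sum(z ** n for n in range(1, N + 1))
closed_form = (z - z ** (N + 1)) / (1 - z)
print(abs(direct - closed_form))  # zero up to rounding error
```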

We are now left to show that the second term on the right is bounded in absolute value by {\delta^{-1}\sum_{r=1}^{R}|b_r|^2}. Summing the inner geometric series, this term satisfies

\displaystyle\left|\sum_{r=1}^{R} \sum_{\substack{s=1\\ s\neq r}}^{R} b_{r}\overline{b_{s}}\left(\frac{e^{2 \pi i (\alpha_{r}-\alpha_{s})}-e^{2\pi i(N+1) (\alpha_{r}-\alpha_{s})} }{1-e^{2 \pi i (\alpha_{r}-\alpha_{s})}}\right)\right|\leq 2\left|\sum_{r=1}^{R} \sum_{\substack{s=1\\s\neq r}}^{R}c_{r}\overline{c_{s}}\frac{1}{1-e^{2 \pi i (\alpha_{r}-\alpha_{s})}}\right|

where we have let {c_r = b_r e^{2\pi i \alpha_{r}}} or {c_r = b_r e^{2\pi i(N+1)\alpha_{r}}}, whichever choice maximizes the right-hand side. Since {|c_r|=|b_r|}, the {l^{2}} norms of the two sequences are equal, and it suffices to bound the right-hand side in terms of the {l^2} norm of the sequence {\{c_r\}}. Exploiting this idea further, letting {d_r = c_r e^{-\pi i \alpha_r}} and using the identity {1-e^{2\pi i \theta} = -2ie^{\pi i\theta}\sin(\pi\theta)}, the right-hand side of the previous expression is equal to

\displaystyle \left|\sum_{r=1}^{R}\sum_{\substack{s=1 \\ s\neq r}}^{R}\frac{d_{r}\overline{d_{s}}}{\sin(\pi (\alpha_{r}-\alpha_{s}))}\right|.

It now suffices to show that {\left| \sum_{r=1}^{R}\sum_{\substack{s=1 \\ s \neq r}}^{R}\frac{d_{r} \overline{d_{s}} }{\sin(\pi (\alpha_{r}-\alpha_{s}))}\right|\leq\delta^{-1}\sum_{r=1}^{R}|d_r|^2 }. Using the well-known identity {\frac{1}{\pi}\lim_{K\rightarrow\infty}\sum_{k=-K}^{K}\left(1 - \frac{|k|}{K}\right)\frac{(-1)^k}{x+k} =\frac{1}{\sin(\pi x)}} allows us to expand the above expression (for large {K}, up to an error that vanishes as {K\rightarrow\infty}) as

\displaystyle\frac{1}{\pi }\left|\sum_{r=1}^{R}\sum_{\substack{s=1 \\ s\neq r}}^{R} \sum_{k=-K}^{K} \left(1-\frac{|k|}{K} \right) \frac{(-1)^{k} d_{r}\overline{d_{s}} }{ \alpha_{r}-\alpha_{s}+k} \right| =\frac{1}{\pi K} \left| \sum_{r=1}^{R}\sum_{\substack{s=1 \\ s \neq r}}^{R}\sum_{1\leq m,n\leq K} \frac{(-1)^{m-n} d_{r} \overline{d_{s}} }{\alpha_{r}-\alpha_{s}+m-n} \right|
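The identity invoked here (the Fejér-weighted expansion of {\pi/\sin(\pi x)}) can itself be sanity-checked numerically; the values of {x} and {K} below are arbitrary:

```python
import math

def fejer_sum(x, K):
    # sum_{k=-K}^{K} (1 - |k|/K) * (-1)^k / (x + k)
    return sum((1 - abs(k) / K) * (-1) ** k / (x + k) for k in range(-K, K + 1))

x, K = 0.3, 20000
target = math.pi / math.sin(math.pi * x)   # the claimed limit, rearranged
print(abs(fejer_sum(x, K) - target))       # shrinks as K grows
```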

Now let {\lambda_{r,m}=\alpha_{r}+m} and {a_{r,m}=(-1)^{m}d_{r}}, so that {|a_{r,m}|=|d_{r}|}. Notice that the (doubly-indexed) sequence {\lambda_{r,m}} is still {\delta}-spaced. We can now rewrite the above expression as

\displaystyle \frac{1}{\pi K}\left| \sum_{r=1}^{R}\sum_{\substack{s=1 \\ s \neq r}}^{R} \sum_{1\leq m,n\leq K} \frac{ a_{r,m} \overline{a_{s,n}} }{ \lambda_{r,m}-\lambda_{s,n}} \right|=\frac{1}{\pi K} \left| \sum_{r,m} \sum_{\substack{s,n \\ (r,m)\neq (s,n)}} \frac{ a_{r,m} \overline{a_{s,n}} }{\lambda_{r,m}-\lambda_{s,n}} \right|

\displaystyle \leq\frac{1}{\pi K}\left(\sum_{r,m} |a_{r,m}|^{2}\right)^{1/2} \left(\sum_{r,m}\left|\sum_{\substack{s,n \\ (r,m)\neq (s,n)}}\frac{ \overline{a_{s,n}} }{\lambda_{r,m}-\lambda_{s,n}}\right|^2\right)^{1/2}

where the last inequality follows from Cauchy-Schwarz (the preceding equality holds because the extra terms with {r=s} and {m\neq n} cancel in pairs: {a_{r,m}\overline{a_{r,n}}/(\lambda_{r,m}-\lambda_{r,n})=(-1)^{m-n}|d_{r}|^{2}/(m-n)} is antisymmetric in {(m,n)}). Since {\sum_{r,m}|a_{r,m}|^2= K \sum_{r}|d_{r}|^{2}}, to conclude the proof of the large sieve inequality it suffices to show that {\sum_{r,m}\left|\sum_{\substack{s,n \\ (r,m)\neq (s,n)}} \frac{\overline{a_{s,n}}}{\lambda_{r,m}-\lambda_{s,n}}\right|^2\leq \frac{\pi^2}{\delta^{2}}\sum_{r,m}|a_{r,m}|^2}. This is exactly the sharp {L^2} estimate for the discrete Hilbert transform. We have reduced matters to the following theorem.

Theorem 2 (sharp Hilbert inequality) Let {\lambda_{1},\ldots,\lambda_{R}} be {\delta}-spaced real numbers (such as the {\lambda_{r,m}} above) and let {j_{1},\ldots,j_{R}} be complex numbers. Then

\displaystyle \sum_{r=1}^{R}\left|\sum_{\substack{s=1 \\ s \neq r}}^{R}\frac{j_{s}}{\lambda_{r}-\lambda_{s}}\right|^2 \leq \frac{\pi^2}{\delta^2} \sum_{r=1}^{R} |j_r|^2.
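As with Theorem 1, this can be illustrated numerically before we prove it (the spacing, the number of points, and the random coefficients below are arbitrary choices):

```python
import math
import random

random.seed(1)

R, delta = 12, 0.5
# build delta-spaced points: consecutive gaps of at least delta
lambdas, x = [], 0.0
for _ in range(R):
    lambdas.append(x)
    x += delta + random.random()

js = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(R)]

# Left-hand side: sum_r | sum_{s != r} j_s / (lambda_r - lambda_s) |^2
lhs = sum(
    abs(sum(js[s] / (lambdas[r] - lambdas[s]) for s in range(R) if s != r)) ** 2
    for r in range(R)
)
# Right-hand side: (pi^2 / delta^2) * sum_r |j_r|^2
rhs = (math.pi ** 2 / delta ** 2) * sum(abs(j) ** 2 for j in js)
print(lhs <= rhs)
```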

Since the kernel {(\lambda_{r}-\lambda_{s})^{-1}} is real, splitting the {j_{r}} into real and imaginary parts shows that it suffices to prove this for real {j_{r}}, which we now assume. Expanding the left-hand side of the inequality, we have

\displaystyle \sum_{r=1}^{R} \sum_{\substack{s=1 \\ s \neq r}}^{R}\sum_{\substack{t=1 \\ t \neq r}}^{R}\frac{j_{s}j_{t}}{(\lambda_{r}-\lambda_{s})(\lambda_{r}-\lambda_{t})}

\displaystyle = \sum_{r=1}^{R} \sum_{\substack{s=1 \\ s \neq r}}^{R}\frac{|j_{s}|^2}{(\lambda_{r}-\lambda_{s})^2} + \sum_{r=1}^{R} \sum_{\substack{s=1 \\ s \neq r}}^{R}\sum_{\substack{t=1 \\ t \neq r \\ t \neq s}}^{R}\frac{j_{s}j_{t}}{(\lambda_{r}-\lambda_{s})(\lambda_{r}-\lambda_{t})}.

To bound the first term, note that we may assume (after reindexing) that {\lambda_{1}<\lambda_{2}<\cdots<\lambda_{R}}, so that {\delta}-spacing gives {|\lambda_{r}-\lambda_{s}|\geq\delta|r-s|}. Then

\displaystyle \sum_{r=1}^{R}\sum_{\substack{s=1 \\ s\neq r}}^{R}\frac{|j_{s}|^2}{(\lambda_{r}-\lambda_{s})^2}\leq\frac{1}{\delta^2}\sum_{s=1}^{R} |j_{s}|^2 \sum_{r\neq s}\frac{1}{(r-s)^2}\leq\frac{\pi^2}{3\delta^2}\sum_{s=1}^{R}|j_{s}|^2,

since {\sum_{k\neq 0}k^{-2}=2\zeta(2)=\pi^{2}/3}.
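The value {\pi^{2}/3} used here for {\sum_{k\neq 0}k^{-2}} is easy to check numerically with a large partial sum:

```python
import math

# sum over nonzero integers k of 1/k^2 equals 2 * zeta(2) = pi^2 / 3
partial = 2 * sum(1.0 / k ** 2 for k in range(1, 200001))
print(abs(partial - math.pi ** 2 / 3))  # roughly 1/100000 (the tail of the series)
```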

Evaluating the second term, we have

\displaystyle \sum_{r=1}^{R}\sum_{\substack{s=1 \\ s\neq r}}^{R}\sum_{\substack{t=1 \\ t\neq r \\ t\neq s}}^{R}\frac{j_{s} j_{t}}{(\lambda_{r}-\lambda_{s})(\lambda_{r}-\lambda_{t})}=\sum_{r=1}^{R}\sum_{\substack{s=1 \\ s \neq r}}^{R}\sum_{\substack{t=1 \\ t\neq r \\ t \neq s}}^{R}\frac{j_{s} j_{t}}{\lambda_s -\lambda_t} \left( \frac{1}{\lambda_{r}-\lambda_{s}}-\frac{1}{\lambda_{r}-\lambda_{t}}\right)

\displaystyle =\sum_{\substack{s=1 }}^{R}\sum_{\substack{t=1 \\ t \neq s }}^{R}\frac{j_{s} j_{t}}{\lambda_s -\lambda_t}\sum_{\substack{r=1 \\ r \neq t \\ r \neq s}}^{R}\left( \frac{1}{\lambda_{r}-\lambda_{s}}-\frac{1}{\lambda_{r}-\lambda_{t}} \right) =\sum_{\substack{s=1 }}^{R}\sum_{\substack{t=1 \\ t \neq s }}^{R} \frac{j_{s} j_{t}}{\lambda_s -\lambda_t}\left( -\frac{1}{\lambda_{t}-\lambda_{s}}+\frac{1}{\lambda_{s}-\lambda_{t}} \right)

\displaystyle = 2\sum_{\substack{s=1 }}^{R}\sum_{\substack{t=1 \\ t \neq s }}^{R}\frac{j_{s} j_{t}}{(\lambda_s -\lambda_t)^2} \leq\sum_{\substack{s=1 }}^{R}\sum_{\substack{t=1 \\ t \neq s }}^{R}\frac{|j_{s}|^2 + |j_{t}|^2}{(\lambda_s -\lambda_t)^2}\leq \frac{2}{\delta^2}\sum_{\substack{s=1 }}^{R}\sum_{\substack{t=1 \\ t \neq s }}^{R}\frac{|j_{t}|^2}{(s -t)^2} \leq\frac{2 \pi^2}{3 \delta^2}\sum_{t=1 }^{R} |j_{t}|^2

where we have used the basic inequality {2ab \leq a^2 + b^2} for real numbers {a} and {b}. Combining this with our bound on the first term completes the proof of the sharp discrete Hilbert inequality (Theorem 2), and hence of the sharp large sieve inequality in dual form.

Let us conclude by recording a specialization of the (dual) analytic large sieve inequality that we have just proved.

Theorem 3 For each {1\leq q\leq Q} and each residue class {a \bmod q} with {(a,q)=1}, let {S(a,q)} be a complex number. We then have that

\displaystyle \sum_{n=1}^{N}\left|\sum_{q=1}^{Q} \sum_{\substack{a \mod q \\ (a,q)=1 }} S(a,q) e^{2\pi i an/q} \right|^2 \leq (N + Q^2) \sum_{q=1}^{Q} \sum_{\substack{a \mod q \\ (a,q)=1 }} |S(a,q)|^2.

This follows from the dual large sieve inequality (2) once we show that the fractions {a/q} appearing here are {Q^{-2}}-spaced. Indeed, for distinct reduced fractions {a/q \neq b/p} with {p,q \leq Q} we have {ap-bq \neq 0}, and hence

\displaystyle ||a/q - b/p|| = ||(ap-bq)/pq|| \geq 1/pq \geq 1/Q^2.
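For small {Q} this spacing claim is easy to confirm by brute force; below, {Q=12} is an arbitrary choice:

```python
from math import gcd

Q = 12  # an arbitrary small modulus bound
# all reduced fractions a/q with q <= Q, taken once each as points mod 1
fracs = [(a, q) for q in range(1, Q + 1) for a in range(1, q + 1) if gcd(a, q) == 1]

def circle_dist(x, y):
    # ||x - y||: distance from x - y to the nearest integer
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

pts = [a / q for (a, q) in fracs]
min_gap = min(
    circle_dist(pts[i], pts[j])
    for i in range(len(pts)) for j in range(i + 1, len(pts))
)
print(min_gap >= 1 / Q ** 2)
```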

Updated 9/02/2009: corrected typos
