Rudin’s inequality

After that whirlwind tour through Khintchine’s inequality, let’s take a look at the less well-known Rudin’s inequality. This says (at least in one form) that, given a finite abelian group G and a function f:G\to\mathbb{C}, if the support of the Fourier transform of f is entirely contained inside a dissociated set (we’ll come to what that means later), then for any 2\leq p<\infty we have

\| f \|_p\ll \sqrt{p}\| f\|_2.
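Before comparing this with Khintchine’s inequality, here is a minimal numerical sketch of the statement (my own illustration, not part of the original argument): I take G=\mathbb{Z}/N\mathbb{Z} with characters \gamma_s(x)=e^{2\pi i sx/N}, the dissociated frequency set S=\{1,3,9,27,81\}, and random Fourier coefficients; N, S and all names below are illustrative choices.

```python
import numpy as np

# Illustrative sketch: G = Z/NZ, characters gamma_s(x) = exp(2*pi*i*s*x/N).
# S = {1, 3, 9, 27, 81} is dissociated: a vanishing {-1,0,1}-combination of
# distinct powers of 3 is trivial (balanced ternary), and N is large enough
# that no such sum wraps around mod N.
N = 4096
S = [3**j for j in range(5)]
rng = np.random.default_rng(0)
fhat = rng.standard_normal(len(S)) + 1j * rng.standard_normal(len(S))

# Fourier inversion: f(x) = sum_{s in S} fhat_s gamma_s(x).
x = np.arange(N)
f = sum(c * np.exp(2j * np.pi * s * x / N) for c, s in zip(fhat, S))

def lp_norm(g, p):
    """L^p norm with respect to the uniform probability measure on G."""
    return np.mean(np.abs(g) ** p) ** (1.0 / p)

# Rudin's inequality predicts that ||f||_p / (sqrt(p) ||f||_2) stays
# bounded as p grows.
for p in [2, 4, 8, 16, 32]:
    print(p, lp_norm(f, p) / (np.sqrt(p) * lp_norm(f, 2)))
```

The printed ratios should stay bounded as p grows, consistent with \sqrt{p} being the right order of growth.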

Oh ho ho, this looks suspiciously similar to our form of Khintchine’s inequality from the last post:

\| \sum\epsilon_na_n\|_p\ll \sqrt{p}\| \sum\epsilon_na_n\|_2.

What are the differences? Well, in Rudin’s inequality we deal with functions whose Fourier support is contained inside a dissociated set, and we take L^p norms over the group G. In Khintchine’s inequality, however, the L^p norms are taken over the probability space, and our functions take a point of the probability space and spit out a sequence of signs.

Using the Fourier inversion formula, we can write out Rudin’s inequality in a way which highlights the similarity even more. Let S denote the support of the Fourier transform of f. The left hand side of Rudin’s inequality becomes

\left(\mathbb{E}_{x\in G}\left\lvert \sum_{\gamma\in S}\widehat{f}(\gamma)\gamma(x)\right\rvert^p\right)^{1/p},

while the left hand side of Khintchine’s inequality is, if we relabel our coefficients a_n as \widehat{f}(\gamma), indexed over S (all we’re doing here is changing notation),

\left(\mathbb{E}\left\lvert \sum_{\gamma\in S}\widehat{f}(\gamma)\epsilon_\gamma\right\rvert^p\right)^{1/p}.

The difference is now staring us in the face. All we’ve done in moving from Khintchine’s inequality to Rudin’s inequality is swapped the random variable (\epsilon_\gamma)_{\gamma\in S}\in \{-1,1\}^{\lvert S\rvert} for the function (\gamma(x))_{\gamma \in S}\in \mathbb{C}^{\lvert S\rvert}.
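Here is a quick numerical sanity check of that analogy (my own illustration, again with G=\mathbb{Z}/N\mathbb{Z} and \gamma_s(x)=e^{2\pi i sx/N}; the name gamma and the specific frequencies are my choices): mixed moments of characters vanish exactly when there is no additive relation among the frequencies, just as mixed moments of independent signs vanish.

```python
import numpy as np

# For independent signs, E[eps_a * eps_b * eps_c] = 0. For characters on
# Z/NZ, the average over x of gamma_a(x) gamma_b(x) conj(gamma_c(x)) is 1
# if a + b - c = 0 (mod N) and 0 otherwise -- so additive relations inside
# the frequency set are exactly what breaks the independence analogy.
N = 4096
x = np.arange(N)
gamma = lambda s: np.exp(2j * np.pi * s * x / N)

# {1, 3, 9} is dissociated: 1 + 3 - 9 != 0, and the moment vanishes.
print(np.mean(gamma(1) * gamma(3) * np.conj(gamma(9))))   # ~ 0

# {1, 2, 3} is not dissociated (1 + 2 - 3 = 0): the moment is 1 rather
# than 0, so these characters do not behave like independent signs.
print(np.mean(gamma(1) * gamma(2) * np.conj(gamma(3))))   # = 1.0
```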

In other words, the characters from a dissociated set behave like independent random variables taking values in \{-1,1\}. We will now try to make this heuristic precise, and deduce Rudin’s inequality as a corollary of Khintchine’s inequality, following the proof given in [Gr1].

We first randomise the sum f(x)= \sum_{\gamma\in S}\widehat{f}(\gamma)\gamma(x) by introducing a random assignment of signs to get f_\epsilon(x)= \sum_{\gamma\in S}\epsilon(\gamma)\widehat{f}(\gamma)\gamma(x), where the \epsilon(\gamma) are independent random variables taking the values \pm1 with equal probability. Since \lvert\gamma(x)\rvert=1 for every x, we have \sum_{\gamma\in S}\lvert\widehat{f}(\gamma)\gamma(x)\rvert^2=\sum_{\gamma\in S}\lvert\widehat{f}(\gamma)\rvert^2=\| f\|_2^2 by Parseval’s identity, so Khintchine’s inequality gives, for any fixed x,

\mathbb{E}\lvert f_\epsilon(x)\rvert^p\leq (Cp)^{p/2}\| f\|_2^p.

Taking the expectation over all x\in G and interchanging the two averages, we get (the L^p norm now being over G)

\mathbb{E}\| f_\epsilon\|_p^p\leq (Cp)^{p/2}\| f\|_2^p.

By the pigeonhole principle there is some \epsilon such that

\| f_\epsilon \|_p\leq C\sqrt{p}\| f\|_2.
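In the same toy setup as before, this randomise-and-average step can be sketched as follows (again my own illustration; the trial count and all parameters are arbitrary choices):

```python
import numpy as np

# Randomise-and-average: sample sign patterns eps, compute ||f_eps||_p^p,
# and note that some eps does at least as well as the average -- this is
# the pigeonhole step that extracts a single good eps.
N, p = 4096, 8
S = [3**j for j in range(5)]
rng = np.random.default_rng(1)
fhat = rng.standard_normal(len(S)) + 1j * rng.standard_normal(len(S))
x = np.arange(N)

def synthesise(coeffs):
    """Fourier inversion: the function on Z/NZ with coefficients on S."""
    return sum(c * np.exp(2j * np.pi * s * x / N) for c, s in zip(coeffs, S))

signs = [rng.choice([-1, 1], size=len(S)) for _ in range(200)]
pth_powers = [np.mean(np.abs(synthesise(eps * fhat)) ** p) for eps in signs]

# At least one sampled eps beats the average of ||f_eps||_p^p, which is in
# turn bounded by (Cp)^{p/2} ||f||_2^p by the Khintchine step above.
good_eps = signs[int(np.argmin(pth_powers))]
print(good_eps, min(pth_powers) <= np.mean(pth_powers))   # ..., True
```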

The left hand side of the bound we have just obtained is nearly what we want; if we could replace the f_\epsilon by f we would have Rudin’s inequality. To do this, I’ll borrow a trick I found in [Gr1], but first I should say what dissociated means: the set S is dissociated if and only if there are no non-trivial equations of the form \gamma_1+\cdots+\gamma_k-\gamma_{k+1}-\cdots-\gamma_{l}=0 with \gamma_1,\ldots,\gamma_l distinct elements of S; equivalently, all sums of distinct elements of S are distinct. This is useful because it allows us to write

f=2\,(f_\epsilon\ast\nu_\epsilon),\qquad\text{where}\quad \nu_\epsilon(x)=\prod_{\gamma\in S}\left(1+\frac{\epsilon_\gamma}{2}(\gamma(x)+\gamma(-x))\right),

for any choice of signs \epsilon, as can be checked by expanding out the product and comparing Fourier coefficients: dissociativity ensures that the expansion of \nu_\epsilon contributes the coefficient \epsilon_\gamma/2 at each \gamma\in S, so that \widehat{f_\epsilon\ast\nu_\epsilon}(\gamma)=\epsilon_\gamma\widehat{f}(\gamma)\cdot\frac{\epsilon_\gamma}{2}=\frac{1}{2}\widehat{f}(\gamma). Furthermore, \nu_\epsilon is non-negative, since each factor is 1+\epsilon_\gamma\,\mathrm{Re}\,\gamma(x)\geq 0 (note \gamma(-x)=\overline{\gamma(x)}), and \widehat{\nu_\epsilon}(0)=1 by dissociativity again, so \|\nu_\epsilon\|_1=\mathbb{E}_x\,\nu_\epsilon(x)=1. Young’s inequality then gives \| f\|_p=2\| f_\epsilon\ast\nu_\epsilon\|_p\leq 2\| f_\epsilon \|_p\|\nu_\epsilon\|_1=2\| f_\epsilon \|_p, and we have proven Rudin’s inequality. (Note that this last step works for any p\geq 1; the hypothesis p\geq 2 is where the content lies, since for p<2 the inequality is immediate from \| f\|_p\leq\| f\|_2.)
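As a sanity check (my own illustration, not from [Gr1]), the following sketch verifies the convolution identity and the L^1 claim numerically on G=\mathbb{Z}/N\mathbb{Z}. I take powers of 3 as the frequency set, for which all \{-1,0,1\}-combinations of distinct elements are distinct (balanced ternary) — a slightly stronger property than dissociativity that makes the expansion come out exactly; all names and parameters below are my own choices.

```python
import numpy as np

# Check f = 2 (f_eps * nu_eps) and ||nu_eps||_1 = 1 in the toy setup.
# Powers of 3 are used so that all {-1,0,1}-combinations of the frequencies
# are distinct (balanced ternary), making the Riesz product expand cleanly.
N = 4096
S = [3**j for j in range(5)]
rng = np.random.default_rng(2)
fhat = rng.standard_normal(len(S)) + 1j * rng.standard_normal(len(S))
eps = rng.choice([-1, 1], size=len(S))
x = np.arange(N)
gamma = lambda s: np.exp(2j * np.pi * s * x / N)

f     = sum(c * gamma(s)     for c, s in zip(fhat, S))
f_eps = sum(e * c * gamma(s) for e, c, s in zip(eps, fhat, S))

# The Riesz product nu_eps(x) = prod_s (1 + (eps_s/2)(gamma_s(x) + gamma_s(-x))).
nu = np.ones(N)
for e, s in zip(eps, S):
    nu *= 1 + (e / 2) * (gamma(s) + gamma(-s)).real

print(nu.min() >= 0, np.isclose(nu.mean(), 1.0))  # non-negative, L^1 norm 1

# Convolution with respect to the uniform probability measure on G, via FFT.
conv = np.fft.ifft(np.fft.fft(f_eps) * np.fft.fft(nu)) / N
print(np.allclose(2 * conv, f))                   # f = 2 (f_eps * nu_eps)
```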

The ‘randomisation’ technique here was to use probabilistic arguments and the pigeonhole principle to show that the desired result holds for some random twist of the original function f, and then to argue that the original function is suitably controlled by every random twist of f. This latter step is where we needed to invoke dissociativity.

To recap: Rudin’s inequality can be proved using randomisation plus Khintchine’s inequality, with dissociativity invoked to control the randomisation step. We will soon see how an analogous sort of argument in physical space gives the powerful new method of Croot and Sisask.
