# Trigonometric sums related to the Verlinde formula

Original question (see also the revised, possibly simpler, version below): Let $g > 1, r > 1$ be integers. Playing around with the Verlinde formula (see below), I came across the expression

My goal is to reduce the complexity in $r$ of this expression; that is, to find a closed form of the sum avoiding the dependence on $r$ in the number of summands. Is this possible? Here’s a related example:

The Verlinde formula, which e.g. has applications in conformal field theory, algebraic geometry, and quantum topology, is

In this case, one can use a trick by Szenes to reduce the complexity of the sum: The sum can be written as

where $z_n = e^{\pi i n/r}$, for a suitable meromorphic function $f : \mathbb{C} \to \mathbb{C}$ having poles only at $1$ and $-1$. Now the trick is essentially to find a suitable meromorphic form $\mu_r$ having poles at the $2r$-th roots of unity and to apply the Residue Theorem to $f\mu_r$ to rewrite the above sum as

which then turns out to be a polynomial in $r$ of degree $2g-2$.
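For illustration (assuming the sum in question is the standard Verlinde sum $\sum_{n=1}^{r-1}(\sin(\pi n/r))^{2-2g}$), this polynomial behavior is easy to verify numerically; the closed forms for $g=2$ and $g=3$ used below are the classical identities $(r^2-1)/3$ and $(r^2-1)(r^2+11)/45$:

```python
import math

def verlinde_sum(r: int, g: int) -> float:
    """Compute sum_{n=1}^{r-1} (sin(pi*n/r))^(2-2g)."""
    return sum(math.sin(math.pi * n / r) ** (2 - 2 * g) for n in range(1, r))

# The sum is a polynomial in r of degree 2g - 2:
#   g = 2: (r^2 - 1)/3             (degree 2)
#   g = 3: (r^2 - 1)(r^2 + 11)/45  (degree 4)
for r in range(2, 50):
    assert abs(verlinde_sum(r, 2) - (r**2 - 1) / 3) < 1e-6 * r**2
    assert abs(verlinde_sum(r, 3) - (r**2 - 1) * (r**2 + 11) / 45) < 1e-6 * r**4
```

The difficulty described in the question is precisely that no such closed form is apparent once the extra phase factor enters the sum.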

This trick doesn’t seem to apply to my slightly more complicated sum though. Another possibility might be to somehow rewrite the sum as a Gauss sum, but that doesn’t quite seem to work either.

“Revised” question: So, maybe the question above does not have a straightforward answer, but I believe it might suffice to be able to work out the following (at least, it’s a similar problem). Say we just have a sum like

as above (almost, anyway). Then we may apply a quadratic reciprocity theorem to simplify matters. But say now that we throw in a power of $n$ to get something like

for $k > 0$. Can sums like these be treated in a manner similar to the quadratic Gauss sum above (perhaps just in special cases like $k = 1$ or $k = 2$); can we somehow describe the large-$r$ asymptotics? Standard tricks in this field seem to involve summation by parts and the Euler–Maclaurin formula, but these don't seem to quite work out here. For example, in the case $k = 1$, summation by parts (or elementary combinatorial considerations) implies

Now, the first term is simple to handle as mentioned above, but the second one seems to be worse. Any suggestions?
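As a sanity check on the $k=0$ case, one can verify numerically the classical fact that the quadratic Gauss sum $\sum_{n=0}^{r-1} e^{2\pi i n^2/r}$ has modulus $\sqrt r$ for odd $r$ (this uses the standard normalization $e^{2\pi i n^2/r}$; the sums above carry a slightly different phase). The $k\geq 1$ variants can at least be tabulated the same way, though no closed form or asymptotic is claimed for them here:

```python
import cmath
import math

def twisted_gauss_sum(r: int, k: int = 0) -> complex:
    """Compute sum_{n=0}^{r-1} n^k * exp(2*pi*i*n^2/r)."""
    return sum(n**k * cmath.exp(2j * math.pi * n * n / r) for n in range(r))

# Classical: for odd r the k = 0 sum has modulus exactly sqrt(r).
for r in range(3, 301, 2):
    assert abs(abs(twisted_gauss_sum(r, 0)) - math.sqrt(r)) < 1e-7

# The k = 1 sums can be computed numerically to probe their growth in r
# (no asymptotic is asserted here).
samples = {r: abs(twisted_gauss_sum(r, 1)) for r in (101, 201, 401)}
```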

Proposition 1 answers the “revised” question and Proposition 2 the original one. For completeness we give a self-contained proof of Proposition 2.

Proposition 1

Let $n\in\mathbb{N}$ and let $p(x)=\alpha x^2+\beta x +\gamma$ be a polynomial with real coefficients, where $\alpha >0$, such that $p'(n)> 0$ and $p'(n+1)<1$. Then, for $k\in \mathbb{Z}$ if $n>0$ and for $k\in \mathbb{N}$ if $n=0$, we have

where $\phi_k$ is the function defined on $A\times ]0,\infty[\times]0,1[$, with $A=\mathbb{N}$ or $A=\mathbb{N}^*$ according as $k\geq 0$ or $k<0$, by

The main ingredient in the proof of Proposition 1 is Lemma 1, which is proved in my paper "Sommes exponentielles, splines quadratiques et fonction zêta de Riemann", published in the Comptes rendus de l'Académie des sciences in 2001 and available at this address:

http://math.heig-vd.ch/fr-ch/Recherche/Recherches/Philippe_Blanc_novembre_2000.pdf

A detailed version is available at this address:

http://math.heig-vd.ch/fr-ch/Recherche/Recherches/Philippe_Blanc_mars_2001.pdf

Lemma 1

Let $n$ be an integer, let $p(x)=\alpha x^2+\beta x +\gamma$ be a polynomial with real coefficients, where $\alpha >0$, and let $z(\cdot)$ be the unique function satisfying $p'(z(y))=y$ for all $y \in \mathbb{R}$. Then

where $\lfloor \cdot \rfloor$ and $\{\cdot\}$ denote respectively the floor and fractional part functions and $\phi$ is the function defined on $]0,\infty[\times[0,1]$ by

Proof of Proposition 1

With the assumptions on the derivatives of $p$, the sum which appears in Lemma 1 is empty and we have

which proves the case $k=0$.

Considering the terms of (1) as functions of $\beta$ and differentiating $k$ times with respect to $\beta$, we get the proposition for $k>0$.

Then we replace the polynomial $p$ by $p_z(x)=p(x)+zx$ where $z\in \mathbb{C}$. The left-hand side of (1), considered as a function of $z$, is holomorphic. The right-hand side of (1) is also holomorphic in the strip $B=\{z\in \mathbb{C} \mid -p'(n)< \Re z<1-p'(n+1) \}$. Since identity (1) holds for real $z\in B$, it holds throughout $B$. We set $p_{it_1}(x)=p(x)+it_1 x$ in identity (1), integrate a first time with respect to $t_1$ over the interval $[t_2,\infty[$, a second time with respect to $t_2$ over the interval $[t_3,\infty[$, ..., and finally a $k$-th time with respect to $t_k$ over the interval $[0,\infty[$, which completes the proof in the case $k<0\,.\hspace{3mm}\Box$

The functions $\phi_k$ extend by continuity to $A\times ]0,\infty[ \times [0,1]$. Choosing, for example, $k=1$ and $p(x)=\frac{x^2}{4r}$ in the identity of Proposition 1 and summing these identities from $n=0$ to $r-1$, we get

which implies that

Now for $x\in]0,\pi[$ and an integer $g>1$ we have

and Proposition 1 suggests the following result.

Proposition 2

Let $r,\,g$ and $n$ be integers such that $r>1$, $g>1$ and $1\leq n \leq \frac{r}{2}-1$. Then

where $\Psi_{r,g}$ is the function defined on $\{1,2,\ldots,\lfloor \frac{r}{2}\rfloor\}$ by

Proof of Proposition 2

Introducing the function $f(x)=(\sin x)^{2-2g}$ we have

for $m\in \{1,2,\ldots,\lfloor \frac{r}{2}\rfloor\}$.
Now we compute

where $C_R$ is the boundary of the rectangle with vertices $-R$, $R$, $R+i$ and $-R+i$. Taking the limit as $R \to \infty$ and using the residue theorem, we get

Finally we multiply this identity by $-\frac{i}{2}e^{2\pi i \frac{n^2}{r}}$ and complete the proof by observing that

and using relation (2).$\hspace{3mm}\Box$

Coming back to the original question in the case $g=2$, assuming $r$ odd for simplicity, and summing the identities of Proposition 2 from $n=1$ to $\frac{r-3}{2}$, we have

Proposition 3

Proof of Proposition 3

By introducing the real-valued functions $g_{re}$ and $g_{im}$ defined by the relation

and setting $\displaystyle \mu=\frac{2}{r}$ for ease of notation, we have

where

with

and

As $\displaystyle \int_0^t e^{-i\pi \mu x^2}\,dx=O(\mu^{-\frac{1}{2}})$ uniformly in $t$, we can use the second mean value theorem to check that $\displaystyle \Phi=O(\mu^{-\frac{1}{2}})$.
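This Fresnel-type bound can be illustrated numerically: after rescaling by $\sqrt\mu$, the supremum over $t$ of $\bigl|\int_0^t e^{-i\pi\mu x^2}\,dx\bigr|$ should be essentially the same constant for every $\mu$ (a sketch, using plain trapezoid quadrature; the truncation $T$ and grid size are ad hoc choices):

```python
import numpy as np

def sup_partial_fresnel(mu: float, T: float = 40.0, n: int = 400_000) -> float:
    """sup over t in [0, T] of |integral_0^t exp(-i*pi*mu*x^2) dx|, trapezoid rule."""
    x = np.linspace(0.0, T, n)
    f = np.exp(-1j * np.pi * mu * x * x)
    dx = x[1] - x[0]
    # cumulative trapezoid integral of f on the grid
    cum = np.concatenate(([0.0], np.cumsum((f[1:] + f[:-1]) * (dx / 2.0))))
    return float(np.abs(cum).max())

# sqrt(mu) * sup is (essentially) mu-independent, reflecting O(mu^(-1/2)).
scaled = [mu**0.5 * sup_partial_fresnel(mu) for mu in (0.05, 0.2, 1.0)]
assert max(scaled) - min(scaled) < 0.01
assert max(scaled) < 1.0
```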
Further we use the identity

to write $A=A_1 + A_2$ and we bound the modulus of $A_2$ by the integral of the modulus to get $A_2=O(\mu )$.
Using an integration by parts we have

We make use of relations (8.256.3) and (8.256.4) of Gradshteyn and Ryzhik to conclude that

Similarly we use the identity

to write $B=B_1+B_2$ where $B_2=O(\mu)$.
We have, using the identity $\displaystyle \coth \pi x =1 +\frac{2}{e^{2\pi x}-1}$, relation (3.415.2) of Gradshteyn and Ryzhik and an integration by parts

Finally

and using the second mean value theorem we get

Proposition 3, together with the fact that
$\displaystyle \Psi_{r,2}(\frac{r-1}{2})=O(r)$, implies
Note that it is easy to prove that the previous sum is $\displaystyle O(r^{\frac{3}{2}})$ by using the bound $\displaystyle \vert e^{it}-1 \vert \leq \min (t,2)$.