# Can a Power Series tell when to stop?

The naive description of the radius of convergence of a complex power series is as the largest radius such that the ball avoids poles and branch cuts. This makes sense in a world where analytic functions are at worst meromorphic on $\mathbb{C}$ or involve the complex logarithm, but it is patently false when you consider things like $\sum_{n=1}^\infty \frac{z^{2^n}}{n^2}$, which is continuous on the boundary $|z|=1$ but has $\mathbb{D}$ as its full domain of analyticity. A better way of talking about power series is to say that a power series stops when it encounters a singular point: a point through which the function cannot be analytically continued.

Question: Is there a way to tell, by looking at properties of a function within its domain, whether you are “getting close” (in some sense or another) to a singular point? Can this be done while avoiding explicit statements about points on the boundary, perhaps by talking exclusively about the behavior of the function or its derivatives? This is to say, can a power series tell when it’s time to stop?

Consider these examples.

• When $f$ has a pole at an isolated boundary point of its domain, then clearly the power series cannot extend beyond such a point. This follows from a simple modulus argument: an analytic continuation through the point would be bounded near it, yet here $|f(z)| \to \infty$. Specifically, we have a sequence of values $z_k$ in the domain approaching the singular point with $\lim_k |f(z_k)| = \infty$.
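As a minimal numerical sketch of this bullet (my own example, not from the question above): take $f(z) = 1/(1-z)$, whose single pole at $z = 1$ caps the Maclaurin radius at $1$, with $|f|$ blowing up along any sequence approaching the pole.

```python
# Sketch (my own example): f(z) = 1/(1-z) has a single pole at z = 1, so its
# Maclaurin series sum_{n>=0} z^n stops at radius 1, and |f(z_k)| -> infinity
# along any sequence z_k -> 1 inside the disk.

def f(z):
    return 1 / (1 - z)

# |f(z_k)| -> infinity along z_k = 1 - 1/k
moduli = [abs(f(1 - 1 / k)) for k in (10, 100, 1000)]
assert moduli[0] < moduli[1] < moduli[2] and moduli[2] > 999

# Cauchy-Hadamard: the coefficients are a_n = 1, so limsup |a_n|^(1/n) = 1
# and the radius of convergence is exactly 1
radius = 1 / max(abs(1.0) ** (1 / n) for n in range(1, 100))
assert radius == 1.0
```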

• On the other hand, $|f(z)|$ might be bounded on a domain and still cause the power series to fail. One example of a bounded function on $\mathbb{D}$ with natural boundary $\partial \mathbb{D}$ is a Blaschke product whose zeros accumulate at every point of the boundary. Here we can understand the failure of the power series for $f$ as arising from the data of $f$ itself, since a nonzero analytic function cannot have its zeros accumulate at an interior point of its domain. This is to say, there is a sequence $z_k$ in the domain approaching the singular point(s) with $f(z_k) = 0$.
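A hedged sketch of this example (the zero sequence $a_k = (1-2^{-k})e^{ik}$ is my own choice: the angles $k \bmod 2\pi$ equidistribute, so the zeros accumulate at every boundary point, and $\sum_k (1-|a_k|) < \infty$ makes the infinite product converge). A finite partial product already shows both phenomena at once, $|B| \le 1$ on the disk while the zeros march out to $\partial\mathbb{D}$:

```python
import cmath

# My own choice of zeros: a_k = (1 - 2^-k) e^{ik}; they accumulate on all of
# |z| = 1, and sum(1 - |a_k|) < infinity makes the infinite product converge.
zeros = [(1 - 2.0 ** -k) * cmath.exp(1j * k) for k in range(1, 25)]

def blaschke(z):
    """Finite Blaschke product with the prescribed zeros; |B| <= 1 on the disk."""
    prod = 1 + 0j
    for a in zeros:
        prod *= (abs(a) / a) * (a - z) / (1 - a.conjugate() * z)
    return prod

# bounded by 1 inside the disk, yet vanishing at points marching to the boundary
samples = [0.5, 0.5j, -0.3 + 0.4j, 0.9 * cmath.exp(2j)]
assert all(abs(blaschke(z)) <= 1 + 1e-9 for z in samples)
assert all(abs(blaschke(a)) < 1e-12 for a in zeros)
```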

• We can consider lacunary series like the one mentioned above. Take, for example, $f(z) = \sum_{n=1}^\infty \frac{z^{2^n}}{n^2}$. One way of explaining the failure of the power series in this case is to think of it as the Fourier decomposition of $f(Re^{i\theta})$ for some circle $\{|z|=R\}$ contained in the domain of $f$. When $|z| = r < 1$, the rapid decay of $r^{2^n}$ smooths out the Fourier series for $f(re^{i\theta})$, but as we approach the boundary $\partial \mathbb{D}$ we begin to see the lacunary Fourier series $\sum_{n=1}^\infty \frac{1}{n^2} e^{i 2^n \theta}$. If fast-decaying Fourier coefficients represent smooth functions, these incredibly slowly decaying coefficients (the coefficient at frequency $k = 2^n$ is $1/(\log_2 k)^2$) emphasize the fact that the graph of this Fourier series is fractal, and so it can't possibly be used to continue the function. In this situation, radial limits of $f$ exist ($f$ is in $H^2(\mathbb{D})$), but boundary values at nearby points don't depend on each other smoothly enough.
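To make the contrast concrete (a sketch with my own truncation level; the bound $\sum 1/n^2 = \pi^2/6$ is what gives continuity up to the boundary): the partial sums of the series stay uniformly bounded on the closed disk, while the derivative blows up radially.

```python
import math

# Partial sums of the lacunary series f(z) = sum_{n>=1} z^(2^n)/n^2.
# |f| <= sum 1/n^2 = pi^2/6 on the closed disk (so f is continuous up to
# |z| = 1), but f'(r) = sum 2^n r^(2^n - 1)/n^2 blows up as r -> 1-:
# boundedness of f does not save the series at the natural boundary.
N = 30  # truncation level (my choice; plenty for r <= 0.999)

def f(z):
    return sum(z ** (2 ** n) / n ** 2 for n in range(1, N + 1))

def f_prime(r):
    return sum(2 ** n * r ** (2 ** n - 1) / n ** 2 for n in range(1, N + 1))

assert abs(f(0.999)) <= math.pi ** 2 / 6             # f stays bounded
assert f_prime(0.999) > f_prime(0.9) > f_prime(0.5)  # f' grows toward r = 1
```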

• One last perspective. Constructing functions that are continuous on $\overline{\mathbb{D}}$ but have $\partial\mathbb{D}$ as their natural boundary usually involves taking lacunary series and adding in decaying coefficients. If the coefficients are in $\ell^2(\mathbb{N})$ then the function is in $H^2(\mathbb{D})$ and has radial boundary values almost everywhere on $\partial\mathbb{D}$. On the other hand, if the gaps are big enough then the coefficients of the derivative(s) will be wild (this is essentially the statement of the Ostrowski–Hadamard gap theorem). If the power series ends because of a point that $f$ extends to continuously along a radial line, we can think of trying to analytically continue the function along that line; picture, say, the real part of $f$ plotted over the segment from $0$ to $i$.
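That picture can be sampled numerically (a sketch with my own truncation level, using the lacunary example from above): along the segment from $0$ to $i$ the partial sums of $\sum_{n\ge 1} z^{2^n}/n^2$ are real, since $i^2 = -1$ and $i^{2^n} = 1$ for $n \ge 2$, and they stay bounded as $t \to 1$, consistent with the radial limit at $i$ existing.

```python
import math

# Sample Re f(ti) for t in [0, 1) along the segment from 0 to i, where
# f(z) = sum_{n>=1} z^(2^n)/n^2 (truncation level N is my own choice).
# Since i^2 = -1 and i^(2^n) = 1 for n >= 2, f(ti) is real, and the values
# stay bounded as t -> 1, consistent with a radial limit existing at i.
N = 25

def f(z):
    return sum(z ** (2 ** n) / n ** 2 for n in range(1, N + 1))

ts = [0.0, 0.25, 0.5, 0.75, 0.9, 0.99, 0.999]
values = [f(t * 1j) for t in ts]
assert all(abs(v.imag) < 1e-8 for v in values)              # real on the segment
assert all(abs(v.real) < math.pi ** 2 / 6 for v in values)  # uniformly bounded
```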

Final Comments: I have decided against removing any of these points in case there are students who find them interesting. I suppose a reformulation of my question might be: Is there a more geometric way of explaining when a power series decides to stop, in contrast to simply looking at successive derivative operations applied to the coefficients (which is more or less how we actually compute the radius of convergence)?

There is a simple criterion, due to Euler. Suppose that your function is known
to be analytic in the unit disk. Expand it into the Taylor series at the point
$e^{i\theta}/2$. The radius of convergence of the resulting series is $1/2$
if and only if $e^{i\theta}$ is a singularity.

As the radius of convergence is simply $1/\limsup_{n\to\infty}|f^{(n)}(e^{i\theta}/2)/n!|^{1/n}$, this certainly satisfies your requirement: it is in terms of the "behaviour inside the domain" as $z$ approaches the singularity.
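A hedged numeric check of the criterion (my own illustration with $f(z) = 1/(1-z)$, whose only singularity is $z = 1$): the Taylor coefficients at $z_0$ are $f^{(n)}(z_0)/n! = (1-z_0)^{-(n+1)}$, so the re-expanded radius is the distance $|1 - z_0|$, which equals $1/2$ exactly when $e^{i\theta}$ is the singular point.

```python
# Numeric sketch for f(z) = 1/(1-z) (my own example; its only singularity is
# z = 1). The Taylor coefficients at z0 are f^(n)(z0)/n! = (1 - z0)^-(n+1),
# so |a_n|^(1/n) -> 1/|1 - z0| and the radius of convergence of the
# re-expansion is the distance |1 - z0| to the singularity.

def radius_at(z0, n=1000):
    """Estimate 1/|a_n|^(1/n) from the n-th Taylor coefficient at z0."""
    coeff_root = abs(1 - z0) ** (-(n + 1) / n)  # |a_n|^(1/n), closed form
    return 1 / coeff_root

# theta = 0: expand at e^{i0}/2 = 1/2; the radius is 1/2, so z = 1 is singular.
assert abs(radius_at(0.5) - 0.5) < 1e-3
# theta = pi/2: expand at i/2; the radius |1 - i/2| = sqrt(5)/2 > 1/2,
# so e^{i*pi/2} = i is a regular point.
assert abs(radius_at(0.5j) - abs(1 - 0.5j)) < 1e-3
```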
It does not even have to approach too much:-)

This looks trivial, but actually you can extract much from this simple criterion, for example Pringsheim's theorem that a series with non-negative coefficients has a singularity at the point where the circle of convergence intersects the positive ray, or Hadamard's gap theorem.
See L. Bieberbach, Analytische Fortsetzung, Springer 1955.