# What are the rules for equals signs with big-O and little-o?

This question is about asymptotic notation in general. For simplicity I will use examples about big-O notation for function growth as $n\to\infty$ (seen in algorithmic complexity), but the issues that arise are the same for things like $\Omega$ and $\Theta$ growth as $n\to\infty$, or for little-o as $x\to 0$ (often seen in analysis), or any other combination.

The interaction between big-O notation and equals signs can be confusing. We write things like
$$3n^2 + 4 = O(n^2) \tag{1}$$
and
$$5n^2 + 7n = O(n^2). \tag{2}$$
But we’re not allowed to conclude from these two statements that $3n^2+4=5n^2+7n$. Thus it seems that transitivity of equality fails when big-O is involved. Also, we never write things such as
$$O(n^2) = 3n^2 + 4, \tag{3}$$
so apparently symmetry of equality is also at risk.

Many textbooks point this out and declare, with varying degrees of indignation, that notations such as $(1)$ and $(2)$ constitute an “abuse of notation” that students are just going to have to get used to. Very well, but then what are the rules that govern this abuse? Mathematicians seem to be able to communicate using it, so it can’t be completely random.

One simple way out is to define that the notation $O(n^2)$ properly denotes the set of functions that grow at most quadratically, and that equations like $(1)$ are just conventional abbreviations for
$$3n^2 + 4 \in O(n^2). \tag{4}$$
Some authors even insist that writing $(1)$ is plain wrong, and that $(4)$ is the only correct way to express the estimate.

However, other authors blithely write things like
$$O\!\left(2^{n^2 + O(n)}\right), \tag{5}$$
which does not seem to be easily interpretable in terms of sets of functions.

The question: How can we assign meaning to such statements in a principled way such that $(1)$, $(2)$ and $(4)$ are true but $(3)$ is not?

My take on this notation agrees with David Speyer’s comment. As Henning’s answer notes, our answers are just different ways to look at the same thing. Often I resort to rules similar to his as the operative definition, but I find my approach easier to motivate.

We proceed in two steps: (a) understanding an expression involving asymptotic notation, and (b) understanding the use of the equals sign in this notation.

Rule for Interpreting Expressions. Every (well-formed) expression involving standard functions and operations (e.g., $\{ +, \times, -, \div, \exp, \log \}$) and asymptotic notations $\{ O, o, \Omega, \omega, \Theta \}$ corresponds to a set of functions. [For simplicity, we will restrict ourselves to only the $O$ notation.] This set is built recursively, exactly as one would expect:
$$E_1 \odot E_2 := \{ f \odot g \ : \ f \in E_1,\ g \in E_2 \} \qquad \text{for } \odot \in \{ +, \times, -, \div \},$$
$$\Psi(E) := \{ \Psi(f) \ : \ f \in E \} \qquad \text{for } \Psi \in \{ \exp, \log \}.$$
This interpretation of expressions is perfectly natural; it’s analogous to the Minkowski sum of two sets, extended to more general operations. We can even enrich this with a slightly more involved rule for $O$:
$$O(E) := \bigcup_{f \in E} O(f).$$
These rules are not complete; one needs to append the appropriate base cases when $E$, $E_1$ and $E_2$ are single functions rather than sets of functions. The most interesting base case is the expression $O(f(n))$, which is defined exactly as in the post. As a final point, we can agree to identify a single function $f(n)$ with the set $\{ f(n) \}$; this turns out to be very convenient in practice.

For example, an expression of the form $5 + O(n) + O(n^2)\log(O(n^3))$ stands for the set
$$\left\{\, 5 + f(n) + g(n) \log h(n) \ : \ f(n) \in O(n),\ g(n) \in O(n^2),\ h(n) \in O(n^3) \,\right\}.$$
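As a quick numerical sanity check, one can pick an arbitrary witness from each class and verify that the resulting function lies in $O(n^2 \log n)$. This is just a sketch of the idea; the witnesses $3n+1$, $2n^2$, $n^3$ and the constant $C = 12$ are my own choices, not part of the answer:

```python
import math

# Witnesses: one concrete member of each class appearing in
# 5 + O(n) + O(n^2) * log(O(n^3)).
f = lambda n: 3 * n + 1   # f in O(n)
g = lambda n: 2 * n * n   # g in O(n^2)
h = lambda n: n ** 3      # h in O(n^3)

# The composite function built by the set-of-functions rules.
def composite(n):
    return 5 + f(n) + g(n) * math.log(h(n))

# Check |composite(n)| <= C * n^2 * log(n) on a sample range,
# i.e. this particular member lies in O(n^2 log n).
C = 12
ok = all(composite(n) <= C * n * n * math.log(n) for n in range(2, 10_000))
print(ok)  # True for these witnesses
```

Of course, a finite check is no proof; it merely illustrates how one member of the left-hand set is dominated by a member of the right-hand class.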
Similarly, we can interpret an expression like $O(2^{n^2 + O(n)})$ by applying the $O(\cdot)$ rule twice.
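Spelling out those two applications (my own unwinding, using the rules above): the inner $O(n)$ first yields a set of functions via the exponentiation rule, and the outer $O(\cdot)$ then takes a union over that set:
$$2^{\,n^2 + O(n)} = \left\{\, 2^{\,n^2 + f(n)} \ : \ f(n) \in O(n) \,\right\}, \qquad O\!\left(2^{\,n^2 + O(n)}\right) = \bigcup_{f(n) \in O(n)} O\!\left(2^{\,n^2 + f(n)}\right).$$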

Rules for Interpreting the Equals Sign. When one asserts that $E_1 = E_2$, where $E_1$ and $E_2$ are sets represented by expressions like the above, one should always interpret it to mean that $E_1 \subseteq E_2$. E.g.,
$$n^2 + O(n) = O(n^2)$$
is really a shorthand for
$$\{\, n^2 + f(n) \ : \ f(n) \in O(n) \,\} \subseteq O(n^2).$$
In addition, there is the case $f(n) = E$; e.g., $n = O(n^2)$. In this case, we can identify the function $f(n)$ with the set $\{ f(n) \}$ and apply the above interpretation. This tells us that $f(n) = E$ is the same as saying $f(n) \in E$ (because $f(n) \in E \iff \lbrace f(n) \rbrace \subseteq E$).
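This convention is easy to mechanize in a toy model. The sketch below is my own illustration (the names `BigO` and `Mono` are made up): each class $O(n^k)$ is represented by its exponent $k$, a monomial $c\,n^d$ by its degree $d$, and the asymptotic equals sign is implemented literally as subset inclusion, i.e. as a comparison of exponents:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BigO:
    """The class O(n**k), represented by its exponent k."""
    k: float

@dataclass(frozen=True)
class Mono:
    """A monomial c * n**d, identified with the singleton set {c * n**d}."""
    c: float
    d: float

def eq(lhs, rhs) -> bool:
    """The 'equals sign' of asymptotic notation, read as subset inclusion.

    Mono = BigO  means membership (a singleton subset);
    BigO = BigO  means O(n**a) is a subset of O(n**b), i.e. a <= b.
    """
    if isinstance(lhs, Mono) and isinstance(rhs, BigO):
        return lhs.d <= rhs.k   # c*n^d in O(n^k)  iff  d <= k
    if isinstance(lhs, BigO) and isinstance(rhs, BigO):
        return lhs.k <= rhs.k   # O(n^a) subset of O(n^b)  iff  a <= b
    return False

print(eq(Mono(1, 1), BigO(2)))   # n = O(n^2): True
print(eq(BigO(1), BigO(2)))      # O(n) = O(n^2): True
print(eq(BigO(2), BigO(1)))      # O(n^2) = O(n): False -- '=' is not symmetric
```

The model is deliberately crude (monomials only), but it makes the singleton identification $f(n) \leftrightarrow \{f(n)\}$ concrete: membership and inclusion are handled by the same comparison.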

This is arguably unnatural, since it clashes starkly with the standard interpretation of the equals sign as (set) equality. In fact, this is an inherent difficulty, since we would expect any reasonable notion of equality to be symmetric:
$$\text{if } E_1 = E_2, \text{ then } E_2 = E_1.$$
However, as we all know, this is seldom true with asymptotic notation. Thus the only way out seems to be to override the intuitive notion of equality in favor of a less natural one. [At this stage, it is worth pointing out that Henning’s rules also resort to redefining the equals sign suitably.]

A sample of properties. Many commonly used rules for manipulating asymptotic notation follow directly from the above interpretation. As a case study, I consider some of the properties already studied by Henning in his answer:

• “Local scope” or change of meaning. For example, in $O(n) + O(n) = O(n)$, you can substitute distinct functions $f(n)$, $g(n)$ and $h(n)$ for the three occurrences of $O(n)$. The definition of “$+$” in the set-of-functions interpretation is clearly based on this idea.

• The asymptotic notation is not symmetric. E.g., $O(n) = O(n^2)$ holds, whereas $n^2 \stackrel{\color{Red}{?}}{=} O(n)$ is false. This is for the same reason that set inclusion is not symmetric.

• However, it is transitive; i.e., if $E_1 = E_2$ and $E_2 = E_3$, then $E_1 = E_3$. This simply follows from the transitivity of set inclusion.
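These last two properties can be spot-checked in a toy exponent model (again my own illustration, not from either answer): reduce $O(n^a)$ to its exponent $a$, so that the asymptotic “$=$” becomes the comparison $a \le b$, and test symmetry and transitivity over a sample grid:

```python
import itertools

# Model O(n**a) by the exponent a; the asymptotic '=' becomes
# subset inclusion, i.e. a <= b.
def eq(a: float, b: float) -> bool:
    return a <= b

exponents = [0, 0.5, 1, 1.5, 2, 3]

# Not symmetric: O(n) = O(n^2) holds but O(n^2) = O(n) does not.
assert eq(1, 2) and not eq(2, 1)

# Transitive: eq(a, b) and eq(b, c) imply eq(a, c) on all samples.
for a, b, c in itertools.product(exponents, repeat=3):
    if eq(a, b) and eq(b, c):
        assert eq(a, c)

print("asymmetry and transitivity verified on samples")
```

Naturally, this only exercises the model on finitely many exponents; the general claims follow from the corresponding properties of $\subseteq$.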

Final words. One main complaint against the asymptotic notation seems to be that it is usually not symmetric, as the equals sign would suggest. This is legitimate, but the real issue here lies with the abuse of the equals sign, rather than with the functions-as-sets idea. In fact, as far as I can remember, I have yet to come across a single use of the asymptotic notation that is technically wrong even after one mentally replaces the equals sign with either $\in$ or $\subseteq$ as appropriate.