# Understanding Gödel’s Incompleteness Theorem

I am trying very hard to understand Gödel’s Incompleteness Theorem. I am really interested in what it says about axiomatic languages, but I have some questions:

Gödel’s theorem is proved based on arithmetic and its four operators: is all mathematics derived from these four operators (×, +, −, ÷)?

Operations such as log(x) and sin(x) are indeed atomic operations, like those of arithmetic, but aren’t there infinitely many such operators that have inverses (just as + and − are “inverse” operations, and × and ÷ are inverse operations)?

To me it seems as though making a statement about the limitations of provability given 4 arbitrary operators is absurd, but that probably highlights a gap in my understanding, given that he proved this in 1931 and it’s unlikely that I have found a counter-argument.

As a follow-up remark, why the obsession with arithmetic operators? They probably seem “fundamental” to us as humans, but to me they all seem to be derived from four possible graphical arrangements of numbers (if we consider four sides to a digit), and fundamentally derived from addition.

[][] o [][] addition and, its inverse, subtraction

[][]
[][] multiplication (iterative addition) and, its inverse, division

There must be operators that are consistent on the natural numbers that we certainly aren’t aware of, no?

Please excuse my ignorance, I am hoping I haven’t offended any real mathematicians with this posting.

edit: I think I am understanding this a lot more, and I think my main difficulty in understanding this was that:

There are statements that are true that are unprovable.

Seemed like an impossible statement. It does, however, make sense to me at the moment in the context of an axiomatic language with a limited number of axioms. Ultimately, suggesting that there are statements that are true and expressible in the language, but are unprovable in the language (because of the limited set of axioms), is what I believe to be the point of the proof — is this correct?

There’s a fair amount of historical context to make some of Gödel’s choices in his original proof clear. For a good overview of the proof, Nagel and Newman’s Gödel’s Proof is pretty good (though not without its detractors). I also highly recommend the late Torkel Franzén’s Gödel’s Theorem: An Incomplete Guide to Its Use and Abuse. I may myself be guilty of some of those abuses below; I’m trying to shy away from a lot of the technical details, and this usually invites imprecision and even abuse in this particular field. I hope those who know better than me will keep me honest via comments and appropriate ear-pulling.

Some history. Sometimes I think of the 19th century as the Menagerie century. A “menagerie” was like a zoo of weird animals. During a lot of the 19th century, people were trying to clean up some of the logical problems that abounded in the foundations of mathematics. Calculus plainly worked, but a lot of the notions of infinitesimals were simply incompatible with some ‘facts’ about real numbers (eventually solved by Weierstrass’s notion of limits using $\epsilon$s and $\delta$s). A lot of assumptions people had been making implicitly (or explicitly) were shown to be false through the construction of explicit counterexamples (the “weird animals” that led me to call it the menagerie century; many mathematicians were like explorers bringing back weird animals nobody had seen before and which challenged people’s notions of what was and was not the case): the Dirichlet function, to show that you can have functions that are discontinuous everywhere; functions that are continuous everywhere but nowhere differentiable; the failure of Dirichlet’s principle for some functions; Peano’s curve that filled up a square; etc. Then came the antinomies (paradoxes and contradictions) in early set theory. Even some work which today we find completely unproblematic caused a lot of debate: Hilbert’s solution of the problem of a finite basis for invariants in any number of variables was originally derided by Gordan as “theology, not mathematics” (the solution was not constructive and involved an argument by contradiction). Many found a lot of these developments deeply troubling. A foundational crisis arose (see the link in Qiaochu’s answer).

Hilbert, one of the leading mathematicians of the late 19th century, proposed a way to settle the differences between the two main camps. His proposal was essentially to try to use methods that both camps found unassailably valid to show that mathematics was consistent: that it was not possible to prove both a proposition and its negation. In fact, his proposal was to use methods that both camps found unassailable to prove that the methods that one camp found troublesome would not introduce any problems. This was the essence of the Hilbert Programme.

In order to be able to accomplish this, however, one needed to have some way to study proofs and mathematics itself. There arose the notion of “formal proof”, the idea of an axiomatization of basic mathematics, etc. There were several competing axiomatic systems for the basics of mathematics: Zermelo attempted to axiomatize set theory (later expanded to Zermelo-Fraenkel set theory); Peano had proposed a collection of axioms for basic arithmetic; and famously Russell and Whitehead had, in their massive book Principia Mathematica attempted to establish an axiomatic and deductive system for all of mathematics (as I recall, it takes hundreds of pages to finally get to $1+1=2$). Some early successes were achieved, with people showing that some parts of such theories were in fact consistent (more on this later). Then came Gödel’s work.

Consistency and Completeness. We say a formal theory is consistent if you cannot prove both $P$ and $\neg P$ in the theory for some sentence $P$. In fact, because from $P$ and $\neg P$ you can prove anything using classical logic, it follows that a theory is consistent if and only if there is at least one sentence $Q$ such that there is no proof of $Q$ in the theory. By contrast, a theory is said to be complete if, given any sentence $P$, either the theory has a proof of $P$ or a proof of $\neg P$. (Note that an inconsistent theory is necessarily complete.) Hilbert proposed to find a consistent and complete axiomatization of arithmetic, together with a proof (using only the basic mathematics that both camps agreed on) that it was both complete and consistent, and that it would remain so even if some of the tools that his camp used (which the other found unpalatable and doubtful) were used with it.
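In symbols, writing $T \vdash P$ for “$T$ proves $P$,” the two definitions above are:

$$T \text{ is consistent} \iff \text{there is no sentence } P \text{ with } T \vdash P \text{ and } T \vdash \neg P;$$

$$T \text{ is complete} \iff \text{for every sentence } P,\ T \vdash P \text{ or } T \vdash \neg P.$$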

Why arithmetic? Arithmetic was a particularly good field to focus on in the early efforts. First, it was the basis of the “other camp”. Kronecker famously said “God gave us the natural numbers, the rest is the work of man.” It was hoped that an axiomatization of the natural numbers and their basic operations (addition, multiplication) and relations (order) would be both relatively easy, and also have a hope of being both consistent and complete. That is, it was a good testing ground, because it contained a lot of interesting and nontrivial mathematics, and yet seemed to be reasonably simple.

Gödel. Gödel focused on arithmetic for this reason. As it happens, multiplication is key to the argument (there is something special about multiplication; some theories of the natural numbers that include only addition can be shown to be consistent using only the kinds of tools that Hilbert allowed). To answer one of your questions: along the way, Gödel even defined new operations and relations on natural numbers that had little to do with addition and multiplication, so yes, there are operations other than those (no fetishism about them at all). But in fact, Gödel did not restrict himself solely to arithmetic. His proof is, on its face, about the entire system of mathematics set forth in Russell and Whitehead’s Principia, though as Gödel notes it can easily be adapted to other systems so long as they satisfy certain criteria (that’s why Gödel’s original paper has a title that explicitly refers to the Principia “and related systems”).
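One of the new operations Gödel introduced was an encoding of formulas as numbers. The idea can be illustrated with a toy sketch (this is an illustration only, not Gödel’s exact 1931 encoding; the symbol codes are arbitrary): pack a sequence of symbol codes into a single natural number via prime-power exponents, and use unique factorization to recover it.

```python
# Toy Gödel numbering: encode a finite sequence of positive integers
# (standing for the symbol codes of a formula) as one natural number
# using prime-power exponents. Unique factorization makes decoding possible.
# Illustrative sketch only -- not Gödel's exact 1931 encoding.

def primes(n):
    """Return the first n primes by simple trial division."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p != 0 for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(codes):
    """Encode [c1, c2, c3, ...] as 2**c1 * 3**c2 * 5**c3 * ..."""
    result = 1
    for p, c in zip(primes(len(codes)), codes):
        result *= p ** c
    return result

def decode(n, length):
    """Recover the exponents of the first `length` primes of n."""
    codes = []
    for p in primes(length):
        exponent = 0
        while n % p == 0:
            n //= p
            exponent += 1
        codes.append(exponent)
    return codes

# Example: the "formula" with symbol codes [1, 2, 3] encodes to
# 2**1 * 3**2 * 5**3 = 2250, and decodes back to [1, 2, 3].
```

Because statements about formulas thereby become statements about numbers, the theory can (in a precise sense) talk about its own sentences and proofs; this is what makes the sentence $G$ below constructible.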

What Gödel showed was that any theory, subject to some technical restrictions (for example, you must have a way of recognizing whether a given sentence is or is not an axiom), that is “strong enough” that you can use it to define a certain portion of arithmetic will necessarily be either incomplete or inconsistent (that is, either you can prove everything in that theory, or else there is at least one sentence $P$ such that neither $P$ nor $\neg P$ can be proven). It’s not a limitation based on four operations, or an obsession with those operations: quite the opposite. What it says is that if you want your theory to include at least some arithmetic, then your theory is going to be so complicated that either it is inconsistent, or else there are propositions that can neither be proven nor disproven using only the methods that both camps found valid.
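Schematically (suppressing the precise technical hypotheses), for any theory $T$ of the kind just described:

$$T \text{ consistent} \;\implies\; \text{there is a sentence } G \text{ with } T \nvdash G \ \text{ and } \ T \nvdash \neg G.$$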

That is: what it shows is a limitation of those particular (logically unassailable) methods. If we use other methods, we are able to establish consistency of arithmetic, for example, but if you had your doubts about the consistency of arithmetic in the first place, chances are you will find those methods just as doubtful.

Now, about your coda, and the statement “statements that are true but unprovable”; this is not very apt. You will find a lot of criticism of this paraphrase in Franzén’s book, with good reason. It’s best to think that you have statements that are neither provable nor disprovable. In fact, one of the things we know is that if you have such a statement $P$ in a theory $M$, then you can find a model (an interpretation of the axioms of $M$ that makes the axioms true) in which $P$ is true, and a different model in which $P$ is false. So in a sense, $P$ is neither “true” nor “false”, because whether it is true or false will depend on the model you are using.

For example, the Gödel sentence $G$ that proves the First Incompleteness Theorem (that if arithmetic is consistent then it is incomplete, since there can be no proof of the sentence $G$ and no proof of $\neg G$) is often said to be “true but unprovable” because $G$ can be interpreted as saying “There is no proof of $G$ in this theory.” But in fact, you can find a model of arithmetic in which $G$ is false, so why do we say $G$ is “true”? Well, the point is that $G$ is true in what is called “the standard model.” There is a particular interpretation of arithmetic (of what “natural number” means, of what $0$ means, of what “successor of $n$” means, of what $+$ means, etc.) which we usually have in mind; in that model, $G$ is true but not provable. But we know that there are other models (where “natural number” may mean something completely different, or perhaps $+$ means something different) in which $G$ is false under that interpretation of the axioms. I would stay away from “true” and “false”, and stick with “provable” and “unprovable” when discussing this; it tends to prevent problems.
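The model-relativity can be summarized like this: writing $\mathbb{N}$ for the standard model,

$$\mathbb{N} \models G \quad\text{and}\quad T \nvdash G, \quad\text{yet there is a model } M \text{ of } T \text{ with } M \models \neg G.$$

(The last part follows from the completeness theorem for first-order logic: since $T \nvdash G$, the theory $T + \neg G$ is consistent, and hence has a model.)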

First Summary. So: there were historical reasons why Gödel focused on arithmetic; the limitation is not of arithmetic itself, but rather of the formal methods in question: if your theory is sufficiently “strong” that it can represent part of arithmetic (plus it satisfies the few technical restrictions), then either your theory is inconsistent, or else the finitistic proof methods at issue cannot suffice to settle all questions (there are sentences $P$ which can neither be proven nor disproven).

Can something be rescued? Well, ideally of course we would have liked a theory that was complete and consistent, and that we could show is complete and consistent using only the kinds of logical methods which we, and pretty much everyone else, finds beyond doubt. But perhaps we can at least show that the other methods don’t introduce any problems? That is, that the theory is consistent, even if it is not complete? That at least would be somewhat of a victory.

Unfortunately, Gödel also proved that this is not the case. He showed that if your theory is sufficiently complex that it can represent the part of arithmetic at issue (and it satisfies the technical conditions alluded to earlier), and the theory is consistent, then in fact one of the things that it cannot settle is whether the theory is consistent! That is, one can write down a sentence $C$ which makes sense in the theory, and which essentially “means” “This theory is consistent” (much like the Gödel sentence $G$ essentially “means” that “there is no proof of $G$ in this theory”), and one can prove that if the theory is consistent then the theory has no proof of $C$ and no proof of $\neg C$.
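Writing $\mathrm{Con}(T)$ for the arithmetized sentence $C$ that “means” the theory $T$ is consistent, the core of the Second Incompleteness Theorem is, schematically:

$$T \text{ consistent} \;\implies\; T \nvdash \mathrm{Con}(T).$$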

Again, this is a limitation of those finitistic methods that everyone finds logically unassailable. In fact, there are proofs of the consistency of arithmetic using transfinite induction, but as I alluded to above, if you harbored doubts about arithmetic in the first place, you are simply not going to be very comfortable with transfinite induction either! Imagine that you are not sure that $X$ is being truthful, and $X$ suggests that you ask $Y$ about $X$’s truthfulness; you don’t know $Y$, but $X$ assures you that $Y$ is very trustworthy. Well, that’s not going to help you, right?

Key take-away: Because the theorem applies to any theory that is sufficiently complex (and satisfies the technical restrictions), we are not even in a position of enlarging our set of axioms to escape these problems. So long as we restrict ourselves to enlargement methods that we find logically unassailable, the technical restrictions will still be satisfied, so that the new theory, stronger and larger though it will be because it has more axioms, will still be incomplete (though the sentences it cannot decide may now be different; remember that by adding axioms, we are also potentially expanding the kinds of things about which we can talk). So the theorems are not about shortcomings of particular axiomatic systems, but rather about those finitistic methods within a very large class of systems.

What about those ‘technical restrictions’? They are important. Suppose that arithmetic were consistent. That means that there is at least one model for it. We could pick a model $M$, and then say “Let’s make a theory whose axioms are exactly those sentences that are true when interpreted in $M$.” This is a complete and consistent axiomatic system for arithmetic. Complete, because each sentence is either true in $M$ (and hence an axiom, hence provable in this theory) or else false in $M$, in which case its negation is true (and hence an axiom, and hence provable). And consistent, because it has a model, $M$, and a theory is consistent if and only if it has a model. The problem with this axiomatic theory is that if I give you a sentence, you’re going to have a hard time deciding whether it is or is not an axiom! We didn’t really achieve anything by taking this axiomatic system. The “technical restrictions” serve both to make the system actually usable, and to handle certain technical issues that arise from the mechanics of the proof. But the restrictions are mild enough that pretty much everyone agrees that most reasonable theories will likely satisfy them.
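As a minimal sketch of what “recognizing an axiom” means: for an effectively axiomatized theory, a mechanical check like the one below exists. The axiom schema here (“n + 0 = n” for a numeral n) is hypothetical, chosen only for illustration. By contrast, for the theory whose axioms are all sentences true in $M$, no such checker can exist.

```python
# Sketch: an "effectively axiomatized" theory admits a mechanical axiom
# check. Toy schema (hypothetical, for illustration only): every sentence
# of the form "n + 0 = n", where n is a numeral, is an axiom.
import re

def is_axiom(sentence: str) -> bool:
    """Decide whether `sentence` is an instance of the toy schema 'n + 0 = n'."""
    match = re.fullmatch(r"(\d+) \+ 0 = (\d+)", sentence)
    return bool(match) and match.group(1) == match.group(2)

# is_axiom("7 + 0 = 7") is True; is_axiom("7 + 0 = 8") is False.
```

The point is only that membership in the axiom set is decidable by a syntactic test; the theory “all sentences true in $M$” offers no such test, which is exactly why it escapes the theorem without being of any use.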

Second summary. So: if you have a formal axiomatic system which satisfies certain technical (but mild) restrictions, and the theory is large enough that you can represent (a certain part of) arithmetic in it, then the theory is either inconsistent, or else the finitistic methods that everyone agrees are unassailable are insufficient to prove or disprove every sentence in the theory; worse, one of the things that the finitistic methods cannot prove or disprove is whether the theory is in fact consistent or not.

Hope that helps. I really recommend Franzén’s book in any case. It will lead you away from potential misinterpretations of what the theorem says (I am likely guilty of a few myself above, which will no doubt be addressed in comments by those who know better than I do).