I was just wondering whether there have been mistakes in mathematics. Not a conjecture that ended up being false, but a theorem whose proof was accepted for a nontrivial amount of time before someone found a hole in the argument. Does this still happen now that we have computers? I imagine not. But it seems totally possible that this could have happened back in the Enlightenment.
Feel free to interpret this how you wish!
In 1933, Kurt Gödel showed that the class called [∃*∀²∃*, all, (0)] was decidable. These are the formulas whose quantifier prefix has the form ∃a∃b…∃m ∀n∀p ∃q…∃z: exactly two ∀ quantifiers, with no ∃ between them. These formulas may contain arbitrary relations amongst the variables, but no functions or constants, and no equality symbol. Gödel showed that there is a method which takes any formula of this form and decides whether it is satisfiable. (If there are three or more ∀s in a row, or an ∃ between the ∀s, there is no such method.)
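To make the shape of these formulas concrete, here is an example of my own (not taken from Gödel's paper) of a sentence in this class, written in LaTeX:

```latex
\exists x_1 \, \exists x_2 \;
\forall y_1 \, \forall y_2 \;
\exists z \;
\bigl( R(x_1, y_1) \land ( S(y_2, z) \lor \lnot R(x_2, z) ) \bigr)
```

Here R and S are arbitrary relation symbols. Note that the formula uses no function symbols, no constants, and no equality sign, and its prefix has exactly two universal quantifiers with no existential quantifier between them, as the class requires.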
In the final sentence of the same paper, Gödel added:
In conclusion, I would still like to remark that Theorem I can also be proved, by the same method, for formulas that contain the identity sign.
Mathematicians took Gödel’s word for it, and proved results derived from this one, until the mid-1960s, when Stål Aanderaa realized that Gödel had been mistaken and that the argument Gödel used would not extend to formulas with equality. In 1983, Warren Goldfarb showed that not only was Gödel’s argument invalid, but his claimed result was actually false: the class with equality is not decidable.
Gödel’s original 1933 paper is Zum Entscheidungsproblem des logischen Funktionenkalküls (On the decision problem for the functional calculus of logic) which can be found on pages 306–327 of volume I of his Collected Works. (Oxford University Press, 1986.) There is an introductory note by Goldfarb on pages 226–231, of which pages 229–231 address Gödel’s error specifically.