Why do mathematicians sometimes assume famous conjectures in their research?

I will use a specific example, but I mean this in general. At a number theory conference I attended, one thing surprised me: nearly half the talks began with “Assuming the Generalized Riemann Hypothesis…” Almost always, the crux of the argument depended on this conjecture.

Why would mathematicians do research that assumes a conjecture? By definition, it is not yet known to be true. In the off-chance that it turns out to be false, wouldn’t all of the papers that assumed it be invalidated? I may be answering my own question, but I speculate that:

  1. There is such strong evidence for the conjecture (the Riemann Hypothesis in particular), and so little against it, that it is “safe” to assume.

  2. It’s not so much about the result obtained as about the methods and techniques used to prove it. Perhaps assuming the conjecture, in the case of the Riemann Hypothesis, leads to the development of new techniques in analytic number theory.

Answer

For four main reasons:

  1. If the famous conjecture is proven true, the conditional results are proven true as well.

  2. If the reasoning is correct but one of the demonstrated results is later proven false, the famous conjecture is thereby proven false too.

  3. Others may build further results on the demonstrated ones; if any of those are proven false, that again disproves the famous conjecture.

  4. Showing interesting consequences of the famous conjecture, and surprising connections to other areas of research, generates interest in proving it.
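The logic behind reasons 1–3 is just modus ponens and modus tollens. A minimal sketch in Lean 4, where `RH` and `P` are placeholder propositions standing in for the conjecture and a result demonstrated under it (the names are purely illustrative):

```lean
-- `RH` stands for the famous conjecture, `P` for a result proved assuming it.
variable (RH P : Prop)

-- Reason 1: given a conditional proof RH → P, proving RH yields P
-- (modus ponens).
example (h : RH → P) (hRH : RH) : P := h hRH

-- Reasons 2 and 3: given the same conditional proof, refuting P
-- refutes RH (modus tollens / contraposition).
example (h : RH → P) (hP : ¬P) : ¬RH := fun hRH => hP (h hRH)
```

So a conditional result is never wasted: it transfers a proof of the conjecture forward, and it transfers a refutation of any consequence backward.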

Attribution
Source: Link, Question Author: Joseph DiNatale, Answer Author: David Schwartz