# Why rationalize the denominator?

In grade school we learn to rationalize the denominators of fractions whenever possible. We are taught that $\frac{\sqrt{2}}{2}$ is simpler than $\frac{1}{\sqrt{2}}$. An answer on this site says that “there is a bias against roots in the denominator of a fraction”. But such fractions are perfectly well-defined, and I fail to see anything wrong with $\frac{1}{\sqrt{2}}$. In fact, IMO it is the simpler of the two, because 1 is simpler than 2 (or, put differently, because $\frac{1}{\sqrt{2}}$ can trivially be rewritten without a fraction at all).
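For concreteness, the two forms are the same number, related by a single multiplication by $\frac{\sqrt{2}}{\sqrt{2}} = 1$:

$$\frac{1}{\sqrt{2}} \;=\; \frac{1}{\sqrt{2}} \cdot \frac{\sqrt{2}}{\sqrt{2}} \;=\; \frac{\sqrt{2}}{2} \;=\; 2^{-1/2}.$$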

So why does this bias against roots in the denominator exist and what is its justification? The only reason I can think of is that the bias is a relic of a time before the reals were understood well enough for mathematicians to be comfortable dividing by irrationals, but I have been unable to find a source to corroborate or contradict this guess.

One simple example is the following: when you calculate the angle between two vectors, you often get a fraction containing roots. In order to recognize the angle whenever possible, it is good to have a standard form for these fractions [side note: I have often seen students unable to find the angle $\theta$ such that $\cos(\theta)=\frac{1}{\sqrt{2}}$]. The simplest way to define a standard form is to make either the denominator or the numerator an integer.
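A small worked instance (the vectors here are my own, chosen only for illustration): the angle $\theta$ between $u = (1, 0)$ and $v = (1, 1)$ satisfies

$$\cos(\theta) \;=\; \frac{u \cdot v}{\|u\|\,\|v\|} \;=\; \frac{1}{1 \cdot \sqrt{2}} \;=\; \frac{1}{\sqrt{2}} \;=\; \frac{\sqrt{2}}{2},$$

and in the standard form $\frac{\sqrt{2}}{2}$ the value is immediately recognizable as $\cos\left(\frac{\pi}{4}\right)$, so $\theta = \frac{\pi}{4}$.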
Note also that putting fractions over a common denominator is usually easier when the denominators are integers. And keep in mind that in many problems you start with quantities that then need to be replaced by fractions in standard form [for example, in trigonometry, problems are stated in terms of $\cos(\theta)$ where $\theta$ is some angle].
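A sketch of the common-denominator point (the particular sum is my own example): with rationalized denominators, the common denominator is simply the integer $\operatorname{lcm}(2,3) = 6$,

$$\frac{1}{\sqrt{2}} + \frac{1}{\sqrt{3}} \;=\; \frac{\sqrt{2}}{2} + \frac{\sqrt{3}}{3} \;=\; \frac{3\sqrt{2} + 2\sqrt{3}}{6},$$

whereas combining the unrationalized forms directly means working over the irrational denominator $\sqrt{2}\,\sqrt{3} = \sqrt{6}$.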
But at the end of the day, it is just a convention. And while you may well be right that $\frac{1}{\sqrt{2}}$ looks simpler, the point of a convention is consistency in the cases where you need to recognize a value at a glance. Which form “looks simpler” is, in the end, relative…