A friend of mine was trying to explain to me how all infinities are equal. For example, they were saying that there are the same amount of numbers between 0–1 as there are between 0–2.

The way they explained it, you could prove that there are the same amount of numbers by matching each number in one range with a number in the other.

For example, for any number in the range 0–2, you can find a matching number in the range 0–1 by dividing it by 2. In every case, the result will be between 0–1. Likewise, if you multiply any number in the range 0–1 by 2, you will always end up with a number between 0–2. Therefore, there should be the same infinite amount of numbers between 0–1 as there are between 0–2.

Is this principle accurate?

The problem that I had with it was: if, in fact, there are the same amount of numbers between 0 and 1 as there are between 0 and 2, why is 2 greater than 1?

**Answer**

At first, it is probably a good idea to specify more clearly what is meant by “infinite”. There are several concepts of infinity which are quite different from one another. For example, in the context of limits you can say that a quantity or function “goes to infinity”, but in that case it just means “it gets (and remains) arbitrarily large”. That’s a completely different type of infinity from the one you are speaking about when you ask “how many”. The answer to “how many elements does this set have” is called the *cardinality* of that set, and that’s what your friend’s argument is about. Note that “how large” may or may not refer to cardinality, i.e. “how many” (I’ll come to another notion later); that other notion also has a concept of infinity, but it is different from the cardinality concept. Since your friend’s argument was about cardinality (“how many”), in the following I’ll use “infinity” in the cardinality sense, as “infinitely many”.

OK, so what does it mean if you ask “how many”? Well, for finite sets there’s a well-known way to answer that question, and that way is known as “counting”: you choose one element of the set and say “one”, you choose another element and say “two”, and so on. As soon as you run out of elements, the last number you’ve said is the number of elements in the set.
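
To make that procedure concrete, here is a minimal sketch in Python (the three-element set is just a made-up example): pick elements one after another, say the next number each time, and the last number said is the size of the set.

```python
# Counting as described above: choose one element after another and say the
# next number; the last number said is the number of elements in the set.
def count(items):
    last_number_said = 0
    for _ in items:              # choose an element not chosen before
        last_number_said += 1    # ...and say the next number
    return last_number_said

print(count({"apple", "pear", "plum"}))  # prints 3
```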

Now let’s look at what you have done that way: you’ve chosen one object and assigned it the number 1 (that is, from now on, it is the “first object”). You’ve chosen another object and assigned it the number 2, and so on. So finally, assuming there are n objects in the set, every object got a number between 1 and n inclusive, and for each number there’s a unique object which got that number. In other words, by counting the objects, you’ve established a one-to-one relation between that set and the set of numbers from 1 to n.

Of course you could just as well start counting with 0 (as you might do if you are a C programmer used to zero-based arrays, or a mathematician who takes the natural numbers to include 0), and end up with n−1 as the last number. But that doesn’t matter, because there are *as many* natural numbers between 0 and n−1 as there are between 1 and n, as you can easily check by counting them (either count the set {0,…,n−1} starting with 1, or count the set {1,…,n} starting with 0; in both cases what you do is establish a one-to-one relation between the two sets). Indeed, you can even *define* each natural number as the set of all natural numbers below it, with 0 being the empty set; in that case the number n has exactly n elements, and therefore there’s a one-to-one relation between the number n and any n-element set. (Side remark: this construction of the natural numbers can also be extended to infinite numbers and gives rise to *yet another* concept of infinity, which I won’t talk about here.)
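
As a small illustration of that side remark (a sketch only, with Python frozensets standing in for the set-theoretic construction): each number is built as the set of all smaller numbers, so the “number” n really does have n elements.

```python
# The side-remark construction sketched with frozensets: 0 is the empty set,
# and each successor step forms n ∪ {n}, so the set representing n has n elements.
def von_neumann(n):
    number = frozenset()               # 0 = {}
    for _ in range(n):
        number = number | {number}     # successor step: k + 1 = k ∪ {k}
    return number

for n in range(5):
    print(n, len(von_neumann(n)))      # len(von_neumann(n)) equals n each time
```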

So the point is, whenever you have a one-to-one relation between two sets, they have the same number of elements. This is a useful definition because it can be used even for infinite sets, where the literal procedure of counting one after the other would never end. So two sets have, by definition, the same number of elements if there exists a one-to-one relation between the two sets.

With this knowledge we can now see that, in the context of cardinality, the question “are all infinities equal?” means “does there exist a one-to-one mapping between any two infinite sets?” And in the concrete case of the numbers between 0 and 1 (that is, the interval (0,1)) and the numbers between 0 and 2 (that is, the interval (0,2)), the question can be rephrased as: “Is there a one-to-one mapping between the numbers in the interval (0,1) and the numbers in the interval (0,2)?”

It is not true that all infinities are equal, but it *is* true that the interval (0,1) contains as many numbers as the interval (0,2). However, while there are infinitely many numbers in (0,1) and also infinitely many natural numbers, there are more numbers in the interval (0,1) than there are natural numbers.

To see that there are as many numbers in (0,1) as in (0,2), just consider the function f:x↦2x. That function assigns to each number in (0,1) a number in (0,2), no two numbers in (0,1) are assigned the same value, and each number in (0,2) is indeed reached. Thus there’s a one-to-one mapping between the numbers in (0,1) and the numbers in (0,2).
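
Spelled out as a small sketch (the sample points are arbitrary): f sends each number of (0,1) to a number of (0,2), and halving sends it back, so nothing is matched twice or left unmatched.

```python
# The pairing f(x) = 2x between (0, 1) and (0, 2), together with its inverse.
def f(x):       # (0, 1) -> (0, 2)
    return 2 * x

def g(y):       # (0, 2) -> (0, 1): the unique partner of y is y / 2
    return y / 2

for x in (0.1, 0.25, 0.5, 0.999):
    y = f(x)
    assert 0 < y < 2 and g(y) == x   # the round trip recovers x exactly
    print(x, "<->", y)
```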

However, there’s no one-to-one mapping between the natural numbers and the numbers in (0,1). The classic proof of this is Cantor’s diagonal argument: assume you had a one-to-one mapping between the natural numbers and the numbers in (0,1), i.e. a mapping ℕ→(0,1), n↦aₙ. Write the numbers in (0,1) in decimal. Then you can construct a number x∈(0,1) by the following rule: the number starts with 0., and the n-th decimal digit of x is 3, *unless* the n-th decimal digit of aₙ is 3, in which case you choose 5 instead. Now the number x is clearly in the interval (0,1), so it should be one of the aₙ. However, for each n it differs from aₙ in the n-th decimal, thus it *cannot* be in the list, and therefore the list cannot be complete.
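
The digit-flipping rule is easy to spell out; here is a minimal sketch (the four listed numbers are just a hypothetical start of such a list). Whatever list you are given, the constructed x disagrees with the n-th entry in its n-th decimal place.

```python
# Diagonal construction: the n-th decimal of x is 3, unless the n-th decimal
# of a_n is 3, in which case it is 5. So x differs from every a_n.
claimed_list = ["0.500000", "0.333333", "0.142857", "0.718281"]  # hypothetical

digits = []
for n, a_n in enumerate(claimed_list, start=1):
    nth_decimal = a_n[1 + n]             # skip past "0." to the n-th decimal digit
    digits.append("5" if nth_decimal == "3" else "3")

x = "0." + "".join(digits)
print(x)   # 0.3533, which differs from a_n in the n-th decimal for every n
```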

OK, but given that there are as many numbers between 0 and 1 as there are between 0 and 2, how can 2 be larger than 1? Well, the simplest answer is that a number is *not* an indication of how many numbers are below it (this is different from the natural numbers, where there are indeed exactly n natural numbers below n). However, there is an *order* defined on the real numbers, which agrees with the order of the natural numbers embedded in them. Basically, every positive number x is larger than 0 and larger than any other positive number between 0 and x, and the reverse is true for negative numbers.

However, you may still insist that the interval (0,2) is *twice as large* as the interval (0,1). And you’re right! But how does *that* fit with the fact that there are as many numbers in (0,1) as there are in (0,2)? Well, the point is that when determining the size of the interval, you do *not* count the numbers in it (indeed, as the diagonal argument above shows, you *cannot* count them). Instead you *define* the size of the interval (and of more general sets of numbers). Such a function, which tells you how large a set is, is called a *measure*. You just have to make sure that the measure has some obvious properties: the size of a set should not change if you just move it around; the empty set (that is, when you have *no numbers at all*) should have size 0; and if you have two disjoint sets (like the numbers between 1 and 2 together with the numbers between 4 and 6), the total size should be the sum of the two sizes.

With those basic properties, and the assumption that (0,1) should have a finite size (which implies that a set consisting of a single number, i.e. a single point on the real line, has size 0), you already get that the interval (0,2) is twice as large as the interval (0,1): you get the numbers between 0 and 2 as the numbers between 0 and 1, plus the number 1 (which has size 0), plus the numbers between 1 and 2 (which are just the numbers between 0 and 1 shifted to the right by 1). So if the interval (0,1) has size x, the interval (0,2) has size x+0+x=2x, and it is indeed twice as large.
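
For completeness, here is that last computation written out, with μ standing for the size and using only the properties just listed:

```latex
% Additivity for disjoint pieces, invariance under shifting, and size 0 for
% the single point {1} are the only properties used.
\begin{align*}
\mu\bigl((0,2)\bigr)
  &= \mu\bigl((0,1)\bigr) + \mu\bigl(\{1\}\bigr) + \mu\bigl((1,2)\bigr) \\
  &= x + 0 + \mu\bigl((0,1)\bigr) && \text{since $(1,2)$ is $(0,1)$ shifted by $1$} \\
  &= 2x.
\end{align*}
```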

**Attribution**
*Source: Link, Question Author: Ephraim, Answer Author: Pacerier*