13 Arithmetic
The introduction of addition for natural numbers is a typical example of definition by induction. Indeed, it follows from the recursion theorem that for each natural number \(m\) there exists a function \(s_{m}\) from \(\omega\) to \(\omega\) such that \(s_{m}(0) = m\) and such that \(s_{m}(n^{+}) = (s_{m}(n))^{+}\) for every natural number \(n\); the value \(s_{m}(n)\) is, by definition, the sum \(m + n\). The general arithmetic properties of addition are proved by repeated applications of the principle of mathematical induction. Thus, for instance, addition is associative. This means that
\[ (k + m) + n = k + (m + n) \]
whenever \(k\), \(m\), and \(n\) are natural numbers. The proof goes by induction on \(n\) as follows. Since \((k + m) + 0 = k + m\) and \(k + (m + 0) = k + m\), the equation is true if \(n = 0\). If the equation is true for \(n\), then \((k + m) + n^{+} = ((k + m) + n)^{+}\) (by definition) \(= (k + (m + n))^{+}\) (by the induction hypothesis) \(= k + (m + n)^{+}\) (again by the definition of addition) \(= k + (m + n^{+})\) (ditto), and the argument is complete. The proof that addition is commutative (i.e., \(m + n = n + m\) for all \(m\) and \(n\)) is a little tricky; a straightforward attack might fail. The trick is to prove, by induction on \(n\), that (i) \(0 + n = n\) and (ii) \(m^{+} + n = (m + n)^{+}\), and then to prove the desired commutativity equation by induction on \(m\), via (i) and (ii).
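Definitions by induction of this kind can be imitated quite literally on a machine. The following Python sketch (an illustration only, not part of the development) models each natural number, as in the preceding sections, by the set of its predecessors, so that \(0\) is the empty set and \(n^{+} = n \cup \{n\}\); the names `zero`, `succ`, `pred`, `nat`, and `add` are ad hoc, and the final assertions check single numerical instances of the laws proved above, not the laws themselves. Frozensets are used so that membership and equality are literal set operations, which later sketches exploit.

```python
# Illustration only: a natural number is modeled by the set of its
# predecessors, so 0 is the empty set and n+ = n ∪ {n}.
zero = frozenset()

def succ(n):
    """The successor n+ = n ∪ {n}."""
    return n | frozenset({n})

def pred(n):
    """For n ≠ 0, the unique m with succ(m) == n (the largest element of n)."""
    return max(n, key=len)

def nat(k):
    """The modeled natural number corresponding to the Python integer k ≥ 0."""
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

def add(m, n):
    """s_m(n): s_m(0) = m and s_m(n+) = (s_m(n))+, i.e. recursion on n."""
    return m if n == zero else succ(add(m, pred(n)))

if __name__ == "__main__":
    k, m, n = nat(2), nat(3), nat(4)
    assert add(m, n) == nat(7)                                       # 3 + 4 = 7
    assert add(add(k, m), n) == add(k, add(m, n))                    # associativity, one instance
    assert add(zero, n) == n and add(succ(m), n) == succ(add(m, n))  # (i) and (ii), one instance
    assert add(m, n) == add(n, m)                                    # commutativity, one instance
    print("addition checks pass")
```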
Similar techniques are applied in the definitions of products and exponents and in the derivations of their basic arithmetic properties. To define multiplication, apply the recursion theorem to produce functions \(p_{m}\) such that \(p_{m}(0) = 0\) and such that \(p_{m}(n^{+}) = p_{m}(n) + m\) for every natural number \(n\); then the value \(p_{m}(n)\) is, by definition, the product \(m \cdot n\). (The dot is frequently omitted.) Multiplication is associative and commutative; the proofs are straightforward adaptations of the ones that worked for addition. The distributive law (i.e., the assertion that \(k \cdot (m + n) = k \cdot m + k \cdot n\) whenever \(k\), \(m\), and \(n\) are natural numbers) is another easy consequence of the principle of mathematical induction. (Use induction on \(n\).) Anyone who has worked through sums and products in this way should have no trouble with exponents. The recursion theorem yields functions \(e_{m}\) such that \(e_{m}(0) = 1\) and such that \(e_{m}(n^{+}) = e_{m}(n) \cdot m\) for every natural number \(n\); the value \(e_{m}(n)\) is, by definition, the power \(m^{n}\). The discovery and establishment of the properties of powers, as well as the detailed proofs of the statements about products, can safely be left as exercises for the reader.
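The same toy model accommodates \(p_{m}\) and \(e_{m}\); the sketch below repeats the ad hoc helpers so that it runs on its own, and again the assertions test only particular instances of the stated laws.

```python
# Same toy model as before; the helpers are repeated so the sketch runs on its own.
zero = frozenset()

def succ(n): return n | frozenset({n})
def pred(n): return max(n, key=len)   # the m with succ(m) == n, for n ≠ 0
def nat(k):
    n = zero
    for _ in range(k):
        n = succ(n)
    return n
def add(m, n):
    """s_m(n): recursion on n, as in the previous sketch."""
    return m if n == zero else succ(add(m, pred(n)))

def mul(m, n):
    """p_m(n): p_m(0) = 0 and p_m(n+) = p_m(n) + m."""
    return zero if n == zero else add(mul(m, pred(n)), m)

def exp(m, n):
    """e_m(n): e_m(0) = 1 and e_m(n+) = e_m(n) · m."""
    return succ(zero) if n == zero else mul(exp(m, pred(n)), m)

if __name__ == "__main__":
    k, m, n = nat(2), nat(3), nat(4)
    assert mul(m, n) == nat(12) and mul(m, n) == mul(n, m)   # 3 · 4 = 12, commutativity
    assert mul(k, add(m, n)) == add(mul(k, m), mul(k, n))    # distributive law, one instance
    assert exp(k, m) == nat(8)                               # 2^3 = 8
    print("multiplication and exponentiation checks pass")
```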
The next topic that deserves some attention is the theory of order in the set of natural numbers. For this purpose we proceed to examine with some care the question of which natural numbers belong to which others. Formally, we say that two natural numbers \(m\) and \(n\) are comparable if \(m \in n\), or \(m = n\), or \(n \in m\). Assertion: two natural numbers are always comparable. The proof of this assertion consists of several steps; it will be convenient to introduce some notation. For each \(n\) in \(\omega\), write \(S(n)\) for the set of all \(m\) in \(\omega\) that are comparable with \(n\), and let \(S\) be the set of all those \(n\) for which \(S(n) = \omega\). In these terms, the assertion is that \(S = \omega\). We begin the proof by showing that \(S(0) = \omega\) (i.e., that \(0 \in S\)). Clearly \(S(0)\) contains \(0\). If \(m \in S(0)\), then, since \(m \in 0\) is impossible, either \(m = 0\) (in which case \(0 \in m^{+}\)), or \(0 \in m\) (in which case, again, \(0 \in m^{+}\)). Hence, in all cases, if \(m \in S(0)\), then \(m^{+} \in S(0)\); this proves that \(S(0) = \omega\). We complete the proof by showing that if \(S(n) = \omega\), then \(S(n^{+}) = \omega\). The fact that \(0 \in S(n^{+})\) is immediate (since \(n^{+} \in S(0)\)); it remains to prove that if \(m \in S(n^{+})\), then \(m^{+} \in S(n^{+})\). Since \(m \in S(n^{+})\), either \(n^{+} \in m\) (in which case \(n^{+} \in m^{+}\)), or \(n^{+} = m\) (ditto), or \(m \in n^{+}\). In the last of these cases, either \(m = n\) (in which case \(m^{+} = n^{+}\)), or \(m \in n\). The case \(m \in n\), in turn, splits according to the behavior of \(m^{+}\) and \(n\): since \(m^{+} \in S(n)\) (recall that \(S(n) = \omega\)), we must have either \(n \in m^{+}\), or \(n = m^{+}\), or \(m^{+} \in n\). The first possibility is incompatible with the present situation (i.e., with \(m \in n\)). The reason is that if \(n \in m^{+}\), then either \(n \in m\) or \(n = m\), so that, in any case, \(n \subset m\), and we know that no natural number is a subset of one of its elements. Both the remaining possibilities imply that \(m^{+} \in n^{+}\), and the proof is complete.
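For what it is worth, the assertion can be spot-checked in the same toy model; the exhaustive loop below verifies comparability for the first few natural numbers (a check of instances, of course, and not a substitute for the inductive proof).

```python
# Spot check of comparability in the toy model (0 = ∅, n+ = n ∪ {n}).
zero = frozenset()

def succ(n): return n | frozenset({n})
def nat(k):
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

def comparable(m, n):
    """m ∈ n, or m = n, or n ∈ m."""
    return m in n or m == n or n in m

if __name__ == "__main__":
    sample = [nat(k) for k in range(8)]
    for m in sample:
        for n in sample:
            assert comparable(m, n)
    print("every pair in the sample is comparable")
```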
The preceding paragraph implies that if \(m\) and \(n\) are in \(\omega\), then at least one of the three possibilities (\(m \in n\), \(m = n\), \(n \in m\)) must hold; it is easy to see that, in fact, exactly one of them always holds. (The reason is another application of the fact that a natural number is not a subset of one of its elements.) Another consequence of the preceding paragraph is that if \(m\) and \(n\) are distinct natural numbers, then a necessary and sufficient condition that \(m \in n\) is that \(m \subset n\). Indeed, the implication from \(m \in n\) to \(m \subset n\) is just the transitivity of \(n\). If, conversely, \(m \subset n\) and \(m \neq n\), then \(n \in m\) cannot happen (for then \(m\) would be a subset of one of its elements), and therefore \(m \in n\). If \(m \in n\), or if, equivalently, \(m\) is a proper subset of \(n\), we shall write \(m < n\) and we shall say that \(m\) is less than \(n\). If \(m\) is known to be either less than \(n\) or else equal to \(n\), we write \(m \le n\). Note that \(\le\) and \(<\) are relations in \(\omega\). The former is reflexive, but the latter is not; neither is symmetric; both are transitive. If \(m \le n\) and \(n \le m\), then \(m = n\).
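In the toy model the relation \(<\) is literally the membership test, and its coincidence with proper inclusion, together with the properties of \(\le\) just listed, can be spot-checked on small instances; it is convenient that Python's own `<` on frozensets means proper inclusion. The helper names remain ad hoc.

```python
# Spot check of the order relations in the toy model.
zero = frozenset()

def succ(n): return n | frozenset({n})
def nat(k):
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

def lt(m, n):
    """m < n, defined as m ∈ n."""
    return m in n

def le(m, n):
    """m ≤ n, i.e. m < n or m = n."""
    return lt(m, n) or m == n

if __name__ == "__main__":
    sample = [nat(i) for i in range(8)]
    for m in sample:
        for n in sample:
            # exactly one of m ∈ n, m = n, n ∈ m holds
            assert [lt(m, n), m == n, lt(n, m)].count(True) == 1
            # m ∈ n coincides with proper inclusion (Python's `<` on frozensets)
            assert lt(m, n) == (m < n)
            # ≤ is antisymmetric
            if le(m, n) and le(n, m):
                assert m == n
            # < is transitive
            for k in sample:
                if lt(m, n) and lt(n, k):
                    assert lt(m, k)
    print("order checks pass on this sample")
```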
Exercise 13.1 Prove that if \(m < n\), then \(m + k < n + k\), and prove that if \(m < n\) and \(k \neq 0\), then \(m \cdot k < n \cdot k\). Prove that if \(E\) is a non-empty set of natural numbers, then there exists an element \(k\) in \(E\) such that \(k \le m\) for all \(m\) in \(E\).
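Exercise 13.1 asks for proofs, but the claims can at least be tested on small instances in the toy model; the sketch below repeats the ad hoc helpers and checks the two monotonicity statements and the least-element statement on a small sample.

```python
# Spot checks of the claims of Exercise 13.1 in the toy model.
zero = frozenset()

def succ(n): return n | frozenset({n})
def pred(n): return max(n, key=len)
def nat(k):
    n = zero
    for _ in range(k):
        n = succ(n)
    return n
def add(m, n): return m if n == zero else succ(add(m, pred(n)))
def mul(m, n): return zero if n == zero else add(mul(m, pred(n)), m)
def lt(m, n): return m in n            # m < n means m ∈ n
def le(m, n): return lt(m, n) or m == n

if __name__ == "__main__":
    sample = [nat(i) for i in range(5)]
    for m in sample:
        for n in sample:
            for k in sample:
                if lt(m, n):
                    assert lt(add(m, k), add(n, k))        # m + k < n + k
                    if k != zero:
                        assert lt(mul(m, k), mul(n, k))    # m · k < n · k
    # a non-empty set of natural numbers has a least element
    E = {nat(5), nat(2), nat(7)}
    least = min(E, key=len)            # here nat(2), the ∈-least element of E
    assert all(le(least, m) for m in E)
    print("Exercise 13.1 checks pass on this sample")
```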
Two sets \(E\) and \(F\) (not necessarily subsets of \(\omega\)) are called equivalent, in symbols \(E \sim F\), if there exists a one-to-one correspondence between them. It is easy to verify that equivalence in this sense, for subsets of some particular set \(X\), is an equivalence relation in the power set \(\mathcal{P}(X)\).
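For finite sets the existence of a one-to-one correspondence comes down to the sets having the same number of elements (a fact made precise later in this section), and with that simplification the three defining properties of an equivalence relation can be spot-checked on the power set of a small \(X\); the names in the sketch below are ad hoc.

```python
# Spot check that ~ is an equivalence relation on the power set of a small X.
from itertools import combinations

def equivalent(E, F):
    """E ~ F for finite sets: pairing elements off succeeds exactly when the
    sizes agree, and dict(zip(E, F)) then exhibits a one-to-one correspondence."""
    return len(E) == len(F)

def power_set(X):
    X = list(X)
    return [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

if __name__ == "__main__":
    P = power_set({"a", "b", "c", "d"})
    for E in P:
        assert equivalent(E, E)                              # reflexive
        for F in P:
            assert equivalent(E, F) == equivalent(F, E)      # symmetric
            for G in P:
                if equivalent(E, F) and equivalent(F, G):
                    assert equivalent(E, G)                  # transitive
    print("~ is an equivalence relation on this power set")
```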
Every proper subset of a natural number \(n\) is equivalent to some smaller natural number (i.e., to some element of \(n\)). The proof of this assertion is inductive. For \(n = 0\) it is trivial. If it is true for \(n\), and if \(E\) is a proper subset of \(n^{+}\), then either \(E\) is a proper subset of \(n\) and the induction hypothesis applies, or \(E = n\) and the result is trivial, or \(n \in E\). In the latter case, find a number \(k\) in \(n\) but not in \(E\) (such a \(k\) exists: since \(E\) is a proper subset of \(n^{+}\), some element of \(n^{+}\) is missing from \(E\), and that element cannot be \(n\)) and define a function \(f\) on \(E\) by writing \(f(i) = i\) when \(i \neq n\) and \(f(n) = k\). Clearly \(f\) is one-to-one and \(f\) maps \(E\) into \(n\). It follows that the image of \(E\) under \(f\) is either equal to \(n\) or (by the induction hypothesis) equivalent to some element of \(n\), and, consequently, \(E\) itself is always equivalent to some element of \(n^{+}\).
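The assertion can be spot-checked in the toy model; in the sketch below equivalence of finite sets is again tested by comparing sizes, which for finite sets amounts to the existence of a one-to-one correspondence.

```python
# Spot check: every proper subset of nat(k) is equivalent to some element of nat(k).
from itertools import combinations

zero = frozenset()
def succ(n): return n | frozenset({n})
def nat(k):
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

def proper_subsets(n):
    elems = list(n)
    for r in range(len(elems) + 1):
        for c in combinations(elems, r):
            E = frozenset(c)
            if E != n:
                yield E

if __name__ == "__main__":
    for k in range(7):
        n = nat(k)
        for E in proper_subsets(n):
            # E has as many elements as some element of n, i.e. E is
            # equivalent to a smaller natural number
            assert any(len(E) == len(m) for m in n)
    print("every proper subset in the sample is equivalent to a smaller natural number")
```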
It is a mildly shocking fact that a set can be equivalent to a proper subset of itself. If, for instance, a function \(f\) from \(\omega\) to \(\omega\) is defined by writing \(f(n) = n^{+}\) for all \(n\) in \(\omega\), then \(f\) is a one-to-one correspondence between the set of all natural numbers and the proper subset consisting of the non-zero natural numbers. It is nice to know that even though the set of all natural numbers has this peculiar property, sanity prevails for each particular natural number. In other words, if \(n \in \omega\), then \(n\) is not equivalent to a proper subset of \(n\). For \(n = 0\) this is clear. Suppose now that it is true for \(n\), and suppose that \(f\) is a one-to-one correspondence from \(n^{+}\) to a proper subset \(E\) of \(n^{+}\). If \(n \notin E\), then the restriction of \(f\) to \(n\) is a one-to-one correspondence between \(n\) and a proper subset of \(n\), which contradicts the induction hypothesis. If \(n \in E\), then \(n\) is equivalent to \(E - \{ n \}\) (if \(f(n) = n\), restrict \(f\) to \(n\); otherwise interchange the values of \(f\) at \(n\) and at the element that \(f\) maps onto \(n\), and then restrict), so that, by the induction hypothesis, \(n = E - \{ n \}\). This implies that \(E = n^{+}\), which contradicts the assumption that \(E\) is a proper subset of \(n^{+}\).
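Both halves of this observation can be illustrated in the toy model: the successor map is one-to-one and omits \(0\) (which is what exhibits \(\omega\) as equivalent to a proper subset of itself), while for each small \(n\) every proper subset of \(n\) is too small to admit a one-to-one correspondence with \(n\). The sketch checks only finitely many instances, of course.

```python
# The successor map exhibits ω as equivalent to a proper subset of itself,
# while each particular natural number has no such correspondence.
from itertools import combinations

zero = frozenset()
def succ(n): return n | frozenset({n})
def nat(k):
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

if __name__ == "__main__":
    sample = [nat(k) for k in range(40)]
    image = [succ(n) for n in sample]               # f(n) = n+
    assert len(set(image)) == len(image)            # f is one-to-one on the sample
    assert zero not in image                        # and its range omits 0
    # but no natural number checked here is equivalent to a proper subset of itself:
    for k in range(8):
        n = nat(k)
        for r in range(len(n) + 1):
            for c in combinations(list(n), r):
                E = frozenset(c)
                if E != n:
                    assert len(E) < len(n)          # too small for a one-to-one correspondence with n
    print("checks pass")
```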
A set \(E\) is called finite if it is equivalent to some natural number; otherwise \(E\) is infinite.
Exercise 13.2 Use this definition to prove that \(\omega\) is infinite.
A set can be equivalent to at most one natural number. (Proof: if a set were equivalent to two distinct natural numbers, those numbers would be equivalent to each other; but of any two distinct natural numbers one must be an element, and therefore a proper subset, of the other, and it follows from the preceding paragraph that they cannot be equivalent.) We may infer that a finite set is never equivalent to a proper subset of itself; in other words, as long as we stick to finite sets, the whole is always greater than any of its parts.
Exercise 13.3 Use this consequence of the definition of finiteness to prove that \(\omega\) is infinite.
Since every subset of a natural number is equivalent to a natural number, it follows also that every subset of a finite set is finite.
The number of elements in a finite set \(E\) is, by definition, the unique natural number equivalent to \(E\); we shall denote it by \(\# (E)\). It is clear that if the correspondence between \(E\) and \(\# (E)\) is restricted to the finite subsets of some set \(X\), the result is a function from a subset of the power set \(\mathcal{P}(X)\) to \(\omega\). This function is pleasantly related to the familiar set-theoretic relations and operations. Thus, for example, if \(E\) and \(F\) are finite sets such that \(E \subset F\), then \(\# (E) \le \# (F)\). (The reason is that since \(E \sim \# (E)\) and \(F \sim \# (F)\), it follows that \(\# (E)\) is equivalent to a subset of \(\# (F)\).) Another example is the assertion that if \(E\) and \(F\) are finite sets then \(E \cup F\) is finite, and, moreover, if \(E\) and \(F\) are disjoint, then \(\# (E \cup F) = \# (E) + \# (F)\). The crucial step in the proof is the fact that if \(m\) and \(n\) are natural numbers, then the complement of \(m\) in the sum \(m + n\) is equivalent to \(n\); the proof of this auxiliary fact is achieved by induction on \(n\). Similar techniques prove that if \(E\) and \(F\) are finite sets, then so also are \(E \times F\) and \(E^{F}\), and, moreover, \(\# (E \times F) = \# (E) \cdot \# (F)\) and \(\# (E^{F}) = \# (E)^{\# (F)}\).
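The counting function and the identities just listed can be illustrated concretely; in the sketch below `count` plays the role of \(\#\), the set \(E^{F}\) of functions from \(F\) to \(E\) is coded as a set of sets of ordered pairs, and all the names are ad hoc choices for this illustration.

```python
# A sketch of #(E) and its arithmetic on small examples; E^F below is the set
# of functions from F to E, each coded as a set of ordered pairs.
from itertools import product

zero = frozenset()
def succ(n): return n | frozenset({n})
def nat(k):
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

def count(E):
    """#(E): the natural number with exactly as many elements as the finite set E."""
    return nat(len(E))

def functions(F, E):
    """The set of all functions from F to E, each coded as a frozenset of pairs."""
    F = list(F)
    return {frozenset(zip(F, values)) for values in product(E, repeat=len(F))}

if __name__ == "__main__":
    E, F = {"a", "b", "c"}, {0, 1}                           # disjoint finite sets
    assert count(E) == nat(3) and count(F) == nat(2)
    assert count(E | F) == nat(5)                            # #(E ∪ F) = #(E) + #(F) = 5
    assert count({(x, y) for x in E for y in F}) == nat(6)   # #(E × F) = #(E) · #(F) = 6
    assert count(functions(F, E)) == nat(9)                  # #(E^F) = #(E)^#(F) = 3^2 = 9
    print("counting checks pass")
```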
Exercise 13.4 The union of a finite set of finite sets is finite. If \(E\) is finite, then \(\mathcal{P}(E)\) is finite and, moreover, \(\# (\mathcal{P}(E)) = 2^{\# (E)}\). If \(E\) is a non-empty finite set of natural numbers, then there exists an element \(k\) in \(E\) such that \(m \le k\) for all \(m\) in \(E\).
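As with Exercise 13.1, the claims of Exercise 13.4 can be spot-checked on small instances in the toy model; the sketch below does not, of course, constitute the requested proofs.

```python
# Spot checks of the claims of Exercise 13.4 on small examples.
from itertools import combinations

zero = frozenset()
def succ(n): return n | frozenset({n})
def nat(k):
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

def power_set(E):
    E = list(E)
    return {frozenset(c) for r in range(len(E) + 1) for c in combinations(E, r)}

if __name__ == "__main__":
    finite_sets = {frozenset({1, 2}), frozenset({2, 3, 4}), frozenset({5})}
    union = frozenset().union(*finite_sets)
    assert len(union) == 5                               # the union is finite; here #(union) = 5
    E = {"a", "b", "c"}
    assert len(power_set(E)) == 2 ** len(E)              # #(P(E)) = 2^#(E) = 8
    S = {nat(3), nat(1), nat(6)}
    largest = max(S, key=len)                            # nat(6)
    assert all(m in largest or m == largest for m in S)  # m ≤ k for every m in S
    print("Exercise 13.4 checks pass")
```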