This is a more complete description of the weird way of expressing "natural" numbers, which means just ints that are not negative, except that "int" makes us think of programming, where ints only have 32 bits and so a finite range. Here, no such limit exists. These are usually called the Peano axioms (Peano is pronounced roughly like "piano"), or sometimes the Dedekind-Peano axioms; they should not be confused with the Zermelo-Fraenkel axioms, which define set theory. In what follows, z, s, (, ), and = are constant symbols; the other things (capital letters, except for M, which stands for a set) can be consistently replaced by any natural number.

A. z is a natural number
B. if A is a natural number then A = A
C. if A and B are natural numbers and A = B then B = A
D. if A and B and C are natural numbers and A = B and B = C then A = C
E. if A is a natural number and A = B then B is a natural number
F. if A is a natural number then s(A) is a natural number
G. if A and B are natural numbers and s(A) = s(B) then A = B
H. if A is a natural number then not ( s(A) = z )
I. for any set M, if z is in M, and a natural number A being in M implies that s(A) is in M, then every natural number is in M

It may seem that these definitions go too far in some directions and not far enough in others, but they don't really. They were built upon set theory and Boolean algebra, both of which already had an exact definition. That means there was no need to define what a set is, what "is in" means, or what "and" and "not" mean. But the definition of = for sets only states when two sets are the same, and the = from Boolean logic is only about true/false values being the same. The natural numbers are neither sets nor true nor false, so = needs a precise definition of its own.

A and F tell us exactly what a natural number can be. B to E tell us exactly what equality means; at first sight E seems totally pointless, but it does say something important (nothing outside the natural numbers can be equal to a natural number), and that can not be deduced from the other axioms. G and H restrict the possible interpretations of s to those that "make sense": s never sends two different numbers to the same result, and never produces z. I makes proof by induction, and with it the notion of "for all", possible.

All this together provides enough to completely define arithmetic (for some reason it seems easier when the operations are written as functions), for example:

Addition:
    add(X, z) = X
    add(X, s(Y)) = s(add(X, Y))

Multiplication:
    mul(X, z) = z
    mul(X, s(Y)) = add(X, mul(X, Y))

Decrement, or predecessor:
    p(s(X)) = X

Subtraction:
    sub(X, z) = X
    sub(s(X), s(Y)) = sub(X, Y)

Exponentiation:
    pow(z, s(Y)) = z
    pow(s(X), z) = s(z)
    pow(X, s(Y)) = mul(X, pow(X, Y))

These definitions use "pattern matching" to select the right rule. For example, if you want to know how to handle pow(s(s(s(z))), z): the axioms tell us that s(...) can never equal z, so the first pow rule does not apply (its first argument would have to be z), and for the same reason the third does not either (its second argument would have to be s(Y)). But with the simple substitution s(X) = s(s(s(z))) and z = z, the second rule does apply. That substitution tells us that the value of X has to be s(s(z)), although X does not appear anywhere else in that rule, so the knowledge is irrelevant in this case.

By adopting a coding scheme, such as taking s(z) to represent true and z to represent false, you can also construct definitions for if, and, or, not, <, >, division, modulo, and in fact everything in arithmetic, and construct fool-proof proofs of all sorts of claims.
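Because these rules are already pattern-matching equations, they transcribe almost word for word into a language with algebraic data types. Here is a minimal sketch in Haskell (the type Nat, the function names, and the deriving clause are my additions; everything else mirrors the definitions above):

    -- A natural number is either z or s of a natural number (axioms A and F).
    -- For a data type built this way, s(A) = s(B) implying A = B (axiom G)
    -- and s(A) never equalling z (axiom H) hold by construction.
    data Nat = Z | S Nat deriving (Eq, Show)

    add :: Nat -> Nat -> Nat
    add x Z     = x                    -- add(X, z) = X
    add x (S y) = S (add x y)          -- add(X, s(Y)) = s(add(X, Y))

    mul :: Nat -> Nat -> Nat
    mul _ Z     = Z                    -- mul(X, z) = z
    mul x (S y) = add x (mul x y)      -- mul(X, s(Y)) = add(X, mul(X, Y))

    p :: Nat -> Nat
    p (S x) = x                        -- p(s(X)) = X; p Z is deliberately undefined

    sub :: Nat -> Nat -> Nat
    sub x Z         = x                -- sub(X, z) = X
    sub (S x) (S y) = sub x y          -- sub(s(X), s(Y)) = sub(X, Y)

    pow :: Nat -> Nat -> Nat           -- pow Z Z (0^0) is left uncovered,
    pow Z     (S _) = Z                -- pow(z, s(Y)) = z
    pow (S _) Z     = S Z              -- pow(s(X), z) = s(z)
    pow x     (S y) = mul x (pow x y)  -- pow(X, s(Y)) = mul(X, pow(X, Y))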
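Loading that sketch in GHCi and evaluating pow (S (S (S Z))) Z selects the second rule and prints S Z, which under the usual reading is 3^0 = 1, exactly as the substitution argument above predicts. Evaluating sub Z (S Z) or pow Z Z fails with a pattern-match error, matching the fact that the rules above leave 0 - 1 and 0^0 undefined.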
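As a sketch of that coding scheme (the names true', not' and so on are my own, primed to avoid clashing with Haskell's built-in Bool functions), the logical operations and the < test can be written in exactly the same style:

    -- Coding scheme: s(z) represents true, z represents false.
    true', false' :: Nat
    true'  = S Z
    false' = Z

    not' :: Nat -> Nat
    not' Z     = true'             -- not false = true
    not' (S _) = false'            -- not true  = false

    and' :: Nat -> Nat -> Nat
    and' (S _) (S _) = true'       -- true only if both arguments are true
    and' _     _     = false'

    if' :: Nat -> Nat -> Nat -> Nat
    if' (S _) t _ = t              -- condition true:  take the then-branch
    if' Z     _ e = e              -- condition false: take the else-branch

    lt :: Nat -> Nat -> Nat        -- the < test
    lt _     Z     = false'        -- nothing is less than z
    lt Z     (S _) = true'         -- z is less than every successor
    lt (S x) (S y) = lt x y        -- strip one s from each side and retry

For instance, lt (S Z) (S (S Z)) evaluates to S Z, i.e. 1 < 2 is true; division and modulo can then be built the same way, by repeated sub guarded by lt.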