The History of Mathematical Symbols


Tessa Gallant

History of Math

Galileo once said, “Mathematics is the language with which God wrote the Universe.” He was right to call mathematics a language, for like any language it has its own rules, formulas, and nuances. In particular, the symbols used in mathematics are unique to the field and deeply rooted in history. What follows is a brief history of some of the most famous symbols employed in mathematics, organized by discipline; each has its own interesting subculture surrounding it.


Arithmetic
Arithmetic most likely began with the need to keep track of animals or bundles of food; it was a necessary tool our ancestors used to make it through the winter. Arithmetic is the most basic part of mathematics and encompasses the addition, subtraction, multiplication, and division of numbers. One category of numbers is the integers, …, −3, −2, −1, 0, 1, 2, 3, …, where we say that an integer n is in ℤ. The capital letter Z is written to represent the whole numbers and comes from the German word Zahlen, meaning “numbers” (Gallian, 41). Two fundamental operations in mathematics, addition, +, and subtraction, −, owe their symbols to fourteenth- and fifteenth-century mathematicians. Nicole d’Oresme, a Frenchman who lived from 1323 to 1382, used the + symbol to abbreviate the Latin et, meaning “and,” in his Algorismus Proportionum.

In 1489 the plus and minus symbols were printed in Johannes Widmann’s Mercantile Arithmetic. The German’s work can be seen in the picture below (Washington State Mathematics Council). 

Followers soon adopted the notation for addition and subtraction. The sixteenth-century Dutch mathematician Giel Vander Hoecke used the plus and minus signs in his Een sonderlinghe boeck in dye edel conste Arithmetica, and the British mathematician Robert Recorde used the same symbols in his 1557 publication, The Whetstone of Witte (Washington State Mathematics Council). It is important to note that even though the Egyptians did not use the + and − notation, the Rhind Papyrus does use a pair of legs walking to the right to mean addition and a pair of legs walking to the left to mean subtraction (see below) (Weaver and Smith). Similarly, the Greeks and Arabs never used the + sign even though they used the operation in their daily calculations (Guedj, 81).



The division and multiplication signs have an equally interesting past. The symbol for division, ÷, called an obelus, was first used in 1659 by the Swiss mathematician Johann Heinrich Rahn in his work entitled Teutsche Algebra. The symbol was later introduced to London when the English mathematician Thomas Brancker translated Rahn’s work (Cajori, A History of Mathematics, 140). Before explaining how the cross “×” came to mean multiplication, a short biography must be given of the man who contributed so much, both directly and indirectly, to mathematical notation: William Oughtred. Oughtred lived in England from the late 1500s into the early 1600s and was educated at Eton and King’s College, Cambridge. He went on to teach some very studious pupils, one of whom was John Wallis, whose name will come up again in the history of mathematical notation (O'Connor and Robertson). Oughtred is credited with using 150 different symbols in his work; however, one of the few modern survivors is the “×” meaning multiplication. Oughtred’s cross can be seen below (Weaver and Smith).



It was not all smooth sailing for Oughtred, as he met some opposition from Leibniz, who wrote, "I do not like (the cross) as a symbol for multiplication, as it is easily confounded with x; .... often I simply relate two quantities by an interposed dot and indicate multiplication by ZC.LM." (Weaver and Smith). It was not until the 1800s that the “×” became popular in arithmetic; even then, its confusion with the letter “x” in algebra led the dot to be more widely accepted for multiplication (Weaver and Smith). Oughtred’s name will appear again in the history of math; his contributions were significant and widespread.


Equality and Congruence

The contributions of Oughtred’s fellow countryman, Robert Recorde, are also notably profound. In his 1557 book on algebra, The Whetstone of Witte, Recorde wrote about his invention of the equal sign: "To avoide the tediouse repetition of these woordes: is equalle to: I will sette as I doe often in woorke use, a paire of paralleles, or gemowe [twin] lines of one lengthe: =, bicause noe .2. thynges, can be moare equalle" (Smoller).

A similar-looking symbol, ≡, meaning “is congruent to,” is credited to Carl Friedrich Gauss in 1801. He stated “−16 ≡ 9 (mod. 5),” which means that negative sixteen is congruent to nine modulo five (Cajori, A History of Mathematical Notations, 34). During the same period, Adrien-Marie Legendre tried to employ his own notation for congruence. However, he was a bit careless: in a single work he used “=” twice to mean congruence and once to mean equality, which, needless to say, angered Gauss (Cajori, A History of Mathematical Notations, 34). Gauss’s notation stuck, and it is what is still used today in number theory and other branches of mathematics.
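Gauss’s relation is easy to check computationally. The following Python sketch (an illustration of the definition, not something from the sources above) tests congruence by asking whether the modulus divides the difference:

```python
def congruent(a: int, b: int, m: int) -> bool:
    """a is congruent to b modulo m exactly when m divides a - b."""
    return (a - b) % m == 0

# Gauss's own example: -16 is congruent to 9 modulo 5,
# since -16 - 9 = -25 is divisible by 5.
print(congruent(-16, 9, 5))  # True
```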


Three British mathematicians, Harriot, Oughtred, and Barrow, popularized the early symbols “>” and “<”, meaning strictly greater than and strictly less than. They first appeared in Thomas Harriot’s The Analytical Arts Applied to Solving Algebraic Equations, published posthumously in 1631 (Weaver and Smith). In 1647, Oughtred used the symbol on the left to stand for greater than and the symbol in the middle for less than (see below). Then in 1674, Isaac Barrow used the notation on the right in his Lectiones Opticae & Geometricae to mean "A minor est quam B," that is, "A is less than B" (symbols below from Weaver and Smith).



Almost one hundred years later, in 1734, the French mathematician Pierre Bouguer put a line under the inequality signs to form the symbols representing less than or equal to and greater than or equal to, “≤” and “≥” (UC Davis, 2007). Bouguer’s notation, like variations of the British inequality signs, is still in use today.


The Factorial
The factorial, like other symbols in math, has a multinational background, with roots in Switzerland, Germany, and France. In 1751 Euler represented the product (1)(2)(3)…(m) by the letter M, and in 1774 the German Johann Bernhard Basedow used “*” to mean 5* = (5)(4)(3)(2)(1). It was not until 1808, with Christian Kramp’s contributions, that n! came to mean n(n−1)(n−2)…(3)(2)(1) (Cajori, A History of Mathematical Notations, 72).
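Kramp’s definition translates directly into code. As a minimal Python sketch (my own illustration, not drawn from the cited history):

```python
def factorial(n: int) -> int:
    """Kramp's n! = n(n-1)(n-2)...(3)(2)(1), with 0! taken to be 1."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

# Basedow's example 5* = (5)(4)(3)(2)(1), in Kramp's notation:
print(factorial(5))  # 120
```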



The Radical Sign
The radical sign, originating in Italy and Germany, has a Middle Eastern connection as well. It was first used by the Italian mathematician Rafael Bombelli, who lived in the sixteenth century, in his l’Algebra. He wrote “R.q.[2]” for the square root of 2 and “R.c.[2]” for the cube root of 2 (Derbyshire, 84). During this time, Arab mathematicians had their own symbol for the square root (pictured at the left); however, it was not widely adopted elsewhere (Weaver and Smith). It was not until the seventeenth century, with the help of Descartes, that the symbol we still use today was employed (see below) (Weaver and Smith).




Descartes, who lived in the early 1600s, turned the German cossists’ “√” into the square root symbol that we now have by putting a bar over it (Derbyshire, 92–93).


Infinity
The symbol “∞,” meaning infinity, was first introduced by Oughtred’s student John Wallis in his 1655 book De Sectionibus Conicis (UC Davis). It is hypothesized that Wallis borrowed the symbol from a Roman numeral that meant 1,000 (Cajori, A History of Mathematical Notations, 44). Long before this, Aristotle (384–322 BC) is noted for saying three things about infinity: i) the infinite exists in nature and can be identified only in terms of quantity, ii) if infinity exists it must be defined, and iii) infinity cannot exist in reality. From these three statements Aristotle concluded that mathematicians had no use for infinity (Guedj, 112). This idea was later refuted, and the German mathematician Georg Cantor, who lived from 1845 to 1918, is quoted as saying: “I experience true pleasure in conceiving infinity as I have, and I throw myself into it… And when I come back down toward finiteness, I see with equal clarity and beauty the two concepts [of ordinal and cardinal numbers] once more becoming one and converging in the concept of finite integer” (Guedj, 115). Cantor not only accepted infinity but used aleph, the first letter of the Hebrew alphabet, as its symbol (see below) (Reimer); he referred to such quantities as “transfinite” (Guedj, 120). Another interesting fact is that Euler, while accepting the concept of infinity, did not use the familiar ∞ symbol, but instead wrote a sideways “s.”


Famous Constants
One of the most studied constants of all time is π, the ratio of the circumference of a circle to its diameter, 3.141592654…, which has been long studied and closely approximated. It was originally written by Oughtred as π/δ, where π was the periphery and δ the diameter. In 1689, J. C. Sturm, of the University of Altdorf in Bavaria, used the letter e to represent the ratio of the length of a circle to its diameter; however, it did not catch on. Pi was introduced again in 1706 by William Jones. Jones was looking over the work of John Machin and found that Machin used π to mean the ratio of circumference to diameter. In Jones’s book, Synopsis Palmariorum Matheseos, he praises Machin’s intelligence by calling him “the Truly Ingenious Mr. John Machin,” who states that in the circle, the diameter is to the circumference as 1 is to 16/5 − 4/239 − (1/3)(16/5³ − 4/239³) + (1/5)(16/5⁵ − 4/239⁵) − … = 3.14159… = π (Arndt, Haenel, 166). In subsequent years Johann Bernoulli used “c” to represent pi, and Euler used “p” in 1734 and then “c” in 1736 for the constant. Later in 1736 Euler changed his mind again and used π in his Mechanica sive motus scientia analytice exposita, and he cemented it into mathematical culture with his 1748 work entitled Introductio in analysin infinitorum (Arndt, Haenel, 166).
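Machin’s series converges quickly because both arctangent arguments, 1/5 and 1/239, are small. As an illustrative sketch (the code is mine, not from Arndt and Haenel), the series can be summed term by term in Python:

```python
def machin_pi(terms: int) -> float:
    """Approximate pi via Machin's formula,
    pi = 16*arctan(1/5) - 4*arctan(1/239),
    expanding arctan(x) = x - x^3/3 + x^5/5 - ...
    """
    total = 0.0
    for k in range(terms):
        n = 2 * k + 1                  # odd exponents 1, 3, 5, ...
        sign = -1.0 if k % 2 else 1.0  # alternating signs of the series
        total += sign * (16.0 / (n * 5.0 ** n) - 4.0 / (n * 239.0 ** n))
    return total

print(machin_pi(10))  # converges rapidly toward 3.14159...
```

Even ten terms already agree with π to well beyond ten decimal places, which is why Machin could compute 100 digits by hand with this formula.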

Another important mathematical constant is e, 2.718281828…, the base of the natural logarithms studied by John Napier. It was originally called M by the English mathematician Roger Cotes, who lived from 1682 to 1716 (Trinity). Newton first wrote exponentials with a general base a, and Leonhard Euler replaced the “a” with an “e,” most likely because e comes after a in the progression of vowels (Trinity). His “e” appeared in Mechanica and was later used by Daniel Bernoulli and Lambert (Cajori, A History of Mathematical Notations, 13). Euler’s choice of letter went down in history.
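The constant e also ties back to the factorial notation discussed earlier: e is the sum of the reciprocals of the factorials, a series Euler himself exploited. A small Python sketch (illustrative only, not from the Trinity or Cajori sources):

```python
def approx_e(terms: int) -> float:
    """Sum 1/0! + 1/1! + ... + 1/(terms-1)! as an approximation of e."""
    total = 0.0
    fact = 1
    for n in range(terms):
        if n > 0:
            fact *= n  # running value of n!
        total += 1.0 / fact
    return total

print(approx_e(15))  # 2.718281828..., matching the value quoted above
```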

The square root of negative one is another important constant, with a simpler, less varied background. Once again Euler’s approach to notation has stood the test of time. In 1777 he published Institutionum calculi integralis, where i denotes the square root of negative one, and the notation has been undisputed ever since (UC Davis).
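Euler’s choice survives even in programming languages, though engineers often write j rather than i. For instance, Python (a modern aside, not part of the history above) spells the imaginary unit 1j:

```python
i = 1j               # Python's literal for the square root of -1
print(i ** 2)        # (-1+0j): squaring i recovers negative one
print(i ** 2 == -1)  # True
```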

The mathematical symbols discussed here have long and convoluted pasts, quarreled over by mathematicians spanning the ages, and some were revised at later dates. Certain representations came into existence through mercantile records, and others were born of the need to provide mathematicians with convenient shorthand for repetitious calculations. Although their creators have passed with time, their notations remain prevalent today and continue to play an integral part in our mathematical world.