Mikhail Katz on Euler

Mikhail Katz posted a question about expressing Euler's mathematics in modern terms at http://mathoverflow.net/questions/126986/eulers-mathematics-in-terms-of-modern-theories:

Euler’s mathematics in terms of modern theories?


Some aspects of Euler's work were formalized in terms of modern infinitesimal theories by Laugwitz, McKinzie, Tuckey, and others. Referring to the latter, G. Ferraro claims that "one can see in operation in their writings a conception of mathematics which is quite extraneous to that of Euler." Ferraro concludes that "the attempt to specify Euler's notions by applying modern concepts is only possible if elements are used which are essentially alien to them, and thus Eulerian mathematics is transformed into something wholly different"; see http://dx.doi.org/10.1016/S0315-0860(03)00030-2.

Meanwhile, P. Reeder writes: "I aim to reformulate a pair of proofs from [Euler's] "Introductio" using concepts and techniques from Abraham Robinson's celebrated non-standard analysis (NSA). I will specifically examine Euler's proof of the Euler formula and his proof of the divergence of the harmonic series. Both of these results have been proved in subsequent centuries using epsilontic (standard epsilon-delta) arguments. The epsilontic arguments differ significantly from Euler's original proofs." Reeder concludes that "NSA possesses the tools to provide appropriate proxies of the inferential moves found in the Introductio"; see http://philosophy.nd.edu/assets/81379/mwpmw_13.summaries.pdf (page 6).

Historians and philosophers thus appear to disagree sharply as to the relevance of modern theories to Euler's mathematics. Can one meaningfully reformulate Euler's infinitesimal mathematics in terms of modern theories?

Patrick Reeder on Euler

Patrick Reeder (Dept of Philosophy, The Ohio State University), A `Non-standard Analysis' of Euler's Introductio in Analysin Infinitorum. Summary:

In Leonhard Euler's seminal work Introductio in Analysin Infinitorum (1748), he readily used infinite numbers and infinitesimals in many of his proofs. In this presentation, I aim to reformulate a pair of proofs from the Introductio using concepts and techniques from Abraham Robinson's celebrated non-standard analysis (NSA). I will specifically examine Euler's proof of the Euler formula and his proof of the divergence of the harmonic series. Both of these results have been proved in subsequent centuries using epsilontic (standard epsilon-delta) arguments. The epsilontic arguments differ significantly from Euler's original proofs. I will compare and contrast the epsilontic proofs with those I have developed by following Euler more closely through NSA.  I claim that NSA possesses the tools to provide appropriate proxies of the inferential moves found in the Introductio. With the remaining time, I will offer some preliminary discussion of the purity of the methods behind the proofs. Most notably, the theory behind NSA is conservative over the theory behind ordinary analysis (in effect, due to the crucial Transfer Principle of NSA.)  This peculiar feature of NSA raises special questions regarding purity.  Does the use of ideal elements count as impure when the theory that includes the ideal elements is conservative over the theory without ideal elements?  Do these methods capture the letter of purity even if they do not capture the spirit of purity?  These and closely related questions will be considered.

Infinitesimals, Imaginaries, Ideals, and Fictions

A new paper by David Sherry and Mikhail G. Katz. Abstract:

Leibniz entertained various conceptions of infinitesimals, considering them sometimes as ideal things and other times as fictions. But in both cases, he compares infinitesimals favorably to imaginary roots. We agree with the majority of commentators that Leibniz's infinitesimals are fictions rather than ideal things. However, we dispute their opinion that Leibniz's infinitesimals are best understood as logical fictions, eliminable by paraphrase. This so-called syncategorematic conception of infinitesimals is present in Leibniz's texts, but there is an alternative, formalist account of infinitesimals there too. We argue that the formalist account makes better sense of the analogy with imaginary roots and fits better with Leibniz's deepest philosophical convictions. The formalist conception supports the claim of Robinson and others that the philosophical foundations of nonstandard analysis and Leibniz's calculus are cut from the same cloth.

Is mathematical history written by the victors?

Is mathematical history written by the victors? A paper by Bair et al. From the abstract:

We examine prevailing philosophical and historical views about the origin of infinitesimal mathematics in light of modern infinitesimal theories. By removing epsilontist blinders, we show the works of Fermat, Leibniz, Euler, Cauchy and other giants of infinitesimal mathematics in a new light. We also detail several procedures of the historical infinitesimal calculus that were only clarified and formalized with the advent of modern infinitesimals. These procedures include Fermat's adequality; Leibniz's law of continuity and the transcendental law of homogeneity; Euler's principle of cancellation and infinite integers with the associated infinite products; Cauchy's "Dirac" delta function. Such procedures were interpreted and formalized in Robinson's framework in terms of concepts like the standard part principle, the transfer principle, and hyperfinite products. We evaluate the critiques of historical and modern infinitesimals by their foes from Berkeley and Cantor to Bishop and Connes. We analyze the issue of the consistency, as distinct from the issue of the rigor, of historical infinitesimals, and contrast the methodologies of Leibniz and Nieuwentijt in this connection.

The discussion of this question, "Is mathematical history written by the victors?", at MathOverflow was closed within minutes. Still, what is your opinion?

Tools, Objects, and Chimeras: Connes on the Role of Hyperreals in Mathematics

A paper by Vladimir Kanovei, Mikhail G. Katz, and Thomas Mormann. Abstract:

We examine some of Connes' criticisms of Robinson's infinitesimals starting in 1995. Connes sought to exploit the Solovay model S as ammunition against non-standard analysis, but the model tends to boomerang, undercutting Connes' own earlier work in functional analysis. Connes described the hyperreals as both a ``virtual theory'' and a ``chimera'', yet acknowledged that his argument relies on the transfer principle. We analyze Connes' ``dart-throwing'' thought experiment, but reach an opposite conclusion. In S, all definable sets of reals are Lebesgue measurable, suggesting that Connes views a theory as being ``virtual'' if it is not definable in a suitable model of ZFC. If so, Connes' claim that a theory of the hyperreals is ``virtual'' is refuted by the existence of a definable model of the hyperreal field due to Kanovei and Shelah. Free ultrafilters aren't definable, yet Connes exploited such ultrafilters both in his own earlier work on the classification of factors in the 1970s and 80s, and in Noncommutative Geometry, raising the question whether the latter may not be vulnerable to Connes' criticism of virtuality. We analyze the philosophical underpinnings of Connes' argument based on Goedel's incompleteness theorem, and detect an apparent circularity in Connes' logic. We document the reliance on non-constructive foundational material, and specifically on the Dixmier trace (featured on the front cover of Connes' magnum opus) and the Hahn-Banach theorem, in Connes' own framework. We also note an inaccuracy in Machover's critique of infinitesimal-based pedagogy.

Misha Gavrilovich: Infinity and category theory

(Joint work with Assaf Hasson [HG].)

Take a computer scientist's perspective on the notion of a model category, something fashionable some ten years ago. By definition, a model category is a category with three distinguished classes of morphisms satisfying diagram-chasing axioms.

What is a category? Well, it is a directed (= oriented) graph, with a class of distinguished subgraphs called commutative diagrams, such that every string of consecutive arrows → → ... → fits into a unique distinguished triangle subgraph whose bottom arrow is called the composition of the string.

"Three distinguished classes of morphisms" -- its edges (= arrows) are labelled by combinations of the three labels (c), (w) and (f).

The axioms? Rules for diagram chasing: adding an arrow or a label to the part of the graph you have already constructed (as a computer scientist, you cannot really have the whole of an infinite graph, can you?). For example, axiom M2 of model categories says that any arrow X → Y is equal to a composition of a (c)-labelled arrow X --(c)--> Z and a (wf)-labelled arrow Z --(wf)--> Y.

For a computer scientist, that's a diagram-chasing rule: given an arrow X → Y in the model category graph, add the arrows X --(c)--> Z and Z --(wf)--> Y and a commutative diagram of these arrows.
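A minimal executable sketch of such a diagram-chasing step, assuming a toy representation of the labelled graph; the class and method names here are our own illustration, not taken from [HG].

```python
import itertools

# Toy "model category as labelled graph" structure, with the M2
# factorization implemented as a diagram-chasing step.  Names are
# illustrative only.

class LabelledGraph:
    def __init__(self):
        self.nodes = set()
        self.edges = {}          # (src, dst) -> set of labels
        self.triangles = []      # recorded commutative diagrams
        self._fresh = itertools.count()

    def add_edge(self, src, dst, labels=()):
        self.nodes.update((src, dst))
        self.edges.setdefault((src, dst), set()).update(labels)

    def apply_m2(self, src, dst):
        """Factor the arrow src -> dst as a (c)-labelled arrow into a
        fresh intermediate object followed by a (wf)-labelled arrow."""
        assert (src, dst) in self.edges, "no such arrow to factor"
        mid = f"m{next(self._fresh)}"
        self.add_edge(src, mid, {"c"})
        self.add_edge(mid, dst, {"w", "f"})
        self.triangles.append((src, mid, dst))   # the commutative diagram
        return mid

g = LabelledGraph()
g.add_edge("X", "Y")        # an unlabelled arrow X --> Y
m = g.apply_m2("X", "Y")    # chase: X --(c)--> m0 --(wf)--> Y
```

Note that the rule only ever extends the finite part of the graph built so far, which is exactly the computer scientist's reading of the axiom.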

What would a computer scientist do now? Use these rules in a script generating a dungeon for a roguelike: rooms are objects, doors are arrows/morphisms with their labels, and the random seed is a labelled commutative diagram in a model category.

Keeping track of commutative diagrams is a mess -- the morphism sets between even moderate objects are usually unlistable [Gr]. So s/he assumes all diagrams commute, in v0 of the script...

But are these rules consistent? Is there a model category where all the diagrams commute -- in other words, is there a partial order admitting the structure of a model category in which, say, there is an arrow with no labels?

A set theorist quickly finds some: there is one for every regular cardinal \kappa, including 0. As they say, a model category is a tool to prove results about its homotopy category, so let us describe the homotopy categories of such model categories.

Objects of Ht_0 are classes of countable sets; for \kappa>0, objects of Ht_\kappa are (arbitrary) classes of sets of size \kappa.

X\longrightarrow Y in Ht_0 iff for every x\in X there exists y\in Y such that |y \setminus x| is finite.

X\longrightarrow Y in Ht_\kappa iff for every x\in X there exists x_S\subseteq Y with |x_S|<\kappa such that |x\setminus \bigcup x_S|<\kappa.
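A small worked example of the arrow relation in Ht_0 may help; this illustration is ours, not taken from [HG].

```latex
Let $E$ denote the set of even naturals and $\mathbb{N}$ the set of
all naturals, and take $X=\{\mathbb{N}\}$, $Y=\{E\}$.
Then $X\longrightarrow Y$ in $Ht_0$: for $x=\mathbb{N}$ the choice
$y=E$ gives $|E\setminus\mathbb{N}|=0$, which is finite.
But $Y\not\longrightarrow X$: for $x=E$ the only candidate is
$y=\mathbb{N}$, and $|\mathbb{N}\setminus E|$ is infinite.
So the arrow relation is a nontrivial preorder already on
singleton classes.
```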

A homotopy theorist tells us that the ((cwf)-labelled) model category stands to its homotopy category as a category stands to its isomorphism classes of objects: for the purposes of a good ("meaningful") question, isomorphic, resp. homotopic, objects are always equivalent, but to prove anything you need to pick representatives and work in the category, resp. in the category with its (cwf)-labels. A typical task is to calculate some homotopy invariants, e.g. some properties of the homotopy categories or of canonical functors from the homotopy categories.

A functor L:Ht_\kappa \longrightarrow C is another word for an order-preserving (or order-reversing) map; it is canonical iff "it depends on the category itself only", says the homotopy theorist. S/he means that for any automorphism s:Ht_\kappa\longrightarrow Ht_\kappa of the partial order ("category"), L\circ s and L are "equivalent"; a logician (almost correctly) understands this to say "definable in the language of a category in a strong enough logic".

The easiest canonical functor to think of is perhaps

\Bbb L_c^\kappa {\rm card}(X)=\min \{|Y|: X\longrightarrow Y \}

-- cardinality made homotopy invariant.

And then the revisited continuum hypothesis theorem of Shelah:

\Bbb L_c^\kappa {\rm card}(\{A: |A|=\kappa,\;\;A\subseteq \lambda\})=\lambda

for many \kappa<\lambda -- is a homotopy-invariant continuum hypothesis. The equality

\lambda^{\leq\kappa}=2^{\kappa}+\Bbb L_c^\kappa {\rm card}(\{A:|A|=\kappa,A\subseteq \lambda\})

shows that this homotopy invariant helps in a classical question: the cardinality of the set of all subsets of size \kappa splits into a homotopy-invariant part and a non-homotopy-invariant part. [Sh:460, p.4]

But a set theorist may ask a simpler question first: are the homotopy categories Ht_\kappa dense (as partial orders)? Ht_0 is dense; Ht_\omega is not; Ht_\kappa is dense iff \kappa is not measurable.

Proof: I\leq_\kappa \kappa means that I is a <\kappa-closed ideal on \kappa; such an I is maximal (in Ht_\kappa) iff there is nothing strictly <_\kappa-between I and \kappa. Finally, note that if there is nothing strictly between X\leq_\kappa Y, then for every y\in Y there is nothing strictly between \{x\cap y:x\in X\}\leq_\kappa \{y\}; and X<_\kappa Y implies that at least one of these inequalities is strict.

To summarise: Amazingly, this language of a labelled category has "sufficient generality to cover in a uniform way the different homotopy theories encountered" if "properly, often non-obviously, developed" to express "a large number of arguments that [are] formally similar to well-known ones in algebraic topology", and that these "homotopy theories" are the (non-obvious) structure the homotopy category does inherit. [Qu] An optimist may hope that thereby the homotopy theory shall "contribute new insights to old Cantorian problems of the scale of infinities" [Ma], e.g. one indeed finds in set theory  "a large number of arguments that [are] formally similar to well-known ones in algebraic topology".

As always in mathematics, there is an interplay between "too much structure, too much information" and "too little information, too little to say". A model category is a means of forgetting some of the "too much information" while (hopefully) not getting lost in the "too little to say". A key fact about this construction is that it collapses the power function, probably the greatest creator of chaos in set theory. But, maybe because this is the whole point of model categories, it also creates some replacement -- e.g. the (c)-labelled arrows --(c)--> and the (wf)-labelled arrows --(wf)-->, which turn out to be closely related to Shelah's covering families and numbers -- and this replacement turns out to be of interest.

[HG] M. Gavrilovich and A. Hasson, Exercises de style: A homotopy theory for set theory, Parts I and II.

[Gr] M. Gromov, Ergobrain. \S2.5 On Categories and Functors, p.69.

[Ma] Yu. Manin, Foundations as Superstructure. (Reflections of a practicing mathematician).

[Qu] D. Quillen, Homotopical Algebra. Available online.

[Sh:460] S. Shelah, The Generalized Continuum  Hypothesis revisited.

Reification in Internal Set Theory

From  Edward Nelson’s introduction to his book:

Ordinarily in mathematics, when one introduces a new concept one defines it. For example, if this were a book on “blobs” I would begin with a definition of this new predicate: x is a blob in case x is a topological space such that no uncountable subset is Hausdorff. Then we would be all set to study blobs. Fortunately, this is not a book about blobs, and I want to do something different. I want to begin by introducing a new predicate “standard” to ordinary mathematics without defining it.

The reason for not defining “standard” is that it plays a syntactical, rather than a semantic, role in the theory. It is similar to the use of “fixed” in informal mathematical discourse. One does not define this notion, nor consider the set of all fixed natural numbers. The statement “there is a natural number bigger than any fixed natural number” does not appear paradoxical. The predicate “standard” will be used in much the same way, so that we shall assert “there is a natural number bigger than any standard natural number.” But the predicate “standard”— unlike “fixed”—will be part of the formal language of our theory, and this will allow us to take the further step of saying, “call such a natural number, one that is bigger than any standard natural number, unlimited.”

We shall introduce axioms for handling this new predicate “standard” in a consistent way. In doing so, we do not enlarge the world of mathematical objects in any way, we merely construct a richer language to discuss the same objects as before. In this way we construct a theory extending ordinary mathematics, called Internal Set Theory that axiomatizes a portion of Abraham Robinson’s nonstandard analysis. In this construction, nothing in ordinary mathematics is changed.

Nelson appears to claim that his approach allows one to avoid reification of abstract quantities. The anti-reification stance is deeply personal for him and has spiritual roots, as explained in his paper Mathematics and Faith:


I must relate how I lost my faith in Pythagorean numbers. One morning at the 1976 Summer Meeting of the American Mathematical Society in Toronto, I woke early. As I lay meditating about numbers, I felt the momentary overwhelming presence of one who convicted me of arrogance for my belief in the real existence of an infinite world of numbers, leaving me like an infant in a crib reduced to counting on my fingers. Now I live in a world in which there are no numbers save those that human beings on occasion construct.

He continues:

During my first stay in Rome I used to play chess with Ernesto Buonaiuti. In his writings and in his life, Buonaiuti with passionate eloquence opposed the reification of human abstractions. I close by quoting one sentence from his Pellegrino di Roma “For [St. Paul] abstract truth, absolute laws, do not exist, because all of our thinking is subordinated to the construction of this holy temple of the Spirit, whose manifestations are not abstract ideas, but fruits of goodness, of peace, of charity and forgiveness.”

This is really remarkable, because the Idealisation Axiom of Nelson's Internal Set Theory is a mathematical reformulation of the process of reification:

Let R = R(x,y) be a classical relation.

In order to be able to find an x with R(x,y) for all standard y, a necessary and sufficient condition is:

for each standard finite set F, it is possible to find an x = x_F such that R(x,y) holds for all y \in F.
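A toy illustration of the finite-satisfiability side of the axiom, taking R(x,y) to be "x > y" on the natural numbers; the example is ours, not Nelson's.

```python
def finite_witness(F):
    """For the relation R(x, y): "x > y", any finite set F of
    (standard) natural numbers admits a witness x_F satisfying
    R(x_F, y) for every y in F."""
    return max(F, default=0) + 1

F = {0, 3, 17}
x_F = finite_witness(F)
assert all(x_F > y for y in F)  # R(x_F, y) holds for every y in F
```

Idealisation then asserts the existence of a single x with x > y for all standard y at once; such an x is an unlimited natural number -- a reified object living in the non-standard part of the universe.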

Nelson hides reified objects in the non-standard part of his universe!


The aim of this blog is to act as a forum for discussion of a wide range of issues related to infinitesimals in all their manifestations: in mathematics and its applications (first of all, in physics), in mathematical didactics, and in the history and philosophy of mathematics.

The blog is edited by Alexandre Borovik and Mikhail Katz. It is still not set up properly but, hopefully, will be fully functional in a week.