# (ε, δ)-definition of limit

In calculus, the (ε, δ)-definition of limit ("epsilon–delta definition of limit") is a formalization of the notion of limit. The concept is due to Augustin-Louis Cauchy, who never gave an (ε, δ) definition of limit in his Cours d'Analyse, but occasionally used ε, δ arguments in proofs. It was first given as a formal definition by Bernard Bolzano in 1817, and the definitive modern statement was ultimately provided by Karl Weierstrass. It provides rigor to the following informal notion: the dependent expression f(x) approaches the value L as the variable x approaches the value c if f(x) can be made as close as desired to L by taking x sufficiently close to c.

## History

Although the Greeks examined limiting processes, such as the Babylonian method, they probably had no concept similar to the modern limit. The need for the concept of a limit arose in the 1600s, when Pierre de Fermat attempted to find the slope of the tangent line at a point $x$ to the graph of a function such as $f(x)=x^2$. Using a non-zero but almost zero quantity $E$, Fermat performed the following calculation:

\begin{align} \text{slope} & = \frac{f(x+E)-f(x)}{E} \\ & = \frac{(x+E)^2-x^2}{E}\\ & = \frac{x^2+2xE+E^2-x^2}{E} \\ & = \frac{2xE+E^2}{E} = 2x+E = 2x. \end{align}

The key to the above calculation is that since $E$ is non-zero, one can divide $f(x+E)-f(x)$ by $E$, but since $E$ is close to 0, $2x+E$ is essentially $2x$. Quantities such as $E$ are called infinitesimals. The problem with this calculation is that mathematicians of the era were unable to rigorously define a quantity with properties of $E$, even though it was common practice to 'neglect' higher power infinitesimals and this seemed to yield correct results.
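Fermat's calculation can be mimicked numerically: the secant slope $(f(x+E)-f(x))/E$ tends to $2x$ as $E$ shrinks. A minimal sketch in Python (the name `difference_quotient` is ours, purely illustrative):

```python
def difference_quotient(f, x, E):
    """Fermat-style slope of the secant line through x and x + E."""
    return (f(x + E) - f(x)) / E

# For f(x) = x**2 the quotient equals 2x + E exactly, so it tends to 2x as E shrinks.
f = lambda t: t ** 2
for E in (1e-1, 1e-3, 1e-6):
    print(difference_quotient(f, 3.0, E))  # approaches 2 * 3 = 6
```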

This problem reappeared later in the 1600s at the center of the development of calculus, where calculations such as Fermat's are important to the calculation of derivatives. Isaac Newton first developed calculus via an infinitesimal quantity called a fluxion. He developed them in reference to the idea of an "infinitely small moment in time..." However, Newton later rejected fluxions in favor of a theory of ratios that is close to the modern $\varepsilon\text{–}\delta$ definition of the limit. Moreover, Newton was aware that the limit of the ratio of vanishing quantities was not itself a ratio, as he wrote:

> Those ultimate ratios ... are not actually ratios of ultimate quantities, but limits ... which they can approach so closely that their difference is less than any given quantity...

Additionally, Newton occasionally explained limits in terms similar to the epsilon–delta definition. Gottfried Wilhelm Leibniz developed an infinitesimal of his own and tried to provide it with a rigorous footing, but it was still greeted with unease by some mathematicians and philosophers.

Augustin-Louis Cauchy gave a definition of limit in terms of a more primitive notion he called a variable quantity. He never gave an epsilon–delta definition of limit (Grabiner 1981). Some of Cauchy's proofs contain indications of the epsilon–delta method. Whether or not his foundational approach can be considered a harbinger of Weierstrass's is a subject of scholarly dispute. Grabiner feels that it is, while Schubring (2005) disagrees. Nakane concludes that Cauchy and Weierstrass gave the same name to different notions of limit.

Eventually, Weierstrass and Bolzano are credited with providing a rigorous footing for calculus, in the form of the modern $\varepsilon\text{–}\delta$ definition of the limit. The need for reference to an infinitesimal $E$ was then removed, and Fermat's computation turned into the computation of the following limit:

$\lim_{h\to 0} \frac{f(x+h)-f(x)}{h}.$

This is not to say that the limiting definition was free of problems as, although it removed the need for infinitesimals, it did require the construction of the real numbers by Richard Dedekind. This is also not to say that infinitesimals have no place in modern mathematics, as later mathematicians were able to rigorously create infinitesimal quantities as part of the hyperreal number or surreal number systems. Moreover, it is possible to rigorously develop calculus with these quantities and they have other mathematical uses.

## Informal statement

A viable informal (that is, intuitive or provisional) definition is that a "function f approaches the limit L near a (symbolically, $\lim_{x \to a}f(x) = L \,$) if we can make f(x) as close as we like to L by requiring that x be sufficiently close to, but unequal to, a."

When we say that two things are close (such as f(x) and L or x and a), we mean that the difference (or distance) between them is small. When f(x), L, x, and a are real numbers, the difference/distance between two numbers is the absolute value of the difference of the two. Thus, when we say f(x) is close to L, we mean that |f(x) − L| is small. When we say that x and a are close, we mean that |x − a| is small.

When we say that we can make f(x) as close as we like to L, we mean that for all non-zero distances, ε, we can make the distance between f(x) and L smaller than ε.

When we say that we can make f(x) as close as we like to L by requiring that x be sufficiently close to, but unequal to, a, we mean that for every non-zero distance ε, there is some non-zero distance δ such that if the distance between x and a is less than δ, then the distance between f(x) and L is smaller than ε.

The informal/intuitive aspect to be grasped here is that the definition requires the following internal conversation (which is typically paraphrased by such language as "your enemy/adversary attacks you with an ε, and you defend/protect yourself with a δ"): One is provided with any challenge ε > 0 for a given f, a, and L. One must answer with a δ > 0 such that 0 < |x − a| < δ implies that |f(x) − L| < ε. If one can provide an answer for any challenge, then one has proven that the limit exists.
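This challenge–response game is easy to act out in code. The sketch below is our own illustration (the names `respond` and `survives_challenge` are invented): take $f(x) = x^3$, $a = 1$, $L = 1$; since $|x^3 - 1| = |x-1|\,|x^2+x+1|$ and $|x^2+x+1| < 7$ when $|x-1| < 1$, the defender may answer $\delta = \min(1, \varepsilon/7)$.

```python
def respond(epsilon):
    """Defender's delta for f(x) = x**3 near a = 1 (one choice that works, not the only one)."""
    return min(1.0, epsilon / 7.0)

def survives_challenge(epsilon, delta, samples=2000):
    """Spot-check |f(x) - L| < epsilon at sample points with 0 < |x - a| < delta."""
    a, L = 1.0, 1.0
    for k in range(1, samples + 1):
        h = delta * k / (samples + 1)  # 0 < h < delta
        for x in (a - h, a + h):
            if not abs(x ** 3 - L) < epsilon:
                return False
    return True

for eps in (10.0, 0.5, 1e-4):
    assert survives_challenge(eps, respond(eps))
```

A spot check on finitely many sample points is evidence, not a proof; the algebraic bound above is what actually guarantees the implication for every $x$.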

## Precise statement and related statements

### Precise statement for real-valued functions

The $(\varepsilon, \delta)$ definition of the limit of a function is as follows:

Let $f$ be a real-valued function defined on a subset $D$ of the real numbers. Let $c$ be a limit point of $D$ and let $L$ be a real number. We say that

$\lim_{x\to c}f(x) = L$

if for every $\varepsilon > 0$ there exists a $\delta > 0$ such that, for all $x\in D$, if $0 < |x-c| < \delta$, then $|f(x)-L| < \varepsilon$.

Symbolically:

$\lim_{x \to c} f(x) = L \iff (\forall \varepsilon > 0,\,\exists \ \delta > 0,\,\forall x \in D,\,0 < |x - c| < \delta \ \Rightarrow \ |f(x) - L| < \varepsilon)$

If $D=[a,b]$ or $D=\mathbb{R}$, then the condition that $c$ is a limit point can be replaced with the simpler condition that c belongs to D, since closed real intervals and the entire real line are perfect sets.

### Precise statement for functions between metric spaces

The definition can be generalized to functions that map between metric spaces. These spaces come with a function, called a metric, that takes two points in the space and returns a real number that represents the distance between the two points. The generalized definition is as follows:

Suppose $f$ is defined on a subset $D$ of a metric space $X$ with a metric $d_X(x,y)$ and maps into a metric space $Y$ with a metric $d_Y(x,y)$. Let $c$ be a limit point of $D$ and let $L$ be a point of $Y$.

We say that

$\lim_{x\to c}f(x) = L$

if for every $\varepsilon > 0$, there exists a $\delta > 0$ such that for all $x\in D$, if $0 < d_X(x,c) < \delta$, then $d_Y(f(x),L) < \varepsilon$.

Since $d(x,y) = |x-y|$ is a metric on the real numbers, one can show that this definition generalizes the first definition for real functions.
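As a concrete sketch of the metric-space definition (our own example, not from the article), take $f(t) = (\cos t, \sin t)$ mapping $\mathbb{R}$ into $\mathbb{R}^2$ with the Euclidean metric. Then $f(t) \to (1,0)$ as $t \to 0$, and since the chord satisfies $d_Y(f(t),(1,0)) = 2|\sin(t/2)| \leq |t|$, the response $\delta = \varepsilon$ suffices:

```python
import math

def d_euclid(p, q):
    """Euclidean metric on R^2."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def f(t):
    """Map R into R^2 by wrapping onto the unit circle."""
    return (math.cos(t), math.sin(t))

L = (1.0, 0.0)

def within_epsilon(epsilon, samples=1000):
    """Spot-check d_Y(f(t), L) < epsilon whenever 0 < |t - 0| < delta = epsilon."""
    delta = epsilon  # chord length <= arc length, so delta = epsilon works
    for k in range(1, samples + 1):
        t = delta * k / (samples + 1)  # 0 < t < delta
        if not (d_euclid(f(t), L) < epsilon and d_euclid(f(-t), L) < epsilon):
            return False
    return True

for eps in (2.0, 0.25, 1e-4):
    assert within_epsilon(eps)
```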

### Negation of the precise statement

The logical negation of the definition is as follows:

Suppose $f$ is defined on a subset $D$ of a metric space $X$ with a metric $d_X(x,y)$ and maps into a metric space $Y$ with a metric $d_Y(x,y)$. Let $c$ be a limit point of $D$ and let $L$ be a point of $Y$.

We say that

$\lim_{x\to c}f(x) \neq L$

if there exists an $\varepsilon > 0$ such that for all $\delta > 0$ there is an $x\in D$ such that $0 < d_X(x,c) < \delta$ and $d_Y(f(x),L) \geq \varepsilon$.

We say that $\lim_{x\to c}f(x)$ does not exist if for all $L\in Y$, $\lim_{x\to c}f(x) \neq L$.

For the negation of a real-valued function defined on the real numbers, simply set $d_Y(x,y) = d_X(x,y)= |x-y|$.
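The negation can be witnessed concretely with a standard counterexample (sketched here in code; the name `witness` is ours): $\lim_{x\to 0} \sin(1/x) \neq 0$, because for $\varepsilon = 1/2$ and any $\delta > 0$ one can pick $x = 1/(\pi/2 + 2\pi n)$ with $n$ large enough that $0 < x < \delta$, yet $\sin(1/x) = 1$.

```python
import math

def witness(delta):
    """For epsilon = 1/2: return x with 0 < x < delta but |sin(1/x) - 0| >= 1/2."""
    # Choose n large enough that pi/2 + 2*pi*n > 1/delta, so x < delta.
    n = max(0, math.ceil((1.0 / delta - math.pi / 2) / (2 * math.pi))) + 1
    return 1.0 / (math.pi / 2 + 2 * math.pi * n)

for delta in (1.0, 1e-3, 1e-9):
    x = witness(delta)
    assert 0 < x < delta
    assert abs(math.sin(1.0 / x) - 0) >= 0.5  # sin(1/x) = 1 up to rounding
```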

### Precise statement for limits at infinity

The precise statement for limits at infinity is as follows:

Suppose $f$ is a real-valued function defined on a subset $D$ of the real numbers that contains arbitrarily large values. We say that

$\lim_{x\to\infty}f(x) = L$

if for every $\varepsilon > 0$, there is a real number $N > 0$ such that for all $x\in D$, if $x > N$ then $|f(x) - L| < \varepsilon$.

It is also possible to give a definition in general metric spaces.[citation needed]
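For instance (our own illustration), $f(x) = 1/x \to 0$ as $x \to \infty$, with the response $N = 1/\varepsilon$:

```python
def N_for(epsilon):
    """Threshold such that x > N implies |1/x - 0| < epsilon."""
    return 1.0 / epsilon

def tail_within(epsilon, samples=1000):
    """Spot-check |f(x) - 0| < epsilon at sample points x > N."""
    N = N_for(epsilon)
    return all(abs(1.0 / (N + k)) < epsilon for k in range(1, samples + 1))

for eps in (1.0, 0.01, 1e-6):
    assert tail_within(eps)
```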

## Worked examples

### Example 1

We will show that

$\lim_{x\to 0} x\sin{\left(\frac{1}{x}\right)} = 0$.

We let $\varepsilon > 0$ be given. We need to find a $\delta >0$ such that $0 < |x-0| < \delta$ implies $\left|x\sin\left(\frac{1}{x}\right) - 0\right| < \varepsilon$. (Note that $x \neq 0$ is required in any case, since $\sin(1/x)$ is undefined at $x = 0$.)

Since sine is bounded above by 1 and below by −1,

\begin{align} \left|x\sin{\left(\frac{1}{x}\right)} - 0\right| &= \left|x\sin{\left(\frac{1}{x}\right)}\right|\\  &= |x|\left|\sin{\left(\frac{1}{x}\right)}\right| \\ &\leq |x|.  \end{align}

Thus, if we take $\delta = \varepsilon$, then $|x| =|x-0| < \delta$ implies $\left|x\sin{\left(\frac{1}{x}\right)} - 0\right| \leq |x| < \varepsilon$, which completes the proof.
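The choice $\delta = \varepsilon$ can be spot-checked numerically (a sanity check on sample points, not a proof; the function name is ours):

```python
import math

def g(x):
    return x * math.sin(1.0 / x)

def check_delta_equals_epsilon(epsilon, samples=1000):
    """Verify |x sin(1/x)| < epsilon at sample points with 0 < |x| < delta = epsilon."""
    delta = epsilon
    for k in range(1, samples + 1):
        x = delta * k / (samples + 1)  # 0 < x < delta
        if not (abs(g(x)) < epsilon and abs(g(-x)) < epsilon):
            return False
    return True

for eps in (0.5, 1e-2, 1e-5):
    assert check_delta_equals_epsilon(eps)
```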

### Example 2

Let us prove the statement that

$\lim_{x\to a} x^2 = a^2$

for any real number $a$.

Let $\varepsilon>0$ be given. We will find a $\delta > 0$ such that $|x-a|<\delta$ implies $|x^2-a^2|<\varepsilon$.

We start by factoring:

$|x^2-a^2| = |(x-a)(x+a)|=|x-a||x+a|.$

We recognize that $|x-a|$ is the term bounded by $\delta$, so we can impose a preliminary bound of 1 on it and later choose a $\delta$ no larger than that.

So we suppose $|x-a| < 1$. Since $|x| - |y| \leq |x-y|$ holds in general for real numbers $x$ and $y$, we have

$|x| - |a| \leq |x-a| < 1.$

Thus,

$|x| < 1 + |a|.$

Thus via the triangle inequality,

$|x+a| \leq |x| + |a| < 2|a| + 1.$

Thus, if we further suppose that

$|x-a| < \frac{\varepsilon}{2|a| +1}$

then

$|x^2-a^2| <\varepsilon.$

In summary, we set

$\delta = \min{\left\{ 1,\frac{\varepsilon}{2|a| +1}\right\} }.$

So, if $|x-a|<\delta$, then

\begin{align} |x^2-a^2| &= |x-a||x+a| \\  &< \frac{\varepsilon}{2|a| +1}(|x+a|)\\ &< \frac{\varepsilon}{2|a| +1}(2|a|+1)\\ &=\varepsilon.  \end{align}

Thus, we have found a $\delta$ such that $|x-a| < \delta$ implies $|x^2-a^2| <\varepsilon$. Thus, we have shown that

$\lim_{x\to a} x^2 = a^2$

for any real number $a$.
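The $\delta$ found in this proof can be tested numerically for various $a$ and $\varepsilon$ (a spot check on sample points, not a proof; the helper names are ours):

```python
def delta_for_square(a, epsilon):
    """delta = min(1, epsilon / (2|a| + 1)) from the proof above."""
    return min(1.0, epsilon / (2 * abs(a) + 1))

def check(a, epsilon, samples=1000):
    """Verify |x**2 - a**2| < epsilon at sample points with 0 < |x - a| < delta."""
    delta = delta_for_square(a, epsilon)
    for k in range(1, samples + 1):
        h = delta * k / (samples + 1)  # 0 < h < delta
        for x in (a - h, a + h):
            if not abs(x ** 2 - a ** 2) < epsilon:
                return False
    return True

for a in (-10.0, 0.0, 3.0):
    for eps in (1.0, 1e-3):
        assert check(a, eps)
```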

### Example 3

Let us prove the statement that

$\lim_{x \to 5} (3x - 3) = 12.$

This limit is easy to see from the graph of the function, which makes it a good introductory proof. According to the formal definition above, a limit statement is correct if and only if confining $x$ to within $\delta$ units of $c$ will inevitably confine $f(x)$ to within $\varepsilon$ units of $L$. In this specific case, this means that the statement is true if and only if confining $x$ to within $\delta$ units of 5 will inevitably confine

$3x - 3$

to $\varepsilon$ units of 12. The overall key to showing this implication is to demonstrate how $\delta$ and $\varepsilon$ must be related to each other such that the implication holds. Mathematically, we want to show that

$0 < | x - 5 | < \delta \ \Rightarrow \ | (3x - 3) - 12 | < \varepsilon .$

Simplifying, factoring, and dividing by 3 on the right-hand side of the implication yields

$| x - 5 | < \varepsilon / 3 ,$

which immediately gives the required result if we choose

$\delta = \varepsilon / 3 .$

Thus the proof is complete. The key to the proof is the ability to choose bounds on $x$ and then conclude corresponding bounds on $f(x)$, which in this case were related by a factor of 3, precisely the slope of the line

$y = 3x - 3 .$
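The relationship $\delta = \varepsilon / 3$ can again be spot-checked on sample points (an illustration, not a proof):

```python
def check_linear(epsilon, samples=1000):
    """Verify |(3x - 3) - 12| < epsilon at sample points with 0 < |x - 5| < delta = epsilon / 3."""
    delta = epsilon / 3
    for k in range(1, samples + 1):
        h = delta * k / (samples + 1)  # 0 < h < delta
        for x in (5 - h, 5 + h):
            if not abs((3 * x - 3) - 12) < epsilon:
                return False
    return True

for eps in (2.0, 1e-4):
    assert check_linear(eps)
```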

## Continuity

A function f is said to be continuous at c if it is both defined at c and its value at c equals the limit of f as x approaches c:

$\lim_{x\to c} f(x) = f(c).$

The $(\varepsilon, \delta)$ definition for a continuous function can be obtained from the definition of a limit by replacing $0<|x-c|<\delta$ with $|x-c|<\delta$, to ensure that f is defined at c and equals the limit.

A function f is said to be continuous on an interval I if it is continuous at every point c of I.
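The modified condition $|x-c| < \delta$ (with the puncturing removed) can be spot-checked the same way as the limit definition. A sketch, with invented helper names; note that a finite sample check is evidence of continuity, not a proof, while a single failing point does disprove it:

```python
def continuous_at(f, c, delta_of_eps, eps_list=(1.0, 0.1, 1e-3), samples=500):
    """Spot-check the continuity condition: |x - c| < delta implies |f(x) - f(c)| < eps."""
    for eps in eps_list:
        delta = delta_of_eps(eps)
        for k in range(samples + 1):  # k = 0 includes x = c itself
            h = delta * k / (samples + 1)
            for x in (c - h, c + h):
                if not abs(f(x) - f(c)) < eps:
                    return False
    return True

# |x| is continuous at 0: delta = epsilon works, since ||x| - |0|| = |x|.
assert continuous_at(abs, 0.0, lambda eps: eps)

# A jump function fails at 0: points just left of 0 stay a distance 1 from f(0).
step = lambda x: 0.0 if x < 0 else 1.0
assert not continuous_at(step, 0.0, lambda eps: eps)
```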

## Comparison with infinitesimal definition

Keisler proved that a hyperreal definition of limit reduces the logical quantifier complexity by two quantifiers. Namely, $f(x)$ converges to a limit L as $x$ tends to a if and only if the value $f(x+e)$ is infinitely close to L for every infinitesimal e. (See Microcontinuity for a related definition of continuity, essentially due to Cauchy.)

Infinitesimal calculus textbooks based on Robinson's approach provide definitions of continuity, derivative, and integral at standard points in terms of infinitesimals. Once notions such as continuity have been thoroughly explained via the approach using microcontinuity, the epsilon–delta approach is presented as well. Karel Hrbáček argues that the definitions of continuity, derivative, and integration in Robinson-style non-standard analysis must be grounded in the ε–δ method, in order to cover also non-standard values of the input. Błaszczyk et al. argue that microcontinuity is useful in developing a transparent definition of uniform continuity, and characterize the criticism by Hrbáček as a "dubious lament". Hrbáček proposes an alternative non-standard analysis, which (unlike Robinson's) has many "levels" of infinitesimals, so that limits at one level can be defined in terms of infinitesimals at the next level.