The Maclaurin Series


ABDULLAH ABDULLAH

The Maclaurin series is a fascinating kind of series in mathematics, since it approximates complicated functions as a sum of polynomial terms. It is interesting, and somewhat uncanny, that a simple series of polynomial terms can exactly define a different function, albeit only on a restricted domain of inputs.

The general equation for series of this type is shown below:

            \(\sum_{n=0}^\infty \frac{f^{(n)}(a)}{n!}(x-a)^n \)

where:

  • \(n!\) is the factorial of n
  • \(a\) is a chosen real or complex number
  • \(f^{(n)}(a)\) is the nth derivative of the function \(f(x)\) evaluated at \(x = a\)
What is shown above is the generalised version (the Taylor series), since it includes a constant \(a\); for the Maclaurin series, the constant \(a\) is zero.

Of course, simply showing the formula outright doesn't help much in understanding what Maclaurin series are all about, so here are a few step-by-step examples that will clarify how they work. I will also discuss why they are important in practical applications.

The General Problem

Let's say that we want to define a general function \( f(x) \) as a Maclaurin series expansion, or, put more simply, to approximate it with an infinite sum of terms that are either constant or powers of \(x\): \(x\), \(x^2\), \(x^3\), and so on.

Let \(p(x)\) be the approximate function. 

What if we start by saying that \(p(x) = f(0)\)? This means that \(p(x)\) is a horizontal straight line, only guaranteed to match \(f(x)\) at a single input, \(x = 0\):



[Figure 1: A basic (first term) approximation of f(x)]

To increase the accuracy of the approximate function p(x), we can add another term. But what should it be? If we restrict ourselves to using only bits of the original function at \(0\), what can we do to try and make the result apply better to values of \(x\) that aren't \(0\)? It turns out that the other term should follow this rule:
\(p'(0) = f'(0)\). 
We look at the gradient because we need the approximate function to increase at the same rate as the actual function does at \(0\); matching the slopes means \(p(x)\) more closely approximates values around \(0\), not just at \(0\) itself.

Let's return to the main function \(p(x)\). We first defined it to be \(p(x) = f(0)\). One way to make the derivatives match is to write \(p'(x) = f'(0)\), i.e. the derivative of \(p(x)\) matches \(f'(0)\) not just at \(0\), but everywhere. Integrating both sides gives \(\int p'(x)\, dx = \int f'(0)\, dx\), which leads to \(p(x) = f'(0)x + c\), since \(f'(0)\) is a constant. To preserve the constraint we had originally (namely that the values of the functions match at \(x = 0\)), as well as this new derivative constraint, we should set \(c = f(0)\). This makes the function \(p(x) = f(0) + f'(0)x\).
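As a quick numerical sketch of the two steps so far (choosing \(f(x) = e^x\) purely as an illustration, since \(f(0) = f'(0) = 1\)), the one-term and two-term approximations behave as described:

```python
import math

# Illustrative choice: f(x) = e^x, so f(0) = 1 and f'(0) = 1.
def p0(x):
    return 1.0          # p(x) = f(0): a horizontal line

def p1(x):
    return 1.0 + x      # p(x) = f(0) + f'(0)x: matches value AND slope at 0

x = 0.1
print(math.exp(x), p0(x), p1(x))
```

Near \(0\), `p1` is already much closer to \(e^x\) than the horizontal line `p0` is.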

[Figure 2: This is a better approximation for \(f(x)\)]

To make the approximation even more accurate, we can add yet another term. Maybe you can see where this is going: we now require \(p''(0) = f''(0)\), so let's say (because this satisfies the constraint) that \(p''(x) = f''(0)\). Repeating the integration steps of the previous method, integrating once gives \(p'(x) = f'(0) + f''(0)x\). We can now use this definition and integrate back to the main \(p(x)\) function:

\(p(x) = f(0) + f'(0)x + \frac{1}{2}f''(0)x^2\)

The \(\frac{1}{2}\) appears because, when we differentiate this expression, the power of \(x\) brought down from the final term cancels out the fraction.
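As a quick check, differentiating \(p(x)\) twice confirms that all three constraints hold:

```latex
p(x)   = f(0) + f'(0)\,x + \tfrac{1}{2}f''(0)\,x^2 \quad\Rightarrow\quad p(0) = f(0) \\
p'(x)  = f'(0) + f''(0)\,x \quad\Rightarrow\quad p'(0) = f'(0) \\
p''(x) = f''(0) \quad\Rightarrow\quad p''(0) = f''(0)
```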
[Figure 3: Three terms of the Maclaurin series for a different function]

Repeating these steps gives us the following for a general \(f(x)\):
\(p(x) = f(0) + f'(0)x + \frac{1}{2!}f''(0)x^2 + \frac{1}{3!}f'''(0)x^3 + \frac{1}{4!}f''''(0)x^4 +... \)

This is an infinite series in which \(p(x)\) approximates \(f(x)\), though possibly only over a certain range of \(x\). If we want to, we can write this in sigma notation.
\(p(x) = \sum_{n = 0}^{\infty}\frac{f^{(n)}(0)}{n!}x^n\)

As we said above, this is exactly the Taylor series formula from the start, but with \(a = 0\): in general, Maclaurin series are Taylor series with \(a = 0\). The more terms we sum from the series, the more closely the polynomial approximates the target function. But do remember that this only works for a certain range of values of \(x\), namely those for which the series converges.
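To see the convergence concretely, here is a minimal sketch (again choosing \(f(x) = e^x\) as the example, because every derivative of \(e^x\) at \(0\) equals \(1\), so the coefficients are simply \(\frac{1}{n!}\)):

```python
import math

def maclaurin_exp(x, terms):
    """Partial sum of the Maclaurin series for e^x: sum of x^n / n!."""
    return sum(x**n / math.factorial(n) for n in range(terms))

x = 1.0
for terms in (2, 4, 8):
    # Each extra term brings the partial sum closer to math.exp(x).
    print(terms, maclaurin_exp(x, terms), math.exp(x))
```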

A specific example 

Let's use this formula to find an approximating function for \(\cos{x}\).
The derivatives here are:
\(f(x) = \cos{x}\)
\(f'(x) = -\sin{x}\)
\(f''(x) = -\cos{x}\)
\(f'''(x) = \sin{x}\)
\(f''''(x) = \cos{x}\) 
The derivatives thus continue to repeat in this cycle of four.

Therefore:
\(f(0) = \cos{0} = 1\)
\(f'(0) = -\sin{0} = 0\)
\(f''(0) = -\cos{0} = -1\)
\(f'''(0) = \sin{0} = 0\)
\(f''''(0) = \cos{0} = 1\), and so on…

Using the formula for the Maclaurin series, we can derive an approximate function:
\(f(x) = f(0) + f'(0)x + \frac{1}{2}f''(0)x^2 + \frac{1}{6}f'''(0)x^3 + \frac{1}{24}f''''(0)x^4 + ...\)
\(f(x) = 1 + 0x + \frac{1}{2}(-1)(x^2) + \frac{1}{6}(0)(x^3) + \frac{1}{24}(1)(x^4) + ...\)
\(f(x) = 1 - \frac{1}{2}x^2 + \frac{1}{24}x^4 - …\)
Therefore \(\cos{x}\) can be approximated by:
\(\cos{x} \approx 1 - \frac{1}{2}x^2 + \frac{1}{24}x^4 - …\), which most mathematicians would agree is pretty cool.
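The cosine expansion above can be sketched in a few lines of Python (using `math.cos` only to check the result):

```python
import math

def maclaurin_cos(x, terms):
    """Partial sum of the Maclaurin series for cos x:
    1 - x^2/2! + x^4/4! - ... (only the even powers survive)."""
    return sum((-1)**k * x**(2*k) / math.factorial(2*k) for k in range(terms))

x = 0.5
print(maclaurin_cos(x, 3))  # 1 - x^2/2 + x^4/24
print(math.cos(x))
# For x = 0.5 the two values already agree to about four decimal places.
```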

But most people are not mathematicians.

[Figure 4: We can see that the more terms we have, the better the approximation gets, and the more values for which it becomes valid]

So why is this useful?

There are many real-world uses of the Maclaurin series which we should all know about. To name just a few:
  • Computer Science & Engineering: Evaluating polynomials is easy: you just substitute values in, and exponentiation is simply repeated multiplication. If we want computers to be able to produce models of bridges swaying in the wind, for example, they need to be able to calculate functions such as sine and cosine. One way to do this is simply to evaluate the Maclaurin series to however many terms make the answer "accurate enough". Otherwise computers would need tables of values for all the known functions, which becomes problematic in terms of computation time and memory usage.
  • Disease Modelling: It is well known (and highly relevant) that, at least in their early stages, diseases spread exponentially. The function \(f(x) = e^x\) has its own Maclaurin series, so when we need to perform lots of calculations on this function very quickly, the Maclaurin approximation can help.
  • Physics: We can use this formula to help solve differential equations, such as those that underpin systems in simple harmonic motion, which, since everything in the universe is oscillating to some extent, is quite important.
  • Mathematics itself: Differentiation and integration of polynomials is much easier than of other functions. We can use Taylor series to derive results for functions other than polynomials, such as the fact that the derivative of sine is cosine. We can also use Taylor series to evaluate limits, generate small-angle approximations for trigonometric functions, and derive the "most beautiful" equation in mathematics, \(e^{i\pi}+1 = 0\).
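As a sketch of the first point, a truncated Maclaurin polynomial such as the cosine series can be evaluated using nothing but multiplications and additions via Horner's method (real math libraries use more refined schemes, but the idea is the same):

```python
import math

# Coefficients of 1 - x^2/2! + x^4/4! - x^6/6!, listed from the constant term up.
coeffs = [1.0, 0.0, -1/2, 0.0, 1/24, 0.0, -1/720]

def horner(coeffs, x):
    """Evaluate a polynomial using only multiplications and additions."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

x = 0.3
print(horner(coeffs, x), math.cos(x))  # already very close for small x
```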

In summary, the Maclaurin series is pretty important. I hope that I have given you an insight into how it works, how to use it on a given function, and, most importantly, why, in today's world, we need it.