Numerical Methods at work

Disclaimer:
Permission to use, copy, and distribute this software, and its documentation for any non-commercial purpose is hereby granted without fee, provided: THE SOFTWARE IS PROVIDED "AS-IS" AND WITHOUT WARRANTY OF ANY KIND, EXPRESS, IMPLIED OR OTHERWISE, INCLUDING WITHOUT LIMITATION, ANY WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. IN NO EVENT SHALL Henrik Vestermark, BE LIABLE FOR ANY SPECIAL, INCIDENTAL, INDIRECT OR CONSEQUENTIAL DAMAGES OF ANY KIND, OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER ADVISED OF THE POSSIBILITY OF DAMAGE, AND ON ANY THEORY OF LIABILITY, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

Numerical Papers

Below is a summary of the papers I have written over the past years.

Fast Polynomial Root Finder, Part One
In general, Newton’s method for finding roots of polynomials is an effective algorithm that is easy to both implement and use. However, a certain weakness is exposed when trying to find roots of a polynomial with multiple roots. This paper highlights the weakness and devises a modification to the general Newton algorithm that can effectively cope with the multiple-root issue and deal with the usual pitfalls of using Newton's method to find polynomial roots. This paper is part of a multi-part series on using the same framework to implement different root finder methods.
Read more ...
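To illustrate the core idea, here is a minimal double-precision sketch, not the paper's arbitrary-precision code: applying Newton's method to u(x) = f(x)/f'(x), whose roots are the roots of f but all simple, restores quadratic convergence at multiple roots. The function name `modified_newton` and the tolerances are illustrative.

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Newton applied to u(x) = f(x)/f'(x): u has the same roots as f, but all
// with multiplicity one, so quadratic convergence is restored at multiple
// roots.  The step simplifies to  dx = f*f' / (f'^2 - f*f'').
double modified_newton(std::function<double(double)> f,
                       std::function<double(double)> fp,
                       std::function<double(double)> fpp,
                       double x, int maxiter = 50) {
    for (int i = 0; i < maxiter; ++i) {
        double fx = f(x), fpx = fp(x);
        double denom = fpx * fpx - fx * fpp(x);
        if (denom == 0.0) break;               // guard against division by zero
        double dx = fx * fpx / denom;          // Newton step on u(x)
        x -= dx;
        if (std::fabs(dx) < 1e-12 * (1.0 + std::fabs(x))) break;
    }
    return x;
}
```

For example, f(x) = x³ − 3x + 2 = (x−1)²(x+2) has a double root at x = 1; plain Newton only converges linearly there, while the iteration above converges quadratically.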

October
2023

New Tool


Fast Polynomial Root Finder, Part Two
This paper elaborates on Part One and devises a modified Newton method that deals efficiently with polynomials with real coefficients. This paper is part of a multi-part series on using the same framework to implement different root finder methods.
Read more ...

October
2023

New Tool


Fast Polynomial Root Finder, Part Three
In this third paper we elaborate on higher-order methods for finding polynomial roots and devise a modified Halley method that deals efficiently with polynomials with real coefficients. This paper is part of a multi-part series on using the same framework to implement different root finder methods.
Read more ...

October
2023

New Tool


Fast Polynomial Root Finder, Part Four
In this fourth paper we elaborate on the Laguerre method for finding polynomial roots and devise a modified version that deals efficiently with polynomials with real coefficients. This paper is part of a multi-part series on using the same framework to implement different root finder methods.
Read more ...

November
2023

New Tool


Fast Polynomial Root Finder, Part Five
In this fifth paper we elaborate on higher-order methods for finding polynomial roots and devise a modified Durand-Kerner method that deals efficiently with polynomials with real coefficients. This paper is part of a multi-part series on using the same framework to implement different root finder methods.
Read more ...
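As a sketch of the underlying technique, here is the classic, unmodified Durand-Kerner iteration in double precision, which corrects all root estimates simultaneously. This toy version is illustrative only and is not the paper's modified method.

```cpp
#include <cassert>
#include <complex>
#include <vector>

using cplx = std::complex<double>;

// Evaluate a monic polynomial (coefficients highest degree first, leading
// coefficient 1) by Horner's rule at a complex point.
static cplx horner(const std::vector<double>& c, cplx z) {
    cplx r = c[0];
    for (size_t i = 1; i < c.size(); ++i) r = r * z + c[i];
    return r;
}

// Durand-Kerner: each estimate z_i is corrected by p(z_i) divided by the
// product of its distances to the other estimates, so all n roots converge
// together without any deflation.
std::vector<cplx> durand_kerner(const std::vector<double>& c, int iters = 200) {
    size_t n = c.size() - 1;                 // degree of the polynomial
    std::vector<cplx> z(n);
    cplx seed(0.4, 0.9);                     // standard non-real starting point
    z[0] = seed;
    for (size_t i = 1; i < n; ++i) z[i] = z[i - 1] * seed;
    for (int it = 0; it < iters; ++it) {
        for (size_t i = 0; i < n; ++i) {
            cplx denom(1.0, 0.0);
            for (size_t j = 0; j < n; ++j)
                if (j != i) denom *= (z[i] - z[j]);
            z[i] -= horner(c, z[i]) / denom;
        }
    }
    return z;
}
```

Running it on x³ − 1 recovers the three cube roots of unity.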

December
2023

New Tool


Fast Polynomial Root Finder, Part Six
In this sixth paper we elaborate on Ostrowski's multi-point method for finding polynomial roots and devise a modified version that deals efficiently with polynomials with real coefficients. This paper is part of a multi-part series on using the same framework to implement different root finder methods.
Read more ...

January
2024

New Tool


Fast Computation of Factorial in Arbitrary Precision
This paper is the subsequent chapter in a series that systematically addresses the real-world challenges of constructing an arbitrary-precision arithmetic toolkit. This installment specifically targets the algorithms and methodologies for calculating factorials, as well as falling and rising factorials, and binomial coefficients. Often categorized as basic operations in the literature, these functions are typically explained using straightforward recursion formulas. However, the paper argues that these simplistic approaches prove inadequate when the calculations are scaled to arbitrary-precision arithmetic. The computational intricacies and performance considerations come to the forefront, necessitating more sophisticated algorithms and optimization techniques. The paper aims to unpack these complexities, offering a more comprehensive understanding and effective methods for arbitrary-precision computation of these mathematical functions.
Read more ...
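One optimization technique of the kind the paper covers is binary splitting, sketched here with 64-bit integers (so it only works for small n); in the arbitrary-precision setting the same recursion is applied to big-number types, where balanced operand sizes are what make it pay off.

```cpp
#include <cassert>
#include <cstdint>

// Binary-splitting product of the integers in [lo, hi].  Splitting keeps the
// two factors of each multiplication balanced in size, which is what makes
// the approach fast once the operands become multi-precision.
uint64_t prod_range(uint64_t lo, uint64_t hi) {
    if (lo > hi) return 1;
    if (lo == hi) return lo;
    uint64_t mid = lo + (hi - lo) / 2;
    return prod_range(lo, mid) * prod_range(mid + 1, hi);
}

uint64_t factorial(uint64_t n) { return prod_range(1, n); }

// Falling factorial x(x-1)...(x-n+1) using the same splitting idea.
uint64_t falling_factorial(uint64_t x, uint64_t n) {
    return n == 0 ? 1 : prod_range(x - n + 1, x);
}

// Binomial coefficient as a falling factorial divided by a factorial.
uint64_t binomial(uint64_t n, uint64_t k) {
    return falling_factorial(n, k) / factorial(k);
}
```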

September
2023

New Tool


Fast Computation of Fibonacci Numbers in Arbitrary Precision
This paper addresses the computation of the Fibonacci sequence with arbitrary precision, recognizing that while the Fibonacci sequence is straightforward from a mathematical perspective, its calculation encounters limitations when using a 64-bit environment, specifically above the 93rd Fibonacci number. To overcome this limitation, we explore various methods for computing the Fibonacci numbers, including the classic method, the matrix exponentiation method, and the fast-doubling method. Additionally, Binet's direct formula is considered, but its accuracy is found to be constrained when using IEEE754 64-bit floating-point arithmetic. However, in the context of arbitrary precision, we propose an improved version that provides accurate results. Ultimately, a hybrid approach is presented as the preferred solution for efficiently computing the Fibonacci sequence with arbitrary precision.
Read more ...
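A minimal sketch of the fast-doubling method mentioned above, here in plain 64-bit arithmetic (exact up to F(93), the limit the abstract refers to); an arbitrary-precision version would replace `uint64_t` with a big-number type.

```cpp
#include <cassert>
#include <cstdint>
#include <utility>

// Fast-doubling: returns the pair (F(n), F(n+1)) using
//   F(2k)   = F(k) * (2*F(k+1) - F(k))
//   F(2k+1) = F(k)^2 + F(k+1)^2
// This needs only O(log n) multiplications, versus O(n) additions for the
// classic method.
static std::pair<uint64_t, uint64_t> fib_pair(uint64_t n) {
    if (n == 0) return {0, 1};
    auto [a, b] = fib_pair(n / 2);          // a = F(k), b = F(k+1)
    uint64_t c = a * (2 * b - a);           // F(2k)
    uint64_t d = a * a + b * b;             // F(2k+1)
    if (n % 2 == 0) return {c, d};
    return {d, c + d};
}

uint64_t fibonacci(uint64_t n) { return fib_pair(n).first; }
```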

August
2023

New Tool


Fast Computation of Pseudo-Random Numbers in Arbitrary Precision
This paper highlights Pseudo-Random Number Generators (PRNGs) and how to use common PRNGs found in C++ libraries to generate random numbers with arbitrary precision. We look at the current PRNGs available in the C++ libraries (mt19937, ranlux24, ranlux48, and others) and show implementations of new ones, such as the Xoshiro family of PRNGs and the ChaCha20 PRNG; the latter is rated for cryptographic applications.
Read more ...

June
2023

New Tool


Fast Computation of Prime Numbers in Arbitrary Precision.
In this paper, we highlight some methods for determining whether a number is prime or composite. We look at the traditional brute-force methods and some of the probabilistic methods, like the Miller-Rabin, Solovay-Strassen, and Baillie-PSW tests, and discuss the best strategy for primality testing of numbers exceeding the limits of a 64-bit environment.
Read more ...

May
2023

New Tool


Fast Computation of Stirling numbers in Arbitrary Precision.
This is a continuation of the series of papers dealing with the practical aspects of implementing an arbitrary precision math package. The paper describes how to efficiently generate the Stirling numbers of the first, second, and third kind (the third kind is also known as the Lah numbers), needed for a complete arbitrary precision math package.
Read more ...
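The standard triangular recurrence behind such a generator can be sketched as follows, here in 64-bit arithmetic (the paper's setting would use arbitrary-precision integers, since Stirling numbers overflow 64 bits quickly).

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Stirling numbers of the second kind via the recurrence
//   S(n,k) = k*S(n-1,k) + S(n-1,k-1),  S(0,0) = 1.
// The same triangular scheme works for the first kind and for the Lah
// numbers; only the recurrence coefficients change.
uint64_t stirling2(unsigned n, unsigned k) {
    std::vector<std::vector<uint64_t>> S(n + 1, std::vector<uint64_t>(k + 1, 0));
    S[0][0] = 1;
    for (unsigned i = 1; i <= n; ++i)
        for (unsigned j = 1; j <= k && j <= i; ++j)
            S[i][j] = j * S[i - 1][j] + S[i - 1][j - 1];
    return S[n][k];
}
```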

March
2023

New Tool


Fast Computation of Math Constants in Arbitrary Precision.
This is a continuation of the series of papers dealing with the practical aspects of implementing an arbitrary precision math package. The paper describes the many constants, like the Euler-Mascheroni, Catalan, and Apéry (ζ(3)) constants, needed for a complete arbitrary precision math package.
Read more ...

February
2023

New Tool


Fast Gamma, Beta, Error, Zeta function & Bernoulli numbers for Arbitrary Precision.
This is a continuation of a series of papers dealing with the practical aspects of implementing an arbitrary precision math package. The paper describes the gamma, beta, error, zeta, and Lambert W0 functions in arbitrary precision. Some of these functions rely on factorials, binomial coefficients, Bernoulli numbers, and other low-level functions, which are also described, leading up to the description of the gamma, beta, error, zeta, and Lambert W0 functions. We show various methods and how to compute them with arbitrary precision.
Read more ...

January
2023

New Tool


The Math behind arbitrary precision
We are all used to the fast microprocessors available nowadays, where basic arithmetic, trigonometric, and logarithmic functions are computed at lightning speed. However, when building arbitrary-precision integer and floating-point packages that can handle anywhere from a few to several million digits, it is back to the basics of math to build a package with reasonable speed.
This paper describes the underlying math behind this package and is a completely updated and expanded version of the original paper from 2013.
Read more ...

February
2023

New Tool
Original
2013


Fast Exp() calculation for Arbitrary Precision number
Usually, when implementing an arbitrary precision math package you would use the standard Taylor series for calculating e^x for arbitrary precision. The Taylor series for e^x is not particularly fast in its raw form. However, you can apply techniques that significantly improve the performance of the method. We discuss the various methods for calculating e^x and elaborate on techniques like clever argument reduction and coefficient scaling to improve the performance of the method.
As usual, we show the actual C++ source for the computation using the author’s own arbitrary precision Math library.
Read more ...
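The argument-reduction technique mentioned above can be sketched in double precision: halve the argument k times, sum a short Taylor series, then square the result k times, since exp(x) = exp(x/2^k)^(2^k). The choices k = 8 and 12 terms are illustrative, not the paper's tuning.

```cpp
#include <cassert>
#include <cmath>

// Argument reduction for exp: shrink x by 2^k so the Taylor series converges
// in very few terms, then undo the reduction by repeated squaring.  In
// arbitrary precision, k is chosen to balance series length against the
// extra squarings.
double exp_reduced(double x, int k = 8, int terms = 12) {
    double r = x / (double)(1 << k);        // reduced argument
    double sum = 1.0, term = 1.0;
    for (int n = 1; n < terms; ++n) {       // Taylor series 1 + r + r^2/2! + ...
        term *= r / n;
        sum += term;
    }
    for (int i = 0; i < k; ++i) sum *= sum; // square k times: exp(r)^(2^k)
    return sum;
}
```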

June
2024

New Tool
Original
2022


Practical Implementation of Spigot Algorithms for transcendental constants
This paper examines the various modern versions of spigot algorithms for calculating transcendental constants like π, e, ln(2) and ln(10) to unlimited precision. It lays out the algorithms and the timings for the constants and compares them with a traditional implementation using arbitrary precision arithmetic. It is found that the performance of the spigot algorithms beats the traditional arbitrary precision method by several factors, and they are therefore recommended instead when performance is needed.
Read more ...
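As a concrete example of the spigot idea, here is a Rabinowitz-Wagon style spigot for e: the fractional part of e is held in the mixed-radix base (1/2!, 1/3!, ...) where every "digit" is initially 1, and multiplying by 10 with carries produces one decimal digit per pass using only small-integer arithmetic. The guard-term count is a rough, generous choice, not a tuned bound.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Spigot for e: a[i] is the digit of weight 1/i! in mixed radix.  Each pass
// multiplies the fractional part by 10; the carry out of position 2 is the
// next decimal digit.
std::string e_digits(int n) {
    int m = n + 30;                          // guard terms; m! >> 10^n
    std::vector<int> a(m + 1, 1);            // a[2..m] = 1 represents e - 2
    std::string out = "2.";
    for (int d = 0; d < n; ++d) {
        int carry = 0;
        for (int i = m; i >= 2; --i) {
            int x = 10 * a[i] + carry;
            a[i] = x % i;                    // keep digit below its base i
            carry = x / i;                   // propagate the excess leftwards
        }
        out += char('0' + carry);            // carry is the next decimal digit
    }
    return out;
}
```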

August
2022


Original
2017


Practical Implementation of π Algorithms
This paper examines various modern versions of algorithms for calculating π that could be the basis for arbitrary precision implementations, opening up the calculation of π to billions or trillions of digits. Let it be noted that there is no engineering or practical reason to calculate π beyond the limits of the IEEE754 standard for floating point precision as found in PCs and other computers. However, the quest for more precision has led to the discovery of many modern and faster algorithms, which are presented in this paper. This revision contains a new section on the binary splitting methods.
As of 2022, the record is 100 trillion digits of π.
Read more ...

August
2022


Original
2016


Fast Hyperbolic functions for Arbitrary Precision number
This is a follow-up to a previous paper that describes the math behind arbitrary precision numbers. First of all, the original paper was written back in 2013 and quite a few things have happened since then; secondly, I came across some other interesting methods for calculating the hyperbolic functions. The paper describes in more detail how to do the hyperbolic functions sinh(x), cosh(x), tanh(x) and the inverse arcsinh(x), arccosh(x), and arctanh(x) calculations with arbitrary precision. Furthermore, we outline some traditional methods but also introduce an improved version that makes the calculation 5-10 times faster than the original method used in the author's own arbitrary precision math packages.
Read more ...

June
2024

New Tool
Original
2022


Fast Trigonometric functions for Arbitrary Precision number
When implementing an arbitrary precision math package you would use the standard Taylor series for calculating sin(x), cos(x), arcsin(x), and arctan(x) for arbitrary precision, while tan(x) and arccos(x) can be derived from sin(x) or cos(x). The Taylor series for the trigonometric functions are not particularly fast in their raw form. However, you can apply techniques that significantly improve the performance of the method. We discuss the various methods for calculating trigonometric functions and elaborate on techniques like clever argument reduction and coefficient scaling to improve the performance of the method. Furthermore, we analyze some Newton methods for calculating some of the trigonometric functions. As usual, we show the actual C++ source for the computation using the author’s own arbitrary precision Math library.
Read more ...

July
2024

New Tool
Original
2022


Fast Log() calculation for Arbitrary Precision number
Usually, when implementing an arbitrary precision math package you would use the standard Taylor series for calculating ln(x) (log_e(x)) for arbitrary precision. The Taylor series for ln(x) is not particularly fast in its raw form. However, you can apply techniques that significantly improve the performance of the method. We discuss the various methods for calculating ln(x) and elaborate on techniques like clever argument reduction and coefficient scaling to improve the performance of the method. Furthermore, we analyze the Newton and Halley methods for calculating ln(x) and finally go over the AGM method, which is by far the fastest method of them all. As usual, we show the actual C++ source for the computation using the author’s own arbitrary precision Math library.
Read more ...
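A double-precision sketch of the Newton approach discussed above: solving exp(y) = x turns a logarithm into one exp evaluation per iteration, with quadratic convergence. The starting guess y0 would normally come from a low-precision hardware log; here it is simply passed in, and the tolerances are illustrative.

```cpp
#include <cassert>
#include <cmath>

// Newton's method for ln(x): solve exp(y) - x = 0, giving the iteration
//   y <- y + x*exp(-y) - 1,
// which doubles the number of correct digits per step.
double log_newton(double x, double y0) {
    double y = y0;
    for (int i = 0; i < 40; ++i) {
        double dy = x * std::exp(-y) - 1.0;  // Newton correction
        y += dy;
        if (std::fabs(dy) < 1e-15 * (1.0 + std::fabs(y))) break;
    }
    return y;
}
```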

February
2023

New Tool
Original
2022


Fast Square Root & Inverse (1/x) calculation for Arbitrary Precision number
Usually, when implementing an arbitrary precision math package you would use the standard Newton iteration as the preferred method for calculating the square root, the inverse (1/x), or the nth root for arbitrary precision. The Newton method has a convergence rate of two, meaning that the number of correct digits doubles with each iteration. This is traditional and has been implemented in various arbitrary precision packages. However, there exist other methods with higher-order convergence rates (e.g. Halley’s method with a cubic convergence rate). We examine whether using higher-order methods (e.g. Halley) is worth the extra work and compare them with second-order methods (e.g. Newton). As usual, we show the actual C++ source for the computation using the author’s own arbitrary precision Math library.
Read more ...
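The second-order (Newton) iterations mentioned above can be sketched in double precision as follows; both forms are division-free, which is what makes them attractive for multi-precision work. The starting guesses are assumed to come from ordinary hardware floating point, and the fixed iteration counts are illustrative.

```cpp
#include <cassert>
#include <cmath>

// Division-free Newton iterations, the workhorses of multi-precision
// division and square root:
//   reciprocal:    x <- x * (2 - a*x)        converges to 1/a
//   inverse sqrt:  y <- y * (3 - a*y*y) / 2  converges to 1/sqrt(a)
// sqrt(a) is then recovered as a * (1/sqrt(a)); each step doubles the
// number of correct digits (quadratic convergence).
double reciprocal(double a, double x0) {
    double x = x0;                          // x0 must satisfy 0 < x0 < 2/a
    for (int i = 0; i < 60; ++i) x = x * (2.0 - a * x);
    return x;
}
double sqrt_newton(double a, double y0) {
    double y = y0;                          // rough guess for 1/sqrt(a)
    for (int i = 0; i < 60; ++i) y = y * (3.0 - a * y * y) / 2.0;
    return a * y;                           // sqrt(a) = a * (1/sqrt(a))
}
```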

March
2023

New Tool
Original
2022


Fast Conversion from decimal string to Arbitrary Precision number
This is a follow-up to a previous paper that describes a very fast method to convert an arbitrary precision number to a string. This paper describes the opposite process: converting a string of digits to an arbitrary precision number. We all know how to convert a number string to simple language types like integer and float using basic encoding techniques. However, how does that stack up when the string contains 1,000 to 1M digits, or even more, and must be converted to an arbitrary precision number?
This paper describes the issues with the basic conversion algorithm and outlines a new, improved algorithm that speeds up the process by a factor of more than 20,000.
Read more ...
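The divide-and-conquer flavor of such a conversion can be sketched with 64-bit values standing in for big numbers; the function names are illustrative, not the library's API. Splitting replaces the O(n²) digit-by-digit Horner loop with recursion plus one multiplication per level, which becomes sub-quadratic once a fast big-number multiply is available.

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// 10^n for small n (stand-in for a cached big-number power table).
static uint64_t pow10_u64(size_t n) {
    uint64_t p = 1;
    while (n--) p *= 10;
    return p;
}

// Divide-and-conquer decimal conversion: split the digit string in half,
// convert each half recursively, and combine with one multiplication:
//   value = hi * 10^(len of lo half) + lo.
// Demo only: a real implementation works on an arbitrary-precision type.
uint64_t from_decimal(const std::string& s) {
    if (s.size() <= 4)                       // base case: short enough to loop
        return (uint64_t)std::stoull(s);
    size_t half = s.size() / 2;
    uint64_t hi = from_decimal(s.substr(0, s.size() - half));
    uint64_t lo = from_decimal(s.substr(s.size() - half));
    return hi * pow10_u64(half) + lo;
}
```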

March
2023


Fast Conversion from Arbitrary Precision number to a decimal string
This paper came to light when I finished writing a new version of my arbitrary precision library, after converting the internal format from a decimal base (where each arbitrary precision number was stored as a decimal string) to a binary form (where the number is stored as a vector of binary digits). Everything in the conversion went smoothly until I began to test the performance of the library. The internal arithmetic was fast for all the usual operations like +, -, *, /, as expected when using binary instead of decimal digits. However, when I tested the initialization of an arbitrary precision number from a string, and the conversion back from the internal binary form to the decimal representation of the number, the performance went south when handling more than a few thousand digits. To improve the performance, I outline a new way to handle this conversion for large numbers of a million digits or more.
Read more ...

March
2023

New Tool
Original
2021


Practical Implementation of Polynomial root finders
This paper examines many of the practical issues arising from designing and implementing polynomial root finders. If you look into this, you will discover that there exist many different methods for finding the roots of a polynomial. Some are by today's standards considered obsolete, others are still hanging around, and some are considered state of the art.
This paper goes through the practical issues arising in designing and implementing these different methods. It also highlights some of the many differences between the methods, and there is a discussion of how to overcome the typical loss of accuracy when dealing with roots of multiplicity higher than one.
Read more ...

May
2020


Taylor Shifting of Polynomial
This paper presents two classic methods for polynomial Taylor shifting: the Shaw-Traub method and the Horner method. We present them using the matrix form most often found in the literature. However, both methods' matrix forms can be boiled down to a set of simple row operations.
Read more ...
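The row-operation view mentioned above amounts to repeated synthetic division; a minimal double-precision sketch:

```cpp
#include <cassert>
#include <vector>

// Taylor shift by repeated synthetic division (the Horner/Ruffini scheme):
// each pass of the inner loop is one synthetic division by (x - s), and the
// successive remainders are exactly the Taylor coefficients of p about s,
// so afterwards c[] holds the coefficients of p(x + s).
// Coefficients are stored lowest degree first: p(x) = c[0] + c[1]x + ...
std::vector<double> taylor_shift(std::vector<double> c, double s) {
    int n = (int)c.size() - 1;              // degree of the polynomial
    for (int k = 0; k < n; ++k)
        for (int j = n - 1; j >= k; --j)
            c[j] += s * c[j + 1];
    return c;
}
```

For example, shifting p(x) = x³ by s = 2 yields (x+2)³ = 8 + 12x + 6x² + x³.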

May
2020


The Fundamental Financial Equation
The Fundamental Financial Equation links the five variables present value (PV), future value (FV), periodic payment (PMT), number of periods (NP), and interest rate (IR) together in this equation:
(PV+PMT(1+c·IR)/IR)((1+IR)^NP-1)+PV+FV=0
This can be useful in a number of scenarios, including mortgage calculations. This paper highlights the equation and devises a method to calculate the remaining variable when only 4 of the 5 variables are known, including the interest rate, which is found with a Newton iteration.
Read more ...
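A small double-precision sketch of the approach: evaluate the equation as written (with c = 0 for end-of-period payments, c = 1 for begin-of-period) and solve for IR with a Newton iteration. The numerical derivative, the solver name, and the tolerances are illustrative choices, not the paper's.

```cpp
#include <cassert>
#include <cmath>

// The fundamental financial equation, expanded form:
//   PV*(1+IR)^NP + PMT*(1+c*IR)/IR * ((1+IR)^NP - 1) + FV = 0
double tvm(double PV, double PMT, double FV, double IR, double NP, double c = 0) {
    double g = std::pow(1.0 + IR, NP);
    return PV * g + PMT * (1.0 + c * IR) / IR * (g - 1.0) + FV;
}

// Solve for the interest rate with a Newton iteration on a forward-difference
// derivative; the other four variables have closed-form solutions.
double solve_rate(double PV, double PMT, double FV, double NP,
                  double guess = 0.01, double c = 0) {
    double ir = guess;
    for (int i = 0; i < 100; ++i) {
        double h = 1e-7;
        double f = tvm(PV, PMT, FV, ir, NP, c);
        double df = (tvm(PV, PMT, FV, ir + h, NP, c) - f) / h;
        double step = f / df;
        ir -= step;
        if (std::fabs(step) < 1e-12) break;
    }
    return ir;
}
```

For a 30-year mortgage of 100,000 at 0.5% per month, solving back from the payment recovers the monthly rate.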

May
2024


Original
2016


A Practical Implementation of Interval Arithmetic
Interval arithmetic is a mathematical technique that handles ranges of values rather than precise numbers. Instead of working with a single value, calculations are performed with intervals representing the entire range of possible values. Each interval has a lower and an upper bound, representing the minimum and maximum possible values respectively. You can find examples of interval arithmetic template classes for the C++ programming language, but none that are easy to understand and implement. That is why this paper, “A Practical Implementation of Interval Arithmetic”, was written. The paper highlights the implementation of a general interval template class supporting the float and double types of the C++ programming language.
Read more ...
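A stripped-down sketch of the idea, double only, and ignoring the directed rounding that a real implementation such as the paper's template class must handle:

```cpp
#include <cassert>
#include <algorithm>

// A toy interval type: each value is a [lo, hi] range, and every operation
// returns a range guaranteed (up to rounding) to contain all possible
// results.  A production class must also round lo down and hi up; this
// sketch leaves that out to keep the arithmetic rules visible.
struct Interval {
    double lo, hi;
};

Interval operator+(Interval a, Interval b) { return {a.lo + b.lo, a.hi + b.hi}; }
Interval operator-(Interval a, Interval b) { return {a.lo - b.hi, a.hi - b.lo}; }
Interval operator*(Interval a, Interval b) {
    // The extremes of a product lie among the four endpoint products.
    double p[4] = {a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi};
    return {*std::min_element(p, p + 4), *std::max_element(p, p + 4)};
}
```

For example, [1,2] · [−3,4] = [−6,8]: the true product of any pair of values from the two ranges always lies inside the result.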

February
2024

New Tool
Original
2015


A Modified Newton and higher orders Iteration for multiple roots
In general, Newton’s method for finding roots of polynomials is an effective algorithm that is easy to both implement and use. However, a certain weakness is exposed when trying to find roots of a polynomial with multiple roots. This paper highlights the weakness and devises a modification to the general Newton algorithm that can effectively cope with the multiple-root issue. Furthermore, we also address a solution for higher-order methods, including Halley’s and Householder’s 3rd-order methods.
Read more ...

March
2013


Polynomial deflation strategy when deflating polynomial roots
This paper outlines a polynomial deflation strategy for root finding methods that find one or two roots at a time; the found roots are in turn deflated out of the polynomial and the process is repeated to find one or two more roots until all roots have been found. Technically you can use forward deflation, backward deflation, or a hybrid composite deflation that combines the advantages of both the forward and backward techniques to preserve the accuracy of the deflated polynomial.
Read more ...

March
2013


Stopping criteria for polynomial root finders
Finding adequate stopping criteria for polynomial root finders is not always easy. Make the stopping criteria too aggressive and you will never converge to an acceptable root; make them too lax and you find roots with less accuracy than the machine precision actually allows. This paper discusses stopping criteria based on the round-off errors when evaluating a polynomial at a given real or complex point.
Read more ...

February
2023


Original
2013