Re: Floating point: sine, cosine etc.

From: Jesus Cea <jcea@jcea.es>
Date: Thu, 13 Oct 2022 04:51:12 +0200
Message-ID: <a1d654a5-8386-d212-24f9-e06d659cca50@jcea.es>
On 30/9/22 12:55, David Roberts wrote:
> Search for Taylor expansions or Taylor series. I found a website this 
> morning that tells you how to calculate the coefficients. Unfortunately, 
> you have to keep differentiating the original function for each term you 
> require.
Besides Taylor series, check "Chebyshev" and even "minimax"
polynomials. There is a huge body of mathematics about approximating a
function with polynomials. This was high school material in Spain in my
day; I don't know about today.

This is quite a basic topic; I am surprised by how long this thread is
growing.

Search online:

function approximation polynomials
minimax approximation
Taylor series / Taylor expansion

You can find Taylor expansions for many popular functions everywhere
online. For example, sin(x) is:

sin(x) ~= x - x^3/3! + x^5/5! - x^7/7! + ... (you can add more terms)

where x is expressed in radians (and you do range reduction as needed).
The length of your polynomial is a balance between speed and precision.
Also, a Taylor expansion is seeded from a base value (zero in this
case), and moving far away from it gives worse approximations. That is,
the error is not constant over the convergence interval (many other
approaches behave better than Taylor here). For instance, sin(0) = 0
with no error even using a single-term expansion like sin(x) ~= x. But
at pi/2 the true value of sin(x) is 1, while the Taylor expansion gives
back:

One term:    1.571 (massive error)
Two terms:   0.9248
Three terms: 1.00452
Four terms:  0.9998
Five terms:  1.0000035
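
A quick sketch in Python that reproduces these numbers (the function
name taylor_sin is mine, just for illustration):

    import math

    def taylor_sin(x, terms):
        # Partial sum of the Taylor series of sin around 0:
        # odd powers 1, 3, 5, ... with alternating signs.
        total = 0.0
        for n in range(terms):
            k = 2 * n + 1
            total += (-1) ** n * x ** k / math.factorial(k)
        return total

    for terms in range(1, 6):
        print(terms, "->", round(taylor_sin(math.pi / 2, terms), 7))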

As you can see, the error decreases with more terms. The error also
shrinks near 0, so range reduction is a must. You can even use
symmetries to reduce the range to [0..pi/4].
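
For instance, a sketch of the reduction to [0..pi/2] using periodicity
and the symmetry sin(pi - x) = sin(x); folding further down to
[0..pi/4] additionally needs a cosine polynomial, via
sin(x) = cos(pi/2 - x). Names are mine again:

    import math

    def taylor_sin(x, terms=5):
        # Same partial sum as in the previous sketch.
        return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
                   for n in range(terms))

    def sin_reduced(x):
        # Reduce to [0, 2*pi) using the 2*pi periodicity.
        x = math.fmod(x, 2.0 * math.pi)
        if x < 0.0:
            x += 2.0 * math.pi
        # sin(x) = -sin(x - pi) maps [pi, 2*pi) onto [0, pi).
        sign = 1.0
        if x >= math.pi:
            x -= math.pi
            sign = -1.0
        # sin(x) = sin(pi - x) folds (pi/2, pi) onto (0, pi/2).
        if x > math.pi / 2.0:
            x = math.pi - x
        return sign * taylor_sin(x)

With the argument always in [0..pi/2], the five-term polynomial is good
to about five decimal places everywhere (the 1.0000035 above is the
worst case).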

Taylor is not the best approximation technique, but it is quite simple
to understand, and the terms are easily available online for many
practical functions; you don't need to derive them yourself.

To evaluate the polynomial efficiently, try Horner's method:
<https://en.wikipedia.org/wiki/Horner%27s_method>.
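
A minimal sketch of how that looks for the four-term sine polynomial
above, evaluated in powers of y = x^2 so that only the odd powers of x
appear (function name is mine):

    def sin_horner(x):
        # Coefficients of 1 - y/3! + y^2/5! - y^3/7!, highest power first.
        coeffs = (-1.0 / 5040.0, 1.0 / 120.0, -1.0 / 6.0, 1.0)
        y = x * x
        acc = 0.0
        for c in coeffs:
            acc = acc * y + c   # one multiply and one add per coefficient
        return x * acc          # multiplying by x restores the odd powers

    # sin_horner(1.5707963) -> 0.99984... (the four-term value above)

This evaluates the whole polynomial with only multiplications and
additions, one pair per coefficient, and no explicit powers.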

A good practical book about this and other related topics is "Numerical
Methods: Algorithms and Applications" by Laurene Fausett. Chapters 8-10
cover numerical methods for data interpolation and approximation.
<https://www.amazon.com/Numerical-Methods-Applications-Laurene-Fausett/dp/0130314005>.

You could check the trig and log routines of the regular Commodore
BASIC. I seem to remember that they used Taylor expansions. For
example,
<https://github.com/Project-64/reloaded/blob/master/c64/firmware/C64LD11.S>,
line 8799, constants at line 8879.

For an interesting twist on trigonometric/logarithmic functions, check
the CORDIC algorithm. It is quite a different approach, and it is kind
of magical when it "clicks" in your mind.

I think this email can be read as aggressive. Sorry. It is 5 AM here and 
my last couple of nights have been hard. This one is going to be hard too.

-- 
Jesús Cea Avión                         _/_/      _/_/_/        _/_/_/
jcea@jcea.es - https://www.jcea.es/       _/_/    _/_/  _/_/    _/_/  _/_/
Twitter: @jcea                           _/_/    _/_/          _/_/_/_/_/
jabber / xmpp:jcea@jabber.org     _/_/  _/_/    _/_/          _/_/  _/_/
"Things are not so easy"      _/_/  _/_/    _/_/  _/_/    _/_/  _/_/
"My name is Dump, Core Dump"   _/_/_/        _/_/_/      _/_/  _/_/
"El amor es poner tu felicidad en la felicidad de otro" - Leibniz