BBC Micro Compendium by Jeremy Ruston

discussion of beeb/electron applications, languages, utils and educational s/w
User avatar
Richard Russell
Posts: 544
Joined: Sun Feb 27, 2011 10:35 am
Location: Downham Market, Norfolk
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by Richard Russell » Sat Jun 16, 2018 9:30 pm

BigEd wrote:
Sat Jun 16, 2018 5:05 pm
It would be mildly interesting to run a program like the above on one of your Z80 Basics.
I haven't got a Z80 version to hand, but BBC BASIC for Windows (prior to version 6) uses the identical algorithm. I've added its result to your table:

Code: Select all

Basic 2: sum is 1.77639201E-8, worst is 1.24176343E-9
Basic 4: sum is 2.06907208E-8, worst is 8.27842289E-10 
Basic 4.30 with fix: sum is 1.41910074E-8, worst is 4.38269447E-10 
BB4W v5.95a: sum is 2.34891165E-8, worst is 4.38269447E-10
I'm rather surprised that the worst-case value is identical to that of 4.30.

Richard.

User avatar
BigEd
Posts: 2174
Joined: Sun Jan 24, 2010 10:24 am
Location: West
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by BigEd » Sat Jun 16, 2018 9:43 pm

Thanks for running that test!

It might be that both algorithms make a 1 or 2 ULP error on the same input argument: it looks like a complicated number when reduced to a ratio and printed in decimal, but perhaps it's a really simple number.

User avatar
1024MAK
Posts: 8102
Joined: Mon Apr 18, 2011 4:46 pm
Location: Looking forward to summer in Somerset, UK...
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by 1024MAK » Sat Jun 16, 2018 10:52 pm

On an Amstrad NC200...
[Image IMG_7262.JPG: BASIC accuracy on an Amstrad NC200]
Mark

User avatar
Richard Russell
Posts: 544
Joined: Sun Feb 27, 2011 10:35 am
Location: Downham Market, Norfolk
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by Richard Russell » Sun Jun 17, 2018 9:44 am

1024MAK wrote:
Sat Jun 16, 2018 10:52 pm
On an Amstrad NC200...
Yes, confirming that BBC BASIC (Z80) and BBC BASIC for Windows (prior to v6) use the same algorithm.

The worst case figure being the same as Basic 4.30, but the sum being higher, shows the Chebyshev optimisation in action: it attempts to 'even out' the errors so they are similar across the range and therefore their sum will be greater than if the distribution had been less even.

Think of the situation where you approximate a transcendental function simply by truncating its theoretical power series expansion; in that case the resulting error will typically increase monotonically as the parameter increases. The effect of optimising the coefficients is to reduce the error at the top end whilst increasing it at the bottom end.

If you are lucky you will be able to use fewer coefficients whilst still achieving an acceptable maximum error, but the sum (or strictly RMS) error across the range will remain roughly the same as it was pre-optimisation. If I were to comment on that test program, I think it should have calculated the RMS error, not the sum-of-absolute-values.
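For anyone who wants to see the pre-optimisation behaviour directly, here's a minimal C sketch (mine, not taken from any of the BASICs) that evaluates the plain truncated series for LN over the same working range and prints the error, which grows steadily towards the ends of the interval:

Code: Select all

/* Sketch (not from the thread's BASICs): error of the unoptimised
   truncated series ln(x) = 2*(t + t^3/3 + t^5/5 + t^7/7 + t^9/9),
   t = (x-1)/(x+1), over sqrt(0.5) <= x < sqrt(2). */
#include <stdio.h>
#include <math.h>

int main(void) {
    for (double x = 0.71; x < 1.42; x += 0.05) {
        double t = (x - 1) / (x + 1);
        double t2 = t * t;
        /* Horner form of the truncated odd-power series */
        double lnx = 2 * t * (1 + t2 * (1.0/3 + t2 * (1.0/5 + t2 * (1.0/7 + t2/9))));
        printf("%5.2f   % .3e\n", x, lnx - log(x));
    }
    return 0;
}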

BBC BASIC (Z80) uses only four non-trivial coefficients in its calculation of LN(x), where SQR(0.5) <= x < SQR(2):

Code: Select all

      @% = &0A10

      x = 1.2345
      t = (x-1) / (x+1)
      lnx = 2 * t + 0.666666620178 * t^3 + 0.400008009281 * t^5 + 0.285153826582 * t^7 + 0.239241783041 * t^9

      PRINT lnx, LN(x)
If x is outside that range it is scaled by a power-of-2 and the final result adjusted accordingly.
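A hedged C sketch of that scaling step (the coefficients are the ones above; the frexp-based mechanics are my assumption about how such a reduction works, not a transcription of the Z80 code):

Code: Select all

/* Sketch of the power-of-2 range reduction: split x = m * 2^e with
   m in [sqrt(0.5), sqrt(2)), then ln(x) = ln(m) + e*ln(2). */
#include <stdio.h>
#include <math.h>

/* Richard's coefficients, valid for sqrt(0.5) <= x < sqrt(2) */
static double ln_core(double x) {
    double t = (x - 1) / (x + 1), t2 = t * t;
    return t * (2 + t2 * (0.666666620178 + t2 * (0.400008009281
              + t2 * (0.285153826582 + t2 * 0.239241783041))));
}

static double my_ln(double x) {
    int e;
    double m = frexp(x, &e);              /* x = m * 2^e, 0.5 <= m < 1 */
    if (m < 0.70710678118654752440) {     /* below sqrt(0.5): double it */
        m *= 2;
        e--;
    }
    return ln_core(m) + e * 0.69314718055994530942;   /* + e*ln(2) */
}

int main(void) {
    printf("%.12g  (true %.12g)\n", my_ln(1234.5), log(1234.5));
    return 0;
}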

Richard.

dominicbeesley
Posts: 654
Joined: Tue Apr 30, 2013 11:16 am
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by dominicbeesley » Tue Jun 19, 2018 3:07 pm

I've just stumbled across this thread; I've got a bit of bug chasing to do in my 6809 port of BASIC 4.32. One of the transcendental functions (I can't remember which) gives slightly different results. I ported it more or less blind (i.e. without trying to understand too deeply what was going on), but I suspect I'll have to get my head into the algorithms properly.

Did anyone find a copy/scan of the Compendium anywhere? CCH is a bit of a trek for me, and I'd love to get my hands on a copy!

D

User avatar
hoglet
Posts: 7694
Joined: Sat Oct 13, 2012 6:21 pm
Location: Bristol
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by hoglet » Tue Jun 19, 2018 3:14 pm

Dominic,

Did you port the bug fix detailed here?
https://github.com/hoglet67/BBCBasic4r32

Dave

dominicbeesley
Posts: 654
Joined: Tue Apr 30, 2013 11:16 am
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by dominicbeesley » Tue Jun 19, 2018 3:36 pm

Thanks Dave,

I'll have to check, but I suspect that I did... inadvertently - I will no doubt have recoded any table-addressing stuff to use the 16-bit facilities of the 6809. I just compared with some of the tests (viewtopic.php?t=10111#p121157) and my port comes out better than 4r32, but not right. It had me pulling my hair out trying to get 4.32 and the 6809 port to give identical results - it seems I was definitely barking up the wrong tree.

It's good to know I was probably on the right track with what I thought it was doing but not so good that it is still wrong! It will probably turn out to be something pretty stupid like a rounding error in the floating point routines!

D

User avatar
marcusjambler
Posts: 465
Joined: Mon May 22, 2017 11:20 am
Location: Bradford
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by marcusjambler » Tue Jun 19, 2018 9:44 pm

dominicbeesley wrote:
Tue Jun 19, 2018 3:07 pm
Did anyone find a copy/scan of the Compendium anywhere?
Er.... I bought a copy on Amazon last week :shock:

User avatar
marcusjambler
Posts: 465
Joined: Mon May 22, 2017 11:20 am
Location: Bradford
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by marcusjambler » Tue Jun 19, 2018 10:22 pm

BASIC on Z88
[Image: IMG_4031a.jpg]

User avatar
Richard Russell
Posts: 544
Joined: Sun Feb 27, 2011 10:35 am
Location: Downham Market, Norfolk
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by Richard Russell » Wed Jun 20, 2018 9:00 am

BB4W v6 and BBCSDL perform their floating-point calculations using the FPU, which on x86 CPUs has 80-bit precision. This makes it possible to evaluate 40-bit algorithms without the native accuracy affecting the results. Here you can see just how well the algorithm used by BBC BASIC (Z80) works:

Code: Select all

      REM This is intended to be run in BB4W v6 or BBCSDL
      @% = &0B14
      FOR x = 0.71 TO 1.41 STEP 0.05
        t = (x-1) / (x+1)
        lnx = 2 * t + 0.666666620178 * t^3 + 0.400008009281 * t^5 + 0.285153826582 * t^7 + 0.239241783041 * t^9
        PRINT x, lnx, LN(x), lnx-LN(x)
      NEXT

Code: Select all

                0.71      -0.34249030894      -0.34249030895    1.1441024316E-11
                0.76      -0.27443684569       -0.2744368457    1.0152711489E-11
                0.81      -0.21072103131      -0.21072103132    7.1939415552E-12
                0.86      -0.15082288973      -0.15082288973    6.9060718515E-12
                0.91     -0.094310679468     -0.094310679471     3.273291306E-12
                0.96      -0.04082199452      -0.04082199452    3.6760918203E-13
                1.01     0.0099503308532     0.0099503308532   -5.7004114563E-15
                1.06      0.058268908123      0.058268908124   -9.9050646428E-13
                1.11       0.10436001532       0.10436001532   -4.0421972681E-12
                1.16       0.14842000511       0.14842000512   -6.8158964936E-12
                1.21        0.1906203596       0.19062035961   -7.3381262411E-12
                1.26       0.23111172096       0.23111172096   -7.3755692444E-12
                1.31        0.2700271372       0.27002713721   -9.7487294929E-12
                1.36       0.30748469974       0.30748469975   -1.2511730584E-11
                1.41       0.34358970438       0.34358970439   -1.1411078408E-11

User avatar
BigEd
Posts: 2174
Joined: Sun Jan 24, 2010 10:24 am
Location: West
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by BigEd » Fri Jun 22, 2018 4:18 am

An interesting exercise - and handy to have an 80-bit world to investigate it. One thought, though: I think I'm right in saying that 0.6666666201781482 is a 40-bit clean version of the constant 0.666666620178 - is the result any different if you tweak the decimal representations of the constants such that your 80 bit version has all the trailing zeros that it should have, to be the same as the 40-bit constants in the original code?
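(For reference, one way to generate such "40-bit clean" values is to round the mantissa to 32 bits and scale back - a sketch, not anyone's actual tooling:)

Code: Select all

/* Sketch: round a value to a 32-bit mantissa (a "40-bit clean" value),
   so that an 80-bit evaluation carries the same trailing zeros as the
   original 40-bit constants. */
#include <stdio.h>
#include <math.h>

static long double clean40(long double v) {
    int e;
    long double m = frexpl(v, &e);                 /* v = m * 2^e, 0.5 <= m < 1 */
    return ldexpl(roundl(ldexpl(m, 32)), e - 32);  /* mantissa to 32 bits */
}

int main(void) {
    /* prints the full decimal expansion of the nearest 32-bit-mantissa value */
    printf("%.32Lf\n", clean40(0.666666620178L));
    return 0;
}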

User avatar
hoglet
Posts: 7694
Joined: Sat Oct 13, 2012 6:21 pm
Location: Bristol
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by hoglet » Fri Jun 22, 2018 6:14 am

BigEd wrote:
Fri Jun 22, 2018 4:18 am
An interesting exercise - and handy to have an 80-bit world to investigate it.
I think the exercise we did in C also ends up using 80-bit maths to measure the error in the approximations (it uses long double):
https://github.com/hoglet67/BBCBasic4r3 ... log_test.c

What's curious is that none of the log implementations we tried (Basic 2, 4.00 and 4.32) quite match Richard's implementation above. Maybe they would if they were represented in non-recursive form. Time for Wolfram Alpha again?

Dave
Last edited by hoglet on Fri Jun 22, 2018 6:16 am, edited 2 times in total.

User avatar
BigEd
Posts: 2174
Joined: Sun Jan 24, 2010 10:24 am
Location: West
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by BigEd » Fri Jun 22, 2018 9:14 am

(Ahem, yes, maybe we need one more decimal digit to get a 32 bit mantissa - but I see now that we actually want a 31 bit mantissa (because there's a sign bit) which is interesting, because (if I have it right) a 31 bit rounded version of that first coefficient is 0.666666620410978794097900390625 exactly - a tad larger. But maybe I'm confused about 31 vs 32 bit mantissas.)

Edit: I'm not sure we expect exact agreement between the Padé and the Chebyshev approaches to calculating an approximation?
Last edited by BigEd on Fri Jun 22, 2018 9:15 am, edited 1 time in total.

dominicbeesley
Posts: 654
Joined: Tue Apr 30, 2013 11:16 am
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by dominicbeesley » Fri Jun 22, 2018 10:40 am

Hi Dave,

I think the algorithms are right. I've been comparing to my BASIC port, which, to my chagrin, turns out to have been forked from a 4.00 source, not 4.32 (it's about 5 years since I started this, and it turns out I started with the wrong disassembly file!).

Anyway, I was playing with it and wondering why it was giving different results until, on re-encoding the intermediate results into BASIC-style 5-byte hex values, I noticed that the constants you've used were coming out "wrong" (well, different to those in the source).

i.e. the code on GitHub has

Code: Select all

const DOUBLE C1 =  0.546254168   ;
const DOUBLE C2 = -0.0513882861  ;
const DOUBLE C3 =  0.583293331   ;
const DOUBLE C4 = -0.0374986753  ;
const DOUBLE C5 =  0.750000063   ;
const DOUBLE C6 =  0.33333334    ;
const DOUBLE C7 = -0.5;
I ended up using

Code: Select all

const DOUBLE C[7] = { 
 0.54625416757L   ,
-0.051388286127L  ,
 0.58329333133L   ,
-0.037498675345L  ,
 0.75000006333L   ,
 0.33333333989L  ,
-0.5           };
This gives closer (but not identical) results. I suspect that whoever first transcoded from the BASIC fp constants to decimals missed off a few digits (I find >12 dp is required, but YMMV).

A quick 'n' dirty program to convert BASIC fp hex values to decimals:

Code: Select all

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <string.h>

void usage(FILE *fp, const char *msg) {

	if (msg) {
		fprintf(fp, "ERROR: %s\n\n", msg);
	}

	fprintf(fp, "fpdecode <hex>\nWhere <hex> is a 5 byte (10 digits, no spaces, no qualifiers) floating point value\n");
}

int hex_dec_nyb(const char *p) {
	if (*p >= '0' && *p <= '9')
		return (*p) - '0';
	else if (*p >= 'A' && *p <= 'F')
		return (*p) - 'A' + 10;
	else if (*p >= 'a' && *p <= 'f')
		return (*p) - 'a' + 10;
	else
	{
		usage(stderr, "Unrecognized hex digit");
		exit(2);	/* bail out: not a hex digit */
	}
}


int hex_dec_byte(const char *p) {
	return (hex_dec_nyb(p) << 4) | hex_dec_nyb(p+1);
}


int main(int argc, char *argv[]) {

	if (argc != 2 || strlen(argv[1]) != 10) {
		usage(stderr, "incorrect arguments");
		return 2;
	}

	const char *hex = argv[1];

	/* byte 0: excess-128 exponent; bytes 1-4: mantissa, MSB first.
	   The top stored mantissa bit is the sign; the true MSB is an
	   implied '1'. */
	int exp = hex_dec_byte(hex);
	unsigned long mant =
		((unsigned long)hex_dec_byte(hex+2) << 24) +
		((unsigned long)hex_dec_byte(hex+4) << 16) +
		((unsigned long)hex_dec_byte(hex+6) << 8) +
		((unsigned long)hex_dec_byte(hex+8));
	int sign = (mant & 0x80000000UL) != 0;
	mant = mant | 0x80000000UL;	/* restore the implied leading 1 */

	/* value = (mant / 2^32) * 2^(exp - 128) */
	long double num = ldexpl((long double)mant, exp - 0x80 - 32);
	if (sign) num = -num;

	printf("%2.12Lg\n", num);

	return 0;
}

I used this and then "tweaked" the constants a bit until they re-encoded to the correct values, using a (deliberately simplistic) routine:

Code: Select all

/* Deliberately simplistic: truncates the mantissa rather than
   rounding, and doesn't handle zero specially. */
void format_fp(DOUBLE d)
{
	int exp = 0;
	unsigned long s = 0;

	if (d < 0)
	{
		s = 0x80;
		d = -d;
	}
	/* normalise into [0.5, 1.0) */
	while (d >= 1.0) {
		d = d / 2.0;
		exp++;
	}
	while (d < 0.5 && d > 0.0) {
		d = d * 2.0;
		exp--;
	}

	/* excess-128 exponent byte, then the sign bit overlaid on the
	   implied-1 mantissa MSB */
	printf("|%02.02X|%08.08lX", (unsigned)(0x80 + exp),
	       (s << 24) | ((unsigned long)(d * 4294967296.0) & 0x7FFFFFFFUL));
}
After a few hours last night it turned out that my initial understanding of the algorithms wasn't at fault when porting, just a couple of daft errors with carry flags (one with the incorrect sense, one where the code was entered assuming C was set when it wasn't!).

I've still got something that is giving occasional rounding errors (or differences from a real master) in the reciprocal functions, but I need to double-check all my assumptions now that I realise I'm porting BASIC 4.00, not 4.32!

Hope this helps?

D

User avatar
Richard Russell
Posts: 544
Joined: Sun Feb 27, 2011 10:35 am
Location: Downham Market, Norfolk
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by Richard Russell » Fri Jun 22, 2018 11:44 am

BigEd wrote:
Fri Jun 22, 2018 4:18 am
I think I'm right in saying that 0.6666666201781482 is a 40-bit clean version of the constant 0.666666620178 - is the result any different if you tweak the decimal representations of the constants such that your 80 bit version has all the trailing zeros that it should have, to be the same as the 40-bit constants in the original code?
Here are the precise decimal values of the 40-bit coefficients, for what it's worth:

Code: Select all

a(3) = 0.66666662017814815044403076171875
a(5) = 0.4000080092810094356536865234375
a(7) = 0.285153826582245528697967529296875
a(9) = 0.2392417830415070056915283203125
Although the resulting 'difference' values change (because they are listed by the program with a meaningless precision) the actual LN values are identical to my earlier listing, to the 11 significant figures displayed.

Richard.

User avatar
Richard Russell
Posts: 544
Joined: Sun Feb 27, 2011 10:35 am
Location: Downham Market, Norfolk
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by Richard Russell » Fri Jun 22, 2018 11:52 am

BigEd wrote:
Fri Jun 22, 2018 9:14 am
I see now that we actually want a 31 bit mantissa (because there's a sign bit) which is interesting
BBC BASIC (Z80) uses a slightly different floating-point representation from the 6502 version, but I think in both cases the mantissa is effectively 32 bits, because the MSB is implied - it's always '1'.
I'm not sure we expect exact agreement between the Padé and the Chebyshev approaches to calculating an approximation?
Indeed not, and the fact that the Chebyshev optimisation was itself carried out using 40-bit floats might make the results slightly different.

Richard.

User avatar
Richard Russell
Posts: 544
Joined: Sun Feb 27, 2011 10:35 am
Location: Downham Market, Norfolk
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by Richard Russell » Fri Jun 22, 2018 12:00 pm

hoglet wrote:
Fri Jun 22, 2018 6:14 am
I think the exercise we did in C also ends up using 80-bit maths to measure the error in the approximations (it uses long double):
It depends on what compiler you used. In GCC (targeting an x86 CPU) 'long double' is 80 bits, but in (Microsoft's) Visual C it's 64 bits.

Edit: And beware, at least some versions of GCC don't set the FPU to 80-bit mode so by default give 64-bit results anyway!
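A quick way to check what a given compiler provides (a sketch; note it reports the type's width, not the FPU's runtime precision mode, so it won't catch the control-word issue):

Code: Select all

/* Sketch: check what 'long double' really is on this compiler.
   LDBL_MANT_DIG is 64 for x87 80-bit extended precision, 53 when
   long double is just a 64-bit double (e.g. Visual C). */
#include <stdio.h>
#include <float.h>

int main(void) {
    printf("sizeof(long double) = %zu bytes, mantissa = %d bits\n",
           sizeof(long double), LDBL_MANT_DIG);
    return 0;
}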

Richard.
Last edited by Richard Russell on Fri Jun 22, 2018 12:17 pm, edited 1 time in total.

User avatar
BigEd
Posts: 2174
Joined: Sun Jan 24, 2010 10:24 am
Location: West
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by BigEd » Fri Jun 22, 2018 12:27 pm

Ah, of course, the 32nd bit can be implicit. I knew that once, and now I know it again.

There's a lot to be said about converting between binary and decimal, although we don't normally need to know all the gory details. Here's an article with further pointers.

User avatar
ctr
Posts: 190
Joined: Wed Jul 16, 2014 2:53 pm
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by ctr » Fri Jun 22, 2018 10:54 pm

C99 and C++17 both support hexadecimal floating-point constants, so it should be possible to represent the beeb's 40-bit constants exactly.
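For instance, this sketch (my example, using Richard's precise a(3) from earlier in the thread) prints the exact hex-float spelling of a 40-bit constant, which can then be pasted back into C source verbatim:

Code: Select all

/* Sketch: C99 hexadecimal floating-point.  A 40-bit BBC BASIC value
   has at most a 32-bit mantissa, so an 80-bit long double holds it
   exactly; %La prints the exact hex-float spelling. */
#include <stdio.h>

int main(void) {
    long double a3 = 0.66666662017814815044403076171875L;  /* Richard's a(3) */
    printf("%La\n", a3);
    return 0;
}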
BigEd wrote:
Fri Apr 27, 2018 3:45 pm
Somehow I'd got the idea that early BBC Basic versions used the ratio of two polynomials, but that seems not to be the case.
You were right. The beeb uses continued fractions in this form:

Nn+x/
...
N3+x/
N2+x/
N1

I tried multiplying this out to see if I could figure out how to calculate the coefficients. This was a dead end, but the sequence you get as you add more terms (working up from the bottom) goes like this:

N2+x/N1
(N1N2N3+(N1+N3)x)/(N1N2+x)
(N1N2N3N4+(N1N4+N1N2+N3N4)x+x^2)/(N1N2N3+(N1+N3)x)

Each extra term flips it over and multiplies in another x (and stuff), so it is in fact a ratio of two polynomials. With seven constants N1..N7 the polynomials are both cubic. This is probably a well-known fact about continued fractions.
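Evaluated bottom-up, the beeb-style form is just a short loop - a C sketch with placeholder coefficients (the real N1..N7 live in the ROM, and are not reproduced here):

Code: Select all

/* Sketch: evaluating a beeb-style continued fraction
   N[n-1] + x/( ... N[1] + x/N[0] ) from the bottom up.
   The N[] values below are placeholders, NOT the ROM constants. */
#include <stdio.h>

static double cf_eval(const double N[], int n, double x) {
    double r = N[0];              /* innermost term, N1 */
    for (int i = 1; i < n; i++)
        r = N[i] + x / r;         /* one 'flip and add' per level */
    return r;
}

int main(void) {
    const double N[7] = { 1, 2, 3, 4, 5, 6, 7 };   /* placeholders */
    printf("%g\n", cf_eval(N, 7, 0.25));
    return 0;
}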

I still wanted to know how to figure out the constants without using Wolfram Alpha. Euler's method is straightforward but produces continued fractions like this:

c(0)+c(1)x/
1-c(2)x/
1+c(2)x-c(3)x/
1+c(3)x-c(4)x/
1+...

This has variables in both the addends and the numerators. I couldn't find a way to convert this to a beeb-style continued fraction.

So how does Wolfram Alpha produce continued fractions in this form?:

c(0)+c(1)x/
1+c(2)x/
1+c(3)x/
1+...

I found this olde PhD thesis (PDF). On page 71 algorithm 3.16 produces continued fractions in this form:

c(0)/
1+c(1)x/
1+c(2)x/
1+...

I implemented this [N.B. detail missing from the paper: b(n,0)=a(n) n=1,2,3...] and it works but doesn't directly correspond to the beeb's or Wolfram's version because they expect an undivided x at the top. But this is fixable.

Given a power series:

a(0)+a(1)z+a(2)z^2+a(3)z^3...

use the algorithm from the paper to get a continued fraction for

a(1)+a(2)z+a(3)z^2...

(i.e. discard a(0) and divide by z). Then

a(0)+c(0)x/
1+c(1)x/
1+c(2)x/
1+...

is the Wolfram-style continued fraction, which is easily converted to beeb-style.

I implemented this and it gives exactly the same values as Wolfram Alpha. However, once converted to beeb-style these are slightly different to the values used in BASIC 2. I guess this is because of the Chebyshev optimisation, but I have no idea how to do that.

User avatar
BigEd
Posts: 2174
Joined: Sun Jan 24, 2010 10:24 am
Location: West
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by BigEd » Sat Jun 23, 2018 9:14 am

Aha, I see now - I was expecting two calls to a polynomial function and then a single division, but some rearrangement has been done. It's a good idea to compute the smallest terms first when summing a series, to get better rounding - possibly there's something like that going on here.
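That effect is easy to demonstrate in isolation - a C sketch (a generic demo, nothing to do with the ROM code) summing 1/n^2 in single precision, where adding the smallest terms first lands much closer to a double-precision reference than adding the largest first:

Code: Select all

/* Sketch of the summation-order effect: in single precision the
   forward (big-terms-first) sum stops absorbing small terms once the
   partial sum dwarfs them; summing smallest-first does much better. */
#include <stdio.h>

int main(void) {
    const int N = 1000000;
    float fwd = 0, rev = 0;
    double ref = 0;
    for (int n = 1; n <= N; n++) fwd += 1.0f / ((float)n * n);  /* big first   */
    for (int n = N; n >= 1; n--) rev += 1.0f / ((float)n * n);  /* small first */
    for (int n = 1; n <= N; n++) ref += 1.0 / ((double)n * n);  /* reference   */
    printf("forward %.9f  reverse %.9f  reference %.9f\n", fwd, rev, ref);
    return 0;
}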

User avatar
Richard Russell
Posts: 544
Joined: Sun Feb 27, 2011 10:35 am
Location: Downham Market, Norfolk
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by Richard Russell » Sat Jun 23, 2018 11:32 am

ctr wrote:
Fri Jun 22, 2018 10:54 pm
I guess this is because of the Chebyshev optimisation but I have no idea how to do that.
I don't know if this is helpful, but somewhat to my amazement I've been able to find the original BBC BASIC program that I wrote in 1982 to do it, and which was used to calculate the coefficients used in BBC BASIC (Z80), BBC BASIC (86) and BBC BASIC for Windows (prior to v6):

Code: Select all

   10 PRINT "POLYNOMIAL OPTIMISATION BY THE CHEBYSHEV METHOD"
   20 PRINT "(C) R.T.RUSSELL, 1982"
   30 PRINT '"What is the maximum order of polynomial"
   40 INPUT "you wish to use (>> required number)",order
   50 DIM c(order,order),a(order)
   60 PRINT '"Calculating Chebyshev polynomials..."
   70 PROCCHEBY(order)
   80 PRINT '"Enter the required function (SIN, COS, EXP, LN, ATN, ASN)"
   90 INPUT "or <RETURN> to specify your own: "func$
  100 ON ERROR ON ERROR OFF:GOTO 80
  110 IF func$<>"" REPEAT READ F$:UNTIL F$=func$:READ term$,idea
  120 ON ERROR OFF
  130 IF func$="" PRINT '"Input the general form of the coefficient"
  140 IF func$="" INPUT "of the term in x^n: "term$
  150 ON ERROR ON ERROR OFF:GOTO 80
  160 FOR n=1 TO order
  170   a(n)=EVAL(term$)
  180 NEXT
  190 ON ERROR OFF
  200 PRINT '"What is the maximum value of x";
  210 IF func$<>"" PRINT " (I suggest ";idea;")";
  220 INPUT range$:range=EVALrange$
  230 n=order-1:REPEAT n=n+1:error=ABS(EVALterm$*range^n):UNTIL EVALterm$<>0
  240 accumulated=error
  250 :
  260 REPEAT
  270   PRINT
  280   @%=&2080B
  290   FOR n=1 TO order
  300     PRINT "a("+STR$(n)+")="TAB(7)a(n)
  310   NEXT
  320   @%=&90A
  330   PRINT '"Error introduced: ";error
  340   PRINT "Accumulated error: ";accumulated'
  350   REPEAT
  360     INPUT "Test data (RETURN for next iteration): "test
  370     IF test<0 PROCSAVE
  380     IF test>0 PROCRESULT
  390   UNTIL test=0
  400   PRINT '"Calculating new coefficients..."
  410   FOR n=1 TO order-1
  420     a(n)=a(n)-a(order)*c(n,order)*range^(order-n)/c(order,order)
  430   NEXT n
  440   error=a(order)*range^order/c(order,order)
  450   accumulated=accumulated+ABS(error)
  460   order=order-1
  470 UNTIL FALSE
  480 :
  490 DEF PROCCHEBY(order)
  500 LOCAL n,i
  510 c(0,0)=1:c(1,1)=1:n=0
  520 REPEAT n=n+1
  530   c(0,n+1)=-c(0,n-1)
  540   FOR i=1 TO n+1
  550     c(i,n+1)=2*c(i-1,n)-c(i,n-1)
  560   NEXT i
  570 UNTIL n+1=order
  580 ENDPROC
  590 :
  600 DEF PROCSAVE
  610 REPEAT
  620   INPUT '"Filename: "F$
  630   F=OPENOUT(F$)
  640 UNTIL F<>0
  650 FOR n = 0 TO order
  660   PRINT#F,a(n)
  670 NEXT
  680 CLOSE #F
  690 ENDPROC
  700 :
  710 DEF PROCRESULT
  720 result=0
  730 FOR n=0 TO order
  740   result=result+a(n)*test^n
  750 NEXT
  760 @%=&A0A:PRINT result
  770 ENDPROC
  780 :
  790 DEF FNfactorial(n):IF n<2 =1 ELSE =n*FNfactorial(n-1)
  800 DEF FNfact2(n): IF n<2 = 1 ELSE =n*FNfact2(n-2)
  810 :
  820 DATA SIN,(PI/4)^n*-1^(n DIV2)*(n MOD2)/FNfactorial(n),1.0
  830 DATA COS,(PI/4)^n*-1^((n+1)DIV2)*((n+1)MOD2)/FNfactorial(n),1.0
  840 DATA EXP,1/FNfactorial(n),.70
  850 DATA LN,2*(n MOD2)/n,.18
  860 DATA ATN,1/PI*-1^(n DIV2)*(n MOD2)/n,.42
  870 DATA ASN,(n MOD2)*FNfact2(n-2)^2/FNfactorial(n),1.0
To use the program you initially specify an order of polynomial much larger than what you want to end up with (e.g. 20); then each time you press Enter it eliminates the highest-order coefficient and attempts to adjust the remainder to compensate. You keep doing that until the overall error is just about acceptable. I make no apologies for any shortcomings in this 36-year-old code! :shock:
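For anyone following along in another language, here is a C transliteration sketch of a single elimination step (my reading of lines 410-450 plus PROCCHEBY in the listing above), seeded with the LN series from the DATA statements:

Code: Select all

/* Sketch of one Chebyshev economisation step, per lines 410-450 of
   the BASIC program: the x^k term is cancelled by subtracting a
   scaled Chebyshev polynomial T_k(x/range), perturbing every lower
   coefficient by a bounded amount. */
#include <stdio.h>
#include <math.h>

#define ORDER 9

/* c[i][j] = coefficient of x^i in T_j(x), via T_{n+1} = 2x*T_n - T_{n-1} */
static double c[ORDER + 1][ORDER + 1];

static void cheby(int order) {
    c[0][0] = 1; c[1][1] = 1;
    for (int n = 1; n < order; n++) {
        c[0][n + 1] = -c[0][n - 1];
        for (int i = 1; i <= n + 1; i++)
            c[i][n + 1] = 2 * c[i - 1][n] - c[i][n - 1];
    }
}

/* Drop the x^k term of a[], adjusting a[1..k-1]; returns the worst-case
   error introduced over [-range, range]. */
static double economise(double a[], int k, double range) {
    for (int n = 1; n < k; n++)
        a[n] -= a[k] * c[n][k] * pow(range, k - n) / c[k][k];
    double err = a[k] * pow(range, k) / c[k][k];
    a[k] = 0;                       /* the BASIC just decrements 'order' */
    return fabs(err);
}

int main(void) {
    double a[ORDER + 1] = {0};
    for (int n = 1; n <= ORDER; n += 2) a[n] = 2.0 / n;  /* LN: 2*(n MOD 2)/n */
    cheby(ORDER);
    double e = economise(a, ORDER, 0.18);                /* suggested range for LN */
    printf("error introduced: %g\n", e);
    for (int n = 1; n < ORDER; n++) printf("a(%d) = %.12g\n", n, a[n]);
    return 0;
}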

Richard.
Last edited by Richard Russell on Sat Jun 23, 2018 11:46 am, edited 1 time in total.

User avatar
BigEd
Posts: 2174
Joined: Sun Jan 24, 2010 10:24 am
Location: West
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by BigEd » Sat Jun 23, 2018 11:38 am

Excellent - thanks for unearthing and sharing!

User avatar
Richard Russell
Posts: 544
Joined: Sun Feb 27, 2011 10:35 am
Location: Downham Market, Norfolk
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by Richard Russell » Sat Jun 23, 2018 11:57 am

BigEd wrote:
Sat Jun 23, 2018 11:38 am
Excellent - thanks for unearthing and sharing!
I haven't in general got archives of code written back then, but it so happens that in 1996 I needed once again to do a Chebyshev optimisation - this time for calculating sines and cosines on a Motorola 56000 fixed-point DSP (as part of the Free-D Camera Tracking Decoder). So at that time I hunted out the old program, probably on 8" floppy, and it was then preserved in the archived Free-D project. What are the chances...

Richard.

User avatar
ctr
Posts: 190
Joined: Wed Jul 16, 2014 2:53 pm
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by ctr » Mon Jun 25, 2018 2:39 pm

Richard Russell wrote:
Sat Jun 23, 2018 11:32 am
I don't know if this is helpful, but somewhat to my amazement I've been able to find the original BBC BASIC program
It is, but I now realise I got the wrong end of the stick. This is useful for the polynomial approximations, but not the continued fraction ones.

I checked that the continued fraction coefficients used for LN in BASIC2 actually perform better than the unoptimised coefficients from Wolfram Alpha. And they do.

Code: Select all

BASIC2 original
RMS: 1.10498094226884E-10
MAX: 2.7879337727299E-10

BASIC2 with unoptimised continued fractions
RMS: 6.13708607235846E-10
MAX: 2.83993045746556E-09
But I've still no idea how to derive them.
Last edited by ctr on Mon Jun 25, 2018 2:50 pm, edited 1 time in total.

User avatar
BigEd
Posts: 2174
Joined: Sun Jan 24, 2010 10:24 am
Location: West
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by BigEd » Mon Jun 25, 2018 2:47 pm

This might help. Maybe.

User avatar
ctr
Posts: 190
Joined: Wed Jul 16, 2014 2:53 pm
Contact:

Re: BBC Micro Compendium by Jeremy Ruston

Post by ctr » Tue Jun 26, 2018 3:11 pm

BigEd wrote:
Mon Jun 25, 2018 2:47 pm
This might help. Maybe.
I used Wolfram Alpha to calculate the [3/3] Padé approximant of (ln(1+x)-x)/x^2 (as used in BASIC2's LN) and then converted it to a continued fraction. This gave the same seven terms as calculating the continued fraction directly. Padé approximants of Taylor series and truncated continued fraction representations of Taylor series seem to be basically the same thing.

I also wondered how a Chebyshev-optimised polynomial approximation converted to a continued fraction would fare. Unsurprisingly, this is much worse than the default continued fraction.

I'm out of ideas for the moment.
