There are many interesting ways to calculate the number pi: geometric constructions, physical experiments using random numbers, and a huge number of different formulas from simple to complex. These methods are well researched, and their characteristics are known: how much time and how many computing resources each one needs. It is always interesting to improve an existing method so that it works faster and more simply.

The Madhava–Gregory–Leibniz formula is a simple sum of fractions: one divided by each odd number from 1 to infinity, with the sign of the fraction changing at every step. For example, take the first term 1, the second term 1/3 with a minus sign, the third 1/5 with a plus sign, etc. For an infinite number of terms, this sum equals one quarter of pi, so to get pi we multiply the result by 4.

A simple program in C can calculate the value of pi by summing this series. At each step, the partial sum (white color) is far from the pi line (blue color). To get the exact value, we would need to continue the calculation process forever. So how do we get a result? It all depends on the goal. Our goal is to minimize the deviation (red color) from the pi line (blue color) within a limited number of steps. At each step, the white line crosses the pi line, and the midpoint of each segment (red color) is closer to the line than its edges (white color). Thus, at each step the partial sum itself gives the maximum deviation (white color) from the line, which is the inverse of our goal of minimizing deviation. The Indian mathematician Madhava used correction terms to improve accuracy.

I just did a search now and found this readable page (many pages aren't :-/ ) which I translated in to BASIC, and what took 60 hours on an Apple II took just under 4 seconds now on my seven-year-old laptop = 54000 times faster = 1.45x speedup every year, which seems a pretty good match with Moore's Law.
Way back when I was a teenager, my school had an Apple II. I ran a program from a magazine to calculate pi, and over a weekend I got two full pages of digits (~8000), of which at least the first 100 were correct when compared to the encyclopaedia.

The problem is that 0.1 (ie 1/10th) cannot be exactly represented in binary, just the same as 1/3rd cannot be exactly represented in decimal. Single-precision floats give 24 binary digits of precision, which translates to approximately 7 decimal digits. Double precision gives 52 binary digits of precision = approximately 15 decimal digits.

Accounting applications can usually get away with using double-precision numbers: 15 decimal digits, minus say 2 guard digits for rounding, leaves 13 usable digits = 11 digits for dollars and 2 digits for cents = good for values up to $100 billion accurate to the last cent. Some languages have fixed-point or decimal floating-point (aka BCD = binary coded decimal) numbers, specifically for handling financial values. I remember Turbo Pascal had them, and more recently, Microsoft BASICs have a CURRENCY type, which is a 64-bit integer divided by 10000 to give 4 decimal places (2 for cents, and 2 for just-in-case).

Big-number applications like your 100-digit pi calculation are not going to work with double-precision (~15 decimal digits) or even quadruple-precision (~33 decimal digits) numbers, regardless of whether they are binary or decimal. There are math packages around for doing big-number calculations, usually effectively limited only by the available memory. I haven't seen any native ones for B4X, but presumably a Java one could be used. Obviously there is a speed hit, though: eg, if you multiply two 10,000-digit numbers, that is 100 million digit-times-digit multiplications, although any decent library would do it in chunks of digits; eg 32-bit operations mean you can add 9 decimal digits at a time.
You're not alone, except that mostly it is people doing financial accounting who discover it, when using single-precision floating point to add small numbers to large totals.