# arbitrary precision types


--> fixed-point / floating-point types, with overloaded basic operators (+ - * /)
It has been suggested to use char[] arrays holding base-256 coded numbers to implement this.
Arbitrary precision numbers are useful for many applications, such as cryptography and the analysis of vibration behaviour. Almost any application that performs a large number of arithmetic calculations can benefit from arbitrary precision numbers.
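As a rough sketch of the char[] / base-256 suggestion (the BigNum name and layout here are illustrative assumptions, not an existing component): each byte stores one base-256 digit, least significant first, and addition becomes a schoolbook carry loop over the bytes.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative big-number type: digits[0] is the least significant
// base-256 digit (little-endian byte order).
struct BigNum {
    std::vector<std::uint8_t> digits;
};

BigNum operator+(const BigNum &a, const BigNum &b) {
    BigNum sum;
    unsigned carry = 0;
    std::size_t n = std::max(a.digits.size(), b.digits.size());
    for (std::size_t i = 0; i < n; ++i) {
        unsigned da = i < a.digits.size() ? a.digits[i] : 0;
        unsigned db = i < b.digits.size() ? b.digits[i] : 0;
        unsigned s = da + db + carry;            // at most 255 + 255 + 1
        sum.digits.push_back(static_cast<std::uint8_t>(s & 0xFF));
        carry = s >> 8;                          // carry is 0 or 1
    }
    if (carry)
        sum.digits.push_back(static_cast<std::uint8_t>(carry));
    return sum;
}
```

With 8-bit digits the carry never exceeds one bit, so a plain unsigned int is ample for the intermediate sum.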

24 Sep 05, 10:24AM

I agree that most calculations should be done using multiprecision arithmetic. I have written four components providing the basic operations: addition, subtraction and multiplication. They are located in the [b:3ac9e4e71b]Maths::Arithmetic[/b:3ac9e4e71b] category and are called [i:3ac9e4e71b]Add, Subtract, Multiply[/i:3ac9e4e71b] and [i:3ac9e4e71b]Karatsuba[/i:3ac9e4e71b]. The last is a good algorithm for multiplying integers, with a lower time complexity than ordinary schoolbook multiplication. Please see these components for more documentation.
I have implemented the functions to handle positive integers, with digits belonging to an arbitrary numerical base from 2 to 10. The integers themselves are stored as [b:3ac9e4e71b]std::string[/b:3ac9e4e71b] objects, which makes it easy to initialise them and display their values. I am fully aware there may be more optimised ways of storing such integers, perhaps even using some form of compression.
I agree it would be good to have a set of multiprecision number theory algorithms to form the base of all the other numerical components on Code Cogs. This would provide an arbitrary number of precision digits in the evaluation of several mathematical functions, at the cost of execution speed. In my opinion it would be possible to create two overloaded versions of each numerical component: one using the ordinary C [b:3ac9e4e71b]double[/b:3ac9e4e71b] type, and a second using the multiprecision arithmetic provided by this set of number theory algorithms. I would be interested in your opinion on this matter.
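The saving behind Karatsuba multiplication can be illustrated on machine words (this is only a sketch of the underlying identity, not the string-based component described above): splitting each factor into two halves replaces four half-size multiplications with three.

```cpp
#include <cstdint>

// Karatsuba identity on machine words: a 32x32 -> 64-bit product
// computed with three 16x16 multiplications instead of four.
// With x = x1*2^16 + x0 and y = y1*2^16 + y0:
//   x*y = z2*2^32 + z1*2^16 + z0
// where z2 = x1*y1, z0 = x0*y0, z1 = (x1+x0)*(y1+y0) - z2 - z0.
std::uint64_t karatsuba32(std::uint32_t x, std::uint32_t y) {
    std::uint32_t x1 = x >> 16, x0 = x & 0xFFFF;
    std::uint32_t y1 = y >> 16, y0 = y & 0xFFFF;
    std::uint64_t z2 = std::uint64_t(x1) * y1;   // high halves
    std::uint64_t z0 = std::uint64_t(x0) * y0;   // low halves
    // (x1+x0)*(y1+y0) can exceed 32 bits, so compute it in 64 bits
    std::uint64_t z1 = std::uint64_t(x1 + x0) * (y1 + y0) - z2 - z0;
    return (z2 << 32) + (z1 << 16) + z0;
}
```

Applied recursively to n-digit numbers, the three-for-four trade is what yields the sub-quadratic O(n^log2(3)) ≈ O(n^1.585) running time.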

24 Sep 05, 11:14AM

Hi,
I had better check the database before posting ;-)
The multiplication algorithm is really nifty, but in my opinion the implementation could still be improved.
Obviously the base in which a number is stored is not relevant to the calculations we perform on it. So why not take the (8-bit) characters of a string and use them as base-256 digits? That way we can perform simple arithmetic (like addition) directly on these digits, without converting them back to ordinary doubles or integers.

```cpp
#include <iostream>
using namespace std;

int main()
{
    char a, b;
    a = 12;
    b = 79;
    // a + b is computed as int, so casting to short prints the sum as a
    // number; storing the sum back into a (signed) char would give
    // negative values for sums greater than 127
    cout << short(a + b) << endl;
    return 0;
}
```

Of course, there then have to be means of converting the number back to a human-readable base, but that can be done very efficiently. The idea is not mine; I have taken it from [url=www.library.cornell.edu/nr]Numerical Recipes in C[/url]. What you said about overloading functions in the Code Cogs database would be much simpler if we had a defined type (i.e. a class) that overloads the arithmetic operators. That way you avoid rewriting the entire code, replacing "a+b" with "add(a,b)" and so on.
robert
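A sketch of the conversion back to a human-readable base, assuming the little-endian base-256 byte layout discussed above (the name toDecimal is made up for illustration): repeatedly divide the whole number by 10 and collect the remainders as decimal digits.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Convert little-endian base-256 digits to a decimal string by
// repeated short division by 10.
std::string toDecimal(std::vector<std::uint8_t> digits) {  // works on a copy
    // strip leading (most significant) zero bytes
    while (!digits.empty() && digits.back() == 0) digits.pop_back();
    if (digits.empty()) return "0";
    std::string out;
    while (!digits.empty()) {
        unsigned rem = 0;
        // one pass of short division by 10, most significant byte first
        for (std::size_t i = digits.size(); i-- > 0; ) {
            unsigned cur = rem * 256 + digits[i];
            digits[i] = static_cast<std::uint8_t>(cur / 10);
            rem = cur % 10;
        }
        out.push_back(char('0' + rem));  // next decimal digit, low first
        while (!digits.empty() && digits.back() == 0) digits.pop_back();
    }
    std::reverse(out.begin(), out.end());
    return out;
}
```

Each pass costs one sweep over the bytes, so the whole conversion is quadratic in the number of digits, which is fine for display purposes.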

4 Oct 05, 1:33PM

Hi,
I tried to implement an arbitrary precision data type for unsigned integers.
It's not fast, but it works fine for addition, subtraction, increment, decrement and comparison, as well as right-shift and left-shift. The only problem is division; it's much harder than I thought. Once I have finished programming the division algorithm, I will submit the program to Code Cogs.
robert
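Division is indeed the hard part, but the classic shift-and-subtract method needs only the operations already listed (comparison, subtraction and the two shifts). A word-sized sketch of that loop (the function name divide is illustrative; a big-number version would iterate over all its bits):

```cpp
#include <cassert>
#include <cstdint>

// Shift-and-subtract (binary long) division: builds the quotient one
// bit at a time, from the most significant bit down.
std::uint64_t divide(std::uint64_t n, std::uint64_t d, std::uint64_t &rem) {
    assert(d != 0);
    std::uint64_t q = 0;
    rem = 0;
    for (int bit = 63; bit >= 0; --bit) {
        rem = (rem << 1) | ((n >> bit) & 1);  // bring down the next bit
        if (rem >= d) {                       // does the divisor fit?
            rem -= d;
            q |= std::uint64_t(1) << bit;     // record a 1 in the quotient
        }
    }
    return q;
}
```

The same loop carries over to an arbitrary precision type directly, since the partial remainder only ever needs compare, subtract and shift-left-by-one.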

29 Oct 05, 6:55AM

It's been a while since your last post. Have you made any progress with the arbitrary precision types?
Your idea to use base 256 sounded promising...
Will.

29 Oct 05, 10:46AM

Yes, it's been a while :-) but I'm still working on it. As I pointed out, division is quite hard to implement.
robert