What would be the best way to do the following?
Enter a very long number, let's say 500,000 digits long, without it going into scientific notation, and then be able to do arithmetic with it.
Mathematica allows you to do such math and you can write complete programs in it.
Otherwise, what you seek is a "library" to extend the built-in functionality of another programming language, such as Python or Java.
In the case of Python, the decimal module lets you specify the precision with which math operations will be performed.
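For instance, here is a minimal sketch of using decimal to control precision (the 60-digit setting is just an arbitrary illustrative choice):

from decimal import Decimal, getcontext

# Results of Decimal arithmetic are rounded to the context's precision.
getcontext().prec = 60            # 60 significant digits; raise this as high as you need

x = Decimal(1) / Decimal(7)
print(x)                          # 0.142857... carried to 60 significant digits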
Python does this out of the box with no special library. So does 'bc' (which is a full programming language masquerading as a calculator) for Unix systems.
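As a quick sketch of the Python side of that claim (the specific numbers are only for illustration):

# Python's built-in int is arbitrary precision, so a 500,000-digit
# integer is an ordinary value and is never displayed in scientific notation.
n = 10 ** 499_999 + 7     # a 500,000-digit integer
print(len(str(n)))        # 500000
print(n % 12345)          # ordinary integer arithmetic works as usual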
What you're looking for isn't necessarily a language, but an arbitrary-precision library.
GMP is a fast implementation in C/C++, and scripting languages that handle big integers probably use something like it under the hood.
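For example, GMP can be used from Python through the third-party gmpy2 binding; the snippet below is only a sketch and assumes gmpy2 is installed:

# Sketch: assumes the third-party gmpy2 package (Python bindings for GMP) is installed.
from gmpy2 import mpz

a = mpz(2) ** 1_000_000   # an integer with about 301,030 decimal digits, backed by GMP
b = a + 1
print(len(str(a)))        # 301030
print(b - a)              # 1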
Many functional languages natively support arbitrary-precision numbers. Some have already been mentioned here, but I'll repeat them for completeness:
Haskell (when using GHC) also has built-in support for arbitrarily long integers. Here's a snippet showing the length of such a number converted to a string.
Prelude> length $ show $ 10
2
Prelude> length $ show $ 1 + 2^2000000
602060
Prelude> let x = 2^200000
Prelude> let y = 2^200000 + 5
Prelude> y - x
5
Or you could just type 2^2000000 at the interactive console and wait a couple of minutes for it to print out all 600k+ characters. I figured this way was a little simpler to demonstrate.
Perl has a bignum module to do that sort of thing, and Python supports it natively.