I shall leave aside the question of whether it's a good idea, or whether the physical quantity you're measuring could even in theory ever exceed a value of 2^63, or 10^19 or thereabouts. I'm sure you have your reasons. So what are your options in pure C/C++?
The answer is: not many.
- 128-bit integers are not part of any C or C++ standard. Some compilers (GCC and Clang on 64-bit targets) do offer `__int128` as a nonstandard extension, but you can't rely on it in portable code.
- A 64-bit double will give you the dynamic range (up to about 10^308). An excellent choice if you don't need exact answers, but it only carries a 53-bit significand: once your value passes 2^53, adding one to it isn't going to change it (see the sketch after this list).
- The 80-bit extended double is natively supported by the x87 floating-point unit, and its 64-bit significand (enough to hold any 64-bit integer exactly) comes with the same extended dynamic range.
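To make the trade-off concrete, here is a minimal sketch. Note that the behaviour of `long double` is platform-dependent: it is the 80-bit x87 extended format on most x86 compilers, but just a plain 64-bit double under MSVC.

```cpp
#include <cstdio>

int main() {
    // A double has a 53-bit significand: above 2^53, consecutive
    // integers are no longer representable, so the +1 is lost.
    double big = 9007199254740992.0;             // 2^53
    std::printf("%.0f\n", (big + 1.0) - big);    // prints 0

    // Where long double is the 80-bit x87 format, its 64-bit significand
    // holds every integer up to 2^64 exactly, so the +1 survives.
    long double bigger = 9223372036854775808.0L;          // 2^63
    std::printf("%.0Lf\n", (bigger + 1.0L) - bigger);     // prints 1 on x87
    return 0;
}
```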
So, how about rolling your own 128-bit integer arithmetic? You would really have to be a masochist. Addition and subtraction are easy enough (mind your carries; see the sketch below), and with a bit of thought multiplication isn't too hard either. Division is another thing entirely: it is seriously hard, and the likely outcome is bugs reminiscent of the Pentium FDIV bug of 1994.
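For a flavour of the easy part, here is a minimal sketch of addition and subtraction over two 64-bit halves. The `u128` type and helper names are my own, not any standard API:

```cpp
#include <cstdint>

// A 128-bit unsigned integer as two 64-bit halves (hypothetical type).
struct u128 {
    std::uint64_t lo;
    std::uint64_t hi;
};

// Addition: add the halves, then propagate the carry out of the low half.
u128 add(u128 a, u128 b) {
    u128 r;
    r.lo = a.lo + b.lo;
    r.hi = a.hi + b.hi + (r.lo < a.lo ? 1 : 0);  // carry if lo wrapped around
    return r;
}

// Subtraction: the same idea with a borrow instead of a carry.
u128 sub(u128 a, u128 b) {
    u128 r;
    r.lo = a.lo - b.lo;
    r.hi = a.hi - b.hi - (a.lo < b.lo ? 1 : 0);  // borrow if lo underflowed
    return r;
}
```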
You could probably accumulate your counters in two (or more) 64-bit integers without much difficulty, then convert them into doubles for the calculations at the end. That shouldn't be too hard; a sketch follows.
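Here is one way that could look. The `SplitCounter` name and layout are my own invention, just one possible arrangement:

```cpp
#include <cstdint>

// Hypothetical split accumulator: a low 64-bit word plus a count of
// how many times it has wrapped, i.e. the high 64 bits.
struct SplitCounter {
    std::uint64_t lo = 0;
    std::uint64_t hi = 0;

    void add(std::uint64_t x) {
        std::uint64_t old = lo;
        lo += x;
        if (lo < old) ++hi;   // carry into the high word on wraparound
    }

    // Convert to double for the calculations at the end. Exactness is
    // lost here, but the accumulation itself never overflowed.
    double value() const {
        return static_cast<double>(hi) * 18446744073709551616.0   // 2^64
             + static_cast<double>(lo);
    }
};
```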
After that, I'm afraid it's off to library shopping. You mentioned Boost, but there are more specialised libraries around, such as cpp-bigint.
Not surprisingly, this question has been asked before and has a very good answer: Representing 128-bit numbers in C++.