Building on the solution given by Martijn Pieters, you can turn your string into one huge number, which Python 3 handles very well, since its int type is arbitrarily large (that is not "how computers work"; see my comment on your question).
Given the list of character numerical codes:
>>> a = [ord(c) for c in 'Hello World!']
>>> print(a)
[72, 101, 108, 108, 111, 32, 87, 111, 114, 108, 100, 33]
And knowing, from Wikipedia's page on Unicode, that the largest Unicode code point is 10FFFF (in hexadecimal), you can do:
def numfy(s):
    number = 0
    for e in [ord(c) for c in s]:
        number = (number * 0x110000) + e
    return number

def denumfy(number):
    l = []
    while number != 0:
        l.append(chr(number % 0x110000))
        number = number // 0x110000
    return ''.join(reversed(l))
Thus:
>>> a = numfy("Hello, World, عالَم, ދުނިޔެ, जगत, 世界")
>>> a
31611336900126021[...]08666956
>>> denumfy(a)
'Hello, World, عالَم, ދުނިޔެ, जगत, 世界'
Here 0x110000 (that is, 10FFFF + 1) is the number of possible Unicode code points (1,114,112 in decimal). If you are sure you will only ever use the English alphabet, you can use 128 as the base instead, and if you are using a Latin language with accents, 256 is safe. Either way the resulting number will be much smaller, but it will no longer be able to represent every Unicode character.
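As a sketch of that smaller-base variant, here is a base-128 version that only accepts ASCII input (the function names `numfy_ascii`/`denumfy_ascii` are just made up for this example; the error check is mine, to make misuse fail loudly instead of silently corrupting the round trip):

```python
def numfy_ascii(s):
    # Same idea as numfy, but with base 128: only valid for ASCII.
    number = 0
    for c in s:
        code = ord(c)
        if code >= 128:
            raise ValueError("non-ASCII character: %r" % c)
        number = number * 128 + code
    return number

def denumfy_ascii(number):
    # Peel off base-128 digits, least significant first, then reverse.
    chars = []
    while number != 0:
        chars.append(chr(number % 128))
        number //= 128
    return ''.join(reversed(chars))
```

For the same ASCII string, the resulting number is far smaller than with base 0x110000, since each character costs 7 bits instead of roughly 21.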