I'm using a server with 128GB memory to do some computation. I need to malloc() a 2D float array of size 56120 * 56120. Example code is as follows:
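(A minimal sketch of the kind of allocation that fails here, assuming num is declared as a plain int, which matches the error described in the answer below:)

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int num = 56120;    /* 32-bit signed int on typical platforms */

        /* num * num overflows int before malloc() ever sees the size */
        float *array = malloc((num * num) * sizeof(float));
        if (array == NULL) {
            fprintf(stderr, "malloc() failed\n");
            return 1;
        }

        free(array);
        return 0;
    }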
The problem is that your calculation

    (num * num) * sizeof(float)

begins with num * num, which is evaluated in 32-bit signed int arithmetic. For num = 56120 the true product (3149454400) does not fit in an int, so the multiplication overflows; this is undefined behavior, and on typical hardware it wraps to -1145512896. That negative int is then converted to size_t for the multiplication by sizeof(float), so the request passed to malloc() is the 64-bit unsigned reinterpretation of -4582051584, which is the very huge value

    18446744069127500032

You do not have that much memory ;) That is why malloc() fails.
Cast num to size_t in the malloc() calculation and it should work as expected.
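A minimal sketch of the fix, again assuming num is an int as above; the cast forces the whole product into size_t (64-bit) arithmetic before anything can overflow:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int num = 56120;

        /* Cast first: (size_t)num promotes the rest of the product to
           size_t, so 56120 * 56120 * 4 is computed without overflow */
        float *array = malloc((size_t)num * num * sizeof(float));
        if (array == NULL) {
            fprintf(stderr, "malloc() failed\n");
            return 1;
        }

        /* ... use the array ... */

        free(array);
        return 0;
    }

The correct size, 56120 * 56120 * 4 bytes, is roughly 11.7 GiB, which fits comfortably into your 128GB.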