Why is np.linalg.norm(x,2) slower than solving it directly?

Submitted on 2021-01-29 16:09:26

Question


Example code:

import numpy as np
import math
import time

x = np.ones((2000, 2000))

start = time.time()
print(np.linalg.norm(x, 2))
end = time.time()
print("time 1: " + str(end - start))

start = time.time()
print(math.sqrt(np.sum(x*x)))
end = time.time()
print("time 2: " + str(end - start))

The output (on my machine) is:

1999.999999999991
time 1: 3.216777801513672
2000.0
time 2: 0.015042781829833984

It shows that np.linalg.norm() takes more than 3 s, while the direct computation takes only about 0.01 s. Why is np.linalg.norm() so slow?


Answer 1:


np.linalg.norm(x, 2) computes the 2-norm (spectral norm), i.e. the largest singular value of the matrix, which requires a singular value decomposition.

math.sqrt(np.sum(x*x)) computes the Frobenius norm, which only needs an element-wise product and a sum.

These are different operations, so it should be no surprise that they take different amounts of time. The math.SE question "What is the difference between the Frobenius norm and the 2-norm of a matrix?" may be of interest.
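A small sketch of the distinction (using a 3×3 identity matrix as an illustrative example, since the two norms coincide for the rank-1 all-ones matrix in the question):

```python
import numpy as np

x = np.eye(3)  # identity matrix: all three singular values are 1

# Default matrix norm (ord=None) is the Frobenius norm,
# identical to sqrt of the sum of squared entries:
fro = np.linalg.norm(x)
assert np.isclose(fro, np.sqrt(np.sum(x * x)))  # sqrt(3)

# ord=2 is the spectral norm: the largest singular value.
# NumPy obtains it via an SVD, which is far more expensive
# than an element-wise sum on large matrices.
spec = np.linalg.norm(x, 2)
assert np.isclose(spec, np.linalg.svd(x, compute_uv=False).max())  # 1.0

# The two norms are genuinely different quantities:
assert not np.isclose(fro, spec)
```

The asymptotic cost difference is the whole story: the Frobenius norm is O(n²) for an n×n matrix, while the SVD behind the spectral norm is O(n³).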




Answer 2:


What is comparable is:

In [10]: %timeit sum(x*x,axis=1)**.5
36.4 ms ± 6.11 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [11]: %timeit norm(x,axis=1)
32.3 ms ± 3.94 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

np.linalg.norm(x, 2) and sum(x*x)**.5 are simply not the same thing.
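To see why these two timings above are comparable: with axis=1, norm computes the ordinary Euclidean norm of each row (a vector norm, no SVD involved), which is exactly what sum(x*x, axis=1)**.5 does. A minimal check:

```python
import numpy as np

x = np.arange(12, dtype=float).reshape(3, 4)

# Row-wise Euclidean norms, computed two equivalent ways:
a = np.sum(x * x, axis=1) ** 0.5
b = np.linalg.norm(x, axis=1)

# Both are O(n*m) element-wise reductions, hence similar timings.
assert np.allclose(a, b)
```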



Source: https://stackoverflow.com/questions/52804046/why-is-np-linalg-normx-2-slower-than-solving-it-directly
