import numpy as np
from scipy.sparse import csr_matrix

alpha = csr_matrix((1000, 1000), dtype=np.float32)
beta = csr_matrix((1, 1000), dtype=np.float32)
alpha[0, :] = beta
After initialization, alpha and beta should both be all-zero sparse matrices with no stored elements, yet after the assignment alpha reports stored elements.
When I copy your steps I get
In [131]: alpha[0,:]=beta
/usr/lib/python3/dist-packages/scipy/sparse/compressed.py:730:
SparseEfficiencyWarning: Changing the sparsity structure of a
csr_matrix is expensive. lil_matrix is more efficient.
SparseEfficiencyWarning)
So that's the first indicator that you are doing something that the developers consider unwise.
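As the warning itself suggests, lil_matrix is the format meant for this kind of incremental row assignment. A minimal sketch (the values are just for illustration): build on lil, then convert to csr once.

```python
import numpy as np
from scipy.sparse import lil_matrix

# lil stores each row as a Python list, so row assignment is cheap
alpha = lil_matrix((1000, 1000), dtype=np.float32)
alpha[0, :] = np.ones(1000, dtype=np.float32)  # no SparseEfficiencyWarning
A = alpha.tocsr()  # convert once building is done
print(A.nnz)       # 1000
```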
We could dig into the csr __setitem__ code, but my guess is that it converts your beta to dense and then does the assignment, without automatically doing the eliminate_zeros step (either during or after the assignment).
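To see what eliminate_zeros does, here's a sketch that plants an explicitly stored 0 in the data directly and then cleans it up (the values and shape are chosen for illustration):

```python
import numpy as np
from scipy.sparse import csr_matrix

# construct with an explicitly stored 0 at (0, 5)
data = np.array([1.0, 0.0, 2.0], dtype=np.float32)
row = np.array([0, 0, 1])
col = np.array([0, 5, 3])
A = csr_matrix((data, (row, col)), shape=(3, 6))
print(A.nnz)         # 3 -- nnz counts stored entries, zeros included
A.eliminate_zeros()  # in-place removal of the stored zeros
print(A.nnz)         # 2
```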
Normally, why would people be doing a[...] = ...? Usually it's to build the sparse matrix. Zeroing out non-zero values is possible, but not frequent enough to treat as a special case.
It's possible, for a variety of reasons, to have 0 values in a sparse matrix. You could even insert the 0s into alpha.data directly. That's why there are 'cleanup' methods like eliminate_zeros and prune. Even nonzero applies a != 0 mask:
# convert to COOrdinate format
A = self.tocoo()
nz_mask = A.data != 0
return (A.row[nz_mask],A.col[nz_mask])
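A quick illustration of that mask: nnz reports stored entries, while nonzero (and count_nonzero) filter the explicit zeros out. The small matrix here is just for demonstration.

```python
import numpy as np
from scipy.sparse import csr_matrix

# two stored entries, one of which is an explicit 0 at (0, 1)
data = np.array([0.0, 4.0], dtype=np.float32)
A = csr_matrix((data, (np.array([0, 1]), np.array([1, 2]))), shape=(2, 4))
print(A.nnz)              # 2 -- stored entries, zero included
rows, cols = A.nonzero()  # the != 0 mask drops the stored zero
print(rows, cols)         # [1] [2]
print(A.count_nonzero())  # 1
```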
In normal sparse practice you build the data in coo or another format, and then convert to csr for calculations. Matrix multiplication is its strong point. That constructs a new sparse matrix. Modification of a csr is possible, but not encouraged.
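The usual build pattern, sketched with made-up triplets: collect (row, col, value) data, construct a coo_matrix, and convert to csr for the math.

```python
import numpy as np
from scipy.sparse import coo_matrix

# accumulate triplets however you like, then convert once
rows = [0, 0, 2]
cols = [1, 3, 2]
vals = [1.0, 2.0, 3.0]
A = coo_matrix((vals, (rows, cols)), shape=(3, 4)).tocsr()
print(A.nnz)    # 3
B = A @ A.T     # matrix products are where csr shines
print(B.shape)  # (3, 3)
```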
====================
alpha.__setitem__?? (in IPython) shows
def __setitem__(self, index, x):
    # Process arrays from IndexMixin
    i, j = self._unpack_index(index)
    i, j = self._index_to_arrays(i, j)
    if isspmatrix(x):
        x = x.toarray()
    ....
    self._set_many(i, j, x.ravel())
So yes, it converts the RHS to a dense array before doing the assignment.