Is there a good way of differentiating between row and column vectors in Python? So far I'm using NumPy and SciPy, and what I see so far is that if I were to hand someone a vector, they would have no way of telling whether I mean a row vector or a column vector.
Use double square brackets when writing your vectors.
Then, if you want a row vector:
from numpy import array
row_vector = array([[1, 2, 3]])    # shape (1, 3)
Or if you want a column vector:
col_vector = array([[1, 2, 3]]).T # shape (3, 1)
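A quick check (my own sketch, not part of the answer above) that these explicit 2-D shapes behave like linear-algebra row and column vectors:

from numpy import array

row_vector = array([[1, 2, 3]])     # shape (1, 3)
col_vector = array([[1, 2, 3]]).T   # shape (3, 1)

print(row_vector.dot(col_vector))   # inner product  -> [[14]], shape (1, 1)
print(col_vector.dot(row_vector))   # outer product  -> 3x3 array, shape (3, 3)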
It looks like NumPy doesn't distinguish between them unless you use one in context:

"You can have standard vectors or row/column vectors if you like."

"You can treat rank-1 arrays as either row or column vectors. dot(A, v) treats v as a column vector, while dot(v, A) treats v as a row vector. This can save you having to type a lot of transposes."

Also, specific to your code: "Transpose on a rank-1 array does nothing." Source: http://wiki.scipy.org/NumPy_for_Matlab_Users
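To illustrate the quoted behaviour, here is a small sketch (my own example, with a made-up matrix A) showing a rank-1 array being treated as a column vector on the right of dot and as a row vector on the left:

import numpy as np

A = np.array([[1, 2],
              [3, 4]])
v = np.array([10, 20])   # rank-1 array: neither row nor column

print(np.dot(A, v))      # v treated as a column vector -> [ 50 110]
print(np.dot(v, A))      # v treated as a row vector    -> [ 70 100]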
I think you can use the ndmin option of numpy.array. Setting it to 2 ensures the array has shape (1, 4), and its transpose has shape (4, 1).
>>> import numpy as np
>>> a = np.array([12, 3, 4, 5], ndmin=2)
>>> print(a.shape)
(1, 4)
>>> print(a.T.shape)
(4, 1)
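As a rough usage sketch (my own example) of what the ndmin=2 shape buys you, the row/column distinction now carries through matrix products:

import numpy as np

a = np.array([12, 3, 4, 5], ndmin=2)   # row vector, shape (1, 4)

print(np.dot(a, a.T))        # (1, 4) x (4, 1) -> [[194]], the inner product
print(np.dot(a.T, a).shape)  # (4, 1) x (1, 4) -> (4, 4), the outer product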
You can turn an array into a column or a row vector as follows:

>>> import numpy as np
>>> a = np.array([1, 2, 3])[:, None]  # column vector, shape (3, 1)
>>> a
array([[1],
       [2],
       [3]])
>>> b = np.array([1, 2, 3])[None, :]  # row vector, shape (1, 3)
>>> b
array([[1, 2, 3]])
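If it helps, here is a small sketch (my own) of equivalent ways to get the same shapes; np.newaxis is just an alias for None, and reshape works too:

import numpy as np

v = np.array([1, 2, 3])

col = v[:, np.newaxis]     # column vector, shape (3, 1); same as v[:, None]
col2 = v.reshape(-1, 1)    # same column vector via reshape
row = v.reshape(1, -1)     # row vector, shape (1, 3); same as v[None, :]

print(col.shape, col2.shape, row.shape)   # (3, 1) (3, 1) (1, 3)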
When I tried to compute w^T * x using NumPy, it was super confusing for me as well. In fact, I couldn't implement it myself at first. So this is one of the few gotchas in NumPy that we need to acquaint ourselves with.

As far as 1-D arrays are concerned, there is no distinction between a row vector and a column vector; they are exactly the same.

Look at the following examples, where we get the same result in all cases, which is not true in (the theoretical sense of) linear algebra:
In [37]: w
Out[37]: array([0, 1, 2, 3, 4])
In [38]: x
Out[38]: array([1, 2, 3, 4, 5])
In [39]: np.dot(w, x)
Out[39]: 40
In [40]: np.dot(w.transpose(), x)
Out[40]: 40
In [41]: np.dot(w.transpose(), x.transpose())
Out[41]: 40
In [42]: np.dot(w, x.transpose())
Out[42]: 40
With that information, let's now try to compute the squared length of the vector, |w|^2. For this, we need to transform w into a 2-D array.
In [51]: wt = w[:, np.newaxis]
In [52]: wt
Out[52]:
array([[0],
       [1],
       [2],
       [3],
       [4]])
Now, let's compute the squared length (or squared magnitude) of the vector w:
In [53]: np.dot(w, wt)
Out[53]: array([30])
Note that we used w, wt instead of wt, w (as in theoretical linear algebra) because np.dot(wt, w) fails with a shape mismatch. So we have the squared length of the vector as [30]. Maybe this is one of the ways to distinguish (NumPy's interpretation of) row and column vectors?
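For what it's worth, here is a small sketch (my own, reusing the same w and wt as above) showing that the textbook order does work once both operands are explicit 2-D arrays:

import numpy as np

w = np.array([0, 1, 2, 3, 4])
wt = w[:, np.newaxis]        # column vector, shape (5, 1)

print(np.dot(wt.T, wt))      # (1, 5) x (5, 1) -> [[30]], i.e. w^T w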
And finally, did I mention that I figured out a way to implement w^T * x? Yes, I did:
In [58]: wt
Out[58]:
array([[0],
       [1],
       [2],
       [3],
       [4]])
In [59]: x
Out[59]: array([1, 2, 3, 4, 5])
In [60]: np.dot(x, wt)
Out[60]: array([40])
So, in NumPy, the order of the operands is reversed, as evidenced above, contrary to what we studied in theoretical linear algebra.
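(A small sketch of my own to back this up: the "reversal" goes away once both operands are given explicit 2-D shapes, in which case the textbook order w^T x works as written.)

import numpy as np

w = np.array([0, 1, 2, 3, 4])
x = np.array([1, 2, 3, 4, 5])

print(np.dot(w[np.newaxis, :], x[:, np.newaxis]))   # w^T x -> [[40]]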
P.S.: potential gotchas in NumPy
If you want a distinction for this case, I would recommend using a matrix instead, where:

from numpy import matrix
matrix([1, 2, 3]) == matrix([1, 2, 3]).transpose()
gives:
matrix([[ True, False, False],
        [False,  True, False],
        [False, False,  True]], dtype=bool)
You can also use an ndarray, explicitly adding a second dimension:
array([1,2,3])[None,:]
#array([[1, 2, 3]])
and:
array([1,2,3])[:,None]
#array([[1],
#       [2],
#       [3]])
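As a side note, current NumPy documentation discourages np.matrix in favour of plain 2-D arrays; a rough equivalent of the matrix-based distinction above, using ndarrays and the @ operator (my own sketch), would be:

import numpy as np

v = np.array([[1, 2, 3]])   # explicit row vector, shape (1, 3)

print(v == v.T)             # broadcast comparison -> the same (3, 3) boolean matrix
print(v @ v.T)              # inner product -> [[14]]
print(v.T @ v)              # outer product -> 3x3 array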