I'm trying to convert MATLAB code to NumPy and discovered that NumPy's std function gives a different result than MATLAB's.

In MATLAB:
```matlab
std([1,3,4,6])
ans = 2.0817
```
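With NumPy, the same data gives a different value (a minimal check, assuming `import numpy as np`):

```python
import numpy as np

print(np.std([1, 3, 4, 6]))  # 1.8027756377319946 -- not 2.0817
```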
The standard deviation is the square root of the variance. The variance of a random variable $X$ is defined as

$$\operatorname{Var}(X) = \operatorname{E}\big[(X - \operatorname{E}[X])^2\big].$$

An estimator for the variance would therefore be

$$S_n^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2,$$

where $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ denotes the sample mean. For randomly selected $x_i$, it can be shown that this estimator does not converge to the real variance $\sigma^2$, but to

$$\operatorname{E}\big[S_n^2\big] = \frac{n-1}{n}\,\sigma^2.$$

If you randomly select samples and estimate the sample mean and variance, you will have to use a corrected (unbiased) estimator,

$$S^2 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2,$$

which will converge to $\sigma^2$. The correction factor $\frac{n}{n-1}$ is also called Bessel's correction.
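A quick simulation sketch illustrates the bias (assuming NumPy; the exact averages vary slightly with the seed):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 4, 100_000  # small samples of standard normal data, true variance = 1.0

samples = rng.standard_normal((trials, n))

# Average of the biased estimator over many trials: converges to (n-1)/n * sigma^2 = 0.75
biased = samples.var(axis=1, ddof=0).mean()

# Average of the unbiased estimator: converges to sigma^2 = 1.0
unbiased = samples.var(axis=1, ddof=1).mean()

print(biased, unbiased)  # roughly 0.75 and 1.0
```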
Now by default, MATLAB's std calculates the unbiased estimator with the correction term $n-1$. NumPy, however (as @ajcr explained), calculates the biased estimator with no correction term by default. The parameter ddof lets you set any correction term $n-\text{ddof}$; setting it to 1 gives the same result as in MATLAB.
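For the example above, the difference looks like this (expected outputs shown in the comments):

```python
import numpy as np

x = [1, 3, 4, 6]

print(np.std(x))          # 1.8027756377319946 -- correction term n, biased
print(np.std(x, ddof=1))  # 2.0816659994661326 -- correction term n-1, matches MATLAB
```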
Similarly, MATLAB accepts a second parameter w, which specifies the "weighting scheme". The default, w=0, results in the correction term $n-1$ (unbiased estimator), while for w=1, only $n$ is used as the correction term (biased estimator).
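Put together, the MATLAB weighting schemes map to NumPy's ddof like this (a sketch; the corresponding MATLAB calls are shown as comments):

```python
import numpy as np

x = [1, 3, 4, 6]

# MATLAB: std(x) or std(x, 0)  -> correction term n-1 (unbiased)
print(np.std(x, ddof=1))  # 2.0817

# MATLAB: std(x, 1)            -> correction term n (biased)
print(np.std(x, ddof=0))  # 1.8028  (ddof=0 is NumPy's default)
```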