This is a logistic sigmoid function:

F(x) = 1 / (1 + e^(-x))

I know x. How can I calculate F(x) in Python now?
Let's say x = 0.458.
F(x) = ?
This should do it:
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))
And now you can test it by calling:
>>> sigmoid(0.458)
0.61253961344091512
Update: Note that the above was mainly intended as a straight one-to-one translation of the given expression into Python code. It is not tested or known to be a numerically sound implementation. If you know you need a very robust implementation, I'm sure there are others where people have actually given this problem some thought.
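As an illustration of why numerical soundness can matter: for large negative x, math.exp(-x) overflows a Python float, so the direct translation above raises an OverflowError (the function is repeated here so the snippet runs on its own):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

try:
    sigmoid(-1000)          # math.exp(1000) exceeds the double range
except OverflowError:
    print("OverflowError")
```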
It is also available in scipy: http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.logistic.html
In [1]: from scipy.stats import logistic
In [2]: logistic.cdf(0.458)
Out[2]: 0.61253961344091512
which is only a costly wrapper (because it allows you to scale and translate the logistic function) around another scipy function:
In [3]: from scipy.special import expit
In [4]: expit(0.458)
Out[4]: 0.61253961344091512
If you are concerned about performance, continue reading; otherwise just use expit.
As expected, logistic.cdf is (much) slower than expit. expit is still slower than the Python sigmoid function when called with a single value, because it is a universal function written in C ( http://docs.scipy.org/doc/numpy/reference/ufuncs.html ) and thus has a call overhead. This overhead is bigger than the computation speedup expit gains from its compiled nature when called with a single value. But it becomes negligible when it comes to big arrays:
In [9]: import numpy as np
In [10]: x = np.random.random(1000000)
In [11]: def sigmoid_array(x):
   ....:     return 1 / (1 + np.exp(-x))
   ....:
(You'll notice the tiny change from math.exp to np.exp: the first one does not support arrays, but is much faster if you have only one value to compute.)
In [12]: %timeit -r 1 -n 100 sigmoid_array(x)
100 loops, best of 1: 34.3 ms per loop
In [13]: %timeit -r 1 -n 100 expit(x)
100 loops, best of 1: 31 ms per loop
But when you really need performance, a common practice is to keep a precomputed table of the sigmoid function in RAM, trading some precision and memory for speed (for example: http://radimrehurek.com/2013/09/word2vec-in-python-part-two-optimizing/ ).
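A minimal sketch of such a lookup table (the table size and clipping range below are arbitrary illustrative choices, not taken from the linked post):

```python
import numpy as np

# Precompute sigmoid values on a fixed grid; inputs outside
# [-TABLE_MAX, TABLE_MAX] are clipped to 0 or 1.
TABLE_SIZE = 1000
TABLE_MAX = 6.0
_grid = np.linspace(-TABLE_MAX, TABLE_MAX, TABLE_SIZE)
_table = 1 / (1 + np.exp(-_grid))

def sigmoid_lut(x):
    if x <= -TABLE_MAX:
        return 0.0
    if x >= TABLE_MAX:
        return 1.0
    # nearest grid point at or below x
    i = int((x + TABLE_MAX) * (TABLE_SIZE - 1) / (2 * TABLE_MAX))
    return _table[i]
```

Accuracy is limited by the grid spacing (about 0.012 here), so sigmoid_lut(0.458) agrees with expit(0.458) only to a couple of decimal places; a finer grid buys precision at the cost of memory.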
Also, note that the expit implementation is numerically stable since version 0.14.0: https://github.com/scipy/scipy/issues/3385
Here's how you would implement the logistic sigmoid in a numerically stable way (as described here):
from math import exp

def sigmoid(x):
    "Numerically-stable sigmoid function."
    if x >= 0:
        z = exp(-x)
        return 1 / (1 + z)
    else:
        z = exp(x)
        return z / (1 + z)
Or perhaps this is more accurate:
import numpy as np

def sigmoid(x):
    return np.exp(-np.logaddexp(0, -x))
Internally, it implements the same condition as above, but then uses log1p.
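A quick sanity check of the logaddexp form (the function is repeated so the snippet runs on its own): unlike the naive formula, it does not overflow for large negative inputs:

```python
import numpy as np

def sigmoid(x):
    # 1/(1+exp(-x)), rewritten so the exponential never overflows
    return np.exp(-np.logaddexp(0, -x))

print(sigmoid(0.458))    # ≈ 0.6125396134409151
print(sigmoid(-1000.0))  # 0.0, with no OverflowError
```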
In general, the multinomial logistic sigmoid is:
import numpy as np

def nat_to_exp(q):
    max_q = max(0.0, np.max(q))
    rebased_q = q - max_q
    return np.exp(rebased_q - np.logaddexp(-max_q,
                                           np.logaddexp.reduce(rebased_q)))
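As a quick check (the function is repeated so the snippet is self-contained): the expression simplifies to exp(q_i) / (1 + sum(exp(q))), so with a single natural parameter it reduces to the binary sigmoid:

```python
import numpy as np

def nat_to_exp(q):
    max_q = max(0.0, np.max(q))
    rebased_q = q - max_q
    return np.exp(rebased_q - np.logaddexp(-max_q,
                                           np.logaddexp.reduce(rebased_q)))

# With one parameter this is exp(q)/(1+exp(q)), i.e. the binary sigmoid:
print(nat_to_exp(np.array([0.458])))  # ≈ [0.6125396]
```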
Another way:

>>> import math
>>> def sigmoid(x):
...     return 1 / (1 + (math.e ** -x))
...
>>> sigmoid(0.458)
First, many might be interested in free parameters to alter the shape of the sigmoid function. Second, for many applications you want to use a mirrored sigmoid function. Third, you might want to do a simple normalization, for example so that the output values are between 0 and 1.
Try:
def normalized_sigmoid_fkt(a, b, x):
    '''
    Returns array of a horizontal mirrored normalized sigmoid function
    output between 0 and 1.
    Function parameters: a = center; b = width
    '''
    s = 1 / (1 + np.exp(b * (x - a)))
    return (s - min(s)) / (max(s) - min(s))  # normalize function to 0-1
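A quick numeric check, with the function repeated in condensed form so the snippet runs standalone: with a positive width b the curve is mirrored (decreasing), and the normalization pins the endpoints to exactly 1 and 0:

```python
import numpy as np

def normalized_sigmoid_fkt(a, b, x):
    # a = center; b = width; positive b gives a mirrored (decreasing) curve
    s = 1 / (1 + np.exp(b * (x - a)))
    return (s - min(s)) / (max(s) - min(s))  # rescale to [0, 1]

x = np.linspace(0, 1, 100)
y = normalized_sigmoid_fkt(0.5, 18, x)
print(y[0], y[-1])  # → 1.0 0.0
```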
And to draw and compare:
def draw_function_on_2x2_grid(x):
    fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2)
    plt.subplots_adjust(wspace=.5)
    plt.subplots_adjust(hspace=.5)
    ax1.plot(x, normalized_sigmoid_fkt(.5, 18, x))
    ax1.set_title('1')
    ax2.plot(x, normalized_sigmoid_fkt(0.518, 10.549, x))
    ax2.set_title('2')
    ax3.plot(x, normalized_sigmoid_fkt(.7, 11, x))
    ax3.set_title('3')
    ax4.plot(x, normalized_sigmoid_fkt(.2, 14, x))
    ax4.set_title('4')
    plt.suptitle('Different normalized (sigmoid) function', size=10)
    return fig
Finally:
x = np.linspace(0, 1, 100)
Travel_function = draw_function_on_2x2_grid(x)
Good answer from @unwind. However, it can't handle extremely negative numbers (it throws an OverflowError).
My improvement:
def sigmoid(x):
    try:
        res = 1 / (1 + math.exp(-x))
    except OverflowError:
        res = 0.0
    return res
Another way, by transforming the tanh function:
sigmoid = lambda x: .5 * (math.tanh(.5 * x) + 1)
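This works because of the identity sigmoid(x) = (1 + tanh(x/2)) / 2; a quick check against the direct formula:

```python
import math

sigmoid = lambda x: .5 * (math.tanh(.5 * x) + 1)

print(sigmoid(0.458))              # ≈ 0.6125396134409151
print(1 / (1 + math.exp(-0.458)))  # same value from the direct formula
```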
Tensorflow also includes a sigmoid function: https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/sigmoid
import tensorflow as tf

sess = tf.InteractiveSession()
x = 0.458
y = tf.sigmoid(x)
u = y.eval()
print(u)  # 0.6125396