probability

Probability distribution function in Python

删除回忆录丶 submitted on 2019-12-06 10:26:20
Question: I know how to create a histogram in Python, but I would like it to be the probability density distribution. Let's start with my example. I have an array d with 500000 elements. With the following code I build a simple histogram telling me how many elements of d fall into each bin:

    max_val = log10(max(d))
    min_val = log10(min(d))
    logspace = np.logspace(min_val, max_val, 50)
    H = hist(d, bins=logspace, histtype='step')

The problem is that this plot is not what I want…
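A minimal sketch of turning the counts into a density, assuming synthetic lognormal data standing in for the 500000-element array d: numpy's density=True divides each count by the total count times the bin width, so the area under the histogram is 1.

```python
import numpy as np

# hypothetical data standing in for the question's array d
rng = np.random.default_rng(0)
d = rng.lognormal(mean=0.0, sigma=1.0, size=10000)

# log-spaced bins, as in the question
logspace = np.logspace(np.log10(d.min()), np.log10(d.max()), 50)

# density=True normalizes so the integral over the bins equals 1,
# i.e. each count is divided by (total count * bin width)
H, edges = np.histogram(d, bins=logspace, density=True)

# the total area under the density histogram is 1
widths = np.diff(edges)
print(np.sum(H * widths))  # ≈ 1.0
```

With unequal (log-spaced) bins this per-bin-width normalization matters: simply dividing counts by the total would not give a valid density.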

Weighted random map

五迷三道 submitted on 2019-12-06 09:23:09
Suppose I have a big 2D array of values in the range [0,1], where 0 means "impossible" and 1 means "highly probable". How can I select a random set of points in this array according to the probabilities described above? One way to look at the problem is to ignore (for the moment) the fact that you're dealing with a 2D grid. What you have is a set of weighted items. A standard way of randomly selecting from such a set is to:

    - sum the weights, call the sum s
    - select a uniform random value 0 <= u < s
    - iterate through the items, keeping a running total t of the weights of the items you've examined
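The steps above can be sketched in a few lines with numpy, using a small hypothetical grid; cumsum plus searchsorted replaces the explicit running-total loop, and unravel_index maps the flat winner back to 2D coordinates.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical 2D weight grid with values in [0, 1]
grid = rng.random((4, 5))

# the three steps: sum the weights, draw u in [0, s),
# then find the first position whose running total exceeds u
flat = grid.ravel()
s = flat.sum()
u = rng.uniform(0.0, s)
idx = int(np.searchsorted(np.cumsum(flat), u, side='right'))

# convert the flat index back to 2D grid coordinates
row, col = np.unravel_index(idx, grid.shape)
print(row, col)
```

Cells with weight 0 can never win, because the running total does not advance past them; repeating the draw yields a set of points distributed according to the weights.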

Use rand5(), to generate rand7() (with the same probability) [duplicate]

╄→尐↘猪︶ㄣ submitted on 2019-12-06 09:20:27
Question: This question already has answers here: Closed 6 years ago. Possible Duplicate: Expand a random range from 1–5 to 1–7. I have seen the question here: Link. The solution the author provided didn't seem to generate the same probability. For example, out of 10k calls to the function, the number 4 was returned only 1–2 times (while the other numbers, like 2, were returned about 2k times each). Maybe I understood it wrong, or I wrote the algorithm wrong, but here:

    static int rand5() { return new…
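The standard fix for the skew described above is rejection sampling; a Python sketch (the question's code is Java, but the algorithm is language-independent): two rand5() calls give 25 equally likely values, and rejecting 4 of them leaves 21, which split evenly into 7 groups.

```python
import random

random.seed(0)  # seeded only so the rough frequency check is reproducible

def rand5():
    # stand-in for a uniform generator over 1..5
    return random.randint(1, 5)

def rand7():
    # two rand5() calls give a uniform value in 1..25;
    # reject 22..25 so the remaining 21 values split evenly into 7 groups of 3
    while True:
        v = 5 * (rand5() - 1) + rand5()  # uniform on 1..25
        if v <= 21:
            return 1 + (v - 1) % 7

# rough sanity check: all seven outcomes appear with similar frequency
counts = [0] * 8
for _ in range(70000):
    counts[rand7()] += 1
print(counts[1:])
```

If any outcome appeared only 1–2 times in 10k calls, as in the question, the mapping from the 25 raw values to 1..7 was almost certainly uneven; the rejection step is what guarantees equal probability.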

Plotting fit of lognormal distribution after fit by scipy using seaborn

若如初见. submitted on 2019-12-06 08:42:14
I have fit a distribution to my data using scipy.stats.lognorm, and now I am trying to plot the distribution. I have generated the fit to my data with seaborn:

    ax = sns.distplot(1 - clint_unique_cov_filter['Identity'], kde=False, hist=True,
                      norm_hist=True, fit=lognorm, bins=np.linspace(0, 1, 500))
    ax.set_xlim(0, 0.1)

which gets me the fit I expect. I need to use the parameters of this distribution for further analysis, but first I wanted to verify that I understood the terms. This post shows me that I want to do the following transformations to turn the output of lognorm.fit into the standard…
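On the parameter question: scipy's lognorm uses a (shape, loc, scale) parametrization, where shape is the log-space sigma and scale is exp(mu). A sketch on synthetic data (fixing floc=0 so the fit matches the standard two-parameter lognormal):

```python
import numpy as np
from scipy.stats import lognorm

# synthetic data with known log-space parameters
rng = np.random.default_rng(2)
mu, sigma = 0.5, 0.75
data = rng.lognormal(mean=mu, sigma=sigma, size=50000)

# fix loc=0 so the fit corresponds to the usual two-parameter lognormal
shape, loc, scale = lognorm.fit(data, floc=0)

# scipy's parametrization: shape = sigma, scale = exp(mu)
print(shape, np.log(scale))  # ≈ sigma, mu
```

Leaving loc free often gives a poor or hard-to-interpret fit for lognormal data, which is a common source of confusion when comparing scipy's parameters to the textbook mu and sigma.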

memory error by using rbf with scipy

萝らか妹 submitted on 2019-12-06 08:42:09
I want to plot some points with the Rbf function, like here, to get the density distribution of the points. If I run the following code, it works fine:

    from scipy.interpolate.rbf import Rbf  # radial basis functions
    import cv2
    import matplotlib.pyplot as plt
    import numpy as np

    # import data
    x = [1, 1, 2, 3, 2, 7, 8, 6, 6, 7, 6.5, 7.5, 9, 8, 9, 8.5]
    y = [0, 2, 5, 6, 1, 2, 9, 2, 3, 3, 2.5, 2, 8, 8, 9, 8.5]
    d = np.ones(len(x))
    print(d)
    ti = np.linspace(-1, 10)
    xx, yy = np.meshgrid(ti, ti)
    rbf = Rbf(x, y, d, function='gaussian')
    jet = cm = plt.get_cmap('jet')
    zz = rbf(xx, yy)
    plt.pcolor(xx, yy, zz,…
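Evaluating an Rbf on a large meshgrid allocates a (n_eval_points x n_centers) distance matrix all at once, which is a common cause of the memory error. One workaround (a sketch, not from the question) is to evaluate the interpolant in chunks, e.g. one grid row at a time, so only a small distance matrix exists at any moment:

```python
import numpy as np
from scipy.interpolate import Rbf

# small stand-in data set for the question's points
x = np.array([1, 2, 3, 7, 8, 9], dtype=float)
y = np.array([0, 5, 6, 2, 9, 8], dtype=float)
d = np.ones(len(x))

rbf = Rbf(x, y, d, function='gaussian')

# evaluate row by row instead of on the full flattened grid:
# each call only builds a (row_length x n_centers) distance matrix
ti = np.linspace(-1, 10, 200)
xx, yy = np.meshgrid(ti, ti)
zz = np.empty_like(xx)
for i in range(xx.shape[0]):
    zz[i, :] = rbf(xx[i, :], yy[i, :])
print(zz.shape)
```

Note the memory that chunking cannot avoid: fitting the Rbf itself builds an (n_centers x n_centers) matrix, so with very many input points a different method (e.g. a KDE or gridded binning) may be needed.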

Matplotlib: How to convert a histogram to a discrete probability mass function?

依然范特西╮ submitted on 2019-12-06 07:38:08
Question: I have a question regarding the hist() function in matplotlib. I am writing code to plot a histogram of data whose values vary from 0 to 1. For example:

    values = [0.21, 0.51, 0.41, 0.21, 0.81, 0.99]
    bins = np.arange(0, 1.1, 0.1)
    a, b, c = plt.hist(values, bins=bins, normed=0)
    plt.show()

The code above generates a correct histogram (I could not post an image since I do not have enough reputation). In terms of frequencies, it looks like:

    [0 0 2 0 1 1 0 0 1 1]

I would like to convert this…
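Converting those frequencies to a probability mass function only requires dividing each count by the total number of samples; a sketch using the question's own data:

```python
import numpy as np

values = [0.21, 0.51, 0.41, 0.21, 0.81, 0.99]
bins = np.arange(0, 1.1, 0.1)

# raw counts per bin, the same frequencies hist(..., normed=0) returns
counts, edges = np.histogram(values, bins=bins)

# dividing by the total count turns frequencies into a PMF
pmf = counts / counts.sum()
print(counts)       # [0 0 2 0 1 1 0 0 1 1]
print(pmf.sum())    # 1.0
```

Unlike a density (which divides by bin width as well), a PMF simply sums to 1 over the bins, which is what "discrete probability mass function" asks for.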

Sampling from a multivariate probability density function in python

不想你离开。 submitted on 2019-12-06 07:20:21
I have a multivariate probability density function P(x,y,z), and I want to sample from it. Normally I would use numpy.random.choice() for this sort of task, but that function only works for 1-dimensional probability densities. Is there an equivalent function for multivariate pdfs? There are a few different paths one can follow here. (1) If P(x,y,z) factors as P(x,y,z) = P(x) P(y) P(z) (i.e., x, y, and z are independent), then you can sample each variable separately. (2) If P(x,y,z) has a more general factorization, you can reduce the number of variables that need to be sampled to whatever's conditional…
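A third path, for a P(x,y,z) tabulated on a grid, is to flatten it: numpy.random.choice does work in the multivariate case once the 3D probability array is raveled to 1D, and unravel_index maps the sampled flat indices back to grid coordinates. A sketch on a small hypothetical grid:

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical discretized P(x, y, z) on a small grid
P = rng.random((4, 5, 6))
P /= P.sum()  # normalize to a proper probability mass

# flatten, sample flat indices weighted by the probabilities,
# then map them back to (x, y, z) grid coordinates
flat_idx = rng.choice(P.size, size=1000, p=P.ravel())
samples = np.stack(np.unravel_index(flat_idx, P.shape), axis=1)
print(samples.shape)  # (1000, 3)
```

This gives samples at the grid resolution; for a continuous pdf one can add uniform jitter within each cell, or fall back to the factorization approaches above.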

Find item in array using weighed probability and a value

旧时模样 submitted on 2019-12-06 07:19:27
Last week I had some problems with a simple program I am writing, and somebody here helped me. Now I have run into another problem. I currently have this code:

    var findItem = function(desiredItem) {
      var items = [
        { item: "rusty nail", probability: 0.25 },
        { item: "stone", probability: 0.23 },
        { item: "banana", probability: 0.20 },
        { item: "leaf", probability: 0.17 },
        { item: "mushroom", probability: 0.10 },
        { item: "diamond", probability: 0.05 }
      ];
      var possible = items.some(
        ({item, probability}) => item === desiredItem && probability > 0
      );
      if (!possible) {
        console.log('There is no chance you\…
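The same idea, sketched in Python rather than the question's JavaScript: check that the desired item has nonzero probability, then draw items by walking a running total of the weights until it passes a uniform value.

```python
import random

# the item table from the question; probabilities sum to 1.0
items = [
    ("rusty nail", 0.25), ("stone", 0.23), ("banana", 0.20),
    ("leaf", 0.17), ("mushroom", 0.10), ("diamond", 0.05),
]

def draw_item():
    # weighted draw: walk the running total until it passes a uniform u
    u = random.uniform(0.0, sum(p for _, p in items))
    total = 0.0
    for name, p in items:
        total += p
        if u < total:
            return name
    return items[-1][0]  # guard against floating-point round-off

def find_item(desired):
    # a search can only succeed if the item has nonzero probability
    if not any(name == desired and p > 0 for name, p in items):
        return None
    return draw_item()

print(find_item("banana"))
```

A zero-probability entry is skipped naturally, since the running total does not advance past it; the explicit up-front check just lets the caller report "no chance" before drawing.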

Kalman filter prediction in case of missing measurement and only positions are known

不羁岁月 submitted on 2019-12-06 06:31:42
I am trying to implement a Kalman filter. I only know the positions, and the measurements are missing at some time steps. This is how I define my matrices:

    # Process noise matrix
    Q = np.diag([0.001, 0.001])
    # Measurement noise matrix
    R = np.diag([10, 10])
    # Covariance matrix
    P = np.diag([0.001, 0.001])
    # Observation matrix
    H = np.array([[1.0, 0.0], [0.0, 1.0]])
    # Transition matrix
    F = np.array([[1, 0], [0, 1]])
    # State
    x = np.array([pos[0], pos[1]])

I don't know if this is right. For instance, if I see the target at t=0 and don't see it at t=1, how do I predict its position? I don't know the velocity. Are these…
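A sketch of one common answer (an assumption on my part, not taken from the question): with an identity transition matrix the filter can never extrapolate motion, so the usual fix is a constant-velocity model where the state is [x, y, vx, vy] and velocity is estimated from the position measurements; when a measurement is missing, you run only the predict step and skip the update.

```python
import numpy as np

dt = 1.0
# constant-velocity model: state is [x, y, vx, vy]; only position is measured
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.001   # process noise (illustrative values)
R = np.eye(2) * 10.0    # measurement noise (illustrative values)

x = np.zeros((4, 1))
P = np.eye(4)

def step(x, P, z=None):
    # predict: always runs, even without a measurement
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:  # update: only when a measurement arrives
        y = z - H @ x                        # innovation
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
    return x, P

# t=0: measurement seen; t=1: measurement missing (predict only)
x, P = step(x, P, z=np.array([[1.0], [2.0]]))
x, P = step(x, P, z=None)
print(x[:2].ravel())
```

Skipping the update is equivalent to an update with infinite measurement noise: the state covariance P grows through Q at each missed step, which correctly reflects the increasing uncertainty.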

Real-world problems with naive shuffling

一笑奈何 submitted on 2019-12-06 06:12:27
Question: I'm writing a number of articles meant to teach beginning programming concepts through poker-related topics. Currently, I'm working on the subject of shuffling. As Jeff Atwood points out on CodingHorror.com, one simple shuffling method (iterating through an array and swapping each card with a random card elsewhere in the array) creates an uneven distribution of permutations. In an actual application, I would just use the Knuth Fisher-Yates shuffle for more uniform randomness. But…
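The bias is easy to demonstrate empirically: the naive method on n cards has n^n equally likely swap sequences, which cannot divide evenly among the n! permutations, while Fisher-Yates produces exactly one sequence per permutation. A sketch comparing the two on a 3-card deck:

```python
import random
from collections import Counter

random.seed(0)  # seeded only so the frequency comparison is reproducible

def naive_shuffle(a):
    # biased: swap each position with a random position anywhere in the array
    for i in range(len(a)):
        j = random.randrange(len(a))
        a[i], a[j] = a[j], a[i]
    return a

def fisher_yates(a):
    # unbiased: swap each position only with positions at or before it
    for i in range(len(a) - 1, 0, -1):
        j = random.randrange(i + 1)
        a[i], a[j] = a[j], a[i]
    return a

# count how often each permutation of 3 cards appears
trials = 60000
naive = Counter(tuple(naive_shuffle([0, 1, 2])) for _ in range(trials))
fair = Counter(tuple(fisher_yates([0, 1, 2])) for _ in range(trials))
print(sorted(naive.values()))  # clearly uneven
print(sorted(fair.values()))   # roughly equal
```

For 3 cards the naive shuffle gives 27 sequences over 6 permutations, so some permutations occur with probability 4/27 and others 5/27; the counts above make that skew visible at a glance.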