probability

Generating sorted random numbers without exponentiation involved?

Submitted by 烈酒焚心 on 2019-12-12 17:53:19

Question: I am looking for a mathematical formula or algorithm that can generate uniform random numbers in ascending order in the range [0, 1] without using the division operator. I want to avoid division because I am implementing this in hardware. Thank you.

Answer 1: Generating the numbers in ascending (or descending) order means generating them sequentially, but with the right distribution. That, in turn, means we need to know the distribution of the minimum of a set of size N, and then at
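The truncated answer gestures at the distribution-of-the-minimum approach. A minimal reference sketch (in Python; note that it still uses division and a k-th root, which is exactly the part a hardware implementation would have to approximate or avoid):

```python
import random

def sorted_uniforms(n):
    """Generate n Uniform[0, 1] samples in ascending order, sequentially.

    Uses the distribution of the minimum of k uniforms: if U ~ Uniform(0, 1),
    then 1 - U**(1/k) is distributed like the minimum of k uniforms.  Each
    new sample is drawn from the interval [prev, 1] that remains above the
    previous one.  (Reference version only: it uses division and a k-th
    root, which the hardware version would need to replace.)
    """
    out = []
    prev = 0.0
    for k in range(n, 0, -1):
        u = random.random()
        # Minimum of the k remaining order statistics, rescaled to [prev, 1].
        prev = prev + (1.0 - prev) * (1.0 - u ** (1.0 / k))
        out.append(prev)
    return out
```

Each iteration emits the next order statistic directly, so no final sort is needed.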

Using the MATLAB function “pdf”

Submitted by 这一生的挚爱 on 2019-12-12 16:53:35

Question: I have a Gaussian mixture distribution object obj with 64 dimensions and would like to pass it to the pdf function to find the probability of a certain point. Yet when I type pdf(obj, obj.mu(1,:)) to test the object, it yields a very high value (like 2.4845e+069). That does not make sense, because a probability should lie between zero and one. Is something wrong with my MATLAB? P.S. Even pdf(obj, obj.mu(1,:) + obj.Sigma(1,1)*rand()) yields a high value (2.1682e+069).

Answer 1: First things
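The truncated answer presumably makes the key point: pdf returns a density, not a probability, and a density can be arbitrarily large. A self-contained one-dimensional illustration (plain Python standing in for MATLAB's pdf):

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of the normal distribution N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# A pdf is a density, not a probability: for a tightly concentrated
# distribution it is far larger than 1 (about 398.94 here).
d = normal_pdf(0.0, 0.0, 0.001)
```

In 64 dimensions the per-dimension densities multiply, so values on the order of 1e+69 at a component mean are entirely plausible and not a MATLAB bug; only integrals of the density over a region are probabilities.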

A single elimination tournament - number of possible combinations

Submitted by ⅰ亾dé卋堺 on 2019-12-12 16:40:56

Question: What is the number of combinations in which 8 players taking part in a single-elimination tournament can play? The total number of matches played is 7, but I also need the number of combinations that can form this set.

Answer 1: If it doesn't matter where in the tree a player starts, but only which opponents he/she fights and how far he/she gets, we can say that the left player always wins and then just calculate the number of ways to create the bottom-most row, which is 8! = 40320. The first possibility:
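To make the numbers concrete: the 8! leaf orderings are easy to check, and if one additionally quotients out the 2^7 subtree swaps of the bracket that leave every match-up unchanged (an interpretation the truncated answer does not spell out), 315 distinct brackets remain. A small sketch:

```python
import math

# Orderings of the bottom row of an 8-player single-elimination bracket.
leaf_orders = math.factorial(8)  # 40320

# A perfect binary tree with 8 leaves has 7 internal nodes, and swapping
# the two subtrees under any of them changes nothing about who meets whom.
# Quotienting out those 2**7 symmetries (an assumed reading of
# "combinations") leaves the count of structurally distinct brackets.
distinct_brackets = leaf_orders // 2 ** 7  # 315
```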

How can I implement the Kullback-Leibler loss in TensorFlow?

Submitted by 此生再无相见时 on 2019-12-12 12:14:15

Question: I need to minimize a KL loss in TensorFlow. I tried the function tf.contrib.distributions.kl(dist_a, dist_b, allow_nan=False, name=None), but I failed. I tried to implement it manually: def kl_divergence(p, q): return p * tf.log(p/q) + (1 - p) * tf.log((1-p)/(1-q)). Is this correct?

Answer 1: What you have there is the cross entropy; the KL divergence should be something like: def kl_divergence(p, q): return tf.reduce_sum(p * tf.log(p/q)). This assumes that p and q are both 1-D tensors of float, of the same
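The answer's formula, written as a self-contained sketch in plain Python for discrete distributions (the TensorFlow version replaces the sum with tf.reduce_sum over tensors; note that tf.contrib no longer exists in TF 2.x, where tf.keras.losses.KLDivergence is the usual route):

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) for two discrete distributions given as probability lists.

    Implements sum_i p_i * log(p_i / q_i); terms with p_i == 0 contribute
    nothing by convention.  Always >= 0, and 0 only when p == q.
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```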

Multinomial PMF in Python scipy/numpy

Submitted by 拟墨画扇 on 2019-12-12 08:25:24

Question: Is there a built-in function in scipy/numpy for getting the PMF of a multinomial? I'm not sure whether binom generalizes in the correct way, e.g.: # Attempt to define multinomial with n = 10, p = [0.1, 0.1, 0.8] rv = scipy.stats.binom(10, [0.1, 0.1, 0.8]) # Score the outcome 4, 4, 2 rv.pmf([4, 4, 2]). What is the correct way to do this? Thanks.

Answer 1: There's no built-in function that I know of, and the binomial probabilities do not generalize (you need to normalise over a different set of possible
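For reference, the multinomial PMF is short enough to write directly; newer SciPy versions (0.19+, postdating this thread) also ship scipy.stats.multinomial(n, p).pmf(counts). A plain-Python sketch:

```python
import math

def multinomial_pmf(counts, p):
    """PMF of the multinomial: n! / (k1! ... km!) * p1**k1 * ... * pm**km.

    counts: observed counts per category (summing to n).
    p:      category probabilities (summing to 1).
    """
    n = sum(counts)
    coef = math.factorial(n)
    for k in counts:
        coef //= math.factorial(k)
    prob = 1.0
    for k, pi in zip(counts, p):
        prob *= pi ** k
    return coef * prob
```

For the question's example, multinomial_pmf([4, 4, 2], [0.1, 0.1, 0.8]) gives about 2.016e-05.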

Uniform distribution from a fractal Perlin noise function in C#

Submitted by 孤街醉人 on 2019-12-12 07:48:09

Question: My Perlin noise function (which adds up 6 octaves of 3D simplex noise at 0.75 persistence) generates a 2D array of doubles. These numbers each come out normalized to [-1, 1], with mean at 0. I clamp them to avoid exceptions, which I think are due to floating-point accuracy issues, but I am fairly sure my scaling factor is good enough to restrict the noise output to exactly this neighborhood in the ideal case. Anyway, those are details. The point is, here is a 256-by-256 array of noise:
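One standard fix, assuming the summed octaves are approximately Gaussian (a reasonable consequence of adding several roughly independent octaves), is the probability integral transform: push each value through an estimated normal CDF so the histogram flattens toward uniform. A sketch, where sigma = 0.25 is a placeholder to be fitted to the actual 256×256 array (e.g. its sample standard deviation):

```python
import math

def uniformize(value, sigma=0.25):
    """Map a roughly Gaussian noise value (mean 0) toward Uniform[0, 1].

    Applies the CDF of N(0, sigma^2) via math.erf (probability integral
    transform).  sigma = 0.25 is an assumed spread; fit it to the real
    noise array for the flattest result.
    """
    return 0.5 * (1.0 + math.erf(value / (sigma * math.sqrt(2.0))))
```

Monotonicity means the transform preserves the spatial structure of the noise while equalizing its value distribution.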

How to pick a number based on probability?

Submitted by 懵懂的女人 on 2019-12-12 06:58:27

Question: I want to select a random number from 0, 1, 2, ..., n; however, I want the chance of selecting k (for 0 < k < n) to be lower than the chance of selecting k - 1 by a multiplicative factor x, so x = (k - 1) / k. The bigger the number, the smaller the chance of picking it. As an answer I want to see an implementation of the following method: int pickANumber(n, x). This is for a game that I am developing. I saw these related questions, but they are not exactly the same: How to pick an item by its probability; C Function for
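One way to meet the requirement "each successive number is x times less likely than the previous one" is to give k the weight x**k and sample from the normalized weights. A hedged Python sketch (the thread itself is language-agnostic):

```python
import random

def pick_a_number(n, x, rng=random):
    """Pick k in 0..n with P(k) proportional to x**k, for 0 < x < 1.

    Each successive number is x times less likely than its predecessor,
    so larger numbers are progressively rarer.
    """
    weights = [x ** k for k in range(n + 1)]
    total = sum(weights)
    r = rng.random() * total       # uniform point in the cumulative mass
    for k, w in enumerate(weights):
        r -= w
        if r < 0:
            return k
    return n  # guard against floating-point round-off
```

With x = 0.5, for example, 1 is drawn about half as often as 0, 2 about half as often as 1, and so on.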

Probability for variable value in agentsets, NetLogo

Submitted by 天大地大妈咪最大 on 2019-12-12 04:17:13

Question: I am trying to use probability to assign individual values of [0] or [1] to a turtles-own variable in NetLogo, but I have only found ways of printing or reporting probability outputs, rather than using them to determine a variable's value. Example: I am asking two turtles to check whether they each want to exchange information with each other, and have assigned a variable exchangeinfo. If exchangeinfo = 0, no information exchange happens. If exchangeinfo = 1, information exchange occurs.
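In NetLogo the usual idiom is a draw against random-float 1 (something like set exchangeinfo ifelse-value (random-float 1 < p) [1] [0]). The underlying Bernoulli draw, sketched in Python for clarity:

```python
import random

def bernoulli(p, rng=random):
    """Return 1 with probability p and 0 otherwise.

    This is the draw behind assigning exchangeinfo: compare a uniform
    random number in [0, 1) against the desired probability p.
    """
    return 1 if rng.random() < p else 0
```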

How to calculate probability of a binary function in Python?

Submitted by 戏子无情 on 2019-12-12 03:52:39

Question: Let us consider the following function: $f(x)=\begin{cases} 0, & \Pr(f(x)=0)=x \\ 1, & \Pr(f(x)=1)=1-x\end{cases}$, where $0 < x < 1$.

Trial: I have tried the following code, but I'm not sure whether it is correct or not:

import random

def f(x):
    b = random.randint(0, 1)
    return b

x = 0.3
count0 = 0
count1 = 0
for i in range(1000):
    if f(x) == 0:
        count0 = count0 + 1
    else:
        count1 = count1 + 1
print 'pr(f(x)=0)=', count0 * 1.0 / 1000
print 'pr(f(x)=1)=', count1 * 1.0 / 1000

Does my code give the correct
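The draft's random.randint(0, 1) is a fair coin and ignores x entirely; the standard fix is to compare a uniform draw against x. A corrected sketch:

```python
import random

def f(x):
    """Return 0 with probability x and 1 with probability 1 - x.

    The original draft used random.randint(0, 1), which always gives a
    50/50 split regardless of x; comparing a Uniform[0, 1) draw against
    x realizes the intended distribution.
    """
    return 0 if random.random() < x else 1
```

Repeating the same 1000-trial frequency count on this version should give pr(f(x)=0) close to 0.3 for x = 0.3.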

Select a user according to set probability

Submitted by 给你一囗甜甜゛ on 2019-12-12 03:03:16

Question: My database stores info about users, their groups, and relationships. One of the columns in the users table, fcount, tracks the number of relationships each user has had within their current group; it starts at 0, and I increment it when appropriate. I need to write a script that selects all users in a given group and then randomly selects one of them, with the probability of being selected based on the number of relationships the user has had; fewer relationships means a greater probability
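On the application side, one common scheme (an assumption; the thread does not fix a formula) weights each user by 1 / (1 + fcount) and samples proportionally, so a user with no relationships is twice as likely to be picked as one with a single relationship. A sketch:

```python
import random

def pick_user(users, rng=random):
    """Pick one user from a group, favoring those with fewer relationships.

    users: list of (user_id, fcount) pairs fetched from the database.
    Each user gets weight 1 / (1 + fcount) -- one of several reasonable
    inverse weightings -- and is sampled with probability proportional
    to that weight.
    """
    weights = [1.0 / (1 + fcount) for _, fcount in users]
    total = sum(weights)
    r = rng.random() * total
    for (user_id, _), w in zip(users, weights):
        r -= w
        if r < 0:
            return user_id
    return users[-1][0]  # floating-point round-off fallback
```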