Representing continuous probability distributions

Alexandre C.

No need for histograms or symbolic computation: everything can be done at the language level in closed form, if the right point of view is taken.

[I shall use the terms "measure" and "distribution" interchangeably. Also, my Haskell is rusty, and I ask you to forgive me for being imprecise in this area.]

Probability distributions are really codata.

Let mu be a probability measure. The only thing you can do with a measure is integrate it against a test function (this is one possible mathematical definition of "measure"). Note that this is what you will eventually do anyway: for instance, integrating against the identity function gives the mean:

mean :: Measure -> Double
mean mu = mu id

another example:

variance :: Measure -> Double
variance mu = (mu $ \x -> x ^ 2) - (mean mu) ^ 2

another example, which computes P(X < x) for X distributed according to mu:

cdf :: Measure -> Double -> Double
cdf mu x = mu $ \z -> if z < x then 1 else 0

This suggests an approach by duality.

The type Measure shall therefore denote the type (Double -> Double) -> Double. This allows you to model results of MC simulation, numerical/symbolic quadrature against a PDF, etc.
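In Haskell, that is a one-line type synonym (a sketch; the snippets in this answer are written against it):

type Measure = (Double -> Double) -> Double

For instance, the function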

empirical :: [Double] -> Measure
empirical xs f = sum (map f xs) / fromIntegral (length xs)

returns the integral of f against an empirical measure obtained by e.g. MC sampling. Also

from_pdf :: (Double -> Double) -> Measure
from_pdf rho f = my_favorite_quadrature_method rho f

constructs a measure from a (regular) density.
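As one concrete (and entirely assumed) choice of quadrature to stand in for my_favorite_quadrature_method, here is a sketch using a midpoint rule on a truncated domain:

from_pdf_midpoint :: (Double -> Double) -> Measure
from_pdf_midpoint rho f = sum [ rho x * f x * dx | x <- grid ]
  where
    dx   = 0.001                                   -- step size (assumed)
    grid = [-10 + dx / 2, -10 + 3 * dx / 2 .. 10]  -- cutoff at +/-10 (assumed)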

Now, the good news. If mu and nu are two measures, the convolution mu ** nu is given by:

(mu ** nu) f = nu $ \y -> (mu $ \x -> f $ x + y)

So, given two measures, you can integrate against their convolution.
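In compiling form, a sketch (renamed conv, since (**) clashes with Prelude's exponentiation operator):

conv :: Measure -> Measure -> Measure
conv mu nu f = nu $ \y -> mu $ \x -> f (x + y)

-- if X ~ mu and Y ~ nu are independent, then X + Y ~ conv mu nu,
-- so e.g. mean (conv mu nu) agrees with mean mu + mean nu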

Also, given a random variable X of law mu, the law of a * X is given by:

rescale :: Double -> Measure -> Measure
rescale a mu f = mu $ \x -> f(a * x)

Also, the distribution of phi(X) is given by the image (pushforward) measure phi_* mu; in our framework:

apply :: (Double -> Double) -> Measure -> Measure
apply phi mu f = mu $ f . phi

So now you can easily work out an embedded language for measures. There is much more to do here, particularly with respect to sample spaces other than the real line, dependencies between random variables, and conditioning, but I hope you get the point.

In particular, the pushforward is functorial:

newtype Measure a = Measure { integrate :: (a -> Double) -> Double }

instance Functor Measure where
    fmap phi (Measure mu) = Measure $ \f -> mu (f . phi)

It is a monad too (exercise -- hint: this very much looks like the continuation monad. What is return? What is the analog of call/cc?).
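For readers who want to check their answer, a minimal sketch assuming the newtype above (modern GHC also requires an Applicative instance):

instance Applicative Measure where
    pure x = Measure $ \f -> f x      -- return: the Dirac measure at x
    Measure mf <*> Measure mx = Measure $ \f ->
        mf $ \g -> mx $ \x -> f (g x)

instance Monad Measure where
    -- integrate x against mu, then integrate f against the law k x
    Measure mu >>= k = Measure $ \f -> mu $ \x -> integrate (k x) f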

Also, combined with a differential geometry framework, this can probably be turned into something which computes Bayesian posterior distributions automatically.

At the end of the day, you can write stuff like

m = mean $ apply cos (from_pdf gauss ** empirical samples)

to compute the mean of cos(X + Y), where X has pdf gauss and Y has been sampled by an MC method whose results are in samples (data is a reserved word in Haskell, so the sample list is called samples here).

Jeremy Gibbons

Probability distributions form a monad; see e.g. the work of Claire Jones and the LICS 1989 paper by Jones and Plotkin ("A probabilistic powerdomain of evaluations"), but the ideas go back to a 1982 paper by Giry (DOI 10.1007/BFb0092872) and to a 1962 note by Lawvere that I cannot track down (http://permalink.gmane.org/gmane.science.mathematics.categories/6541).

But I don't see the comonad: there's no way to get an "a" out of an "(a->Double)->Double". Perhaps if you make it polymorphic - (a->r)->r for all r? (That's the continuation monad.)
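(For reference, that polymorphic variant is exactly the standard continuation monad:

newtype Cont r a = Cont { runCont :: (a -> r) -> r }

and the Measure newtype above is the r = Double special case, minus the polymorphism.)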

Is there anything that stops you from employing a mini-language for this?

By that I mean, define a language that lets you write f = x + y and evaluates f for you just as written. And similarly for g = x * z, h = y(x), etc. ad nauseam. (The semantics I'm suggesting call for the evaluator to select a random number on each innermost PDF appearing on the RHS at evaluation time, and not to try to understand the composed form of the resulting PDFs. This may not be fast enough...)
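A minimal sketch of such an evaluator in Haskell (the names and combinators here are my own illustration, not an existing library): a random variable is an action that draws a fresh sample each time it runs, so f = x + y resamples x and y on every evaluation.

import System.Random (randomRIO)

type RV = IO Double   -- a random variable: an action drawing one sample

uniform :: Double -> Double -> RV
uniform lo hi = randomRIO (lo, hi)

plus, times :: RV -> RV -> RV
plus  x y = (+) <$> x <*> y   -- draw from x and y, add the samples
times x y = (*) <$> x <*> y

-- estimate a statistic by brute-force resampling
estimateMean :: Int -> RV -> IO Double
estimateMean n rv = do
    xs <- sequence (replicate n rv)
    return (sum xs / fromIntegral n)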


Assuming that you understand the precision limits you need, you can represent a PDF fairly simply with a histogram or spline (the former being a degenerate case of the latter). If you need to mix analytically defined PDFs with experimentally determined ones, you'll have to add a type mechanism.


A histogram is just an array, the contents of which represent the incidence in a particular region of the input range. You haven't said if you have a language preference, so I'll assume something C-like. You need to know the bin structure (uniform sizes are easy, but not always best), including the high and low limits, and possibly the normalization:

struct histogram_struct {
  int bins;                /* assumed to be uniform */
  double low;
  double high;
  /* double normalization; */
  /* double *errors; */    /* if using, initialize with enough space,
                            * and store _squared_ errors
                            */
  double contents[];       /* flexible array member (C99) */
};

This kind of thing is very common in scientific analysis software, and you might want to use an existing implementation.

I worked on similar problems for my dissertation.

One way to compute approximate convolutions is to take the Fourier transform of the density functions (histograms in this case), multiply them, then take the inverse Fourier transform to get the convolution.
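As a sketch of the operation being sped up (in the Haskell style used earlier in this thread): the direct discrete convolution of two histograms' bin weights, assuming equal bin widths. The FFT route computes the same result in O((n+m) log(n+m)) rather than O(n*m).

-- direct convolution: result !! k = sum over i of xs !! i * ys !! (k - i)
convolve :: [Double] -> [Double] -> [Double]
convolve xs ys =
    [ sum (zipWith (*) (take (k + 1) (xs ++ repeat 0))
                       (reverse (take (k + 1) (ys ++ repeat 0))))
    | k <- [0 .. length xs + length ys - 2] ]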

Look at Appendix C of my dissertation for formulas for various special cases of operations on probability distributions. You can find the dissertation at: http://riso.sourceforge.net

I wrote Java code to carry out those operations. You can find the code at: https://sourceforge.net/projects/riso

Autonomous mobile robotics deals with similar issues in localization and navigation, in particular Markov localization and the Kalman filter (sensor fusion). See "An experimental comparison of localization methods continued", for example.

Another approach you could borrow from mobile robots is path planning using potential fields.

A couple of responses:

1) If you have empirically determined PDFs, then either you have histograms or you have an approximation to a parametric PDF. A PDF is a continuous function, and you don't have infinite data...

2) Let's assume that the variables are independent, so that p(x,y) = p(x)p(y). If you make the PDFs discrete, then P(f(x,y) meets your target) is the sum of p(x)p(y) over all the combinations of x and y such that f(x,y) meets your target (see the sketch at the end of this answer).

If you are going to fit the empirical PDFs to standard PDFs, e.g. the normal distribution, then you can use already-determined functions to figure out the sum, etc.

If the variables are not independent, then you have more trouble on your hands and I think you have to use copulas.

I think that defining your own mini-language, etc., is overkill. You can do this with arrays...
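As promised in point 2, a minimal sketch of the array approach in Haskell (the names and the (value, probability) list representation are my own assumptions):

probEvent :: (Double -> Double -> Double)   -- f
          -> (Double -> Bool)               -- "meets your target"
          -> [(Double, Double)]             -- discretized p(x)
          -> [(Double, Double)]             -- discretized p(y)
          -> Double
probEvent f target px py =
    sum [ p * q | (x, p) <- px, (y, q) <- py, target (f x y) ]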

Some initial thoughts:

First, Mathematica has a nice facility for doing this with exact distributions.

Second, representation as histograms (i.e., empirical PDFs) is problematic since you have to make choices about bin size. That can be avoided by storing a cumulative distribution instead, i.e., an empirical CDF. (In fact, you then retain the ability to recreate the full data set of samples that the empirical distribution is based on.)

Here's some ugly Mathematica code to take a list of samples and return an empirical CDF, namely a list of value-probability pairs. Run the output of this through ListPlot to see a plot of the empirical CDF.

empiricalCDF[t_] := Flatten[{{#[[2,1]],#[[1,2]]},#[[2]]}&/@Partition[Prepend[Transpose[{#[[1]], Rest[FoldList[Plus,0,#[[2]]]]/Length[t]}&[Transpose[{First[#],Length[#]}&/@ Split[Sort[t]]]]],{Null,0}],2,1],1]
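For readers without Mathematica, here is a simplified analogue as a Haskell sketch (it returns one value-probability pair per distinct sample, without the extra step points the Mathematica version emits for plotting):

import Data.List (group, sort)

empiricalCDF :: [Double] -> [(Double, Double)]
empiricalCDF t = zip values (map (/ n) (scanl1 (+) counts))
  where
    runs   = group (sort t)            -- runs of equal sample values
    values = map head runs             -- distinct sample values, ascending
    counts = map (fromIntegral . length) runs
    n      = fromIntegral (length t)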

Finally, here's some information on combining discrete probability distributions:

http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter7.pdf

I think the histograms or the list of 1/N area regions is a good idea. For the sake of argument, I'll assume that you'll have a fixed N for all distributions.

Use the paper you linked in edit 4 to generate the new distribution. Then approximate it with a new N-element distribution.

If you don't want N to be fixed, it's even easier. Take each convex polygon (trapezoid or triangle) in the newly generated distribution and approximate it with a uniform distribution.

Another suggestion is to use kernel densities. Gaussian kernels in particular are relatively easy to work with... except that the representations quickly explode in size without care: for instance, convolving two Gaussian mixtures with n and m components yields a mixture with n*m components. Depending on the application, there are additional approximation techniques like importance sampling that can be used.
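For concreteness, a sketch of evaluating such a Gaussian kernel density estimate at a point (the bandwidth is an assumed smoothing parameter; the function name is mine):

gaussianKDE :: Double -> [Double] -> Double -> Double
gaussianKDE bandwidth samples x =
    sum [ gaussian ((x - s) / bandwidth) | s <- samples ]
        / (bandwidth * fromIntegral (length samples))
  where
    -- standard normal density
    gaussian u = exp (-u * u / 2) / sqrt (2 * pi)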

If you want some fun, try representing them symbolically, like Maple or Mathematica would do. Maple uses directed acyclic graphs, while Mathematica uses a Lisp-like list approach (I believe, but it's been a loooong time since I even thought about this).

Do all your manipulations symbolically, then at the end push through numerical values. (Or just find a way to shell out to one of those systems and do the computations there.)

Paul.
