multinomial

Multinomial regression with imputed data

杀马特。学长 韩版系。学妹 posted on 2019-12-01 11:30:35
Question: I need to impute missing data and then conduct multinomial regression on the generated datasets. I have tried using mice for the imputation and the multinom function from nnet for the multinomial regression, but this gives me unreadable output. Here is an example using the nhanes2 dataset that ships with the mice package:

library(mice)
library(nnet)
test <- mice(nhanes2, meth = c('sample', 'pmm', 'logreg', 'norm'))
# age is categorical, bmi is continuous
m <- with(test, multinom(age ~ bmi, model = TRUE))
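mice and nnet are R packages, so as a language-neutral illustration of the workflow the question describes (impute the missing predictor, then fit a multinomial model), here is a hedged NumPy sketch. The toy data and every name in it are invented for illustration, and it uses single mean imputation with a hand-rolled softmax regression; mice itself performs *multiple* imputation and pools the fitted models, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for nhanes2: 3 age classes, bmi depends on class, ~20% missing.
n = 300
y = rng.integers(0, 3, size=n)
bmi = 22.0 + 2.0 * y + rng.normal(0.0, 1.5, size=n)
bmi[rng.random(n) < 0.2] = np.nan

# Step 1: impute. (Single mean imputation for brevity; mice does multiple
# imputation and pools the resulting fits.)
x = np.where(np.isnan(bmi), np.nanmean(bmi), bmi)
x = (x - x.mean()) / x.std()            # standardize for stable gradient descent
X = np.column_stack([np.ones(n), x])    # intercept + predictor

# Step 2: multinomial (softmax) regression fit by gradient descent.
K = 3
Y = np.eye(K)[y]                        # one-hot targets
W = np.zeros((2, K))
for _ in range(500):
    Z = X @ W
    P = np.exp(Z - Z.max(axis=1, keepdims=True))   # numerically stable softmax
    P /= P.sum(axis=1, keepdims=True)
    W -= 0.1 * (X.T @ (P - Y)) / n      # gradient of the negative log-likelihood

pred = (X @ W).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```

The fitted class probabilities in P play the same role as the fitted values of multinom; in the R workflow you would additionally pool the per-imputation fits with mice's pool().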

Sampling without replacement from a given non-uniform distribution in TensorFlow

北慕城南 posted on 2019-12-01 03:53:58
I'm looking for something similar to numpy.random.choice(range(3), size=2, replace=False, p=[0.1, 0.2, 0.7]) in TensorFlow. The closest op to it seems to be tf.multinomial(tf.log(p)), which takes logits as input, but it can't sample without replacement. Is there any other way to sample from a non-uniform distribution in TensorFlow? Thanks.

You could just use tf.py_func to wrap numpy.random.choice and make it available as a TensorFlow op:

a = tf.placeholder(tf.float32)
size = tf.placeholder(tf.int32)
replace = tf.placeholder(tf.bool)
p = tf.placeholder(tf.float32)
y = tf.py_func(np.random.choice, [a, size, replace, p], tf.int64)
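A NumPy sketch of the two standard answers to this question: the direct numpy.random.choice call from the question, and the Gumbel-top-k trick, which works on log-probabilities (logits) and therefore ports naturally to TensorFlow tensor ops. This is a sketch under the stated assumptions, not tested against any particular TensorFlow version.

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.1, 0.2, 0.7])

# Direct NumPy equivalent from the question: 2 distinct indices, weighted by p.
sample = rng.choice(3, size=2, replace=False, p=p)

# Gumbel-top-k trick: adding independent Gumbel noise to the log-probabilities
# and taking the top-k indices draws k items *without* replacement from p.
# Because it operates on logits with only log / noise / top-k operations, the
# same recipe can be expressed with TensorFlow tensor ops, unlike
# tf.multinomial, which samples with replacement.
logits = np.log(p)
g = rng.gumbel(size=p.shape)
topk = np.argsort(logits + g)[-2:][::-1]   # indices of the 2 largest values
```

Both sample and topk are pairs of distinct indices drawn in proportion to p.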

GBM multinomial distribution, how to use predict() to get predicted class?

拟墨画扇 posted on 2019-12-01 03:46:22
I am using the multinomial distribution from the gbm package in R. When I use the predict function, I get a series of values:

5.086328 -4.738346 -8.492738 -5.980720 -4.351102 -4.738044 -3.220387 -4.732654

but I want the probability of each class occurring. How do I recover the probabilities? Thank you.

Take a look at ?predict.gbm; you'll see that the function has a "type" parameter. Try predict(<gbm object>, <new data>, type="response").

smci: predict.gbm(..., type='response') is not implemented for multinomial, or indeed any distribution other than bernoulli or poisson. So
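The usual manual fix, since type="response" is unavailable for multinomial fits, is to treat the raw per-class scores as logits and normalize them with a softmax. Whether gbm's multinomial link is exactly a softmax should be confirmed against the gbm documentation; this NumPy sketch assumes it is, using the scores quoted in the question.

```python
import numpy as np

# Raw per-class scores like those predict() returns for a multinomial gbm fit.
scores = np.array([5.086328, -4.738346, -8.492738, -5.980720,
                   -4.351102, -4.738044, -3.220387, -4.732654])

# Softmax: exponentiate and normalize so the scores become probabilities.
# (Subtracting the max first is a standard numerical-stability step.)
e = np.exp(scores - scores.max())
probs = e / e.sum()

print(probs.argmax())   # index of the most likely class
```

In R the equivalent one-liner would be exp(scores) / sum(exp(scores)) applied to each row of the prediction matrix.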

Multinomial logit in R: mlogit versus nnet

≡放荡痞女 posted on 2019-11-29 08:34:49
Question: I want to run a multinomial logit in R and have used two libraries, nnet and mlogit, which produce different results and report different types of statistics. My questions are: What is the source of the discrepancy between the coefficients and standard errors reported by nnet and those reported by mlogit? I would like to report my results to a LaTeX file using stargazer. When doing so, there is a problematic tradeoff: if I use the results from mlogit, then I get the statistics I wish, such as