multinomial

Why does multinom() predict many rows of probabilities for each level of the outcome?

Submitted by 柔情痞子 on 2019-12-11 18:20:59
Question: I have a multinomial logistic regression whose outcome variable has 6 levels: 10, 20, 60, 70, 80, 90.

    test <- multinom(y ~ x1 + x2 + as.factor(x3), data = data1)

I want to predict the probabilities associated with each level of y for a given set of input values, so I run this:

    dfin <- data.frame(ses = c(10, 20, 60, 70, 80, 90), x1 = 2.1, x2 = 4, x3 = 40)
    predict(test, todaydata = dfin, type = "probs")

But instead of getting 6 probabilities (one for each level of the outcome), I got many, many rows of probabilities.
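The likely culprit: predict.multinom has no todaydata argument, so R silently absorbs it into ... and returns fitted probabilities for every training row; passing newdata = dfin should give one row per new observation. For intuition about the expected output shape, here is a minimal analogous sketch in Python with scikit-learn (the data and feature values are made up):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 3))                       # stand-ins for x1, x2, x3
    y = rng.choice([10, 20, 60, 70, 80, 90], size=300)  # six outcome levels

    model = LogisticRegression(max_iter=1000).fit(X, y)

    X_new = np.array([[2.1, 4.0, 40.0]])  # one new observation
    print(model.predict_proba(X_new))     # one row, six columns: one probability per level

One input row in, one row of six probabilities out; getting many rows back means many rows went in, which is exactly what happens when predict falls back to the training data.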

scikit-learn - multinomial logistic regression with probabilities as a target variable

Submitted by 百般思念 on 2019-12-11 07:59:52
Question: I'm implementing a multinomial logistic regression model in Python using scikit-learn. However, I'd like to use a probability distribution over the classes as my target variable. As an example, say the target is a 3-class variable that looks as follows:

       class_1  class_2  class_3
    0      0.0      0.0      1.0
    1      1.0      0.0      0.0
    2      0.0      0.5      0.5
    3      0.2      0.3      0.5
    4      0.5      0.1      0.4

so that the values in every row sum to 1. How could I fit a model like this? When I try:

    model = LogisticRegression
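scikit-learn's LogisticRegression expects hard class labels, but because it minimizes weighted cross-entropy, soft targets can be emulated by repeating each sample once per class and weighting the copies by the class probabilities. A minimal sketch of that workaround (the feature matrix is made up):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X = np.random.rand(5, 4)                  # made-up features
    P = np.array([[0.0, 0.0, 1.0],            # soft targets: one probability row per sample
                  [1.0, 0.0, 0.0],
                  [0.0, 0.5, 0.5],
                  [0.2, 0.3, 0.5],
                  [0.5, 0.1, 0.4]])

    n_samples, n_classes = P.shape
    # Repeat each sample once per class, label the copy with that class,
    # and weight it by the class probability.
    X_rep = np.repeat(X, n_classes, axis=0)
    y_rep = np.tile(np.arange(n_classes), n_samples)
    w = P.ravel()

    model = LogisticRegression(max_iter=1000)
    model.fit(X_rep, y_rep, sample_weight=w)
    print(model.predict_proba(X[:2]))

The weighted log-loss of this expanded problem equals the cross-entropy against the original probability rows, so the fit is the one the question asks for.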

Improving the flow of a Python classifier and combining features

Submitted by 本秂侑毒 on 2019-12-11 04:42:01
Question: I am trying to create a classifier to categorize websites. I am doing this for the very first time, so it's all quite new to me. Currently I am building bag-of-words features from a couple of parts of the web page (e.g. title, text, headings). It looks like this:

    from sklearn.feature_extraction.text import CountVectorizer

    countvect_text = CountVectorizer(encoding="cp1252", stop_words="english")
    countvect_title = CountVectorizer(encoding="cp1252", stop_words="english")
    countvect_headings =
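The usual way to combine several text columns, each with its own vectorizer, is a ColumnTransformer feeding one downstream classifier. A sketch assuming scikit-learn >= 0.20 (the toy data frame and column names are illustrative):

    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    df = pd.DataFrame({
        "title":    ["Cheap flights", "Python tutorial"],
        "text":     ["Book cheap flights online", "Learn Python step by step"],
        "headings": ["Flight Deals", "Getting Started"],
        "label":    ["travel", "programming"],
    })

    # One CountVectorizer per page part; their outputs are concatenated side by side.
    features = ColumnTransformer([
        ("title",    CountVectorizer(stop_words="english"), "title"),
        ("text",     CountVectorizer(stop_words="english"), "text"),
        ("headings", CountVectorizer(stop_words="english"), "headings"),
    ])

    clf = Pipeline([("features", features), ("model", LogisticRegression())])
    clf.fit(df, df["label"])
    print(clf.predict(df))

Wrapping everything in a Pipeline also means the vectorizers are refit only on training folds during cross-validation, which avoids leakage.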

ChoiceModelR - Hierarchical Bayes Multinomial Logit Model

Submitted by 匆匆过客 on 2019-12-10 11:41:21
Question: I hope that some of you have experience with the R package ChoiceModelR by Sermas and Colias, which estimates a hierarchical Bayes multinomial logit model. I am quite a newbie to both R and hierarchical Bayes, but I tried to get some estimates using the script Sermas and Colias provide in the help file. I have a data set in the same structure as theirs (ID, choice set, alternative, independent variables, and choice variable). I have four independent variables, all of them binary coded as categorical variables, none of them restricted. I have eight choice sets with

How to use predict with multinom() with intercept in R?

Submitted by 喜欢而已 on 2019-12-10 10:39:14
Question: I have run the multinom() function in R, but when I try to predict on a new sample, it keeps giving an error. This is the code:

    library(nnet)
    dta <- data.frame(replicate(10, runif(10)))
    names(dta) <- c('y', paste0('x', 1:9))
    res4 <- multinom(y ~ as.matrix(dta[2:10]), data = dta)
    # make new data to predict
    nd <- 0.1 * dta[1, 2:10]
    pred <- predict(res4, newdata = nd)

and this is the error:

    Error in predict.multinom(res4, newdata = nd) :
      NAs are not allowed in subscripted assignments

I think it has to do with the
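A likely culprit: because the formula embeds as.matrix(dta[2:10]) rather than naming columns, predict cannot rebuild the design matrix from nd; fitting with named terms (e.g. y ~ .) lets prediction match newdata columns by name. The same name-matching principle, sketched in Python with statsmodels' formula API (the data here is made up):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    df = pd.DataFrame({"x1": rng.uniform(size=100),
                       "x2": rng.uniform(size=100),
                       "y":  rng.integers(0, 3, size=100)})  # 3 outcome levels

    # The formula refers to columns by NAME, so predict() can rebuild the
    # design matrix from any new data frame with the same column names.
    res = smf.mnlogit("y ~ x1 + x2", data=df).fit(disp=0)
    nd = pd.DataFrame({"x1": [0.05], "x2": [0.02]})
    print(res.predict(nd))  # one row of probabilities, one column per level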

Sum of dice rolls

Submitted by 不问归期 on 2019-12-10 09:57:41
Question: I am trying to compute the probability of getting a specific sum from rolling n s-sided dice. I found the formula in this link (formula 10). This is the code that I wrote in C:

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #define n 2  // number of dice
    #define s 6  // number of sides per die

    int fact(int x){
        int y = 1;
        if(x){
            for(int i = 1; i <= x; i++)
                y *= i;
        }
        return y;
    }

    int C(int x, int y){
        int z = fact(x) / (fact(y) * fact(x - y));
        return z;
    }

    int main(){
        int p, k, kmax;
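For reference, formula 10 in that link is P(p) = s^(-n) * sum over k from 0 to floor((p-n)/s) of (-1)^k C(n,k) C(p - s*k - 1, n - 1). A quick Python cross-check is handy for validating the C program (assumes Python >= 3.8 for math.comb); note also that the factorial-based C(x, y) above overflows a 32-bit int already for x >= 13, a common pitfall with this formula:

    from math import comb

    def dice_sum_prob(p, n=2, s=6):
        """Probability that n s-sided dice sum to p (formula 10)."""
        kmax = (p - n) // s
        total = sum((-1) ** k * comb(n, k) * comb(p - s * k - 1, n - 1)
                    for k in range(kmax + 1))
        return total / s ** n

    print(dice_sum_prob(7))   # 0.1666... = 6/36 for two six-sided dice
    print(dice_sum_prob(12))  # 0.0277... = 1/36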

Multinomial distribution in PyMC

Submitted by 旧巷老猫 on 2019-12-08 21:42:36
I am a newbie to PyMC. I have read the required material on GitHub and was doing fine until I got stuck on this problem. I want to build a collection of multinomial random variables that I can later sample using MCMC. The best I can do is:

    rv = [Multinomial("rv", count[i], p_d[i]) for i in xrange(0, len(count))]
    for i in rv:
        print i.value
        i.random()
    for i in rv:
        print i.value

But this is no good, since I want to be able to call rv.value and rv.random(); otherwise I won't be able to sample from it. count is a list of non-negative integers, each giving the value of n for that distribution, e.g. a
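The question targets the old PyMC 2 API. For what it's worth, in modern PyMC (v4 and later) Multinomial broadcasts over a vector of totals and a matrix of probabilities, so the whole collection can be one vectorized random variable; a hedged sketch assuming PyMC >= 4 (the toy counts and probabilities are made up):

    import numpy as np
    import pymc as pm

    count = np.array([10, 5, 20])     # one total per multinomial
    p_d = np.array([[0.5, 0.5],
                    [0.2, 0.8],
                    [0.9, 0.1]])      # one probability row per multinomial

    with pm.Model():
        # A single batched RV instead of a Python list of scalar RVs.
        rv = pm.Multinomial("rv", n=count, p=p_d, shape=p_d.shape)
        print(pm.draw(rv))            # one draw for the whole collection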

Something similar to permutation accuracy importance in the h2o package

Submitted by 坚强是说给别人听的谎言 on 2019-12-07 13:35:34
Question: I fitted a random forest for my multinomial target with the randomForest package in R. Looking into variable importance, I found permutation accuracy importance, which is what I was looking for in my analysis. I fitted a random forest with the h2o package too, but the only measures it shows me are relative_importance, scaled_importance, and percentage. My question is: can I extract a measure that shows, for the variable I want to examine, which level of the target it classifies best?
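h2o's relative/scaled/percentage importances are computed from split gains rather than permutation (h2o did later add h2o.permutation_importance, so checking the installed version may help). The per-level permutation idea itself is easy to sketch in Python with scikit-learn >= 0.22: score each feature by how much shuffling it hurts a metric restricted to one target level. The dataset and scorer choice here are illustrative:

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.metrics import f1_score, make_scorer

    X, y = load_iris(return_X_y=True)
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # For each target level, permutation importance under an F1 score that
    # only credits that level: a per-class accuracy-style importance.
    for level in np.unique(y):
        scorer = make_scorer(f1_score, labels=[level], average="macro")
        result = permutation_importance(rf, X, y, scoring=scorer,
                                        n_repeats=10, random_state=0)
        print(level, np.round(result.importances_mean, 3))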

TensorFlow: Efficient multinomial sampling (Theano x50 faster?)

Submitted by 佐手、 on 2019-12-06 13:26:13
I want to be able to sample from a multinomial distribution very efficiently, and apparently my TensorFlow code is very... very slow... The idea is that I have:

    counts = [40, 50, 26, ..., 19]   # a vector of totals, for example
    probs = [[0.1, ..., 0.5],
             ...,
             [0.3, ..., 0.02]]       # a matrix of probabilities such that np.sum(probs, axis=1) = 1

Let's say len(counts) = N and probs.shape = (N, 50). What I want to do is (in our example): sample 40 times from the first probability row of probs, sample 50 times from the second row, ..., and sample 19 times from the Nth row.
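One way to make this fast in TensorFlow 2 is to avoid per-row multinomial calls: draw max(counts) categorical samples for every row in a single tf.random.categorical call, then mask each row down to its own count before summing one-hot vectors. A sketch with made-up shapes (tfp.distributions.Multinomial also accepts a batched total_count, if TensorFlow Probability is available):

    import tensorflow as tf

    counts = tf.constant([40, 50, 26, 19])                    # N totals
    probs = tf.random.uniform((4, 50))
    probs = probs / tf.reduce_sum(probs, axis=1, keepdims=True)

    max_n = tf.reduce_max(counts)
    # One vectorized draw: max(counts) categorical samples per row.
    idx = tf.random.categorical(tf.math.log(probs), num_samples=max_n)   # (N, max_n)
    one_hot = tf.one_hot(idx, depth=tf.shape(probs)[1], dtype=tf.int32)  # (N, max_n, K)
    # Keep only the first counts[i] draws of row i, discard the rest.
    mask = tf.sequence_mask(counts, maxlen=max_n, dtype=tf.int32)        # (N, max_n)
    samples = tf.reduce_sum(one_hot * mask[:, :, None], axis=1)          # (N, K) category counts

    print(tf.reduce_sum(samples, axis=1))  # equals counts

This trades some wasted draws (rows with small counts still get max_n samples) for full vectorization, which is usually a large win on GPU.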