hessian-matrix

Why do I get negative variance from hessian matrix in optim function

 ̄綄美尐妖づ submitted on 2021-01-29 14:14:33
Question: I am trying to estimate the MLE parameters of a generalised gamma distribution. I use the optim function with a lower bound equal to one (since the parameters must be positive) and the BFGS method. First, I write the log-likelihood function as below:

negloglikgengamma <- function(thet, dat) {
  alpha <- thet[1]
  kappa <- thet[2]
  lamda <- thet[3]
  # evaluate the density at dat (the original used a global y here, a bug)
  -sum(dggamma(dat, scale = alpha, shape1 = kappa, shape2 = lamda, log = TRUE))
}

I use the minus log-likelihood so that minimising it with optim finds the maximum-likelihood estimates. Then I call the optim function.
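A rough Python analogue of the fit (a sketch, not the asker's R code; scipy.stats.gengamma is a related but differently parameterised generalised gamma, and the starting values and sample are made up). With BFGS, scipy.optimize.minimize keeps a positive-definite approximation of the inverse Hessian, whose diagonal estimates the parameter variances at an interior minimum. Negative "variances" from an inverted Hessian usually mean the optimiser stopped at a bound or a non-minimum, where the observed information matrix is not positive definite.

```python
import numpy as np
from scipy import optimize, stats

# simulate data from scipy's generalised gamma (shape parameters a and c,
# plus scale); these names are scipy's, not the dggamma parameterisation
rng = np.random.default_rng(0)
data = stats.gengamma.rvs(a=2.0, c=1.5, scale=3.0, size=2000, random_state=rng)

def negloglik(theta, dat):
    a, c, scale = theta
    if min(a, c, scale) <= 0:          # keep every parameter positive
        return np.inf
    return -np.sum(stats.gengamma.logpdf(dat, a=a, c=c, scale=scale))

res = optimize.minimize(negloglik, x0=[1.0, 1.0, 1.0], args=(data,), method="BFGS")
# BFGS maintains a positive-definite inverse-Hessian approximation, so at an
# interior minimum its diagonal gives nonnegative variance estimates
variances = np.diag(res.hess_inv)
print(res.x, variances)
```

With R's optim, the analogous sanity check is to verify that the returned hessian is positive definite before inverting it.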

Hessian of Gaussian eigenvalues for 3D image with Python

自闭症网瘾萝莉.ら submitted on 2020-06-29 03:38:27
Question: I have a 3D image and I want to calculate the Hessian-of-Gaussian eigenvalues for it: the three eigenvalues of the Hessian approximation at each voxel. This feature seems to be very common in image processing. Is there an existing implementation (like scipy.ndimage.laplace for the Laplacian)? And is there one that parallelises the calculation? I tried to do it manually, through numpy operations, but it is not optimal because: I have to …
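A possible implementation (a sketch; the volume and sigma are made up): scipy.ndimage.gaussian_filter accepts a per-axis order argument, so each second Gaussian derivative is a single filter call, and numpy's eigvalsh diagonalises the stacked 3x3 symmetric matrices in one vectorised call. skimage.feature.hessian_matrix and hessian_matrix_eigvals offer a ready-made version of the same idea.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_of_gaussian_eigenvalues(volume, sigma=1.0):
    """Return the Hessian-of-Gaussian eigenvalues per voxel,
    shape (*volume.shape, ndim), sorted ascending."""
    ndim = volume.ndim  # 3 for a 3D image
    H = np.empty(volume.shape + (ndim, ndim))
    for i in range(ndim):
        for j in range(i, ndim):
            # order=(2,0,0) etc. gives diagonal second derivatives,
            # (1,1,0) etc. the mixed ones
            order = [0] * ndim
            order[i] += 1
            order[j] += 1
            d = gaussian_filter(volume, sigma=sigma, order=order)
            H[..., i, j] = d
            H[..., j, i] = d  # the Hessian is symmetric
    # eigvalsh operates on the stack of symmetric matrices in the last two axes
    return np.linalg.eigvalsh(H)

vol = np.random.default_rng(0).normal(size=(16, 16, 16))
eig = hessian_of_gaussian_eigenvalues(vol, sigma=2.0)
print(eig.shape)  # → (16, 16, 16, 3)
```

gaussian_filter releases most of its work to compiled code, so this already avoids the per-voxel Python loops a manual approach would need; parallelism across the six derivative calls could be added on top.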

Creating a Custom Objective Function for XGBoost.XGBRegressor

為{幸葍}努か submitted on 2020-06-26 14:50:12
Question: I am relatively new to the ML/AI game in Python, and I'm currently working on implementing a custom objective function for XGBoost. My differential-equation knowledge is pretty rusty, so I've created a custom objective function, with a gradient and Hessian, that models the mean squared error function run as the default objective in XGBRegressor, to make sure that I am doing all of this correctly. The problem is, the results of the model (the error …
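For reference, a minimal sketch of such an objective (plain numpy; the wiring into XGBoost is indicated but not executed, since the callable signature differs between xgb.train, which passes (preds, dtrain), and the scikit-learn wrapper, which in recent versions passes (labels, preds) — check your version's docs). XGBoost optimises a second-order expansion, so for the squared error L = 1/2 (pred − label)^2 the per-example gradient is pred − label and the Hessian is the constant 1:

```python
import numpy as np

def squared_error_grad_hess(preds, labels):
    """Per-example gradient and Hessian of L = 1/2 * (pred - label)^2,
    matching what XGBoost's built-in squared-error objective uses."""
    grad = preds - labels           # dL/dpred
    hess = np.ones_like(preds)      # d2L/dpred2 is constant
    return grad, hess

# hypothetical xgb.train-style wrapper, where the callable receives
# (preds, dtrain) and labels come from dtrain.get_label():
def custom_obj(preds, dtrain):
    return squared_error_grad_hess(preds, dtrain.get_label())

g, h = squared_error_grad_hess(np.array([1.0, 2.0, 3.0]), np.array([0.0, 2.0, 5.0]))
print(g, h)  # → [ 1.  0. -2.] [1. 1. 1.]
```

Note the factor: XGBoost's built-in objective effectively uses the 1/2-scaled loss, so grad = pred − label with hess = 1; using 2·(pred − label) with hess = 2 changes nothing, but mixing scales between grad and hess does.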

Tensorflow: Compute Hessian matrix (only diagonal part) with respect to a high rank tensor

我的未来我决定 submitted on 2019-12-23 05:16:03
Question: I would like to compute the first and second derivatives (the diagonal part of the Hessian) of my loss with respect to each feature map of a VGG16 conv4_3 layer's kernel, which is a 3x3x512x512 tensor. I know how to compute the derivatives with respect to a low-rank tensor, following How to compute all second derivatives (only the diagonal of the Hessian matrix) in Tensorflow? However, when it comes to higher rank, I get completely lost. # Inspecting variables under IPython …
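One observation that makes higher rank less daunting: the Hessian diagonal with respect to a tensor W has exactly W's shape, so one can flatten, work element by element, and reshape back (in TensorFlow this would mean reshaping the variable, or calling tf.gradients twice per flattened component). A framework-agnostic finite-difference sketch of that idea (pure numpy, with a toy loss standing in for the real one):

```python
import numpy as np

def hessian_diagonal_fd(loss, W, eps=1e-4):
    """Approximate the diagonal of the Hessian of `loss` w.r.t. tensor W by
    central differences; the result has W's shape, whatever its rank."""
    diag = np.empty_like(W)
    flat = W.reshape(-1)            # a view: writes go through to W
    out = diag.reshape(-1)
    base = loss(W)
    for i in range(flat.size):
        orig = flat[i]
        flat[i] = orig + eps
        f_plus = loss(W)
        flat[i] = orig - eps
        f_minus = loss(W)
        flat[i] = orig              # restore the entry exactly
        out[i] = (f_plus - 2.0 * base + f_minus) / eps**2
    return diag

# toy check on a rank-4 tensor with loss = sum(W**2),
# whose Hessian diagonal is 2 everywhere
W = np.random.default_rng(0).normal(size=(3, 3, 4, 4))
d = hessian_diagonal_fd(lambda w: np.sum(w**2), W)
print(np.allclose(d, 2.0, atol=1e-3))  # → True
```

Finite differences are far too slow for a 3x3x512x512 kernel, of course; the point is only that the rank of W never enters the logic, which carries over to the autodiff version.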

Return Inverse Hessian Matrix at the end of DNN Training and Partial Derivatives wrt the Inputs

淺唱寂寞╮ submitted on 2019-12-20 03:31:13
Question: Using Keras with TensorFlow as the backend, I have built a DNN that takes stellar spectra as input (7213 data points) and outputs three stellar parameters (temperature, gravity, and metallicity). The network trains well and predicts well on my test sets, but in order for the results to be scientifically useful, I need to be able to estimate my errors. The first step is to obtain the inverse Hessian matrix, which doesn't seem to be possible using just Keras. Therefore I am …
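The statistics behind the question can be sketched without Keras at all (pure numpy; a made-up linear model stands in for the network): for a least-squares fit, the Hessian of the loss with respect to the parameters is X.T @ X, and the inverse Hessian scaled by the residual variance estimates the parameter covariance — the same quantity one would want from the trained DNN.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                # 500 "spectra", 3 parameters
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=500)

w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residual_var = np.mean((y - X @ w_hat) ** 2)
hessian = X.T @ X                            # Hessian of 1/2 * ||y - X w||^2
cov = residual_var * np.linalg.inv(hessian)  # parameter covariance estimate
std_errors = np.sqrt(np.diag(cov))
print(w_hat, std_errors)
```

For a nonlinear network the exact Hessian is replaced in practice by a Gauss-Newton approximation J.T @ J built from the Jacobian of the outputs, which is what one would compute with the backend's gradient machinery.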

Matlab gradient and hessian computation for symbolic vector function

ぃ、小莉子 submitted on 2019-12-13 02:19:37
Question: This question was migrated from Cross Validated because it can be answered on Stack Overflow. I am trying to use the Matlab "gradient" and "hessian" functions to calculate the derivative of a symbolic vector function with respect to a vector. Below is an example using the sigmoid function 1/(1+e^(-a)), where a is a feature vector multiplied by weights. The versions below all return an error. I am new to Matlab and would very much appreciate any advice. The solution may …
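A Python analogue of the same computation (sympy standing in for Matlab's Symbolic Math Toolbox; the two-feature setup is made up): sympy.hessian takes an expression and a list of variables, much as Matlab's hessian(f, v) does.

```python
import sympy as sp

# symbolic gradient and Hessian of sigmoid(w . x) w.r.t. the weights
w1, w2, x1, x2 = sp.symbols('w1 w2 x1 x2')
a = w1 * x1 + w2 * x2                  # feature vector times weights
f = 1 / (1 + sp.exp(-a))               # sigmoid

grad = [sp.diff(f, v) for v in (w1, w2)]
H = sp.hessian(f, (w1, w2))            # 2x2 symbolic Hessian matrix

# sanity check at a point: sigmoid'(0) = 1/4, so with x = (1, 0) and w = (0, 0)
# the first gradient component should be 1/4
val = grad[0].subs({w1: 0, w2: 0, x1: 1, x2: 0})
print(sp.simplify(val))  # → 1/4
```

The usual pitfall in both systems is passing the variables as separate arguments instead of a single vector/list, which is worth checking first when these functions error out.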

How to use theano.gradient.hessian? Example needed

核能气质少年 submitted on 2019-12-13 02:04:49
Question: I tried the code below:

x = T.dvector('x')
y = T.dvector('y')
input = [x, y]
s = T.sum(x**2 + y**2)
f = theano.gradient.hessian(s, wrt=input)
h = function(input, f)

Then I ran it with the following real values:

x = [1, 2]
y = [1, 2]
h([x, y])

and encountered the following error:

TypeError: ('Bad input argument to theano function with name "<ipython-input-115-32fd257c46ad>:7" at index 0(0-based)', 'Wrong number of dimensions: expected 1, got 2 with shape (2L, 2L).')

I am new to Python and am exploring Theano for building …

Fastest way to create a sparse matrix of the form A.T * diag(b) * A + C?

大兔子大兔子 submitted on 2019-12-11 03:16:07
Question: I'm trying to optimize a piece of code that solves a large sparse nonlinear system using an interior-point method. During the update step, this involves computing the Hessian matrix H and the gradient g, then solving H * d = -g for d to get the new search direction. The Hessian matrix has a symmetric tridiagonal structure of the form A.T * diag(b) * A + C. I've run line_profiler on the particular function in question: Line # Hits Time Per Hit % Time Line Contents …
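One way to assemble that product without densifying anything (scipy.sparse; A, b, and C below are made-up stand-ins for the question's matrices): pre-scaling the rows of A by b through a sparse diagonal costs O(nnz), leaving a single sparse matmul.

```python
import numpy as np
import scipy.sparse as sp

n = 200
# a bidiagonal difference operator as an example A whose A.T @ D @ A
# really is tridiagonal, matching the structure described in the question
A = sp.diags([np.ones(n - 1), -np.ones(n - 1)], offsets=[0, 1],
             shape=(n - 1, n), format='csr')
b = np.random.default_rng(0).uniform(1.0, 2.0, size=n - 1)
C = sp.identity(n, format='csr')

# sp.diags(b) @ A multiplies row i of A by b[i] without forming diag(b) densely
H = (A.T @ sp.diags(b) @ A + C).tocsr()
print(H.shape, H.nnz)
```

If this is still the bottleneck, exploiting the known tridiagonal structure directly — filling the three diagonals with vectorised expressions in b and the entries of A, then calling sp.diags once — avoids the sparse matmul entirely.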