regression

Linear Regression prediction in R using the Leave-One-Out Approach

ぃ、小莉子 submitted on 2021-02-18 19:51:30
Question: I have 3 linear regression models built using the mtcars data and would like to use those models to generate predictions for each row of the mtcars table. Those predictions should be added as three additional columns of the mtcars dataframe and should be generated in a for loop using the leave-one-out approach. Furthermore, predictions for model1 and model2 should be performed by "grouping" on the cyl numbers, while predictions made with model3 should be accomplished without…
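
Not from the question, but a minimal sketch of the loop it describes, under stated assumptions: the three model formulas are not shown in the excerpt, so `mpg ~ wt + hp` stands in as a placeholder, and a `pred2` column (the grouped model2 predictions) would follow the same pattern as `pred1`:

```r
data(mtcars)
mtcars$pred1 <- mtcars$pred3 <- NA_real_   # placeholder prediction columns

for (i in seq_len(nrow(mtcars))) {
  train <- mtcars[-i, ]                    # leave row i out
  test  <- mtcars[i, , drop = FALSE]

  # model1-style: refit only on rows sharing the held-out row's cyl value
  grp <- train[train$cyl == test$cyl, ]
  m1  <- lm(mpg ~ wt + hp, data = grp)     # placeholder formula
  mtcars$pred1[i] <- predict(m1, newdata = test)

  # model3-style: refit on all remaining rows, no grouping
  m3 <- lm(mpg ~ wt + hp, data = train)
  mtcars$pred3[i] <- predict(m3, newdata = test)
}
```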

Linear regression on raster images - lm complains about NAs

喜你入骨 submitted on 2021-02-18 19:13:46
Question: I'm sure this can be fixed with a few bytes, but I've spent hours on this simple thing and can't get out of it. I don't use R often. I have 5 asciigrid files that represent 5 raster images. Some pixels have values, others have NAs. For example, the first image might be something like:

NA NA NA NA NA
NA NA 2 3 NA
NA 0.2 0.3 1 NA
NA NA 4 NA NA

and the second might be:

NA NA NA NA NA
NA NA 5 1 NA
NA 0.1 12 12 NA
NA NA 6 NA NA

As you can see, the NA positions are always the same and I'm 100% sure…
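
A minimal sketch of one common fix (an assumption on my part, since the excerpt cuts off before the model: regressing the second image on the first, pixel by pixel). Because the NA positions are identical across images, `complete.cases()` on the stacked pixel values keeps exactly the cells that carry data, so lm() never sees an NA:

```r
# Two toy rasters with NAs in the same positions, as in the question
img1 <- matrix(c(NA, NA,  NA,  NA, NA,
                 NA, NA,   2,   3, NA,
                 NA, 0.2, 0.3,  1, NA,
                 NA, NA,   4,  NA, NA), nrow = 4, byrow = TRUE)
img2 <- matrix(c(NA, NA,  NA,  NA, NA,
                 NA, NA,   5,   1, NA,
                 NA, 0.1,  12, 12, NA,
                 NA, NA,   6,  NA, NA), nrow = 4, byrow = TRUE)

d   <- data.frame(x = as.vector(img1), y = as.vector(img2))
ok  <- complete.cases(d)           # drop the shared NA pixels
fit <- lm(y ~ x, data = d[ok, ])   # regression over the valid pixels only
coef(fit)
```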

Obtaining random-effects matrices from a mixed model

↘锁芯ラ submitted on 2021-02-18 08:43:31
Question: In the code below, I was wondering how I can obtain the equivalent of `out` and `Ts` from an lme() object in library(nlme)?

```r
dat <- read.csv("https://raw.githubusercontent.com/rnorouzian/v/main/mv.l.csv")
library(lme4)
x <- lmer(value ~ 0 + name + (1 | School/Student), data = dat,
          control = lmerControl(check.nobs.vs.nRE = "ignore"))

lwr   <- getME(x, "lower")
theta <- getME(x, "theta")
out = any(theta[lwr == 0] < 1e-4) # find this from `x1` object below
Ts = getME(x, "Tlist")            # find this from `x1`…
```
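
Not an answer from the page, just a hedged sketch of what the nlme side might look like, assuming the model above translates directly and that `x1` is the lme() refit the comments point to. `VarCorr()` reports the estimated standard deviations (usable for flagging near-zero components, the analogue of `out`), and `pdMatrix()` on the model's `reStruct` slot returns the per-level relative covariance matrices, roughly the analogue of `Tlist`:

```r
library(nlme)

# Hypothetical lme() analogue of the lmer() fit above
x1 <- lme(value ~ 0 + name, random = ~ 1 | School/Student, data = dat)

# Analogue of `out`: flag near-zero random-effect standard deviations
vc   <- VarCorr(x1)                                   # character matrix
sds  <- suppressWarnings(as.numeric(vc[, "StdDev"]))  # header rows become NA
out1 <- any(sds[!is.na(sds)] < 1e-4)

# Rough analogue of `Ts`: relative covariance factors per grouping level
Ts1 <- pdMatrix(x1$modelStruct$reStruct)
```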

Logistic regression in Julia using Optim.jl

江枫思渺然 submitted on 2021-02-17 22:53:13
Question: I'm trying to implement a simple regularized logistic regression algorithm in Julia. I'd like to use the Optim.jl library to minimize my cost function, but I can't get it to work. My cost function and gradient are as follows:

```julia
function cost(X, y, theta, lambda)
    m = length(y)
    h = sigmoid(X * theta)
    reg = (lambda / (2*m)) * sum(theta[2:end].^2)
    J = (1/m) * sum( (-y).*log(h) - (1-y).*log(1-h) ) + reg
    return J
end

function grad(X, y, theta, lambda, gradient)
    m = length(y)
    h = sigmoid(X * theta)
    # the excerpt cuts off here; the standard regularized gradient
    # (my reconstruction, matching the cost above) would be:
    gradient[:] = (1/m) * (X' * (h - y))
    gradient[2:end] += (lambda/m) * theta[2:end]
end
```

Logistic Regression and Python Code

徘徊边缘 submitted on 2021-02-12 11:05:02
Logistic regression is one of the generalized linear models; its response takes the value 0 or 1 and follows a Bernoulli distribution. The canonical response function of the Bernoulli family is the sigmoid function, which is the theoretical reason logistic regression uses the sigmoid. The sigmoid function also has practical benefits:

1. it maps the linear classifier's response value <w, x> (an inner product) onto a probability;
2. it maps a real number onto P(y=1|w,x), which is exactly what logistic regression requires.

Logistic regression can be used for binary classification, but only for linearly separable cases; it cannot handle linearly inseparable ones.

For an input vector X, the probability that it belongs to y=1 is:

$P(y=1|X,W)=h(X)=\frac{1}{1+e^{-WX}}$

and the probability that it belongs to y=0 is:

$P(y=0|X,W)=1-P(y=1|X,W)=1-h(X)=\frac{e^{-WX}}{1+e^{-WX}}$

For the logistic regression model, the probability of label y is:

$P(y|X,W)=h(X)^{y}\cdot(1-h(X))^{1-y}.$

The logistic regression model must estimate the parameter vector W, which can be done by maximum likelihood estimation. Given m samples, the likelihood function is:

\[L_{W}=\prod_{i=1}^{m}\left[h(X_{i})^{y_{i}}\cdot\left(1-h(X_{i})\right)^{1-y_{i}}\right]\]
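
Continuing the derivation one step (my own addition; the excerpt cuts off at the likelihood): taking the logarithm turns the product into a sum, and the gradient of the log-likelihood is the quantity a gradient-based solver, such as the Python implementation the title refers to, actually climbs:

\[\ell(W)=\log L_{W}=\sum_{i=1}^{m}\left[y_{i}\log h(X_{i})+\left(1-y_{i}\right)\log\left(1-h(X_{i})\right)\right]\]

\[\frac{\partial\ell(W)}{\partial W}=\sum_{i=1}^{m}\left(y_{i}-h(X_{i})\right)X_{i}\]

Maximizing $\ell(W)$ by gradient ascent then updates $W\leftarrow W+\alpha\cdot\partial\ell/\partial W$ for a learning rate $\alpha$.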

Understanding Object Detection in One Article: R-CNN, Fast R-CNN, Faster R-CNN, YOLO, SSD

依然范特西╮ submitted on 2021-02-11 20:40:16
Understanding Object Detection in One Article: R-CNN, Fast R-CNN, Faster R-CNN, YOLO, SSD. Preface: The deep learning courses offered by my former company, 七月在线 (July Online), often covered object detection, including R-CNN, Fast R-CNN, and Faster R-CNN, but I never had a good opportunity to dig in (though once you have a basic understanding of object detection, you will get a lot out of those courses). Object detection is such a hot field that I kept coming across well-written, accessible material, and after picking up a book on JD.com, all that exposure finally pushed me to study it properly. This May Day holiday, returning to Beijing from Baoding, I skipped the coach for fear of highway jams, failed to grab a high-speed rail ticket, and ended up on a clattering green-skin train I hadn't taken in years, one that kept getting delayed. At the station, using my phone as a hotspot, I revised our question bank and, along the way, finally sorted out the core differences between R-CNN, Fast R-CNN, and Faster R-CNN. With passion in your heart, nothing is daunting. This article commemorates that passion. I. Common object detection algorithms. Object detection means precisely locating the objects in a given image and labeling each object's category, so the problem it has to solve is the full pipeline of where the objects are and what they are. This is no easy problem: object sizes vary over a wide range, objects can appear at any angle and pose and anywhere in the image, and there can be multiple categories. The object detection algorithms in academia and industry currently fall into three classes: 1. traditional object detection algorithms…

Understanding Object Detection in One Article: R-CNN, Fast R-CNN, Faster R-CNN, YOLO, SSD

蹲街弑〆低调 submitted on 2021-02-11 20:31:09
I. Common object detection algorithms. Object detection means precisely locating the objects in a given image and labeling each object's category, so the problem it has to solve is the full pipeline of where the objects are and what they are. This is no easy problem: object sizes vary over a wide range, objects can appear at any angle and pose and anywhere in the image, and there can be multiple categories. The object detection algorithms in academia and industry currently fall into three classes:

1. Traditional object detection algorithms: Cascade + HOG/DPM + Haar/SVM and their many improvements and optimizations;
2. Region proposal + deep-learning classification: extract candidate regions, then classify each region mainly with deep learning, e.g. the R-CNN (Selective Search + CNN + SVM), SPP-net (ROI Pooling), Fast R-CNN (Selective Search + CNN + ROI), Faster R-CNN (RPN + CNN + ROI), and R-FCN family of methods;
3. Deep-learning regression methods: YOLO, SSD, DenseBox, and the like, plus the recent RRC detection combining RNNs and Deformable CNN combining DPM.

The traditional object detection pipeline:
1) Region selection (exhaustive strategy: slide windows of different sizes and aspect ratios over the whole image, which has high time complexity; a sketch of this enumeration follows below)…
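
To make the cost of that exhaustive step concrete, here is a minimal sketch (my own illustration, not from the article) that enumerates sliding-window proposals at several scales and aspect ratios; the sheer number of boxes it returns is exactly why the traditional pipeline is expensive:

```r
# Enumerate sliding-window boxes (x, y, w, h) over an img_w x img_h image.
sliding_windows <- function(img_w, img_h,
                            scales = c(32, 64, 128),  # base window sizes
                            ratios = c(0.5, 1, 2),    # aspect ratios w/h
                            stride = 16) {
  boxes <- list()
  for (s in scales) {
    for (r in ratios) {
      w <- round(s * sqrt(r)); h <- round(s / sqrt(r))
      if (w > img_w || h > img_h) next
      for (x in seq(1, img_w - w + 1, by = stride)) {
        for (y in seq(1, img_h - h + 1, by = stride)) {
          boxes[[length(boxes) + 1]] <- c(x = x, y = y, w = w, h = h)
        }
      }
    }
  }
  do.call(rbind, boxes)
}

nrow(sliding_windows(640, 480))  # already thousands of candidate windows
```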

How to extract the coefficients from a linear model without repeating my code in R?

梦想的初衷 submitted on 2021-02-11 12:49:43
Question: I am using a Monte Carlo simulation for predicting mpg in the mtcars data. I want to extract the coefficients of all the variables in the dataframe to compute how many times each car has a lower mpg than the other cars, for example how many times Toyota Corona has a lower predicted mpg than Datsun 710. This is my initial code using only two independent variables. I want to expand this selection to use all the variables in the data frame without manually having to include all the variables in the data…
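
A minimal sketch of the extraction step (assuming the goal is simply to fit on every predictor without typing each name; the simulation loop itself is not shown in the excerpt): the `mpg ~ .` formula pulls in all remaining columns, and `coef()` returns the fit as a named vector, so no variable has to be listed by hand:

```r
data(mtcars)

# "~ ." uses every column other than mpg as a predictor,
# so no variable names need to be typed out manually.
fit  <- lm(mpg ~ ., data = mtcars)
beta <- coef(fit)   # named vector: (Intercept), cyl, disp, hp, ...

# Predicted mpg for every car, then a pairwise comparison of two cars:
pred <- predict(fit, newdata = mtcars)
pred["Toyota Corona"] < pred["Datsun 710"]
```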