Plotting an ROC curve in glmnet


Edit: As DWin points out in the comments, the code below does not produce an ROC curve. An ROC curve has to be indexed over the threshold t, not over lambda (as I do below). I will edit the code below when I get a chance.

Below is my attempt at creating an ROC curve for a glmnet model that predicts a binary outcome. In the code below I simulate a matrix that approximates glmnet output. As some of you know, given an n x p input matrix, glmnet outputs an n x 100 matrix of predicted probabilities [$\Pr(y_i = 1)$], one column for each of 100 different lambda values. The output is narrower than 100 columns if further changes in lambda stop adding predictive power. The simulated matrix of glmnet predicted probabilities below is 250 x 69.
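For context, here is a minimal sketch (mine, not from the post; x, y, and fit are made-up illustration data) of what that predicted-probability matrix looks like when it comes from an actual glmnet fit:

## Sketch with made-up data: the predicted-probability matrix described above
library(glmnet)
set.seed(1)
x   <- matrix(rnorm(250 * 10), nrow = 250)         # n x p predictor matrix
y   <- rbinom(250, 1, plogis(x[, 1] - x[, 2]))     # binary outcome
fit <- glmnet(x, y, family = "binomial")
phat_glmnet <- predict(fit, newx = x, type = "response")
dim(phat_glmnet)   # 250 rows, one column per lambda on the fitted path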

First, is there an easier way to plot an ROC curve for glmnet? Second, if not, is the approach below correct? Third, do I care about plotting (1) the probabilities of false/true positives, or (2) simply the observed rates of false/true positives?

set.seed(06511)

# Simulate predictions matrix
phat = as.matrix(rnorm(250,mean=0.35, sd = 0.12))
lambda_effect = as.matrix(seq(from = 1.01, to = 1.35, by = 0.005))
phat = phat %*% t(lambda_effect)


#Choose a cut-point
t = 0.5

#Define a predictions matrix
predictions = ifelse(phat >= t, 1, 0)

##Simulate y matrix
y_phat = apply(phat, 1, mean) + rnorm(250,0.05,0.10)
y_obs = ifelse(y_phat >= 0.55, 1, 0)

#percentage of 1 observations in the validation set, 
p = length(which(y_obs==1))/length(y_obs)


## Number of models (lambda columns) that predict 1 for each observation
apply(predictions, 1, sum)

## Count false positives for each model
## False pos ==1, correct == 0, false neg == -1
error_mat = predictions - y_obs
## Define a matrix that isolates false positives
error_mat_fp = ifelse(error_mat ==1, 1, 0)
false_pos_rate = apply(error_mat_fp, 2,  sum)/length(y_obs)

# Count true positives for each model
## True pos == 2, mistakes == 1, true neg == 0
error_mat2 = predictions + y_obs
## Isolate true positives
error_mat_tp = ifelse(error_mat2 ==2, 1, 0)
true_pos_rate = apply(error_mat_tp, 2,  sum)/length(y_obs)


## Do I care about (1) this probability OR (2) simply the observed rate?
## (1)
#probability of false-positive, 
p_fp = false_pos_rate/(1-p)
#probability of true-positive, 
p_tp = true_pos_rate/p

#plot the ROC, 
plot(p_fp, p_tp)


## (2)
plot(false_pos_rate, true_pos_rate)
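For reference, here is a minimal sketch (mine, not part of the original post) of the curve DWin describes: fix a single lambda column of the simulated phat above and sweep the threshold t instead of lambda.

## Sketch: ROC built by sweeping the threshold t for one lambda column
## of the simulated phat / y_obs above
phat_one <- phat[, 35]                                    # pick one lambda column
ts  <- c(Inf, sort(unique(phat_one), decreasing = TRUE))  # candidate thresholds
tpr <- sapply(ts, function(t) mean(phat_one[y_obs == 1] >= t))
fpr <- sapply(ts, function(t) mean(phat_one[y_obs == 0] >= t))
plot(fpr, tpr, type = "l", xlab = "False positive rate", ylab = "True positive rate")
abline(0, 1, lty = 2)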

This question has an answer on SO, but the answer is rough and not quite correct: glmnet lasso ROC charts


A plot of prediction accuracy as a function of lambda is not an "ROC curve". - IRTFM
@DWin, are you saying it is only truly an "ROC curve" when the input we vary is the discrimination threshold t? - Dr. Beeblebrox
Yes, that's exactly what he's saying. - Hong Ooi
For one thing, an ROC curve is monotone, whereas the curve you describe (which is given no name in my references) is not, at least not if it is based on OOB or validation data. - IRTFM
I ran this code and was surprised to see the FPR and the FNR rise together. I must not understand what they are measuring. Shouldn't they have a reciprocal relationship? - IRTFM
2 Answers

Here is an option for computing the AUC and plotting the ROC curve using ROCR:
library(ROCR)
library(glmnet)
library(caret)

# df: a data frame with a binary response variable and predictor variables
# col 1 = response var ("ResponseVar"); cols 2:10 = predictor vars

# Create a training subset for model development & a testing set for performance testing
inTrain <- createDataPartition(df$ResponseVar, p = .75, list = FALSE)
Train <- df[ inTrain, ]
Test  <- df[-inTrain, ]

# Fit the model on the training data (glmnet needs a numeric matrix of predictors)
lasso.model <- cv.glmnet(x = data.matrix(Train[, 2:10]), y = Train[, 1],
                         family = "binomial", type.measure = "auc")

# Apply the model to the testing data
Test$lasso.prob <- predict(lasso.model, type = "response",
                           newx = data.matrix(Test[, 2:10]), s = "lambda.min")
pred <- prediction(Test$lasso.prob, Test$ResponseVar)

# Calculate TPR/FPR for the predictions
perf <- performance(pred, "tpr", "fpr")
performance(pred, "auc")                    # calculated AUC for the model
plot(perf, colorize = FALSE, col = "black") # plot the ROC curve
lines(c(0, 1), c(0, 1), col = "gray", lty = 4)

For Test$lasso.prob above, you can plug in different lambda values (via the s argument) to test the predictive power of each.
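For example, here is a sketch (assuming the lasso.model and Test objects from the code above) that compares the test-set AUC at lambda.min and lambda.1se:

## Sketch: compare test-set AUC at two lambda values from the cv.glmnet fit
for (s in c(lasso.model$lambda.min, lasso.model$lambda.1se)) {
  prob <- as.numeric(predict(lasso.model, newx = data.matrix(Test[, 2:10]),
                             s = s, type = "response"))
  auc  <- performance(prediction(prob, Test$ResponseVar), "auc")@y.values[[1]]
  cat(sprintf("lambda = %.4f  AUC = %.3f\n", s, auc))
}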



Given predictions and labels, here is how to create a basic ROC curve:

# randomly generated data for example, binary outcome
predictions = runif(100, min=0, max=1) 
labels = as.numeric(predictions > 0.5) 
labels[1:10] = abs(labels[1:10] - 1) # randomly make some labels not match predictions

# source: https://blog.revolutionanalytics.com/2016/08/roc-curves-in-two-lines-of-code.html
labels_reordered = labels[order(predictions, decreasing=TRUE)]
roc_dat = data.frame(TPR = cumsum(labels_reordered) / sum(labels_reordered),
                     FPR = cumsum(!labels_reordered) / sum(!labels_reordered))

# plot the roc curve
plot(roc_dat$FPR, roc_dat$TPR)

The resulting plot (FPR on the x-axis, TPR on the y-axis).
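If you also want the AUC from the same roc_dat (my addition, not part of the original answer), the trapezoidal rule gives it directly:

## Sketch: AUC from roc_dat via the trapezoidal rule
with(roc_dat, sum(diff(FPR) * (head(TPR, -1) + tail(TPR, -1)) / 2))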

