Fastest way to replace NAs in a large data.table

走了就别回头了 2020-11-22 17:10

I have a large data.table, with many missing values scattered throughout its ~200k rows and 200 columns. I would like to recode those NA values to zeros as efficiently as possible.

10 Answers
  •  暗喜
     2020-11-22 17:41

    My understanding is that the secret to fast operations in R is to work with vectors (or arrays, which are vectors under the hood).

    In this solution I make use of data.matrix, which returns a matrix that behaves a bit like a data.frame (note that data.matrix coerces all columns to numeric). Because it is an array, you can use a very simple vector substitution to replace the NAs:

    A little helper function to remove the NAs. The essence is a single line of code; I only wrap it in a function to measure execution time.

    remove_na <- function(x){
      dm <- data.matrix(x)    # coerce to a numeric matrix
      dm[is.na(dm)] <- 0      # vectorised substitution of the NAs
      data.table(dm)          # convert back to a data.table
    }
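
    The substitution idiom at the core of the helper can be seen in isolation on a plain vector (a minimal sketch):

    v <- c(1, NA, 3, NA)
    v[is.na(v)] <- 0   # logical index selects the NA positions
    v
    # [1] 1 0 3 0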
    

    A little helper function to create a data.table of a given size.

    create_dt <- function(nrow=5, ncol=5, propNA = 0.5){
      v <- runif(nrow * ncol)                                  # random values
      v[sample(seq_len(nrow*ncol), propNA * nrow*ncol)] <- NA  # blank out a proportion
      data.table(matrix(v, ncol=ncol))
    }
    

    Demonstration on a tiny sample:

    library(data.table)
    set.seed(1)
    dt <- create_dt(5, 5, 0.5)
    
    dt
                V1        V2        V3        V4        V5
    [1,]        NA 0.8983897        NA 0.4976992 0.9347052
    [2,] 0.3721239 0.9446753        NA 0.7176185 0.2121425
    [3,] 0.5728534        NA 0.6870228 0.9919061        NA
    [4,]        NA        NA        NA        NA 0.1255551
    [5,] 0.2016819        NA 0.7698414        NA        NA
    
    remove_na(dt)
                V1        V2        V3        V4        V5
    [1,] 0.0000000 0.8983897 0.0000000 0.4976992 0.9347052
    [2,] 0.3721239 0.9446753 0.0000000 0.7176185 0.2121425
    [3,] 0.5728534 0.0000000 0.6870228 0.9919061 0.0000000
    [4,] 0.0000000 0.0000000 0.0000000 0.0000000 0.1255551
    [5,] 0.2016819 0.0000000 0.7698414 0.0000000 0.0000000
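
    On something closer to the size in the question, the same helpers can be timed (a sketch using the functions defined above; exact timings depend on hardware):

    dt_big <- create_dt(200000, 200, 0.1)
    system.time(remove_na(dt_big))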
    
