Subset a data frame into 2 data frames based on breaks in the tables

Submitted by 帅比萌擦擦* on 2020-08-10 19:15:16

Question


I have a CSV file that I download that contains 3 different tables on the same tab. I only need the top table and the bottom table, but depending on when I download the file the number of rows varies. I have attached an image of the file below. (Image: CSV file with the 3 tables separated by blank rows.)

What I am hoping to accomplish is to read the 1st table and the 3rd table as two separate data frames. I was hoping to use grep/grepl to get DF1 up to the 1st break (row 202) and DF2 starting after the 2nd break (row 212).

I know I can subset the data by going into the file and skipping and/or dropping rows, but I wanted to see if there is a method to automatically identify these tables and subset them.
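
Something like this is the idea I have in mind (just a rough sketch; the file name is a placeholder, and it assumes the breaks show up as blank or comma-only rows, with the last break sitting directly above the 3rd table):

lines  <- readLines("mydata.csv")                              # placeholder file name
breaks <- grep("^,*$", lines)                                  # blank or comma-only rows
DF1 <- read.csv("mydata.csv", nrows = breaks[1] - 2)           # header plus the rows above the 1st break
DF2 <- read.csv("mydata.csv", skip = breaks[length(breaks)])   # everything after the last break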


Answer 1:


(The best fix is to correct this at the data's origin: don't put multiple tables in one file; use separate files or some other format. Lacking that ...)

I can only guess from an image, so here's a sample file.

a,b,c
1,11,21
2,12,22
,,,,,
aa,bb,cc,dd
31,41,51,61
,,,,,
aaa,bbb,ccc,ddd,eee,fff
111,222,333,444,555,666

Use this function:

##' Read multi-part CSV files.
##'
##' @details
##' A typical CSV file contains rows, unbroken by spaces,
##' with an equal number of columns separated by a fixed character
##' (typically "," or "\\t"). Occasionally, some rows are incomplete
##' (insufficient number of fields); this issue is handled by
##' \code{read.csv} directly with the \code{fill = TRUE} argument.
##'
##' Two other issues can arise in a seemingly compliant CSV file:
##'
##' \itemize{
##'
##'   \item{The header row is repeated multiple times throughout the
##' document. This likely spoils the results from \code{read.csv} by
##' forcing all columns to be factors or characters, instead of the
##' actual data (e.g., numeric, integer).}
##'
##'   \item{There are blank lines separating truly disparate tables.
##' With just \code{read.csv}, the blank lines will typically be
##' \code{fill}ed, all tables will be expanded to the width of the
##' widest table, and all headers will be from the first table.}
##' }
##'
##' This function mitigates both of these issues.
##'
##' NOTE: arguments passed to \code{read.csv} are used with all
##' tables, so if you have blank lines with disparate tables, the
##' presence or absence of headers will not be handled gracefully.
##' @param fname character or vector, the file name(s)
##' @param by.header logical (default TRUE), whether to split by identical header rows
##' @param by.space logical (default TRUE), whether to split by empty lines
##' @param ... arguments passed to \code{readLines} or \code{read.csv}
##' @return list, one named entry per filename, each containing a list
##' containing the recursive tables in the CSV file
##' @export
readMultiCSV <- function(fname, by.header = TRUE, by.space = TRUE, ...) {
    dots <- list(...)

    readlinesopts <- (names(dots) %in% names(formals(readLines)))
    readcsvopts <- (! readlinesopts) & (names(dots) %in% names(formals(read.csv)))

    ret <- lapply(fname, function(fn) {
        txt <- do.call(readLines, c(list(con = fn), dots[readlinesopts]))

        starts <- 1
        stops <- length(txt)  # default, so 'stops' exists even when by.space = FALSE

        if (by.space) {
            ## a separator row is either truly empty or contains nothing but commas
            starts <- sort(c(starts, 1 + which(txt == ''), 1 + grep("^,*$", txt)))
            stops <- c(starts[-1], length(txt) + 2) - 2
        }

        if (by.header) {
            morestarts <- unlist(mapply(
                function(x,y)
                    if ((x+1) < y)
                        x + which(txt[x] == txt[(x+1):y]),
                starts,
                ## I do "- 2" to remove the empty lines found in the by.space block
                c(starts[-1], length(txt) + 2) - 2, SIMPLIFY = TRUE))
            starts <- sort(c(starts, morestarts))
            stops <- sort(c(stops, morestarts - 1))
        }

        ## filter out empty ranges
        nonEmpties <- (stops - starts) > 0
        starts <- starts[nonEmpties]
        stops <- stops[nonEmpties]

        mapply(function(x,y) do.call(read.csv, c(list(file = fn, skip = x-1, nrows = y-x), dots[readcsvopts])),
               starts, stops, SIMPLIFY = FALSE)
    })
    names(ret) <- basename(fname)
    ret
}

Demo:

readMultiCSV("~/StackOverflow/11815793/61091149.csv")
# $`61091149.csv`
# $`61091149.csv`[[1]]
#   a  b  c
# 1 1 11 21
# 2 2 12 22
# $`61091149.csv`[[2]]
#   aa bb cc dd
# 1 31 41 51 61
# $`61091149.csv`[[3]]
#   aaa bbb ccc ddd eee fff
# 1 111 222 333 444 555 666
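
To answer the original question directly, the first and third elements of that list are the two tables you want; a short sketch, assuming the real file keeps the same three-table layout (the file name is a placeholder):

tables <- readMultiCSV("mydata.csv")[[1]]   # placeholder file name; [[1]] is this file's list of tables
DF1 <- tables[[1]]                          # top table
DF2 <- tables[[3]]                          # bottom table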

Excel will often out-smart us and instead pad every table with trailing commas out to the right-most edge of the widest table, giving us a file like:

a,b,c,,,
1,11,21,,,
2,12,22,,,
,,,,,
aa,bb,cc,dd,,
31,41,51,61,,
,,,,,
aaa,bbb,ccc,ddd,eee,fff
111,222,333,444,555,666

This doesn't break it; it just gives you more work on the back side:

readMultiCSV("~/StackOverflow/11815793/61091149.csv")
# $`61091149.csv`
# $`61091149.csv`[[1]]
#   a  b  c  X X.1 X.2
# 1 1 11 21 NA  NA  NA
# 2 2 12 22 NA  NA  NA
# $`61091149.csv`[[2]]
#   aa bb cc dd  X X.1
# 1 31 41 51 61 NA  NA
# $`61091149.csv`[[3]]
#   aaa bbb ccc ddd eee fff
# 1 111 222 333 444 555 666
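
If that extra work is just dropping the all-empty filler columns, one possible cleanup step (a sketch; it assumes a filler column is one that is entirely NA):

tables <- readMultiCSV("~/StackOverflow/11815793/61091149.csv")[[1]]
tables <- lapply(tables, function(df) df[colSums(!is.na(df)) > 0])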


Source: https://stackoverflow.com/questions/61091149/subset-a-data-frame-into-2-data-frames-based-on-breaks-in-the-tables
