How do I process a text file in C by chunks of lines?


Question


I'm writing a program in C that processes a text file and keeps track of each unique word (using a struct that holds a char array for the word and a count of its occurrences), storing these structs in a data structure. However, the assignment includes this requirement: "The entire txt file may be very large and not able to be held in the main memory. Account for this in your program."

I asked him after class, and he said to read the text file X lines at a time (I think 20,000 was his suggestion?), analyze them, and update the structs until you've reached the end of the file.

Can anyone help explain the best way to do this and tell me what functions to use? I'm very, very new to C.

(My current program is accurate and correct for small files; I just need to make it accommodate enormous files.)

Thank you so much!!

EDIT:

    fp = fopen(argv[w], "r");
    if (fp == NULL) {
        fprintf(stderr, "Input file %s cannot be opened.\n", argv[w]);
        return 2;
    }

    /* other parts of my program here */

    char s[MaxWordSize];

    while (fscanf(fp, "%s", s) != EOF) {
        nonAlphabeticDelete(s);  // removes non-letter characters

        toLowerCase(s);          // converts the string to lowercase

        // attempt to add to the data structure
        pthread_mutex_lock(&lock);
        add(words, &q, s);
        pthread_mutex_unlock(&lock);
    }

This works; I just need to adjust it to go X lines at a time through the text file.


Answer 1:


How about getline()? Here is an example from the man page: http://man7.org/linux/man-pages/man3/getline.3.html

   #define _GNU_SOURCE
   #include <stdio.h>
   #include <stdlib.h>

   int
   main(void)
   {
       FILE *stream;
       char *line = NULL;
       size_t len = 0;
       ssize_t nread;  /* "nread" avoids shadowing read(); %zd matches ssize_t */

       stream = fopen("/etc/motd", "r");
       if (stream == NULL)
           exit(EXIT_FAILURE);

       while ((nread = getline(&line, &len, stream)) != -1) {
           printf("Retrieved line of length %zd:\n", nread);
           printf("%s", line);
       }

       free(line);
       fclose(stream);
       exit(EXIT_SUCCESS);
   }
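
One note tying this back to the question: getline() already bounds memory use by the length of a single line, so the whole file is never held at once, and batching into 20,000-line chunks becomes an optimization rather than a necessity. Below is a minimal sketch under that assumption; process_line() is a hypothetical placeholder for your nonAlphabeticDelete()/toLowerCase()/add() calls:

   #define _GNU_SOURCE
   #include <stdio.h>
   #include <stdlib.h>

   /* placeholder: your nonAlphabeticDelete()/toLowerCase()/add() calls go here */
   static void process_line(char *line) { (void)line; }

   int main(void)
   {
       FILE *stream = fopen("input.txt", "r");
       if (stream == NULL)
           exit(EXIT_FAILURE);

       char *line = NULL;   /* getline() allocates and grows this buffer */
       size_t len = 0;

       while (getline(&line, &len, stream) != -1)
           process_line(line);   /* only one line is in memory at a time */

       free(line);
       fclose(stream);
       exit(EXIT_SUCCESS);
   }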



Answer 2:


This is best done by reading some manuals, but I can provide a head start.

#include <stdio.h>
#include <stdlib.h>

#define GUESS_FOR_LINE_LENGTH 80
#define CHUNK_SIZE (20000 * GUESS_FOR_LINE_LENGTH)

FILE *fp = fopen("fileToRead.txt", "rb");
if (!fp) { /* handle failure! */ }

char *buffer = malloc(CHUNK_SIZE);  /* 1.6 MB: too big for the stack */
if (!buffer) { /* handle failure! */ }

int sentinel;  /* int, not char, so EOF can be distinguished from data */
while ((sentinel = getc(fp)) != EOF)
{
    ungetc(sentinel, fp);  /* we were only peeking for more input */
    size_t numRead = fread(buffer, 1, CHUNK_SIZE, fp);
    if (numRead < CHUNK_SIZE) { /* last run */ }
    /* now buffer has numRead characters */
    size_t lastLine = numRead - 1;
    while (lastLine > 0 && buffer[lastLine] != '\n') { --lastLine; }
    /* process up to lastLine */
    /* copy the remainder from lastLine to the front */
    /* and fill the remainder from the file */
}

This is really more like pseudo-code. Since you mostly have a working program, you should use this as a guideline.
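
The last two comments are the fiddly part, so here is one hedged way to fill them in, a sketch rather than drop-in code: process only up to the last newline, then memmove() the partial line to the front of the buffer so the next fread() appends after it. It reuses CHUNK_SIZE from above, and process_chunk() is a hypothetical stand-in for the word-counting pass:

#include <stdio.h>
#include <string.h>

#define CHUNK_SIZE (20000 * 80)

/* hypothetical hook: scan complete lines in buf[0..len) and count words */
extern void process_chunk(const char *buf, size_t len);

void read_by_chunks(FILE *fp, char *buffer)
{
    size_t carry = 0;   /* bytes of a partial line kept from the last read */
    size_t numRead;

    while ((numRead = fread(buffer + carry, 1, CHUNK_SIZE - carry, fp)) > 0) {
        size_t filled = carry + numRead;

        size_t lastLine = filled;             /* find the last '\n' */
        while (lastLine > 0 && buffer[lastLine - 1] != '\n')
            --lastLine;
        if (lastLine == 0)                    /* one line longer than the   */
            lastLine = filled;                /* whole buffer: flush anyway */

        process_chunk(buffer, lastLine);      /* complete lines only */

        carry = filled - lastLine;            /* move the tail to the front */
        memmove(buffer, buffer + lastLine, carry);
    }
    if (carry > 0)
        process_chunk(buffer, carry);         /* final line with no '\n' */
}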




Answer 3:


First try reading one line at a time. Scan the line buffer for word boundaries and fine-tune the word counting part. Using a hash table to store the words and counts seems a good approach. Make the output optional, so you can measure read/parse/lookup performance.
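
For the hash table, here is a minimal chained-table sketch; the table size, the word-length cap, and the FNV-1a hash are arbitrary illustrative choices, not a tuned design:

   #include <stdint.h>
   #include <stdlib.h>
   #include <string.h>

   #define TABLE_SIZE 65536
   #define MAX_WORD   64

   struct entry {
       char word[MAX_WORD];
       long count;
       struct entry *next;   /* chaining resolves collisions */
   };

   static struct entry *table[TABLE_SIZE];

   static size_t hash(const char *s)
   {
       uint64_t h = 14695981039346656037u;   /* FNV-1a offset basis */
       while (*s) {
           h ^= (unsigned char)*s++;
           h *= 1099511628211u;              /* FNV prime */
       }
       return (size_t)(h % TABLE_SIZE);
   }

   void count_word(const char *word)
   {
       size_t h = hash(word);
       for (struct entry *e = table[h]; e; e = e->next)
           if (strcmp(e->word, word) == 0) { e->count++; return; }

       struct entry *e = malloc(sizeof *e);  /* first occurrence */
       if (!e) return;                       /* real code should report this */
       strncpy(e->word, word, MAX_WORD - 1);
       e->word[MAX_WORD - 1] = '\0';
       e->count = 1;
       e->next  = table[h];
       table[h] = e;
   }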

Then make another program that uses the same algorithm for the core part but uses mmap to read sizeable parts of the file and scan the block of memory. The tricky part is handling the block boundaries.
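
A hedged mmap() sketch of that second program (POSIX only; scan_block() is a hypothetical stand-in for the shared core scanner). Mapping the whole file at once is the simplest variant on a 64-bit system and sidesteps the block-boundary problem; mapping fixed-size windows at page-aligned offsets is the general form the boundary remark refers to:

   #include <fcntl.h>
   #include <sys/mman.h>
   #include <sys/stat.h>
   #include <unistd.h>

   /* hypothetical hook: the same word scanner both programs share */
   extern void scan_block(const char *buf, size_t len);

   int scan_file_mmap(const char *path)
   {
       int fd = open(path, O_RDONLY);
       if (fd < 0) return -1;

       struct stat st;
       if (fstat(fd, &st) < 0) { close(fd); return -1; }
       if (st.st_size == 0)    { close(fd); return 0; }   /* nothing to scan */

       /* read-only mapping: the OS pages the file in on demand, so this
          works even when the file is larger than physical memory */
       char *data = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
       if (data == MAP_FAILED) { close(fd); return -1; }

       scan_block(data, (size_t)st.st_size);

       munmap(data, (size_t)st.st_size);
       close(fd);
       return 0;
   }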

Compare output from both programs on a set of huge files, ensure the counts are identical. You can create huge files by concatenating the same file many times.

Compare timings too, using the time command-line utility. Disable output for this benchmark to focus on the read/parse/analysis part.

Compare the timings with other programs such as wc or cat - > /dev/null. Once you get similar performance, the bottleneck is the speed of reading from mass storage, and there is not much room left for improvement.

EDIT: looking at your code, I have these remarks:

  • fscanf is probably not the right tool: at the very least you should protect against buffer overflow with a field-width limit. And how should foo,bar be handled: as one word or as two?

  • I would suggest using fgets() or fread() and moving a pointer along the buffer, skipping the non-word bytes and converting the word bytes to lower case with an indirection through a 256-byte array, avoiding copies (see the sketch after this list).

  • Make the locking optional via a preprocessor macro: it is not needed if the words structure is only accessed by a single thread.

  • How did you implement add? What is q?
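
Here is a hedged sketch of that table-driven scan; count_word_n() is a hypothetical hook, and "word bytes" here means ASCII letters only:

   #include <stddef.h>

   /* hypothetical hook: record one lowercased word of the given length */
   extern void count_word_n(const char *word, size_t len);

   /* 256-byte map: letters -> lowercase letters, everything else -> 0 */
   static unsigned char lower[256];

   static void init_lower(void)   /* call once at startup */
   {
       for (int c = 'a'; c <= 'z'; c++) lower[c] = (unsigned char)c;
       for (int c = 'A'; c <= 'Z'; c++) lower[c] = (unsigned char)(c - 'A' + 'a');
   }

   /* scan buf[0..len): lowercase in place, split on non-letters, no copies */
   static void scan_buffer(char *buf, size_t len)
   {
       size_t i = 0;
       while (i < len) {
           while (i < len && lower[(unsigned char)buf[i]] == 0)
               i++;                                  /* skip delimiter bytes */
           size_t start = i;
           while (i < len && lower[(unsigned char)buf[i]] != 0) {
               buf[i] = (char)lower[(unsigned char)buf[i]];   /* lowercase */
               i++;
           }
           if (i > start)
               count_word_n(buf + start, i - start);
       }
   }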



Source: https://stackoverflow.com/questions/34081158/how-do-i-process-a-text-file-in-c-by-chunks-of-lines
