large-files

Python: How to read huge text file into memory

我的梦境 submitted on 2019-11-26 12:55:36
Question: I'm using Python 2.6 on a Mac Mini with 1 GB RAM. I want to read in a huge text file:

```
$ ls -l links.csv; file links.csv; tail links.csv
-rw-r--r--  1 user  user  469904280 30 Nov 22:42 links.csv
links.csv: ASCII text, with CRLF line terminators
4757187,59883
4757187,99822
4757187,66546
4757187,638452
4757187,4627959
4757187,312826
4757187,6143
4757187,6141
4757187,3081726
4757187,58197
```

So each line in the file consists of a tuple of two comma-separated integer values. I want to read in the whole…
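The question is truncated here, but the usual sticking point with this task (tens of millions of integer pairs in 1 GB of RAM) is the per-object overhead of storing them as a list of tuples. A minimal sketch using the standard-library array module; the function name and return shape are illustrative choices, not from the thread:

```python
from array import array

def load_pairs(path):
    """Read "a,b" integer pairs into two compact typed arrays.

    A list of int tuples costs on the order of 100 bytes per pair;
    two array('i') columns cost 8 bytes per pair, so a ~470 MB file
    of pairs fits comfortably in 1 GB of RAM.
    """
    left, right = array('i'), array('i')
    with open(path) as f:
        for line in f:                  # int() tolerates the trailing CRLF
            a, b = line.split(',')
            left.append(int(a))
            right.append(int(b))
    return left, right
```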

Upload large files in .NET

╄→尐↘猪︶ㄣ submitted on 2019-11-26 12:43:45
Question: I've done a good bit of research to find an upload component for .NET that I can use to upload large files, that has a progress bar, and that can resume uploads of large files. I've come across components like AjaxUploader, SlickUpload, and PowUpload, to name a few. Each of these options costs money, and only PowUpload does resumable uploads, but it does so with a Java applet. I'm willing to pay for a component that does these things well, but if I could write it myself, that would be best…

How to find the largest file in a directory and its subdirectories?

。_饼干妹妹 submitted on 2019-11-26 12:35:07
Question: We're just starting a UNIX class and are learning a variety of Bash commands. Our assignment involves performing various commands on a directory that has a number of folders under it as well. I know how to list and count all the regular files from the root folder using:

```
find . -type f | wc -l
```

But I'd like to know where to go from there in order to find the largest file in the whole directory. I've seen some things regarding a du command, but we haven't learned that, so in the repertoire of…
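The question is cut off mid-sentence, but answers to this kind of question typically pipe du through sort, for example `du -a . | sort -n -r | head -n 1`. Since the class has not covered du yet, here is a sketch of the same walk-and-compare logic in Python, standard library only:

```python
import os

def largest_file(root):
    """Recursively find the biggest regular file under root."""
    best_size, best_path = -1, None
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:           # broken symlink, permission error, ...
                continue
            if size > best_size:
                best_size, best_path = size, path
    return best_path, best_size

print(largest_file('.'))
```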

Get last 10 lines of very large text file > 10GB

北城余情 submitted on 2019-11-26 12:08:57
Question: What is the most efficient way to display the last 10 lines of a very large text file (this particular file is over 10 GB)? I was thinking of just writing a simple C# app, but I'm not sure how to do this effectively.

Answer 1: Read to the end of the file, then seek backwards until you find ten newlines, and then read forward to the end, taking the various encodings into consideration. Be sure to handle the case where the number of lines in the file is less than ten. Below is an implementation (in C#, as you tagged this), generalized to find the last numberOfTokens in the file located at path, encoded in encoding…
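The C# implementation the answer refers to is cut off by the excerpt. As a compact illustration of the same seek-backwards strategy, here is a sketch in Python; it assumes an ASCII-compatible encoding such as UTF-8, and the block size is an arbitrary choice:

```python
def tail(path, n=10, block_size=4096):
    """Return the last n lines without reading the whole file:
    seek to the end, then read blocks backwards until enough
    newlines have been collected."""
    with open(path, 'rb') as f:
        f.seek(0, 2)                 # jump to end of file
        pos = f.tell()
        data = b''
        while pos > 0 and data.count(b'\n') <= n:
            step = min(block_size, pos)
            pos -= step
            f.seek(pos)
            data = f.read(step) + data   # prepend the block just read
    return [line.decode('utf-8') for line in data.splitlines()[-n:]]
```

The loop also covers the fewer-than-ten-lines case the answer warns about: once pos reaches the start of the file, whatever was gathered is returned.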

Reading very large files in PHP

允我心安 submitted on 2019-11-26 11:52:05
Question: fopen is failing when I try to read in a fairly moderately sized file in PHP. A 6 MB file makes it choke, though smaller files around 100 KB are just fine. I've read that it is sometimes necessary to recompile PHP with the -D_FILE_OFFSET_BITS=64 flag in order to read files over 20 GB or something ridiculous, but shouldn't a 6 MB file be no problem at all? Eventually we'll want to read in files that are around 100 MB, and it would be nice to be able to open them and then read through them line by line with fgets, as I'm able to do with smaller files. What are your tricks/solutions for reading…
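The question is truncated, but the standard advice is to stream with fgets rather than slurping the whole file, so memory tracks the longest line instead of the file size. The same pattern, sketched in Python for illustration:

```python
def count_lines(path):
    """Iterate over a large file lazily; only one line is ever held
    in memory, so a 100 MB file is no harder than a 100 KB one."""
    n = 0
    with open(path) as f:        # a file object is a lazy line iterator
        for _line in f:
            n += 1
    return n
```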

Working with huge files in VIM

瘦欲@ submitted on 2019-11-26 11:48:05
Question: I tried opening a huge (~2 GB) file in VIM, but it choked. I don't actually need to edit the file, just jump around efficiently. How can I go about working with very large files in VIM?

Answer 1: I had a 12 GB file to edit today. The vim LargeFile plugin did not work for me; it still used up all my memory and then printed an error message :-(. I could not use hexedit for this either, as it cannot insert anything, only overwrite. Here is an alternative approach: you split the file, edit the parts, and then…
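The answer breaks off after "you split the file, edit the parts, and then", presumably "...join them back together." A rough Python sketch of that split/rejoin workflow; the chunk size is an arbitrary choice, and note that splitting on raw byte boundaries can cut a line in half:

```python
import shutil

CHUNK = 500 * 1024 * 1024            # 500 MB pieces; pick what fits in RAM

def split_file(path):
    """Write path.000, path.001, ... so each piece can be edited alone."""
    parts = []
    with open(path, 'rb') as src:
        i = 0
        while True:
            data = src.read(CHUNK)
            if not data:
                break
            part = '%s.%03d' % (path, i)
            with open(part, 'wb') as dst:
                dst.write(data)
            parts.append(part)
            i += 1
    return parts

def join_parts(parts, out_path):
    """Concatenate the (edited) pieces back into a single file."""
    with open(out_path, 'wb') as dst:
        for part in parts:
            with open(part, 'rb') as src:
                shutil.copyfileobj(src, dst)
```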

Read and parse a Json File in C#

六眼飞鱼酱① submitted on 2019-11-26 11:35:26
Question: I have spent the best part of two days "faffing" about with code samples etc., trying to read a very large JSON file into an array in C# so I can later split it up into a 2D array for processing. The problem I was having was that I could not find any examples of people doing what I was trying to do, which meant I was just editing code a little and hoping for the best. I have managed to get something working that will:

- Read the file
- Miss out headers and only read values into the array
- Place a certain number of values on each line of an array (so I could later split it and put it into a 2D array)

This was…
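The asker's working code never appears in the excerpt. For JSON files too big to load in one call, the usual fix is incremental parsing; here is a sketch using the third-party ijson package for Python, assuming (an assumption on my part) that the file's top level is one big JSON array:

```python
import ijson                      # third-party: pip install ijson

def stream_items(path):
    """Yield top-level array elements one at a time, so memory use
    is per-element rather than per-file."""
    with open(path, 'rb') as f:
        # 'item' addresses each element of a top-level array
        for item in ijson.items(f, 'item'):
            yield item

for row in stream_items('big.json'):
    pass                          # build the 2D structure row by row here
```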

using php to download files, not working on large files? [duplicate]

喜欢而已 submitted on 2019-11-26 11:29:29
Question: This question already has an answer here: Downloading large files reliably in PHP (13 answers). I'm using PHP to download files, rather than having the file itself open in a new window. It seems to work OK for smaller files, but does not work for large files (and I need this to work on very large files). Here's the code I have to download the file:

```
function downloadFile($file) {
    if (file_exists($file)) {
        // download file
        header('Content-Description: File Transfer');
        header('Content-Type: application…
```
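The function is cut off above, but the classic large-file failure here is buffering the entire file in memory before anything reaches the client. The portable fix is to emit fixed-size chunks; the core loop, sketched in Python with an illustrative write callback and file name:

```python
def stream_file(path, write, chunk_size=8192):
    """Push a file through `write` (e.g. a response body) one chunk
    at a time, so memory use is bounded by chunk_size, not file size."""
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            write(chunk)

# Usage sketch: any callable that accepts bytes works as the sink.
buf = bytearray()
stream_file('big.bin', buf.extend)
```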

Downloading a Large File - iPhone SDK

烂漫一生 submitted on 2019-11-26 10:08:31
Question: I am using Erica Sadun's method of asynchronous downloads (link here for the project file: download); however, her method does not work with files that have a big size (50 MB or above). If I try to download a file above 50 MB, it will usually crash due to a memory crash. Is there any way I can tweak this code so that it works with large files as well? Here is the code I have in the DownloadHelper classes (which is already in the download link):

```
// DownloadHelper.h
@protocol DownloadHelperDelegate <NSObject>…
```
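The header listing is truncated, but a memory crash of this kind typically comes from accumulating the whole response in memory before saving it; the cure is to append each incoming chunk straight to a file on disk. That stream-to-disk pattern, sketched here in Python (URL and file names are placeholders):

```python
import shutil
import urllib.request

def download_to_disk(url, dest):
    """Stream an HTTP body to disk in 64 KB chunks so memory stays
    flat no matter how large the download is."""
    with urllib.request.urlopen(url) as resp, open(dest, 'wb') as out:
        shutil.copyfileobj(resp, out, 64 * 1024)

download_to_disk('https://example.com/big.zip', 'big.zip')
```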

What is the fastest way to create a checksum for large files in C#

点点圈 submitted on 2019-11-26 08:54:11
Question: I have to sync large files across some machines. The files can be up to 6 GB in size. The sync will be done manually every few weeks. I can't take the filename into consideration because the names can change at any time. My plan is to create checksums on the destination PC and on the source PC, and then copy all files whose checksums are not already present at the destination. My first attempt was something like this:

```
using System.IO;
using System.Security.Cryptography;

private static…
```
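The first attempt is cut off after the using directives. Whatever the hash, the key to large files is feeding the digest in fixed-size chunks instead of reading the file whole; the pattern, sketched with Python's hashlib (MD5 is adequate here since this is change detection rather than security, and the chunk size is an arbitrary choice):

```python
import hashlib

def file_checksum(path, chunk_size=1024 * 1024):
    """Hash a file of any size with bounded memory by updating
    the digest one chunk at a time."""
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()
```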