large-files

How to efficiently write large files to disk on background thread (Swift)

Submitted by 孤者浪人 on 2019-11-28 14:27:27
Question: Update: I have resolved and removed the distracting error. Please read the entire post and feel free to leave comments if any questions remain. Background: I am attempting to write relatively large files (video) to disk on iOS using Swift 2.0, GCD, and a completion handler. I would like to know whether there is a more efficient way to perform this task. The task needs to be done without blocking the main UI, while using completion logic, and also ensuring that the operation happens as quickly as …
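The thread itself is Swift/GCD-specific, but the shape of the task (copy in bounded chunks on a background thread, then signal completion) is language-agnostic. A minimal sketch of that shape in Python, where the paths, chunk size, and callback are illustrative assumptions, not the poster's code:

    import threading

    def write_file_async(src_path, dst_path, on_done, chunk_size=1 << 20):
        """Copy src_path to dst_path in 1 MiB chunks on a background thread,
        then call on_done(error_or_None): the completion-handler idea."""
        def worker():
            try:
                with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
                    while True:
                        chunk = src.read(chunk_size)   # never hold the whole file in memory
                        if not chunk:
                            break
                        dst.write(chunk)
                on_done(None)
            except OSError as exc:
                on_done(exc)
        threading.Thread(target=worker, daemon=True).start()

    # usage (hypothetical paths):
    # write_file_async("capture.mov", "/tmp/capture-copy.mov", lambda err: print("done:", err))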

R: Loops to process large dataset (GBs) in chunks?

Submitted by 我怕爱的太早我们不能终老 on 2019-11-28 11:31:47
I have a large data set (several GB) that I have to process before I can analyse it. I tried creating a connector, which lets me loop through the large dataset and extract one chunk at a time; this allows me to set aside data that satisfies some conditions. My problem is that I am not able to create an indicator for the connector that signals it is null, so that close(connector) is executed when the end of the dataset is reached. Moreover, for the first chunk of extracted data I have to skip 17 lines, since the file contains a header that R is not able to read. A manual attempt that works: filename= …
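The question is about R connections, but the underlying loop (skip the header once, pull a fixed number of lines per pass, and treat an empty read as the end-of-data signal) can be sketched compactly. Python is used here purely for illustration; satisfies_condition() and handle() are hypothetical placeholders for the filtering and analysis described above:

    from itertools import islice

    def process_in_chunks(path, header_lines=17, chunk_size=100_000):
        with open(path) as fh:                    # the "connector"; closed automatically on exit
            for _ in range(header_lines):         # skip the header lines R cannot parse
                next(fh, None)
            while True:
                chunk = list(islice(fh, chunk_size))
                if not chunk:                     # empty chunk == end of data, the missing indicator
                    break
                keep = [line for line in chunk if satisfies_condition(line)]
                handle(keep)

    # satisfies_condition() and handle() stand in for the question's
    # filtering conditions and downstream analysis.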

Reading large excel file with PHP

Submitted by 孤人 on 2019-11-28 10:23:28
I'm trying to read a 17 MB Excel (2003) file with PHPExcel 1.7.3c, but it crashes while loading the file, after exceeding the 120-second limit I have. Is there another library that can do it more efficiently? I have no need for styling; I only need it to support UTF-8. Thanks for your help. File size isn't a good measure when using PHPExcel; it's more important to get some idea of the number of cells (rows × columns) in each worksheet. If you have no need for styling, are you calling $objReader->setReadDataOnly(true); before loading the file? If you don't need to access all worksheets, or …
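The advice above is PHPExcel-specific: read-only mode and read filters keep the in-memory cell cache small. If switching tools is acceptable and the workbook can be saved as .xlsx, the same principle (stream rows, never build the full styled workbook in memory) looks like this with Python's openpyxl; the filename and process() are assumptions for illustration:

    from openpyxl import load_workbook

    # read_only=True streams rows instead of materialising every cell object;
    # data_only=True returns cached cell values rather than formulas.
    wb = load_workbook("data.xlsx", read_only=True, data_only=True)
    ws = wb.active

    for row in ws.iter_rows(values_only=True):
        process(row)          # process() is a placeholder for your own handling

    wb.close()                # read-only workbooks hold the file handle until closed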

WSGI file streaming with a generator

Submitted by 烂漫一生 on 2019-11-28 09:30:27
I have the following code:

    def application(env, start_response):
        path = process(env)
        fh = open(path, 'r')
        start_response('200 OK', [('Content-Type', 'application/octet-stream')])
        return fbuffer(fh, 10000)

    def fbuffer(f, chunk_size):
        '''Generator to buffer file chunks'''
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

I'm not sure that it's right, but the scraps of information I've found on the internet have led me to think it ought to work. Basically I want to stream a file out in chunks, and to do that I'm passing a generator back from my application function. However this …
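One concrete improvement worth sketching: PEP 3333 lets the server expose wsgi.file_wrapper, which takes over chunked streaming of a file-like object. A minimal variant of the handler above that uses it when present and falls back to the hand-rolled generator otherwise; process() is assumed to resolve the path as in the question, and the file is opened in binary mode, which matters for an octet-stream response:

    def application(env, start_response):
        path = process(env)                    # process() as defined by the question's code
        fh = open(path, 'rb')                  # binary mode for binary content
        start_response('200 OK', [('Content-Type', 'application/octet-stream')])
        file_wrapper = env.get('wsgi.file_wrapper')
        if file_wrapper is not None:
            return file_wrapper(fh, 10000)     # the server streams and closes the file itself
        return fbuffer(fh, 10000)              # otherwise fall back to the generator

    def fbuffer(f, chunk_size):
        '''Generator to buffer file chunks.'''
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                f.close()
                break
            yield chunk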

[Android SDK] Can't copy external database (13MB) from Assets

Submitted by 我的梦境 on 2019-11-28 06:09:02
Question: I need a list of Italian words for a game I'm developing, but I can't actually make it copy my database from assets. I tried quite a lot of solutions I found on the website, such as "Using your own SQLite database in Android applications", "How to copy large database which occupies much memory from assets folder to my application?", and "Load files bigger than 1M from assets folder". But I had no luck; it keeps giving me this error on the line os.write(buffer, 0, len); and I can't understand why. Here's …
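The fix in the linked answers is Android-specific (typically a buffered InputStream/OutputStream copy, and for large assets on old SDK levels, making sure the file is stored uncompressed). The loop that the failing os.write(buffer, 0, len) line belongs to is just a chunked stream copy; as a language-neutral illustration, the same loop sketched in Python with hypothetical paths:

    def copy_stream(src, dst, buf_size=1024):
        """Copy src to dst in small buffers, mirroring the
        InputStream/OutputStream loop from the question."""
        while True:
            buf = src.read(buf_size)
            if not buf:                 # read() returning nothing == end of the asset
                break
            dst.write(buf)              # the counterpart of os.write(buffer, 0, len)

    # hypothetical locations, for illustration only:
    with open("assets/words.db", "rb") as src, open("databases/words.db", "wb") as dst:
        copy_stream(src, dst)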

Writing large files with Node.js

Submitted by 倖福魔咒の on 2019-11-28 05:22:49
I'm writing a large file with node.js using a writable stream:

    var fs = require('fs');
    var stream = fs.createWriteStream('someFile.txt', { flags: 'w' });
    var lines;
    while (lines = getLines()) {
        for (var i = 0; i < lines.length; i++) {
            stream.write(lines[i]);
        }
    }

I'm wondering whether this scheme is safe without using the drain event. If it is not (which I think is the case), what is the pattern for writing arbitrarily large data to a file? That's how I finally did it: the idea is to create a readable stream implementing the ReadStream interface and then use the pipe() method to pipe the data to the writable …
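The thread's answer is Node-specific: stream.write() returns false when the internal buffer is full, so the producer should wait for the 'drain' event, or simply pipe() a readable stream so backpressure is handled automatically. The general idea, a producer that blocks instead of queueing unbounded data when the writer falls behind, can be sketched outside Node; here is an illustrative Python version using a bounded queue, where get_lines() stands in for the question's getLines():

    import queue
    import threading

    q = queue.Queue(maxsize=16)           # bounded: the producer blocks when the writer lags

    def producer(get_lines):
        while True:
            lines = get_lines()
            if not lines:
                q.put(None)               # sentinel marking the end of the data
                return
            for line in lines:
                q.put(line)               # blocks here instead of buffering without limit

    def writer(path):
        with open(path, "w") as out:
            while True:
                item = q.get()
                if item is None:
                    break
                out.write(item)

    # usage sketch: run producer(get_lines) and writer("someFile.txt")
    # on separate threads, e.g. with threading.Thread(...).start().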

How to read 4GB file on 32bit system

Submitted by 生来就可爱ヽ(ⅴ<●) on 2019-11-28 05:03:21
Question: In my case I have different files; let's assume I have a >4GB file with data. I want to read that file line by line and process each line. One of my restrictions is that the software has to run on 32-bit MS Windows, or on 64-bit with a small amount of RAM (min 4GB). You can also assume that processing these lines isn't the bottleneck. In the current solution I read the file with an ifstream and copy each line into a string. Here is a snippet of how it looks:

    std::ifstream file(filename_xml.c_str());
    uintmax_t m…
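The discussion is about C++ and ifstream buffering, but the essential point is that the file never needs to fit in memory: a streaming, line-at-a-time read with a bounded buffer behaves the same on a 32-bit build. As a compact illustration of that pattern (in Python, with handle() as a placeholder for the per-line processing):

    def process_lines(path):
        with open(path, "r", encoding="utf-8", errors="replace") as fh:
            for line in fh:        # the file object streams; only one line is held at a time
                handle(line)       # handle() stands in for whatever the processing is

    # process_lines("huge_file.xml")   # hypothetical name, echoing filename_xml above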

gitignore by file size?

Submitted by 南楼画角 on 2019-11-28 03:48:32
I'm trying to adopt Git to manage creative assets (Photoshop, Illustrator, Maya, etc.), and I'd like to exclude files from Git based on file size rather than extension, location, etc. For example, I don't want to exclude all .avi files, but there are a handful of massive 1 GB+ .avi files in random directories that I don't want to commit. Any suggestions? I'm new to .gitignore, so there may be better ways to do this, but I've been excluding files by file size using:

    find . -size +1G | cat >> .gitignore

Obviously you'll have to run this command frequently if you're generating a lot of large files.
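A caveat worth noting about that one-liner: it appends raw find output, so rerunning it creates duplicate entries, and paths prefixed with ./ may not match the way you expect in .gitignore. A small script in the same spirit, sketched in Python under the assumption that a 1 GiB cutoff and a repo-root .gitignore are what's wanted, which only appends paths not yet listed:

    import os

    THRESHOLD = 1 << 30                         # 1 GiB, same cutoff as the find command

    with open(".gitignore", "a+") as gi:
        gi.seek(0)
        already = {line.strip() for line in gi}
        for root, _dirs, files in os.walk("."):
            if ".git" in root.split(os.sep):
                continue                        # never touch Git's own object store
            for name in files:
                # relative path with forward slashes and no leading ./
                path = os.path.relpath(os.path.join(root, name)).replace(os.sep, "/")
                try:
                    size = os.path.getsize(path)
                except OSError:
                    continue                    # skip unreadable entries and broken links
                if size > THRESHOLD and path not in already:
                    gi.write(path + "\n")
                    already.add(path)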

Get Large File Size in C

Submitted by 試著忘記壹切 on 2019-11-28 03:13:35
Question: Before anyone complains of "duplicate": I've been checking SO quite thoroughly, but there seems to be no clean answer yet, although the question looks quite simple. I'm looking for portable C code that can provide the size of a file, even if that file is bigger than 4GB. The usual method (fseek, ftell) works fine as long as the file remains < 2GB. It's fairly well supported everywhere, so I'm trying to find something equivalent. Unfortunately, the updated methods (fseeko, ftello …
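The question is about portable C, and the truncated sentence is heading into the fseeko/ftello discussion. Purely as a quick cross-check of the expected value on a given machine, the size can also be read from Python, whose os.stat returns a 64-bit st_size on common platforms; the filename here is a hypothetical example:

    import os

    # st_size is not truncated at 4 GB, so this is a handy reference value
    print(os.stat("big_file.bin").st_size)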

Best Free Text Editor Supporting *More Than* 4GB Files? [closed]

Submitted by 淺唱寂寞╮ on 2019-11-28 02:39:58
I am looking for a text editor that will be able to load a 4+ gigabyte file. TextPad doesn't work: I own a copy and have been to its support site; it just doesn't do it. Maybe I need new hardware, but that's a different question. The editor needs to be free or, if it's going to cost me, no more than $30. For Windows. VonC: glogg could also be considered, for a different usage. Caveat (reported by Simon Tewsi in the comments, Feb. 2013): it has two search functions, Main Search and Quick Find. The lower one, which I assume is Quick Find, is at least an order of …