large-files

What is different with PushStreamContent between web api & web api 2?

为君一笑 submitted on 2019-12-03 12:26:00
Question: I've created two identical Web API projects, one in VS 2012 and another in VS 2013, both targeting the .NET Framework 4.5. The projects are based on Filip W's video download tutorial found here: http://www.strathweb.com/2013/01/asynchronously-streaming-video-with-asp-net-web-api/ Copying and pasting the code from the tutorial into the VS 2012 project (using Web API 1?) produces no errors (after I add the proper 'using' statements). However, when I follow the same steps in the VS 2013 project I …
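A likely cause, hedged: between Web API 1 and Web API 2 the PushStreamContent delegate's second parameter changed from HttpContentHeaders to HttpContent, so code written against the older overload no longer compiles as-is. A minimal sketch of the Web API 2 shape (controller name and file path are illustrative, not from the tutorial):

```csharp
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class VideosController : ApiController
{
    public HttpResponseMessage Get()
    {
        var response = Request.CreateResponse();

        // Web API 2: the second lambda parameter is HttpContent
        // (in Web API 1 it was HttpContentHeaders).
        response.Content = new PushStreamContent(
            async (outputStream, httpContent, transportContext) =>
            {
                using (var video = File.OpenRead(@"C:\videos\sample.mp4"))
                {
                    await video.CopyToAsync(outputStream);
                }
                outputStream.Close();
            },
            new MediaTypeHeaderValue("video/mp4"));

        return response;
    }
}
```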

Dealing with large files in Haskell

不打扰是莪最后的温柔 submitted on 2019-12-03 12:25:10
I have a large file (4+ GB) of, let's just say, 4-byte floats. I would like to treat it as a list, in the sense that I would like to be able to use map, filter, foldl, etc. However, instead of producing a new list with the output, I would like to write the output back into the file, and thus only have to load a small portion of the file into memory. You could say I want a type called MutableFileList. Has anyone run into this situation before? Instead of re-inventing the wheel, I was wondering if there is a hackish way of dealing with this? You should not treat it as a [Double] or [Float] in memory.
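There is no standard MutableFileList type, but the map case can be sketched in constant memory with chunked I/O. This assumes the "binary" package's getFloatle/putFloatle (little-endian 4-byte floats, binary >= 0.8.4) and that the file length is a multiple of 4 bytes; the chunk size and file names are illustrative:

```haskell
import           Control.Monad        (replicateM)
import qualified Data.ByteString      as B
import qualified Data.ByteString.Lazy as BL
import           Data.Binary.Get      (runGet, getFloatle)
import           Data.Binary.Put      (runPut, putFloatle)
import           System.IO

-- Map a function over every 4-byte float in a file, writing the results to a
-- second file, holding only one fixed-size chunk in memory at a time.
mapFloatsFile :: (Float -> Float) -> FilePath -> FilePath -> IO ()
mapFloatsFile f inPath outPath =
    withBinaryFile inPath  ReadMode  $ \hIn  ->
    withBinaryFile outPath WriteMode $ \hOut -> do
        let chunkBytes = 4 * 65536          -- 64K floats per chunk
            loop = do
                chunk <- B.hGet hIn chunkBytes
                if B.null chunk
                    then return ()
                    else do
                        let n  = B.length chunk `div` 4
                            xs = runGet (replicateM n getFloatle) (BL.fromStrict chunk)
                        BL.hPut hOut (runPut (mapM_ (putFloatle . f) xs))
                        loop
        loop

main :: IO ()
main = mapFloatsFile (* 2) "floats.dat" "floats-doubled.dat"
```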

Hadoop put performance - large file (20gb)

浪尽此生 submitted on 2019-12-03 09:23:10
Question: I'm using hdfs -put to load a large 20 GB file into HDFS. Currently the process runs in about 4 minutes. I'm trying to improve the write time for loading data into HDFS. I tried different block sizes to improve write speed but got the results below: 512M blocksize = 4 mins; 256M blocksize = 4 mins; 128M blocksize = 4 mins; 64M blocksize = 4 mins. Does anyone know what the bottleneck could be, and what other options I could explore to improve the performance of the -put command? Answer 1: 20 GB / 4 minutes comes out to …
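Completing the back-of-the-envelope arithmetic the truncated answer starts, with the block-size override shown for reference (paths are illustrative; whether the source disk or the network is actually the limit would need measuring):

```sh
# 20 GB / 240 s ≈ 85 MB/s, which is already close to a single SATA disk's
# sequential read speed and to a saturated 1 GbE link, so the client's local
# disk or the network is the more likely bottleneck than the HDFS block size.

# The block size can still be overridden per command if you want to experiment:
hdfs dfs -D dfs.blocksize=268435456 -put /local/bigfile.dat /data/bigfile.dat
```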

Error tokenizing data. C error: out of memory pandas python, large file csv

自古美人都是妖i submitted on 2019-12-03 07:34:42
Question: I have a large CSV file of 3.5 GB and I want to read it using pandas. This is my code: import pandas as pd tp = pd.read_csv('train_2011_2012_2013.csv', sep=';', iterator=True, chunksize=20000000, low_memory = False) df = pd.concat(tp, ignore_index=True) I get this error: pandas/parser.pyx in pandas.parser.TextReader.read (pandas/parser.c:8771)() pandas/parser.pyx in pandas.parser.TextReader._read_rows (pandas/parser.c:9731)() pandas/parser.pyx in pandas.parser.TextReader._tokenize_rows …
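A hedged sketch of the usual workaround: pd.concat(tp, ...) rebuilds the entire 3.5 GB table in memory, which defeats the purpose of iterator/chunksize, so process each chunk and keep only a small per-chunk result. The aggregation below is illustrative, not from the question, and a chunksize of 20,000,000 rows is far larger than usually intended:

```python
import pandas as pd

partial_sums = []
for chunk in pd.read_csv('train_2011_2012_2013.csv', sep=';', chunksize=200_000):
    # Do the real per-chunk work here; keep only the small summary, not the chunk.
    partial_sums.append(chunk.select_dtypes('number').sum())

result = sum(partial_sums)   # element-wise sum of the per-chunk Series
print(result)
```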

c handle large file

此生再无相见时 submitted on 2019-12-03 07:34:14
I need to parse a file that could be many GB in size. I would like to do this in C. Can anyone suggest any methods to accomplish this? The file that I need to open and parse is a hard drive dump that I get from my Mac's hard drive. However, I plan on running my program inside 64-bit Ubuntu 10.04. Also, given the large file size, the more optimized the method the better. On both *nix and Windows, there are extensions to the I/O routines that deal with file size which will support sizes larger than 2 GB or 4 GB. Naturally, the underlying file system must also support a file that large. On Windows, …
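A minimal 64-bit Linux sketch of the kind of extension the answer is describing: defining _FILE_OFFSET_BITS=64 before the includes makes off_t 64-bit, so fseeko/ftello can address offsets past 4 GB even on 32-bit builds. The path and chunk size are illustrative:

```c
#define _FILE_OFFSET_BITS 64   /* 64-bit off_t even on 32-bit builds */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(void)
{
    FILE *f = fopen("/path/to/drive.dump", "rb");
    if (!f) { perror("fopen"); return EXIT_FAILURE; }

    /* find the size, then seek to the middle -- beyond 4 GB if need be */
    fseeko(f, 0, SEEK_END);
    off_t size = ftello(f);
    printf("file size: %lld bytes\n", (long long)size);
    fseeko(f, size / 2, SEEK_SET);

    /* parse in fixed-size chunks instead of loading the whole file */
    static unsigned char buf[1 << 20];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0) {
        /* ... parse n bytes of buf here ... */
    }

    fclose(f);
    return 0;
}
```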

Can someone provide an example of seeking, reading, and writing a >4GB file using boost iostreams

我的未来我决定 submitted on 2019-12-03 07:20:28
I have read that boost iostreams supposedly supports 64-bit access to large files in a semi-portable way. Their FAQ mentions 64-bit offset functions, but there are no examples of how to use them. Has anyone used this library for handling large files? A simple example of opening two files, seeking to their middles, and copying one to the other would be very helpful. Thanks. Short answer: Just include #include <boost/iostreams/seek.hpp> and use the seek function as in boost::iostreams::seek(device, offset, whence); where device is a file, stream, streambuf or any object convertible to seekable; …
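A rough, untested sketch of the asker's scenario in the spirit of that answer: open two file devices, seek both with a 64-bit stream_offset, and copy the remainder in chunks (the file names and the 3 GB offset are illustrative):

```cpp
#include <boost/iostreams/device/file.hpp>
#include <boost/iostreams/operations.hpp>   // read, write, seek
#include <boost/iostreams/positioning.hpp>  // stream_offset (64-bit)
#include <ios>
#include <vector>

namespace io = boost::iostreams;

int main()
{
    io::file_source in ("big_input.bin",  std::ios::in  | std::ios::binary);
    io::file_sink   out("big_output.bin", std::ios::out | std::ios::binary);

    // 64-bit offsets: seek both devices to ~3 GB from the start.
    io::stream_offset middle = io::stream_offset(3) * 1024 * 1024 * 1024;
    io::seek(in,  middle, std::ios::beg);
    io::seek(out, middle, std::ios::beg);

    // Copy the remainder in 1 MB chunks; io::read returns -1 at end of input.
    std::vector<char> buf(1 << 20);
    std::streamsize n;
    while ((n = io::read(in, buf.data(), static_cast<std::streamsize>(buf.size()))) > 0)
        io::write(out, buf.data(), n);

    return 0;
}
```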

Parsing large (20GB) text file with python - reading in 2 lines as 1

冷暖自知 submitted on 2019-12-03 07:12:14
Question: I'm parsing a 20 GB file and outputting lines that meet a certain condition to another file; however, occasionally Python will read in 2 lines at once and concatenate them. inputFileHandle = open(inputFileName, 'r') row = 0 for line in inputFileHandle: row = row + 1 if line_meets_condition: outputFileHandle.write(line) else: lstIgnoredRows.append(row) I've checked the line endings in the source file and they check out as line feeds (ASCII char 10). Pulling out the problem rows and parsing them …
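The excerpt is cut off before any diagnosis, so rather than guessing at the cause, here is a hedged diagnostic sketch: with the files opened in binary mode Python splits strictly on the newline byte, so any remaining "two lines in one" reflects the bytes actually on disk rather than text-mode newline handling. Names mirror the question; outputFileName and the bytes-accepting line_meets_condition are assumptions:

```python
with open(inputFileName, 'rb') as infile, \
     open(outputFileName, 'wb') as outfile:
    lstIgnoredRows = []
    for row, line in enumerate(infile, start=1):   # splits only on b'\n'
        if line_meets_condition(line):
            outfile.write(line)
        else:
            lstIgnoredRows.append(row)
```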

Android pinch zoom large image, memory efficient without losing detail

时间秒杀一切 submitted on 2019-12-03 06:07:12
My app has to display a number of high-resolution images (about 1900*2200 px) and support pinch zoom. To avoid out-of-memory errors I plan to decode each image for full-screen display using options.inSampleSize = scale (with scale calculated as a power of 2, as the documentation recommends). The view I use is TouchImageView, an extension of ImageView. This way I can load an image quickly and swipe smoothly between screens (images). However, when I pinch-zoom, the app loses detail because of the scaled image. If I load the full image, I can't load it quickly or swipe and drag smoothly after the pinch zoom. Then I tried to load the full image only when the user begins …
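A common pattern for exactly this trade-off, sketched with assumptions: keep the inSampleSize-scaled bitmap for normal viewing, and once the user zooms in, decode only the visible region at higher resolution with BitmapRegionDecoder. The class and method names below are illustrative and not part of TouchImageView:

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.BitmapRegionDecoder;
import android.graphics.Rect;

import java.io.IOException;
import java.io.InputStream;

public class RegionLoader {
    private final BitmapRegionDecoder decoder;

    public RegionLoader(InputStream imageStream) throws IOException {
        // second argument is the isShareable flag
        decoder = BitmapRegionDecoder.newInstance(imageStream, false);
    }

    /** Decode only the part of the image currently visible on screen. */
    public Bitmap loadVisibleRegion(Rect visibleRectInImageCoords, int sampleSize) {
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inSampleSize = sampleSize;                 // 1 when fully zoomed in
        opts.inPreferredConfig = Bitmap.Config.RGB_565; // half the memory of ARGB_8888
        return decoder.decodeRegion(visibleRectInImageCoords, opts);
    }
}
```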

How do I download a large file (via HTTP) in .NET?

白昼怎懂夜的黑 submitted on 2019-12-03 05:40:42
Question: I need to download a large file (2 GB) over HTTP in a C# console application. The problem is, after about 1.2 GB, the application runs out of memory. Here's the code I'm using: WebClient request = new WebClient(); request.Credentials = new NetworkCredential(username, password); byte[] fileData = request.DownloadData(baseURL + fName); As you can see, I'm reading the file directly into memory. I'm pretty sure I could solve this if I were to read the data back from HTTP in chunks and write it to a …
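A minimal sketch of the chunked approach the asker suspects is the fix: stream the response to disk instead of buffering it in a byte[]. Variable names mirror the question; the local path and buffer size are illustrative, and WebClient.DownloadFile(address, fileName) does the same thing in one call:

```csharp
using System.IO;
using System.Net;

class Downloader
{
    static void Download(string baseURL, string fName, string username, string password)
    {
        var request = new WebClient();
        request.Credentials = new NetworkCredential(username, password);

        // Copy the HTTP response stream to a file in small chunks;
        // memory use stays constant regardless of the file size.
        using (Stream http = request.OpenRead(baseURL + fName))
        using (Stream file = File.Create(@"C:\downloads\" + fName))
        {
            var buffer = new byte[81920];
            int read;
            while ((read = http.Read(buffer, 0, buffer.Length)) > 0)
                file.Write(buffer, 0, read);
        }
    }
}
```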

How may I scroll with vim into a big file?

可紊 submitted on 2019-12-03 04:22:43
Question: I have a big file with thousands of lines of thousands of characters. I move the cursor to the 3000th character. If I use PageDown or Ctrl + D, the file scrolls, but the cursor comes back to the first non-space character. Is there an option I can set to keep the cursor in the same column after such a scroll? I see this behavior with gvim on Windows, and with vim on OpenVMS and Cygwin. Answer 1: CTRL-E - scroll down; CTRL-Y - scroll up. 100<CTRL-E> will scroll down 100 lines, for example. If you like using …
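The option the question is asking about is 'startofline': with it switched off, CTRL-D, CTRL-F and PageDown keep the cursor in its current column instead of jumping to the first non-blank character. A minimal snippet for the vimrc:

```vim
" Keep the cursor column on CTRL-D / CTRL-F / PageDown and similar motions.
set nostartofline

" Scrolling the view without moving the cursor at all (from the answer above):
"   CTRL-E  scroll down one line     CTRL-Y  scroll up one line
"   100<C-E> scrolls down 100 lines
```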