large-files

Android pinch zoom large image, memory efficient without losing detail

好久不见. Submitted on 2019-12-04 10:53:33
Question: My app has to display a number of high-resolution images (about 1900*2200 px) and support pinch zoom. To avoid an Out of memory error I plan to decode each image for full-screen display using options.inSampleSize = scale (scale calculated as a power of 2, as the documentation suggests). The view I use is TouchImageView, an extension of ImageView. This lets me load images quickly and swipe smoothly between screens (images). However, when I pinch-zoom, the app loses detail because of the downscaled image. If I load the full image, I can't load…
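
A common way to keep both smooth swiping and full zoom detail is to show the subsampled bitmap by default and decode only the currently visible region at full resolution once the user zooms in. The sketch below is a minimal illustration of that idea using BitmapRegionDecoder; the image path, visible Rect, and bitmap config are placeholders, and it is not tied to TouchImageView specifically.

    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import android.graphics.BitmapRegionDecoder;
    import android.graphics.Rect;
    import java.io.IOException;

    public final class RegionLoader {

        // Decode only the part of the image that is currently on screen, at
        // full resolution, instead of re-decoding the whole 1900x2200 bitmap.
        public static Bitmap decodeVisibleRegion(String imagePath, Rect visibleRect)
                throws IOException {
            BitmapRegionDecoder decoder =
                    BitmapRegionDecoder.newInstance(imagePath, false);
            try {
                BitmapFactory.Options opts = new BitmapFactory.Options();
                opts.inSampleSize = 1;                          // full detail for the zoomed region
                opts.inPreferredConfig = Bitmap.Config.RGB_565; // roughly halves memory vs ARGB_8888
                return decoder.decodeRegion(visibleRect, opts);
            } finally {
                decoder.recycle();
            }
        }
    }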

grep -f alternative for huge files

筅森魡賤 Submitted on 2019-12-04 09:24:30
Question: grep -F -f file1 file2. file1 is 90 MB (2.5 million lines, one word per line); file2 is 45 GB. That command doesn't actually produce anything whatsoever, no matter how long I leave it running. Clearly, this is beyond grep's scope: it seems grep can't handle that many patterns from the -f option. However, the following command does produce the desired result: head file1 > file3; grep -F -f file3 file2. I have doubts about whether sed or awk would be appropriate alternatives either, given the file…
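
One common workaround is to load the smaller pattern file into an awk hash and stream the big file past it. The sketch below is only a substitute for grep -F -f if a whole-field match is good enough; grep -F matches arbitrary substrings, which a plain hash lookup cannot do. (Running LC_ALL=C grep -F -f file1 file2 sometimes helps as well, since it avoids slow multibyte-locale handling.)

    # Build a hash of the 2.5 million words from file1, then print any line of
    # file2 that contains one of them as a whitespace-separated field.
    awk 'NR==FNR { pat[$0]; next }
         { for (i = 1; i <= NF; i++) if ($i in pat) { print; next } }' file1 file2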

Does fread fail for large files?

蓝咒 Submitted on 2019-12-04 06:37:11
I have to analyze a 16 GB file. I am reading through the file sequentially using fread() and fseek(). Is it feasible? Will fread() work for such a large file? You don't mention a language, so I'm going to assume C. I don't see any problems with fread, but fseek and ftell may have issues. Those functions use long int as the data type to hold the file position, rather than something intelligent like fpos_t or even size_t. This means they can fail to work on a file over 2 GB, and can certainly fail on a 16 GB file. You need to see how big long int is on your platform. If it's 64 bits, you…
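
A minimal sketch of the usual POSIX workaround follows, assuming C: read sequentially in chunks with fread(), and use ftello()/fseeko(), which take off_t instead of long, with _FILE_OFFSET_BITS=64 so off_t is 64-bit even on 32-bit glibc. The file name is a placeholder; on Windows the counterparts are _fseeki64()/_ftelli64().

    #define _FILE_OFFSET_BITS 64   /* make off_t 64-bit even on 32-bit glibc */
    #define _POSIX_C_SOURCE 200809L
    #include <stdio.h>
    #include <sys/types.h>

    int main(void)
    {
        FILE *fp = fopen("huge.bin", "rb");   /* "huge.bin" is a placeholder name */
        if (!fp) { perror("fopen"); return 1; }

        char buf[1 << 16];
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, fp)) > 0) {
            /* process n bytes here */
        }

        off_t pos = ftello(fp);               /* 64-bit position, unlike ftell() */
        fprintf(stderr, "stopped at offset %lld\n", (long long)pos);

        fclose(fp);
        return 0;
    }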

need help designing a search algorithm in a more efficient way

十年热恋 Submitted on 2019-12-04 05:17:45
I have a problem from the biology area. Right now I have 4 VERY LARGE files (each with 0.1 billion lines), but the structure is rather simple: each line of these files has only 2 fields, each standing for a type of gene. My goal is to design an efficient algorithm that achieves the following: find a circle within the contents of these 4 files. The circle is defined as: field #1 in a line in file 1 == field #1 in a line in file 2, and field #2 in a line in file 2 == field #1 in a line in file 3, and field #2 in a line in file 3 == field #1 in a line in file 4, and field #2 in a line in file 4…
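
One way to approach this is a hash join: index files 2-4 by their first field, then stream file 1 and chase the chain. The Python sketch below only illustrates the idea; the file names, the whitespace-separated format, and the closing condition (the definition above is cut off, so the circle is assumed to wrap back to field #1 of file 1) are all assumptions. With roughly 100 million lines per file the three indexes may not fit in RAM, in which case sorting the files and doing merge joins (or loading them into a database) is the usual fallback.

    from collections import defaultdict

    def load_index(path):
        """Map field #1 -> set of field #2 values for one file."""
        idx = defaultdict(set)
        with open(path) as f:
            for line in f:
                a, b = line.split()
                idx[a].add(b)
        return idx

    idx2, idx3, idx4 = (load_index(p) for p in ("file2", "file3", "file4"))

    with open("file1") as f1:
        for line in f1:
            a1, b1 = line.split()
            for b2 in idx2.get(a1, ()):          # field1(file1) == field1(file2)
                for b3 in idx3.get(b2, ()):      # field2(file2) == field1(file3)
                    for b4 in idx4.get(b3, ()):  # field2(file3) == field1(file4)
                        if b4 == a1:             # assumed closing condition
                            print(a1, b1, b2, b3, b4)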

How to read line-delimited JSON from large file (line by line)

邮差的信 Submitted on 2019-12-03 22:21:43
I'm trying to load a large file (2GB in size) filled with JSON strings, delimited by newlines. Ex: { "key11": value11, "key12": value12, } { "key21": value21, "key22": value22, } … The way I'm importing it now is: content = open(file_path, "r").read() j_content = json.loads("[" + content.replace("}\n{", "},\n{") + "]") Which seems like a hack (adding commas between each JSON string and also a beginning and ending square bracket to make it a proper list). Is there a better way to specify the JSON delimiter (newline \n instead of comma , )? Also, Python can't seem to properly allocate memory for

Read large file into sqlite table in objective-C on iPhone

久未见 Submitted on 2019-12-03 21:32:19
I have a 2 MB file, not too large, that I'd like to put into an sqlite database so that I can search it. There are about 30K entries in CSV format, with six fields per line. My understanding is that sqlite on the iPhone can handle a database of this size. I have taken a few approaches but they have all been slow (> 30 s). I've tried: 1) Using C code to read the file and parse the fields into arrays. 2) Using the following Objective-C code to parse the file and put it directly into the sqlite database: NSString *file_text = [NSString stringWithContentsOfFile: filePath usedEncoding:…
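
The usual fix for slow bulk inserts is to wrap all 30K inserts in a single transaction and reuse one prepared statement, which typically turns tens of seconds into well under a second. The sketch below is a minimal illustration; the table name "entries", its six text columns, the already-opened sqlite3 *db handle, and the parsed rows array (an NSArray of six-element NSArrays) are all assumptions.

    sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, NULL);

    sqlite3_stmt *stmt = NULL;
    sqlite3_prepare_v2(db,
        "INSERT INTO entries (c1, c2, c3, c4, c5, c6) VALUES (?, ?, ?, ?, ?, ?)",
        -1, &stmt, NULL);

    for (NSArray *fields in rows) {              // rows: parsed CSV lines (assumed)
        for (int i = 0; i < 6; i++) {
            sqlite3_bind_text(stmt, i + 1,
                [fields[i] UTF8String], -1, SQLITE_TRANSIENT);
        }
        sqlite3_step(stmt);                      // execute the insert
        sqlite3_reset(stmt);                     // reuse the same statement
    }

    sqlite3_finalize(stmt);
    sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);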

AS3 Working With Arbitrarily Large Files

你。 Submitted on 2019-12-03 17:07:42
I am trying to read a very large file in AS3 and am having problems with the runtime just crashing on me. I'm currently using a FileStream to open the file asynchronously. This does not work (it crashes without an Exception) for files bigger than about 300 MB. _fileStream = new FileStream(); _fileStream.addEventListener(IOErrorEvent.IO_ERROR, loadError); _fileStream.addEventListener(Event.COMPLETE, loadComplete); _fileStream.openAsync(myFile, FileMode.READ); Looking at the documentation, it sounds like the FileStream class still tries to read the entire file into memory (which is bad for large…
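
A minimal sketch of one common workaround in AIR: cap the stream's readAhead buffer and drain it on each PROGRESS event, so only a bounded chunk is ever in memory. The 4 MB buffer size is an assumption, and _fileStream, myFile, loadComplete and loadError are taken from the snippet above.

    import flash.events.Event;
    import flash.events.IOErrorEvent;
    import flash.events.ProgressEvent;
    import flash.filesystem.FileMode;
    import flash.filesystem.FileStream;
    import flash.utils.ByteArray;

    private function openLargeFile():void {
        _fileStream = new FileStream();
        _fileStream.readAhead = 4 * 1024 * 1024;   // buffer at most ~4 MB ahead
        _fileStream.addEventListener(ProgressEvent.PROGRESS, onProgress);
        _fileStream.addEventListener(Event.COMPLETE, loadComplete);
        _fileStream.addEventListener(IOErrorEvent.IO_ERROR, loadError);
        _fileStream.openAsync(myFile, FileMode.READ);
    }

    private function onProgress(e:ProgressEvent):void {
        // Drain whatever has arrived so far; the whole file never sits in memory.
        var chunk:ByteArray = new ByteArray();
        _fileStream.readBytes(chunk, 0, _fileStream.bytesAvailable);
        // process chunk here, then let it go out of scope
    }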

off_t without -D_FILE_OFFSET_BITS=64 on a file > 2GB

血红的双手。 Submitted on 2019-12-03 17:02:27
1- I'm wondering: what would be the problem if I tried to read a file greater than 2 GB in size without compiling my program with the option -D_FILE_OFFSET_BITS=64, using off_t and the second function on this page? Would it segfault? 2- I'm planning to use this implementation with off64_t and #define _LARGEFILE64_SOURCE 1 #define _FILE_OFFSET_BITS 64 Would there be any problem? stat() will fail, with errno set to EOVERFLOW, in that case. Here's what the Linux man page says: EOVERFLOW (stat()) path refers to a file whose size cannot be represented in the type off_t. This can occur when an…
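
A minimal sketch of the difference, assuming Linux/glibc: with _FILE_OFFSET_BITS=64 defined before any header is included, off_t is 64-bit even on a 32-bit build and stat() on a > 2 GB file succeeds; without it, the same call fails with errno == EOVERFLOW rather than segfaulting.

    #define _FILE_OFFSET_BITS 64   /* remove this line to see the EOVERFLOW failure on 32-bit builds */
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <stdio.h>
    #include <errno.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) return 2;

        struct stat st;
        if (stat(argv[1], &st) != 0) {
            /* without the macro, a > 2 GB file ends up here with errno == EOVERFLOW */
            fprintf(stderr, "stat: %s\n", strerror(errno));
            return 1;
        }
        printf("size: %lld bytes\n", (long long)st.st_size);
        return 0;
    }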

ASP.NET C# OutOfMemoryException On Large File Upload

痞子三分冷 Submitted on 2019-12-03 16:39:45
I have the following file upload handler: public class FileUploader : IHttpHandler { public void ProcessRequest(HttpContext context) { HttpRequest request = context.Request; context.Response.ContentType = "text/html"; context.Response.ContentEncoding = System.Text.Encoding.UTF8; context.Response.Cache.SetCacheability(HttpCacheability.NoCache); var tempPath = request.PhysicalApplicationPath + "\\Files\\TempFiles\\"; byte[] buffer = new byte[request.ContentLength]; using (BinaryReader br = new BinaryReader(request.InputStream)) { br.Read(buffer, 0, buffer.Length); } var tempName = WriteTempFile…
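
The byte[request.ContentLength] allocation is what usually triggers the OutOfMemoryException for large uploads. A minimal sketch of the alternative is to copy the request stream to a temp file in fixed-size chunks; the 64 KB buffer size and the use of a random temp file name are assumptions, and for very large uploads the maxRequestLength / maxAllowedContentLength limits in web.config still apply.

    using System.IO;

    // Inside ProcessRequest: stream the upload to disk instead of buffering it all.
    var tempPath = Path.Combine(request.PhysicalApplicationPath, "Files", "TempFiles");
    var tempFile = Path.Combine(tempPath, Path.GetRandomFileName());

    var buffer = new byte[64 * 1024];   // 64 KB chunks (size is an assumption)
    using (var output = File.Create(tempFile))
    {
        int read;
        while ((read = request.InputStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            output.Write(buffer, 0, read);
        }
    }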

Multi-line regex search in whole file

[亡魂溺海] Submitted on 2019-12-03 12:52:25
I've found loads of examples of how to replace text in files using regex. However, it all boils down to two versions: 1. iterate over all lines in the file and apply the regex to each single line; 2. load the whole file. No. 2 is not feasible with "my" files - they're about 2 GiB... As to No. 1: currently this is my approach, however I was wondering: what if I need to apply a regex spanning more than one line? Here's the answer: there is no easy way. I found a StreamRegex class which might be able to do what I am looking for. From what I could grasp of the algorithm: start at the beginning of the…
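
A minimal sketch of the usual compromise (not the StreamRegex class mentioned above): read the file in overlapping chunks and run the regex on each window, so a match spanning a chunk boundary can still be found as long as it is shorter than the overlap. The chunk size, overlap size, pattern, and file name are all assumptions, and a match that falls entirely inside the overlap may be reported twice unless the results are deduplicated.

    using System;
    using System.IO;
    using System.Text.RegularExpressions;

    const int ChunkSize = 4 * 1024 * 1024;   // 4 MB per read (assumption)
    const int Overlap   = 64 * 1024;         // max expected match length (assumption)

    var regex = new Regex(@"BEGIN.*?END", RegexOptions.Singleline);   // placeholder pattern
    var carry = string.Empty;

    using (var reader = new StreamReader("huge.log"))                 // placeholder file name
    {
        var buffer = new char[ChunkSize];
        int read;
        while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
        {
            // Prepend the tail of the previous chunk so boundary-spanning matches survive.
            var window = carry + new string(buffer, 0, read);
            foreach (Match m in regex.Matches(window))
            {
                Console.WriteLine(m.Value);
            }
            carry = window.Length > Overlap
                ? window.Substring(window.Length - Overlap)
                : window;
        }
    }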