chunking

File uploads; How to utilize “chunking”?

白昼怎懂夜的黑 submitted on 2019-12-28 06:46:40
Question: I am (still) attempting to upload large files, up to 200 MB, via an HTML form using PHP. During my research I came across the term "chunking"; I understand that this process can break the file into handy sizes, such as 5 MB, and reassemble the pieces into the full file on the server side. My problem is where to begin: I seem unable to find the right resources by googling (or perhaps I don't know which terms to search for). ...
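A minimal sketch of the server-side half of that idea in PHP (the endpoint, field names, and directories are assumptions, and a real version would need validation, locking, and cleanup): the client POSTs one numbered chunk per request, and the script stitches the pieces together once all have arrived.

    <?php
    // upload.php -- a sketch, not a hardened implementation.
    // Expects one chunk per POST with "name", "index" and "total" fields.
    $name  = basename($_POST['name']);
    $index = (int) $_POST['index'];
    $total = (int) $_POST['total'];
    $dir   = sys_get_temp_dir() . '/chunks_' . md5($name);

    if (!is_dir($dir)) {
        mkdir($dir);
    }
    move_uploaded_file($_FILES['chunk']['tmp_name'], "$dir/$index");

    // When every piece is present, reassemble them in order.
    if (count(glob("$dir/*")) === $total) {
        $out = fopen("uploads/$name", 'wb');  // assumes an existing uploads/ directory
        for ($i = 0; $i < $total; $i++) {
            fwrite($out, file_get_contents("$dir/$i"));
        }
        fclose($out);
    }

A matching client-side slicing loop appears under the Blob.slice answer further down this page.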

ValueError: import data via chunks into pandas.csv_reader()

别来无恙 submitted on 2019-12-24 07:28:30
Question: I have a large gzip file which I would like to import into a pandas dataframe. Unfortunately, the file has an uneven number of columns. The data has roughly this format:

    .... Col_20: 25 Col_21: 23432 Col22: 639142
    .... Col_20: 25 Col_22: 25134 Col23: 243344
    .... Col_21: 75 Col_23: 79876 Col25: 634534 Col22: 5 Col24: 73453
    .... Col_20: 25 Col_21: 32425 Col23: 989423
    .... Col_20: 25 Col_21: 23424 Col22: 342421 Col23: 7 Col24: 13424 Col 25: 67
    .... Col_20: 95 Col_21: 32121 Col25: 111231

...
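A minimal sketch of chunked reading with pandas (the file name, separator, and column count are assumptions). An explicit names list wide enough for the longest row stops ragged lines from raising the ValueError, and chunksize yields the file piece by piece; turning the "Col_20: 25" key/value tokens into real columns would be a second pass.

    import pandas as pd

    # Wide enough for the longest row; shorter rows are padded with NaN,
    # so ragged lines no longer raise "ValueError: ... saw N columns".
    cols = ["col_%d" % i for i in range(30)]

    reader = pd.read_csv(
        "data.txt.gz",
        sep=r"\s+",          # whitespace-separated, per the sample above
        header=None,
        names=cols,
        compression="gzip",
        chunksize=100000,    # rows per chunk
    )

    chunks = [chunk for chunk in reader]   # or process and discard each chunk
    df = pd.concat(chunks, ignore_index=True)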

jQuery file uploader - Django not working correctly with chunks

三世轮回 submitted on 2019-12-24 01:52:32
Question: I've spent some days by now trying to figure out how to tell Django that my jQuery file uploader is sending chunks rather than x separate files. I know that I need a custom FileUploadHandler like the one here. My client-side code is posted in this question. The plugin sends chunk after chunk, each as a separate AJAX call (at least that is how it looks in FireBug). The server accepts every one of them and saves them under a different name (in my case "_1", "_2", "_3", ...). And yes, the handler is used.
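A sketch of the receiving side (the field name, directory, and in-order delivery are assumptions; jQuery-File-Upload sends chunks sequentially by default). Appending every chunk to one file replaces the separate "_1", "_2", ... copies:

    # views.py -- a sketch, not the plugin's official recipe.
    import os

    from django.http import JsonResponse
    from django.views.decorators.csrf import csrf_exempt

    UPLOAD_DIR = "/tmp/uploads"  # hypothetical target directory

    @csrf_exempt
    def upload(request):
        f = request.FILES["files[]"]  # the plugin's default field name
        os.makedirs(UPLOAD_DIR, exist_ok=True)
        path = os.path.join(UPLOAD_DIR, f.name)

        # The plugin sends a Content-Range header such as
        # "bytes 1000000-1999999/5000000"; a start of 0 means "first chunk".
        start = 0
        content_range = request.META.get("HTTP_CONTENT_RANGE", "")
        if content_range.startswith("bytes "):
            start = int(content_range.split(" ")[1].split("-")[0])

        mode = "wb" if start == 0 else "ab"  # truncate on first chunk, append after
        with open(path, mode) as dest:
            for piece in f.chunks():
                dest.write(piece)
        return JsonResponse({"name": f.name})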

Semantic parsing with NLTK

纵然是瞬间 submitted on 2019-12-23 10:22:45
Question: I am trying to use NLTK for semantic parsing of spoken navigation commands such as "go to San Francisco", "give me directions to 123 Main Street", etc. This could be done with a fairly simple CFG grammar such as:

    S -> COMMAND LOCATION
    COMMAND -> "go to" | "give me directions to" | ...
    LOCATION -> CITY | STREET | ...

The problem is that this involves non-atomic (more than one word long) literals such as "go to", which NLTK doesn't seem to be set up for (correct me if I am wrong). ...
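One hedged workaround: nltk.CFG accepts a sequence of several one-word terminals in a single production, which sidesteps the multi-word-literal problem entirely. The toy grammar below is assembled from the question's own examples.

    import nltk

    # "go to" as two one-word terminals in one production, instead of a
    # single multi-word literal the tokenized input could never match.
    grammar = nltk.CFG.fromstring("""
        S -> COMMAND LOCATION
        COMMAND -> 'go' 'to' | 'give' 'me' 'directions' 'to'
        LOCATION -> CITY
        CITY -> 'san' 'francisco'
    """)

    parser = nltk.ChartParser(grammar)
    tokens = "go to San Francisco".lower().split()
    for tree in parser.parse(tokens):
        print(tree)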

Memory usage serializing chunked byte arrays with Protobuf-net

旧巷老猫 submitted on 2019-12-19 06:06:07
Question: In our application we have some data structures which, amongst other things, contain a chunked list of bytes (currently exposed as a List<byte[]>). We chunk the bytes up because letting the byte arrays land on the large object heap leads to memory fragmentation over time. We have also started using Protobuf-net to serialize these structures, using our own generated serialization DLL. However, we have noticed that Protobuf-net creates very large in-memory buffers while serializing ...
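A hedged sketch of one mitigation, not the questioner's real schema: protobuf-net must buffer a length-prefixed sub-message (the default wire format) in memory so it can write the length first, whereas group encoding (DataFormat.Group) brackets the sub-message with start/end markers and can stream it straight to the output.

    using System.Collections.Generic;
    using System.IO;
    using ProtoBuf;

    // A sketch; the types below only guess at the real data structures.
    [ProtoContract]
    class Payload
    {
        [ProtoMember(1)]
        public List<byte[]> Chunks { get; set; }
    }

    [ProtoContract]
    class Envelope
    {
        // Group encoding: start/end markers instead of a length prefix,
        // so the sub-message need not be buffered to measure it.
        [ProtoMember(1, DataFormat = DataFormat.Group)]
        public Payload Data { get; set; }
    }

    static class SaveDemo
    {
        static void Save(Envelope e, string path)
        {
            using (var fs = File.Create(path))
            {
                Serializer.Serialize(fs, e);  // writes to the stream as it goes
            }
        }
    }

Whether this removes the large buffers in practice depends on where the real model nests its List<byte[]>, so it is worth profiling both encodings.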

How can I upload large files by chunk, pieces?

大兔子大兔子 submitted on 2019-12-18 16:56:55
Question: I have got a little file-sharing webpage; it is free to use. I would like to upload files between 0 MB and 1 GB. I have been searching Google for two days, but I can't find anything that does what I need... My webpage: http://boxy.tigyisolutions.hu However, I can only upload 20-30 MB at the moment. I would like to upload just one file at a time, but it may be bigger than 500-600 MB. Can anyone help me? I tried jQuery fileupload, but it uploads nothing for me.

Answer 1: The Blob.slice method will allow you to split up a file ...
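A sketch of that idea on the client (the /upload endpoint and its fields are hypothetical; a matching reassembly script appears under the first question on this page):

    // File inherits Blob.slice, so a big file can be cut into ~5 MB
    // pieces and POSTed one request at a time.
    async function uploadInChunks(file, chunkSize = 5 * 1024 * 1024) {
      const total = Math.ceil(file.size / chunkSize);
      for (let index = 0; index < total; index++) {
        const chunk = file.slice(index * chunkSize, (index + 1) * chunkSize);
        const form = new FormData();
        form.append('chunk', chunk);
        form.append('index', index);
        form.append('total', total);
        form.append('name', file.name);
        const res = await fetch('/upload', { method: 'POST', body: form });
        if (!res.ok) throw new Error(`chunk ${index} failed`);
      }
    }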

Handling large SQL select queries / Read sql data in chunks

强颜欢笑 submitted on 2019-12-18 12:29:19
Question: I'm using .NET 4.0 and SQL Server 2008 R2. I'm running a big SQL SELECT query which returns millions of results and takes a long time to complete. Does anyone know how I can read some of the results without having to wait for the whole query to finish? In other words, I want to read the results in 10,000-record chunks while the query is still running, then fetch the next chunk.

Answer 1: It depends in part on whether the query itself is streaming, or whether it does ...
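A sketch of the streaming-read side (the connection string, table, and Process handler are hypothetical): SqlCommand.ExecuteReader hands rows over as the server produces them, so the first 10,000 can be processed while the rest of the query is still running.

    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;

    static class ChunkedReadDemo
    {
        const int ChunkSize = 10000;

        // A sketch -- table, column, and Process() are stand-ins.
        static void ReadInChunks(string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("SELECT Id FROM dbo.BigTable", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    var chunk = new List<long>(ChunkSize);
                    while (reader.Read())  // rows arrive as the server produces them
                    {
                        chunk.Add(reader.GetInt64(0));
                        if (chunk.Count == ChunkSize)
                        {
                            Process(chunk);
                            chunk.Clear();
                        }
                    }
                    if (chunk.Count > 0) Process(chunk);
                }
            }
        }

        static void Process(List<long> chunk)
        {
            Console.WriteLine("got {0} rows", chunk.Count);
        }
    }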

How do I use File.ReadAllBytes In chunks

爷,独闯天下 submitted on 2019-12-18 09:56:53
Question: I am using this code:

    string location1 = textBox2.Text;
    byte[] bytes = File.ReadAllBytes(location1);
    string text = Convert.ToBase64String(bytes);
    richTextBox1.Text = text;

But when I use a file that is too big I get an out-of-memory exception. I want to use File.ReadAllBytes in chunks. I have seen code like this:

    System.IO.FileStream fs = new System.IO.FileStream(textBox2.Text, System.IO.FileMode.Open);
    byte[] buf = new byte[BUF_SIZE];
    int bytesRead;
    // Read the file one kilobyte at a time ...
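A sketch of how that loop is typically finished for this use case (textBox2 and richTextBox1 are the question's own form controls). One extra wrinkle: Base64 encodes 3 input bytes as 4 output characters, so a chunk size that is a multiple of 3 lets each chunk be encoded independently and the results concatenated, matching what Convert.ToBase64String would produce for the whole file.

    using System;
    using System.IO;
    using System.Text;

    const int BUF_SIZE = 3 * 1024;  // multiple of 3, so per-chunk Base64 concatenates cleanly

    var sb = new StringBuilder();
    using (var fs = new FileStream(textBox2.Text, FileMode.Open, FileAccess.Read))
    {
        byte[] buf = new byte[BUF_SIZE];
        int bytesRead;
        // On a local FileStream, Read fills the buffer except on the
        // final read, so every chunk but the last stays a multiple of 3.
        while ((bytesRead = fs.Read(buf, 0, buf.Length)) > 0)
        {
            sb.Append(Convert.ToBase64String(buf, 0, bytesRead));
        }
    }
    richTextBox1.Text = sb.ToString();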

split file on Nth occurrence of delimiter

試著忘記壹切 submitted on 2019-12-17 16:22:06
Question: Is there a one-liner to split a text file into pieces/chunks after every Nth occurrence of a delimiter? Example, where the delimiter is "+":

    entry 1
    some more
    +
    entry 2
    some more
    even more
    +
    entry 3
    some more
    +
    entry 4
    some more
    +
    ...

There are several million entries, so splitting on every occurrence of the delimiter "+" is a bad idea. I want to split on, say, every 50,000th instance of the delimiter "+". The Unix commands "split" and "csplit" just don't seem to do this...

Answer 1: Using awk you could: ...
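A hedged completion of that awk idea (not necessarily the original answer's exact command), assuming the delimiter sits on a line of its own as in the sample:

    awk -v n=50000 '
        { print > ("chunk_" int(count / n)) }  # route each line to its current chunk file
        /^\+$/ { count++ }                     # count a delimiter after printing it
    ' input.txt

Each chunk therefore ends with its closing "+" line, and a new chunk_1, chunk_2, ... begins right after every 50,000th delimiter. With hundreds of output files, GNU awk manages the open file descriptors itself; stricter awks may need an explicit close() on finished chunks.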