binaryfiles

Code for searching for a string in a binary file

落爺英雄遲暮 submitted on 2019-12-05 08:01:03
Question: I asked this question a few days ago: How to look for an ANSI string in a binary file? and I got a really nice answer, which later turned into a much harder question: Can input iterators be used where forward iterators are expected? That discussion is now well beyond my current level. I am still learning C++ and I am looking for an easy way to search for a string in a binary file. Could someone show me simple code for a minimalistic C++ console program that looks for a string in a binary file?
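The core idea is language-independent: scan the file in fixed-size chunks while keeping a small overlap between chunks, so a match straddling a chunk boundary is not missed. Below is a minimal sketch in Python (chosen for brevity; the same loop translates directly to C++ with std::ifstream and std::search). The file name and chunk size are arbitrary examples.

    def find_in_binary(path, needle, chunk_size=1 << 20):
        """Return the absolute offset of the first match, or -1."""
        overlap = len(needle) - 1
        offset = 0            # absolute position of the next unread byte
        tail = b""            # last `overlap` bytes of the previous chunk
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    return -1
                buf = tail + chunk
                pos = buf.find(needle)
                if pos != -1:
                    return offset - len(tail) + pos
                tail = buf[-overlap:] if overlap else b""
                offset += len(chunk)

    print(find_in_binary("data.bin", b"HEADER01"))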

How to read packed binary data in Go?

独自空忆成欢 submitted on 2019-12-05 07:14:29
I'm trying to figure out the best way to read, in Go, a packed binary file that was produced by Python like the following:

    import struct
    f = open('tst.bin', 'wb')
    fmt = 'iih'  # please note this is packed binary: 4-byte int, 4-byte int, 2-byte int
    f.write(struct.pack(fmt, 4, 185765, 1020))
    f.write(struct.pack(fmt, 4, 185765, 1022))
    f.close()

I have been tinkering with some of the examples I've seen on GitHub and a few other sources, but I can't seem to get anything working correctly (the update shows a working method). What is the idiomatic way to do this sort of thing in Go? This is one of several
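In Go, the idiomatic tool for this is the encoding/binary package (binary.Read into a struct of fixed-size fields). As a language-neutral sanity check of the file layout, here is a sketch that reads the records back with Python's own struct module; note that 'iih' uses native byte order and alignment, so this only matches when reader and writer run on the same platform, and a portable format would pin it (e.g. '<iih').

    import struct

    # Sketch: read back the records written by the snippet above.
    fmt = 'iih'                       # must match the writer exactly
    size = struct.calcsize(fmt)
    with open('tst.bin', 'rb') as f:
        while True:
            rec = f.read(size)
            if len(rec) < size:
                break
            print(struct.unpack(fmt, rec))   # e.g. (4, 185765, 1020)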

SQLite: insert binary data from command line

情到浓时终转凉″ submitted on 2019-12-05 06:03:57
I have this SQLite table:

    create table mytable (
        aid INTEGER NOT NULL PRIMARY KEY,
        bid INTEGER NOT NULL,
        image BLOB
    );

and I want to insert a binary file into the image field of this table. Is it possible to do it from the sqlite3 command-line interface? If so, how? I'm using Ubuntu. Thank you!

You may use a syntax like:

    echo "insert into mytable values(1,1, \"`cat image`\")" | sqlite3 yourDb

I'm not sure about the quotes around the blob's value. Note the backquotes around the cat command: they mean the cat command is executed before the echo. [EDIT] Blobs are stored as hex digits with an "X" prefix. You can
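The echo/cat approach is fragile for real binary data, since the shell cannot pass NUL bytes through a quoted string. Newer sqlite3 shells also provide a readfile() function (insert into mytable values(1, 1, readfile('image'));). From a script, the cleanest route is to bind the bytes as a parameter; a minimal Python sketch, with example ids and file name:

    import sqlite3

    # Read the file's raw bytes and bind them as a BLOB parameter;
    # parameter binding avoids all quoting/escaping issues.
    with open('image', 'rb') as f:
        data = f.read()

    conn = sqlite3.connect('yourDb')
    conn.execute("insert into mytable values (?, ?, ?)", (1, 1, data))
    conn.commit()
    conn.close()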

Full-text search on MongoDB GridFS?

我只是一个虾纸丫 submitted on 2019-12-05 05:29:51
Say, if I want to store PDFs or ePub files using MongoDB's GridFS, is it possible to perform full-text searching on the data files? You can't currently do real full-text search within Mongo: http://www.mongodb.org/display/DOCS/Full+Text+Search+in+Mongo Feel free to vote for it here: https://jira.mongodb.org/browse/SERVER-380 Mongo is more of a general-purpose scalable data store, and as of yet it doesn't have any full-text search support. Depending on your use case, you could use the standard b-tree indexes with an array of all of the words in the text, but it won't do stemming or fuzzy matching.
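A sketch of that keyword-array workaround in Python with pymongo (the collection and field names are invented for illustration, and the text would first have to be extracted from the PDF/ePub by some other tool):

    import re
    from pymongo import MongoClient, ASCENDING

    client = MongoClient()
    coll = client.mydb.files_meta

    # Store the distinct words of the extracted text in an array field.
    text = "the quick brown fox"          # placeholder for extracted text
    words = sorted(set(re.findall(r"[a-z0-9]+", text.lower())))
    coll.insert_one({"filename": "example.pdf", "words": words})

    # A multikey index on the array makes exact-word lookups fast.
    coll.create_index([("words", ASCENDING)])
    print(coll.find_one({"words": "fox"}))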

General question about Binary files

北城以北 submitted on 2019-12-05 04:40:29
I am a beginner and I am having trouble grasping binary files. When I write to a file in binary mode (in Python), I just write normal text; there is nothing binary about it. I know every file on my computer is a binary file, but I am having trouble distinguishing between files written in binary mode by me and files like audio and video files that show up as gibberish if I open them in a text editor. How are files that show up as gibberish created? Can you please give an example of a small file that is created like this, preferably in Python? I have a feeling I am asking a really stupid question.
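The "gibberish" appears when the bytes in a file encode raw numbers rather than text characters. A small sketch of exactly that (file name and values are arbitrary):

    import struct

    # Write raw numeric values instead of text. A text editor will try
    # to interpret these bytes as characters and show gibberish.
    with open('example.bin', 'wb') as f:
        f.write(struct.pack('<Ihd', 1048576, -7, 3.14159))

    # Reading it back with the same layout recovers the numbers.
    with open('example.bin', 'rb') as f:
        print(struct.unpack('<Ihd', f.read()))   # (1048576, -7, 3.14159)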

How to parse binary files in Clojure

ⅰ亾dé卋堺 submitted on 2019-12-05 02:44:14
What is the cleanest way to parse binary data in Clojure? I need to be able to read/write equally cleanly to a file or a socket, something like:

    (read-data source-of-data)
    ;; => {:index 42, :block-size 4, :data-size 31415, :data (1 2 3 4 ...)}

and the reverse for putting data back. It would be really great to somehow define the structure once and have the read and write functions use the same definition.

Gloss makes it easy to define binary formats at the byte level, for both reading and writing:

    (defcodec example-codec
      [:id       :uint32
       :msg-type (enum :byte {:a \A, :b \B})
       :status   (string :ascii)])   ;; the excerpt is cut off here; closing forms restored as a guess
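For comparison, a sketch of the same "define the structure once" idea in Python with the struct module (the field names and layout are invented to mirror the example above; Gloss plays this role natively in Clojure):

    import struct

    FIELDS = ("index", "block_size", "data_size")
    FMT = "<IIH"   # uint32 index, uint32 block-size, uint16 data-size

    def write_header(f, record):
        f.write(struct.pack(FMT, *(record[k] for k in FIELDS)))

    def read_header(f):
        raw = f.read(struct.calcsize(FMT))
        return dict(zip(FIELDS, struct.unpack(FMT, raw)))

    with open("blob.bin", "wb") as f:
        write_header(f, {"index": 42, "block_size": 4, "data_size": 31415})
    with open("blob.bin", "rb") as f:
        print(read_header(f))   # {'index': 42, 'block_size': 4, 'data_size': 31415}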

Efficiently reading few lines from a very large binary file

只谈情不闲聊 submitted on 2019-12-05 02:38:53
Question: Here is a simple example to illustrate my problem: I have a large binary file with 10 million values. I want to get 5K values from certain points in this file. I have a list of indexes giving me the exact place in the file where each value is. To solve this I tried two methods. The first: going through the indexes and simply using seek() (from the start of the file) to get each value, something like this:

    binaryFile_new = open(binary_folder_path, "r+b")
    for index in index_list:
        binaryFile_new.seek(size * index)   # the excerpt is cut off mid-line; "index" restored
        value = binaryFile_new.read(size)   # assumed next step: read one value
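If the values are fixed-size numbers, one efficient alternative is to memory-map the file and pull the wanted entries with fancy indexing, so the OS pages in only the blocks actually touched. A sketch, where the file name, dtype, and indexes are assumptions:

    import numpy as np

    index_list = [10, 999983, 5000000]             # example indexes
    arr = np.memmap("big.bin", dtype=np.float64, mode="r")
    values = arr[index_list]                       # reads only the touched pages
    print(values)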

C#: Write values into Binary (.bin) file format

北战南征 submitted on 2019-12-04 21:18:25
So let's say I have the following values:

    Value      Bytes in file
    HEADER01   48 45 41 44 45 52 30 31
    06/17/14   30 36 2F 31 37 2F 31 34
    1.0        31 2E 30
    0x0000     00
    0x0027     27
    0x0001     01
    0x0001     01
    0x0001     01
    0x0001     01
    0x0028     28
    192        C0
    168        A8
    1          01
    1          01

The first 3 values are STRINGS and should be converted to ASCII HEX values, then written to the .bin file. The next 7 values are HEX and should be written AS-IS to the .bin file. The last 4 values are INTEGERS and should be converted to HEX, then written to the .bin file. The OUTPUT (.bin) file should look something like this:

    00000000  48 45 41 44 45 52 30 31 30 36 2F 31 37 2F 31 34
    00000010  31 2E
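The question targets C#, where BinaryWriter over a FileStream is the natural tool; as a compact sketch of the byte layout itself, here it is in Python, with values taken from the table above (the single-byte width of the hex fields is assumed from the table):

    # Strings: written as their ASCII bytes.
    # Hex values: written as-is, one byte each (per the table).
    # Integers: written as single bytes (192 -> C0, 168 -> A8, 1 -> 01).
    with open("out.bin", "wb") as f:
        for s in ("HEADER01", "06/17/14", "1.0"):
            f.write(s.encode("ascii"))
        f.write(bytes([0x00, 0x27, 0x01, 0x01, 0x01, 0x01, 0x28]))
        f.write(bytes([192, 168, 1, 1]))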

Spark in Python: creating an RDD by loading binary data with numpy.fromfile

我是研究僧i submitted on 2019-12-04 21:15:58
The Spark Python API currently has limited support for loading large binary data files, so I tried to get numpy.fromfile to help me out. I first got a list of filenames I'd like to load, e.g.:

    In [9]: filenames
    Out[9]: ['A0000.dat', 'A0001.dat', 'A0002.dat', 'A0003.dat', 'A0004.dat']

I can load these files without problems with a crude iterative unionization:

    for i in range(len(filenames)):
        rdd = sc.parallelize([np.fromfile(filenames[i], dtype="int16", count=-1, sep='')])
        if i == 0:
            allRdd = rdd
        else:
            allRdd = allRdd.union(rdd)

It would be great to load the files all at once, and into a single RDD.
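PySpark's sc.binaryFiles does exactly that: it yields (filename, bytes) pairs for every file matched by a path, and np.frombuffer then decodes each payload without a temporary file. A sketch, assuming the .dat files sit in a directory data_dir and sc is the existing SparkContext from the question:

    import numpy as np

    pairs = sc.binaryFiles("data_dir/A*.dat")      # RDD of (filename, raw bytes)
    allRdd = pairs.map(lambda kv: np.frombuffer(kv[1], dtype=np.int16))
    print(allRdd.count())                          # one array per input file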

C# loading binary files

孤者浪人 submitted on 2019-12-04 20:13:36
Please show me the best/fastest methods for:

1) Loading very small binary files into memory, for example icons;
2) Loading/reading very big binary files of 512 MB+;
3) Your common choice when you do not want to think about size/speed but must do only one thing: read all bytes into memory?

Thank you!!! P.S. Sorry for the maybe-trivial question. Please do not close it ;) P.S.2. Mirror of the analogous question for Java.

1: For very small files, File.ReadAllBytes will be fine.
2: For very big files on .NET 4.0, you can make use of memory-mapped files (MemoryMappedFile).
3: If not using .NET 4.0, reading chunks of data through a FileStream is the usual fallback.
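The same three tiers, sketched in Python for comparison (File.ReadAllBytes corresponds to a plain read, MemoryMappedFile to mmap; the file names are examples):

    import mmap

    # 1) Small file: just read it all at once.
    with open("icon.png", "rb") as f:
        icon = f.read()

    # 2) Very large file: memory-map it and touch only what you need.
    with open("big.bin", "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
            header = m[:16]

    # 3) Fallback: stream fixed-size chunks.
    with open("big.bin", "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            pass   # process each 1 MiB chunk here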