riak

How to deactivate or delete a bucket type in Riak?

耗尽温柔 submitted on 2019-12-09 17:21:35
Question:

    /home/khorkak> sudo riak-admin bucket-type
    Usage: riak-admin bucket-type <command>

    The following commands can be used to manage bucket types for the cluster:

        list                    List all bucket types and their activation status
        status <type>           Display the status and properties of a type
        activate <type>         Activate a type
        create <type> <json>    Create or modify a type before activation
        update <type> <json>    Update a type after activation

    /home/khorkak>

Well, I have a set of bucket types I created while trying some

Riak link-walking like a join?

隐身守侯 submitted on 2019-12-09 16:46:18
Question: I am looking to store pictures (<5 MB) in a NoSQL database and link them to articles in a different bucket. What kind of speed does Riak's link-walking feature offer? Is it like an RDBMS join at all?

Answer 1: Links are not at all similar to JOINs (which involve a Cartesian product), but they can serve similar purposes in some cases. They are very similar to links in an HTML document. With link-walking you either start with a single key, or you create a map-reduce job that starts with
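To make the HTML-link analogy concrete, here is a minimal in-memory sketch of the link-walking idea (plain Python, not the Riak client API; the buckets, keys, and tag names are made up): each object carries link metadata pointing at other (bucket, key) pairs, and a walk follows matching links hop by hop rather than computing a join.

```python
# Hypothetical in-memory model of Riak-style links: each object stores a list
# of (bucket, key, tag) triples in its metadata.
store = {
    ("articles", "a1"): {"value": "article one",
                         "links": [("pictures", "p1", "photo"),
                                   ("pictures", "p2", "photo")]},
    ("pictures", "p1"): {"value": b"<jpeg bytes>", "links": []},
    ("pictures", "p2"): {"value": b"<png bytes>", "links": []},
}

def walk(bucket, key, tag):
    """One link-walk hop: follow every link with the given tag from one object."""
    obj = store[(bucket, key)]
    return [store[(b, k)]["value"] for (b, k, t) in obj["links"] if t == tag]
```

Starting from a single article, `walk("articles", "a1", "photo")` fetches both linked pictures; unlike a join, nothing is matched across whole buckets, only the links stored on the starting object are followed.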

Riak fails at MapReduce queries. Which configuration to use?

江枫思渺然 submitted on 2019-12-07 02:04:06
Question: I am working on a Node.js application with riak / riak-js and have run into the following problem. Running this request

    db.mapreduce
      .add('logs')
      .run();

correctly returns all 155,000 items stored in the bucket logs with their IDs:

    [ 'logs', '1GXtBX2LvXpcPeeR89IuipRUFmB' ],
    [ 'logs', '63vL86NZ96JptsHifW8JDgRjiCv' ],
    [ 'logs', 'NfseTamulBjwVOenbeWoMSNRZnr' ],
    [ 'logs', 'VzNouzHc7B7bSzvNeI1xoQ5ih8J' ],
    [ 'logs', 'UBM1IDcbZkMW4iRWdvo4W7zp6dc' ],
    [ 'logs', 'FtNhPxaay4XI9qfh4Cf9LFO1Oai' ],
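Conceptually, a MapReduce job whose input is just a bucket name and which has no map phase enumerates every [bucket, key] pair in that bucket, which matches the output shown above. A plain-Python stand-in (not riak-js; the bucket contents and IDs here are invented) for that enumeration step:

```python
# Simulated bucket: key -> stored value. The IDs are illustrative only.
logs = {
    "1GXtBX2LvXpcPeeR89IuipRUFmB": "...",
    "63vL86NZ96JptsHifW8JDgRjiCv": "...",
}

def list_keys(bucket_name, bucket):
    """Emit [bucket, key] pairs for every object, like a keys-only MapReduce input."""
    return [[bucket_name, key] for key in bucket]
```

Each subsequent map phase would then receive these pairs and load the corresponding objects, which is where a 155,000-item bucket can become expensive.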

Riak database fails after a short period

狂风中的少年 submitted on 2019-12-06 11:24:04
I created a simple Erlang application which periodically collects the required data and puts it in a Riak database. As I start my application it runs smoothly, but after a period of time it gets stuck because PUT requests to the Riak database become too slow. These are the logs from my app:

    2013-06-26 12:44:09.090 [info] <0.60.0> data processed in [16476 ms]
    2013-06-26 12:45:51.472 [info] <0.60.0> data processed in [18793 ms]
    ...
    2013-06-26 12:57:28.138 [info] <0.60.0> data processed in [15135 ms]
    2013-06-26 13:07:01.484 [info] <0.60.0> data processed in [488420 ms]
    2013-06-26 14:03:11.561 [info] <0.60.0> data

Riak: are links dissolved if the target is deleted?

谁说胖子不能爱 submitted on 2019-12-06 10:26:29
When an item is deleted from a store, are links automatically deleted from all of the documents linking to the now-missing item? Or do we have a situation similar to a broken link on an HTML page? No, links are not deleted automatically. Links are just metadata stored with objects, so to find all objects that link to a deleted object you would need to traverse the whole database, which is not practical. Source: https://stackoverflow.com/questions/5113264/riak-are-links-dissolved-if-the-target-is-deleted
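The reason links dangle can be sketched in a few lines (an in-memory model, not Riak itself; buckets and keys are illustrative): link metadata lives on the *source* object, so deleting the target touches nothing else, and discovering broken links requires a full scan.

```python
# Source objects carry the links; deleting a target leaves them in place.
store = {
    ("articles", "a1"): {"links": [("pictures", "p1", "photo")]},
    ("pictures", "p1"): {"links": []},
}

def delete(bucket, key):
    del store[(bucket, key)]

def dangling_links():
    """Full scan: collect every link whose target object no longer exists."""
    return [(src, link) for src, obj in store.items()
            for link in obj["links"] if (link[0], link[1]) not in store]

delete("pictures", "p1")          # the article's link to p1 is now broken
broken = dangling_links()         # only a whole-store scan can find it
```

This mirrors the HTML analogy from the question: the "page" still contains the link; only following it (or scanning everything) reveals that the target is gone.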

Mapreduce with Riak

不问归期 submitted on 2019-12-06 07:53:40
Question: Does anyone have example code for MapReduce with Riak that can be run on a single Riak node?

Answer 1:

    cd ~/riak
    erl -name zed@127.0.0.1 -setcookie riak -pa apps/riak/ebin

In the shell:

    %% connect to the server
    > {ok, Client} = riak:client_connect('riak@127.0.0.1').
    {ok,{riak_client,'riak@127.0.0.1',<<6,201,208,64>>}}
    %% create and insert objects
    > Client:put(riak_object:new(<<"groceries">>, <<"mine">>, ["eggs", "bacons"]), 1).
    ok
    > Client:put(riak_object:new(<<"groceries">>, <<"yours">>, ["eggs",
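For readers who want the shape of the computation without an Erlang node, here is a plain-Python analogue of the groceries example above (the item lists are from the excerpt; everything else is a stand-in, not the Riak API): the map phase emits (item, 1) pairs per object, the reduce phase sums counts per item.

```python
from collections import Counter

# Simulated "groceries" bucket: key -> stored list of items.
groceries = {"mine": ["eggs", "bacon"], "yours": ["eggs", "bread"]}

def map_phase(bucket):
    """Emit an (item, 1) pair for every item in every object's value."""
    for value in bucket.values():
        for item in value:
            yield (item, 1)

def reduce_phase(pairs):
    """Sum the counts per item across all map outputs."""
    counts = Counter()
    for item, n in pairs:
        counts[item] += n
    return dict(counts)

word_counts = reduce_phase(map_phase(groceries))
```

In the real job, the map function runs on each node that holds the data and the reduce phase combines the partial results; this sketch collapses both into local function calls.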

How to append data to a Riak key under a heavily distributed environment?

非 Y 不嫁゛ submitted on 2019-12-06 02:25:02
Question: Using Riak, I want to append data sequentially in a way that lets me retrieve everything I appended later. Think of logs: if I pick up incremental log rows and transfer them to Riak, at some point I want to reconstitute everything I have appended. I thought of doing this by creating a new bucket for that purpose, adding keys defined by a sequential number or datetime stamp together with the content, and then using the list-keys API to reconstitute the data I need. The problem with that
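The scheme the questioner describes can be sketched like this (an in-memory stand-in, not the Riak client; the key format is an illustrative choice): write each row under a key that sorts in insertion order, then reconstitute by listing keys, sorting them, and reading the values back.

```python
# Simulated append-only bucket: sortable key -> row.
bucket = {}
counter = 0

def append(row):
    """Store a row under a zero-padded sequence number so that
    lexicographic key order matches insertion order."""
    global counter
    counter += 1
    bucket[f"{counter:012d}"] = row

def reconstitute():
    """List keys, sort them, and read the rows back in original order."""
    return [bucket[k] for k in sorted(bucket)]

append("row 1")
append("row 2")
append("row 3")
```

Note that in Riak itself, listing all keys in a bucket is an expensive whole-cluster operation, which is presumably where the truncated "problem with that" is heading; the sketch only shows the ordering trick, not a production design.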

Bitcask ok for simple and high performant file store?

陌路散爱 submitted on 2019-12-04 22:10:12
Question: I am looking for a simple way to store and retrieve millions of XML files. Currently everything is done in a filesystem, which has some performance issues. Our requirements are:

- Ability to store millions of XML files in a batch process.
- XML files may be up to a few MB large, most in the 100 KB range.
- Very fast random lookup by id (e.g. document URL).
- Accessible from both Java and Perl.
- Available on the most important Linux distros and on Windows.

I have looked at several NoSQL platforms (e.g.
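The access pattern being asked about, store a blob under its document URL and fetch it back by that id, can be illustrated with a local hash-style key/value store (Python's `dbm` here is only a stand-in for demonstration, not Bitcask, and the URLs and payloads are made up):

```python
import dbm
import os
import tempfile

# A throwaway on-disk key/value store, keyed by document URL.
path = os.path.join(tempfile.mkdtemp(), "xmlstore")

with dbm.open(path, "c") as db:
    db["http://example.com/doc/1"] = b"<doc>one</doc>"
    db["http://example.com/doc/2"] = b"<doc>two</doc>"

# Random lookup by id: a single keyed read, no directory scan.
with dbm.open(path, "r") as db:
    doc = db[b"http://example.com/doc/1"]
```

The point of the sketch is the lookup shape: one keyed read per document, which is what Bitcask-style stores are optimized for, versus a filesystem where millions of files in a directory tree can make both batch writes and lookups slow.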

Completely confused about MapReduce in Riak + Erlang's riakc client

僤鯓⒐⒋嵵緔 submitted on 2019-12-04 21:46:18
The main thing I'm confused about here (I think) is what the arguments to the qfun are supposed to be and what the return value should be. The README basically doesn't say anything about this, and the example it gives throws away the second and third arguments. Right now I'm only trying to understand the arguments, not using Riak for anything practical. Eventually I'll be trying to rebuild our (slow, MySQL-based) financial reporting system with it. So, ignoring the pointlessness of my goal here, why does the following give me a badfun exception? The data is just tuples (pairs) of Names and Ages,
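For reference, Riak's documented contract for an Erlang map function is a three-argument fun taking the fetched object, the key data passed along with the input, and the static phase argument, and it must return a *list* of results (an empty list to emit nothing); a badfun from a remote node is commonly because an anonymous fun defined in the local shell cannot be resolved on the Riak node, since funs are sent by reference to their defining module. A plain-Python model of that contract (not the riakc API; `map_age` and `run_map` are invented names, and the tuples stand in for real riak_object records):

```python
def map_age(obj, keydata, arg):
    """Model of a Riak map fun: (object, keydata, static arg) -> list of results."""
    name, age = obj          # here obj is a (name, age) pair, per the question
    return [age]             # must be a list; [] would emit nothing for this object

def run_map(objects, fun, arg=None):
    """Apply the map fun to every input object and concatenate the result lists."""
    results = []
    for obj in objects:
        results.extend(fun(obj, None, arg))
    return results
```

The list-of-results return shape is the part most examples gloss over: returning a bare value instead of a list is a frequent source of confusing MapReduce failures.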

Downsides of storing binary data in Riak?

北慕城南 submitted on 2019-12-04 17:13:14
Question: What are the problems, if any, with storing binary data in Riak? Does it affect the maintainability and performance of the cluster? What would the performance differences be between using Riak for this rather than a distributed file system?

Answer 1: Adding to @Oscar-Godson's excellent answer: you're likely to experience problems with values much larger than 50 MB. Bitcask is best suited for values that are up to a few KB. If you're storing large values, you may want to consider alternative
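One common workaround when individual values exceed a backend's comfortable size is to split each binary into fixed-size chunks stored under derived keys, plus a small manifest listing those keys. A hedged sketch of that pattern (plain Python; the chunk size, key format, and function names are all illustrative choices, not a Riak feature):

```python
CHUNK = 1024 * 1024  # 1 MB per chunk; tune so each stored value stays small

def split(key, blob):
    """Break a blob into CHUNK-sized pieces under zero-padded derived keys,
    and return (manifest, chunks). The manifest is the small value you'd
    store under the original key."""
    chunks = {f"{key}:{i:06d}": blob[o:o + CHUNK]
              for i, o in enumerate(range(0, len(blob), CHUNK))}
    return sorted(chunks), chunks

def join(manifest, chunks):
    """Reassemble the blob by reading chunks in manifest order."""
    return b"".join(chunks[k] for k in manifest)
```

Each stored value stays within the backend's sweet spot, at the cost of one extra read for the manifest and the need to write chunks and manifest consistently.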