scalability

Scalable cloud computing services [closed]

為{幸葍}努か submitted on 2019-12-11 18:54:58
Question: I'm looking for a cloud computing service with the following requirements: no need to manage servers, instant availability, automatic scaling, and the ability to run tasks for at least a couple of minutes. Google App Engine seems to meet all of these requirements, with the exception that processes can only run for 30 seconds.

Sharing sockets over separate nodeJS instances

走远了吗. submitted on 2019-12-11 13:22:56
Question: I'm making a chat application with multiple chat servers (Node.js) and one Redis server, which should help group all the Node.js instances. Well, I have this: var io = require('socket.io').listen(3000); // Create a Redis client var redis = require('redis'); var client = redis.createClient(6379, '54.154.149.***', {}); // Check if Redis is running var redisIsReady = false; client.on('error', function(err) { redisIsReady = false; console.log('redis is not running'); console.log(err); }); client.on('ready',

Adding composite indexes on a MySQL table

坚强是说给别人听的谎言 submitted on 2019-12-11 07:03:12
Question: I have a table like this: CREATE TABLE IF NOT EXISTS `billing_success` ( `bill_id` int(11) NOT NULL AUTO_INCREMENT, `msisdn` char(10) NOT NULL, `circle` varchar(2) NOT NULL, `amount` int(11) NOT NULL, `reference_id` varchar(100) NOT NULL, `source` varchar(100) NOT NULL, `time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, PRIMARY KEY (`bill_id`), KEY `msisdn` (`msisdn`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=8573316 ; and I want to add composite indexes to optimize queries. This
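
The key fact when choosing composite index columns is MySQL's leftmost-prefix rule: an index on (a, b) serves queries filtering on a, or on a and b, but not on b alone. The sketch below illustrates that rule with a hypothetical helper (the function and the (`msisdn`, `time`) index are assumptions for illustration, not part of the question):

```javascript
// Leftmost-prefix rule for a composite index, e.g. one added with:
//   ALTER TABLE billing_success ADD KEY msisdn_time (msisdn, time);
// A query can use the index only if it filters on a leading run of
// the index's columns, starting from the first.
function canUseIndex(indexColumns, whereColumns) {
  let prefix = 0;
  while (prefix < indexColumns.length && whereColumns.includes(indexColumns[prefix])) {
    prefix++;
  }
  return prefix > 0; // at least the first indexed column must be filtered
}

const idx = ['msisdn', 'time'];
console.log(canUseIndex(idx, ['msisdn']));         // true  (leftmost prefix)
console.log(canUseIndex(idx, ['msisdn', 'time'])); // true  (full match)
console.log(canUseIndex(idx, ['time']));           // false (skips the first column)
```

So the column order in the composite index should follow the most common query shapes, with the most selective always-filtered column first.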

Which is faster: Many rows or many columns?

旧时模样 submitted on 2019-12-11 04:32:26
Question: In MySQL, is it generally faster/more efficient/scalable to return 100 rows with 3 columns, or 1 row with 100 columns? In other words, when storing many key => value pairs related to a record, is it better to store each key => value pair in a separate row with the record_id as a key, or to have one row per record_id with a column for each key? Also, assume that keys will need to be added/removed fairly regularly, which I assume would affect the long-term maintainability of the many
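
The "many rows" option is the entity-attribute-value (EAV) layout: one key/value pair per row. What the "many columns" layout avoids is the pivot step needed to turn those rows back into one record. A minimal in-memory sketch of that pivot, with illustrative column names:

```javascript
// Pivot EAV-style rows ({ record_id, key, value }) into one object per record.
function pivot(rows) {
  const records = {};
  for (const { record_id, key, value } of rows) {
    if (!records[record_id]) records[record_id] = {};
    records[record_id][key] = value;
  }
  return records;
}

const rows = [
  { record_id: 1, key: 'color', value: 'red' },
  { record_id: 1, key: 'size', value: 'L' },
  { record_id: 2, key: 'color', value: 'blue' },
];
console.log(pivot(rows));
// { '1': { color: 'red', size: 'L' }, '2': { color: 'blue' } }
```

The EAV layout pays this per-read cost but lets keys come and go without ALTER TABLE, which matters given the question's assumption that keys change regularly.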

CouchDB db-per-user with shared data scalability

戏子无情 submitted on 2019-12-11 02:33:43
Question: I have an application with the following architecture: the master CouchDB is required to share data between the users. E.g., if user-1 writes data to the cloud, this replicates to the master and back to user-2 and user-3. However, as the user base increases, so does the number of cloud user CouchDBs, which results in a large number of replication links between the cloud user CouchDBs and the master CouchDB. I believe this can lead to a huge bottleneck. Is there a better way to approach this

Managing data-store concurrency as microservices scale

半城伤御伤魂 submitted on 2019-12-11 02:06:18
Question: I am still trying to find my way around microservices, and I have a fundamental question. In an enterprise scenario, microservices would probably have to write to a persistent data store, be it an RDBMS or some kind of NoSQL. In most cases the persistent data store is enterprise-grade but a single entity (of course replicated and backed up). Now, let's consider the case of a single microservice deployed to a private/public cloud environment having its own persistent data store (say enterprise
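
One common way to manage concurrent writers from many scaled-out service instances is optimistic locking: each row carries a version number, and an update succeeds only if the version the writer read is still current. The in-memory sketch below illustrates the idea under that assumption; a real service would express the same check as a conditional UPDATE or a store-provided compare-and-swap, and all names here are illustrative:

```javascript
// Optimistic-locking sketch: a Map stands in for the shared data store.
const store = new Map(); // id -> { version, data }

function write(id, expectedVersion, data) {
  const row = store.get(id);
  const current = row ? row.version : 0;
  if (current !== expectedVersion) return false; // a concurrent writer won
  store.set(id, { version: current + 1, data });
  return true;
}

store.set('order-1', { version: 1, data: { status: 'new' } });

// Two service instances both read version 1, then both try to update:
console.log(write('order-1', 1, { status: 'paid' }));    // true: first writer wins
console.log(write('order-1', 1, { status: 'shipped' })); // false: stale version, must re-read and retry
```

The losing instance re-reads the row (now at version 2) and retries, so no update is silently overwritten even though the instances never coordinate directly.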

With Hadoop, can I create a tasktracker on a machine that isn't running a datanode?

依然范特西╮ submitted on 2019-12-10 23:22:16
Question: So here's my situation: I have a MapReduce job that uses HBase. My mapper takes one line of text input and updates HBase. I have no reducer, and I'm not writing any output to disk. I would like the ability to add more processing power to my cluster when I'm expecting a burst of utilization, and then scale back down when utilization decreases. Let's assume for the moment that I can't use Amazon or any other cloud provider; I'm running in a private cluster. One solution would be to add new

The proper way to scale a Python Tornado application

孤人 submitted on 2019-12-10 21:35:53
Question: I am searching for some way to scale one instance of a Tornado application to many. I have 5 servers and want to run 4 instances of the application on each. The main issue I don't know how to resolve is how to make the instances communicate with each other in the right way. I see the following approaches: Use memcached for sharing data. I don't think this approach is good, because much traffic would go to the server with memcached, so there could be traffic-related issues in the future. Open sockets between each

What's an efficient way to store a questionnaire in a database?

匆匆过客 submitted on 2019-12-10 19:52:59
Question: Since questionnaires can always change and the questions themselves can be lengthy, it seems silly to use questions as column names. Is there any convention or proven method for storing a questionnaire within a database? I was thinking of having a table with (Question-ID, Question) and then a second table for question-id and answer. But this solution might be too slow, because a third join would be necessary to join the questions with a particular user. Answer 1: What's wrong with a join? That's the
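
The two-table design from the question resolves with an ordinary join on the question id. The sketch below performs that join in memory just to show the lookups involved; the table shape and all column names are illustrative stand-ins for the schema being discussed:

```javascript
// "questions" and "answers" arrays stand in for the two proposed tables.
const questions = [
  { question_id: 1, text: 'How satisfied are you?' },
  { question_id: 2, text: 'Would you recommend us?' },
];
const answers = [
  { question_id: 1, user_id: 42, answer: 'Very' },
  { question_id: 2, user_id: 42, answer: 'Yes' },
];

// Join one user's answers to their question text via a question_id lookup.
function answersForUser(userId) {
  const textById = new Map(questions.map(q => [q.question_id, q.text]));
  return answers
    .filter(a => a.user_id === userId)
    .map(a => ({ question: textById.get(a.question_id), answer: a.answer }));
}

console.log(answersForUser(42));
// [ { question: 'How satisfied are you?', answer: 'Very' },
//   { question: 'Would you recommend us?', answer: 'Yes' } ]
```

With indexes on question_id and user_id this is exactly the cheap indexed join the answer refers to, and the schema survives questions being added or reworded without any table changes.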

NodeJS Socket.io Server<-->Server communication

谁说我不能喝 submitted on 2019-12-10 18:48:39
Question: I'm trying to establish server-to-server communication in Node.js (cluster architecture, separate VMs) with Socket.IO. I'm trying to use what is posted here: http://socket.io/docs/using-multiple-nodes/ var io = require('socket.io')(3000); var redis = require('socket.io-redis'); io.adapter(redis({ host: 'localhost', port: 6379 })); So I assume (probably wrongly) that when doing io.emit("message", "some example message") I can listen for it with: io.on('connection', function(socket){ console.log("io.on"