distributed-computing

Distributed Transaction on MySQL

Submitted by 扶醉桌前 on 2021-02-11 17:31:48
Question: I'm working on a distributed system that uses distributed transactions, which means that I may have a transaction that needs to edit multiple databases (on multiple servers) at the same time. In my system there is a controller that manages the distribution. The scenario I want to satisfy is: server A wants to initiate a distributed transaction; the participants are server A and server B; so server A sends a request to the controller to initiate a distributed transaction, and the controller …
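For context on what such a controller typically coordinates: MySQL supports distributed transactions natively through XA (two-phase commit). Below is a minimal sketch of the prepare-then-commit flow across two servers, assuming pymysql and placeholder hosts, credentials, statements, and xid; it illustrates the protocol, not the asker's actual controller.

```python
# Hedged sketch: a coordinator driving MySQL XA (two-phase commit) across two
# servers. Hostnames, credentials, table names and the xid are placeholders.
import pymysql

def xa_prepare(conn, statements, xid):
    """Run statements on one participant inside an XA branch, then prepare it."""
    with conn.cursor() as cur:
        cur.execute(f"XA START '{xid}'")
        for sql in statements:
            cur.execute(sql)
        cur.execute(f"XA END '{xid}'")
        cur.execute(f"XA PREPARE '{xid}'")

def coordinator():
    xid = "txn-42"  # must be unique per distributed transaction
    a = pymysql.connect(host="server-a", user="app", password="...", database="db_a")
    b = pymysql.connect(host="server-b", user="app", password="...", database="db_b")
    try:
        # Phase 1: every participant prepares; any failure aborts all of them.
        xa_prepare(a, ["UPDATE accounts SET balance = balance - 10 WHERE id = 1"], xid)
        xa_prepare(b, ["UPDATE accounts SET balance = balance + 10 WHERE id = 7"], xid)
        # Phase 2: all participants prepared, so commit on both.
        for conn in (a, b):
            with conn.cursor() as cur:
                cur.execute(f"XA COMMIT '{xid}'")
    except Exception:
        # Roll back whichever branches were already started or prepared.
        for conn in (a, b):
            try:
                with conn.cursor() as cur:
                    cur.execute(f"XA ROLLBACK '{xid}'")
            except pymysql.Error:
                pass
        raise
```

If any participant fails to prepare, every branch is rolled back; guaranteeing that all-or-nothing outcome is exactly the controller's job.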

pyzmq REQ/REP with asyncio await for variable

Submitted by 好久不见. on 2021-02-08 09:12:20
Question: I'm playing with asyncio in Python for the first time and trying to combine it with ZMQ. Basically my issue is that I have a REQ/REP system inside an async def, with a function I need to await, but somehow the value is not updated. Here's a snippet of the code to illustrate that:

    # Declaring the zmq context
    context = zmq_asyncio.Context()
    REP_server_django = context.socket(zmq.REP)
    REP_server_django.bind("tcp://*:5558")

I send this object to a class and get it back in this function: async def readsonar …
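For comparison, a minimal sketch of an asyncio REP loop with pyzmq's zmq.asyncio interface, where the port and the read_sensor() coroutine are placeholders standing in for the asker's awaited function; each incoming request awaits the coroutine before replying, so the value sent back is freshly computed.

```python
# Hedged sketch of an asyncio REQ/REP server with pyzmq; names are illustrative.
import asyncio
import zmq
import zmq.asyncio

async def read_sensor() -> bytes:
    # Stand-in for the awaited function (e.g. the asker's readsonar).
    await asyncio.sleep(0.1)
    return b"42"

async def rep_loop() -> None:
    ctx = zmq.asyncio.Context()
    sock = ctx.socket(zmq.REP)
    sock.bind("tcp://*:5558")
    while True:
        request = await sock.recv()   # a REP socket must recv before it can send
        value = await read_sensor()   # awaiting here keeps the value current
        await sock.send(value)        # reply to the matching REQ


if __name__ == "__main__":
    asyncio.run(rep_loop())
```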

Replace groupByKey with reduceByKey in Spark

Submitted by 大憨熊 on 2021-02-07 04:28:19
Question: Hello, I often need to use groupByKey in my code, but I know it's a very heavy operation. Since I'm working on improving performance, I was wondering whether my approach of removing all groupByKey calls is efficient. I used to create an RDD from another RDD, building pairs of type (Int, Int), e.g. rdd1 = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 5)], and since I needed to obtain something like [(1, [2, 3]), (2, [3, 4]), (3, [5])], what I used was out = rdd1.groupByKey, but since this approach might be …
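A small PySpark sketch of the two approaches the question compares, assuming a local SparkContext and the sample data above. Note that when the goal is the full list of values per key, the reduceByKey variant shuffles roughly the same amount of data, so map-side combining mainly pays off for true aggregations such as sums or counts.

```python
# Hedged sketch comparing groupByKey with a reduceByKey-based equivalent for
# building per-key lists; the context setup and sample data are assumptions.
from pyspark import SparkContext

sc = SparkContext(appName="groupByKey-vs-reduceByKey")
rdd1 = sc.parallelize([(1, 2), (1, 3), (2, 3), (2, 4), (3, 5)])

# groupByKey: shuffles every value; appropriate when all values are needed.
grouped = rdd1.groupByKey().mapValues(list)

# reduceByKey: wrap each value in a list and concatenate. Combining happens
# map-side before the shuffle, but since the combined value is the full list,
# the shuffled data stays about the same size for this particular use case.
reduced = rdd1.mapValues(lambda v: [v]).reduceByKey(lambda a, b: a + b)

print(sorted(grouped.collect()))  # [(1, [2, 3]), (2, [3, 4]), (3, [5])]
print(sorted(reduced.collect()))  # same result
sc.stop()
```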

How to distribute (card game table) dealers across servers in a balanced way?

Submitted by 久未见 on 2021-01-29 05:56:25
Question: I am currently working on an online card game, similar to blackjack, which will consist of a series of tables where each table has a "dealer" and multiple human players. The dealer (a computer bot) is responsible for dealing and shuffling cards. The tables will be stored in a PostgreSQL database table, and it will be possible for a human admin to add/remove/edit tables. The game will consist of a web front end and a REST/websocket API backend. I will probably use Kubernetes and Nginx as a load …
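One common way to balance this kind of assignment, sketched below under the assumption that table IDs and server names are simple strings: a consistent hash ring with virtual nodes maps each table to a dealer server, so adding or removing a server only moves a small fraction of the tables. The class and names are illustrative, not part of the question.

```python
# Hedged sketch: consistent hashing to spread dealer tables across servers.
import bisect
import hashlib

class HashRing:
    def __init__(self, servers, replicas=100):
        self._ring = []                      # sorted list of (hash, server)
        for server in servers:
            for i in range(replicas):        # virtual nodes smooth the load
                h = self._hash(f"{server}#{i}")
                self._ring.append((h, server))
        self._ring.sort()
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def server_for(self, table_id: str) -> str:
        idx = bisect.bisect(self._keys, self._hash(table_id)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["dealer-node-1", "dealer-node-2", "dealer-node-3"])
print(ring.server_for("table-42"))  # the same table always maps to the same node
```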

Dask-distributed. How to get task key ID in the function being calculated?

Submitted by 旧街凉风 on 2021-01-29 00:58:18
Question: My computations with dask.distributed include the creation of intermediate files whose names include a UUID4 that identifies that chunk of work:

    pairs = '{}\n{}\n{}\n{}'.format(list1, list2, list3, ...)
    file_path = os.path.join(job_output_root, 'pairs', 'pairs-{}.txt'.format(str(uuid.uuid4()).replace('-', '')))
    file(file_path, 'wt').writelines(pairs)

At the same time, all tasks in the dask.distributed cluster have unique keys. Therefore, it would be natural to use that key ID for the file name. Is it …
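A hedged sketch of naming the intermediate file after the running task's key instead of a fresh UUID4, assuming a recent dask.distributed where get_worker().get_current_task() is available inside a task; the output root, the write_pairs function, and the local Client are placeholders for illustration.

```python
# Hedged sketch: use the worker-side task key as the intermediate file name.
# Assumes get_worker().get_current_task() exists in the installed version.
import os
from dask.distributed import Client, get_worker

def write_pairs(list1, list2, job_output_root="/tmp/job"):
    key = get_worker().get_current_task()   # unique key of the running task
    safe_key = key.replace("/", "_")        # keys may contain awkward characters
    path = os.path.join(job_output_root, "pairs", f"pairs-{safe_key}.txt")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wt") as fh:
        fh.write("{}\n{}\n".format(list1, list2))
    return path

if __name__ == "__main__":
    client = Client()                        # local cluster just for the example
    future = client.submit(write_pairs, [1, 2], [3, 4])
    print(future.result())
```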

Can't get two Erlang nodes to communicate

Submitted by 限于喜欢 on 2021-01-27 17:12:54
Question: No matter what I try, I can't get two different nodes to communicate. This is probably a very simple problem to solve. I have created the file .cookie.erlang and placed it in my home directory. Then I open a terminal window and type the following commands:

    erl -sname user1@pc
    erlang:set_cookie(node(),cookie).

In another terminal window I type:

    erl -sname user2@pc
    erlang:set_cookie(node(),cookie).

Now if I type the following command in the first terminal window: net_adm:ping(user2@pc). …
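For reference, a minimal sketch of the conventional setup, assuming both nodes run on a host named pc: the cookie file Erlang actually reads is ~/.erlang.cookie (not .cookie.erlang), passing -setcookie at start-up sidesteps the file entirely, and node names containing @ must be written as quoted atoms.

```erlang
%% Hedged sketch; node and host names are placeholders matching the question.
%% Start each node with the same cookie so the file is not needed at all:
%%
%%   $ erl -sname user1 -setcookie cookie
%%   $ erl -sname user2 -setcookie cookie
%%
%% Then, in the user1 shell, ping the other node as a quoted atom:
net_adm:ping('user2@pc').   % returns pong once the cookies match
```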