simulation

agent-based simulation: performance issue: Python vs NetLogo & Repast

Submitted by 那年仲夏 on 2019-12-02 23:34:50
I'm replicating a small piece of the Sugarscape agent-based simulation model in Python 3. I found that my code runs about 3 times slower than the NetLogo version. Is the problem likely in my code, or could it be an inherent limitation of Python? Obviously this is just a fragment of the code, but it is where Python spends two-thirds of the run time, so I hope that if I wrote something really inefficient it would show up here:

UP = (0, -1)
RIGHT = (1, 0)
DOWN = (0, 1)
LEFT = (-1, 0)
all_directions = [UP, DOWN, RIGHT, LEFT]

# point is just a tuple (x, y)
def look_around(self):
    max_sugar_point
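For reference, here is a minimal sketch of what a look_around over those four directions might be doing; since the fragment above stops at max_sugar_point, the class, attribute names, and grid representation below are assumptions for illustration, not the poster's actual code.

# Hedged sketch only: the attribute names (point, world) and the dict-based grid
# are assumptions, not the original Sugarscape code.
UP, RIGHT, DOWN, LEFT = (0, -1), (1, 0), (0, 1), (-1, 0)
ALL_DIRECTIONS = (UP, DOWN, RIGHT, LEFT)

class Agent:
    def __init__(self, point, world):
        self.point = point          # (x, y) tuple
        self.world = world          # dict mapping (x, y) -> sugar level

    def look_around(self):
        """Return the neighbouring point holding the most sugar."""
        x, y = self.point
        best_point, best_sugar = self.point, self.world[self.point]
        for dx, dy in ALL_DIRECTIONS:
            p = (x + dx, y + dy)
            s = self.world.get(p, -1)       # off-grid cells lose every comparison
            if s > best_sugar:
                best_point, best_sugar = p, s
        return best_point

world = {(x, y): (x + y) % 5 for x in range(50) for y in range(50)}
agent = Agent((10, 10), world)
print(agent.look_around())

In CPython, loops like this pay heavily for per-step tuple construction and dictionary lookups, so a 3x gap versus NetLogo is plausible even with reasonable code; common ways to close it are holding the sugar grid in a NumPy array and comparing whole neighbourhoods with slicing, or running the unchanged loop under PyPy.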

Is Hadoop right for running my simulations?

Submitted by 自闭症网瘾萝莉.ら on 2019-12-02 20:43:50
I have written a stochastic simulation in Java which loads data from a few CSV files on disk (totaling about 100 MB) and writes results to another output file (not much data, just a boolean and a few numbers). There is also a parameters file, and for different parameters the distribution of simulation outputs would be expected to change. To determine the correct/best input parameters, I need to run multiple simulations across multiple input parameter configurations and look at the distributions of the outputs in each group. Each simulation takes 0.1-10 min depending on parameters and randomness
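For a workload like this (about 100 MB of input, runs of 0.1-10 minutes, output distributions compared across parameter sets), the sweep is embarrassingly parallel, so a single many-core machine may already be enough before reaching for Hadoop. The sketch below is hedged: the jar name, command-line flags, and params.csv layout are invented for illustration.

# Hedged sketch: fan the parameter sweep out over local cores. The jar name, the
# command-line flags, and the params.csv layout are invented for illustration.
import csv
import subprocess
from concurrent.futures import ProcessPoolExecutor

def run_once(job):
    config_id, params = job
    # One independent simulation run; its stdout is treated as the result record.
    out = subprocess.run(
        ["java", "-jar", "simulation.jar", "--params", params],
        capture_output=True, text=True, check=True,
    )
    return config_id, out.stdout.strip()

if __name__ == "__main__":
    with open("params.csv") as f:
        jobs = [(i, row[0]) for i, row in enumerate(csv.reader(f))]
    with ProcessPoolExecutor() as pool:      # one worker process per core by default
        for config_id, result in pool.map(run_once, jobs):
            print(config_id, result)

The same pattern maps onto a batch queue or a Hadoop/Spark job if one machine stops being enough: each task is one (parameter configuration, seed) pair, and an aggregation step collects the per-group output distributions.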

Should I use Threads or Tasks - Multiple Client Simulation

Submitted by 点点圈 on 2019-12-02 19:35:17
I am writing a client simulation program in which every simulated client runs a predefined routine against the server, which is a web server running in Azure with four instances. All simulated clients run the same routine after connecting to the server. At any one time I would like to simulate 300 to 800 clients with my program. My question is: should I create N instances of the client class and run them in N different threads, or should I use the Task library instead?

You certainly should not create 800 threads. Let's take a step back here. You have a device called a "server" which takes in
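The question is about .NET threads versus the Task library, but the underlying point, multiplexing hundreds of logical clients onto a small number of workers instead of one OS thread each, is language-neutral. Below is a hedged illustration in Python asyncio; the host, port, and per-client routine are placeholders, not the original program.

# Hedged sketch, not .NET: hundreds of logical clients multiplexed onto one event
# loop instead of one OS thread per client. Host, port, and the per-client routine
# are placeholders.
import asyncio

async def client_routine(client_id: int, host: str, port: int) -> None:
    reader, writer = await asyncio.open_connection(host, port)
    writer.write(f"HELLO {client_id}\r\n".encode())
    await writer.drain()
    await reader.readline()              # the "predefined routine" would go here
    writer.close()
    await writer.wait_closed()

async def main(n_clients: int = 800) -> None:
    tasks = [asyncio.create_task(client_routine(i, "test-server.example", 80))
             for i in range(n_clients)]
    await asyncio.gather(*tasks, return_exceptions=True)

if __name__ == "__main__":
    asyncio.run(main())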

Improving soccer simulation algorithm

Submitted by 怎甘沉沦 on 2019-12-02 17:48:24
In another question you helped me build a simulation algorithm for soccer, and I got some very good answers there. Thanks again! Now I've coded this algorithm. I would like to improve it and find any small mistakes that may be in it. I don't want to discuss how to solve the problem, as we did in the last question; now I only want to improve the code. Can you help me again, please? Are there any mistakes? Is the structure of the nested if-clauses OK? Could it be improved? Are the tactics integrated correctly according to my description? Tactical settings which should have an influence on the randomness:
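The algorithm itself is not shown in this excerpt, so the following is only a hedged sketch of one way a tactical setting can bias a random outcome; the function, parameter names, and the bonus value are invented for illustration.

# Hedged sketch: one way a tactical setting could bias a random outcome.
# The function, parameters, and the 0.05 bonus are invented for illustration.
import random

def chance_is_goal(attack_strength: float, defence_strength: float,
                   tactic_bonus: float = 0.0) -> bool:
    """Return True if an attacking chance becomes a goal."""
    base = attack_strength / (attack_strength + defence_strength)
    p = min(max(base + tactic_bonus, 0.0), 1.0)    # clamp to [0, 1]
    return random.random() < p

# e.g. an offensive tactic might add a small bonus to the scoring probability
print(chance_is_goal(attack_strength=70, defence_strength=65, tactic_bonus=0.05))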

Algorithms for Simulating Fluid Flow

Submitted by 夙愿已清 on 2019-12-02 17:17:52
I've got a game idea that requires some semi-realistic simulation of a fluid flowing around various objects. Think of a pool of mercury on an irregular surface that is being tilted in various directions. This is for a game, so 100% physical realism is not necessary. What is most important is that the calculations can be done in real time on a device with the horsepower of an iPhone. I'm thinking that some sort of cellular automaton or particle system is the way to go, but I don't know where to start. Any suggestions?

This is not my area of research, but I believe this is considered the
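As a hedged sketch of the cellular-automaton idea (the grid size, the flow_rate constant, and the tilted terrain are arbitrary choices for illustration): each step, every cell pushes a fraction of its fluid toward lower neighbouring surface heights, which produces pooling and downhill flow cheaply enough for small real-time grids.

# Hedged sketch: each step, every cell pushes part of its fluid toward lower
# neighbouring surface heights. Grid size, flow_rate, and terrain are arbitrary.
import numpy as np

def step(fluid, terrain, flow_rate=0.25):
    """Move fluid downhill on a height field, one relaxation pass."""
    surface = terrain + fluid                    # height the fluid "sees" this step
    out = fluid.copy()
    h, w = fluid.shape
    for y in range(h):
        for x in range(w):
            remaining = fluid[y, x]
            if remaining <= 0:
                continue
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w):
                    continue
                drop = surface[y, x] - surface[ny, nx]
                if drop <= 0:
                    continue
                moved = min(remaining, drop * flow_rate)
                out[y, x] -= moved
                out[ny, nx] += moved
                remaining -= moved
    return out

terrain = np.fromfunction(lambda y, x: 0.05 * x, (16, 16))   # gently tilted surface
fluid = np.zeros((16, 16))
fluid[8, 8] = 10.0                                           # a blob of "mercury"
for _ in range(100):
    fluid = step(fluid, terrain)

A particle system (e.g. smoothed-particle hydrodynamics) trades this grid for moving particles and looks more like splashing liquid, but is usually costlier per frame.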

C++ simulate pressing of equal sign (=) and question mark (?)

Submitted by 痴心易碎 on 2019-12-02 16:36:50
I'm having some problems simulating a keypress of the equals sign (=) and the question mark (?). I figured that if there is no virtual-key code for these two, I should combine key presses and releases as this guy did with Ctrl-V: http://batchloaf.wordpress.com/2012/10/18/simulating-a-ctrl-v-keystroke-in-win32-c-or-c-using-sendinput/

My code for "=" (SHIFT + "+"):

INPUT ip;
ip.type = INPUT_KEYBOARD;
ip.ki.wScan = 0; // hardware scan code for key
ip.ki.time = 0;
ip.ki.dwExtraInfo = 0;
ip.ki.wVk = VK_LSHIFT;

VHDL - How should I create a clock in a testbench?

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-02 16:20:05
How should I create a clock in a testbench? I have already found one answer, but others on Stack Overflow have suggested that there are alternative or better ways of achieving this:

LIBRARY ieee;
USE ieee.std_logic_1164.ALL;

ENTITY test_tb IS
END test_tb;

ARCHITECTURE behavior OF test_tb IS

    COMPONENT test
        PORT(clk : IN std_logic);
    END COMPONENT;

    signal clk : std_logic := '0';
    constant clk_period : time := 1 ns;

BEGIN

    uut: test PORT MAP (clk => clk);

    -- Clock process definition (a clock with a 50% duty cycle is generated here)
    clk_process : process
    begin
        clk <= '0';
        wait for clk_period/2;  -- clock is '0' for half the period
        clk <= '1';
        wait for clk_period/2;  -- and '1' for the other half
    end process;

END behavior;

What are some algorithms that will allow me to simulate planetary physics?

Submitted by 独自空忆成欢 on 2019-12-02 15:51:20
I'm interested in writing a "Solar System" simulator that will let me simulate the rotational and gravitational forces of planets and stars. I'd like to be able to, say, simulate our solar system and run it at varying speeds (i.e., watch Earth and the other planets orbit the Sun over days, years, etc.). I'd like to be able to add planets and change a planet's mass, etc., to see how it would affect the system. Does anyone have any resources that would point me in the right direction for writing this sort of simulator? Are there any existing physics engines which are designed for this
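At its core this is Newtonian pairwise gravity integrated forward in time. A minimal hedged sketch follows; the two bodies, the step size, and the semi-implicit Euler integrator are illustrative choices, not a calibrated solar-system model.

# Hedged sketch: Newtonian pairwise gravity with semi-implicit Euler integration.
# The bodies and time step are illustrative values only.
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

class Body:
    def __init__(self, mass, x, y, vx, vy):
        self.mass, self.x, self.y, self.vx, self.vy = mass, x, y, vx, vy

def step(bodies, dt):
    # accumulate each body's acceleration from every other body
    for b in bodies:
        ax = ay = 0.0
        for other in bodies:
            if other is b:
                continue
            dx, dy = other.x - b.x, other.y - b.y
            r = math.hypot(dx, dy)
            a = G * other.mass / (r * r)
            ax += a * dx / r
            ay += a * dy / r
        b.vx += ax * dt          # update velocity first (semi-implicit Euler)
        b.vy += ay * dt
    for b in bodies:
        b.x += b.vx * dt
        b.y += b.vy * dt

# Sun and Earth, roughly: a 1 AU orbit at about 29.8 km/s
sun = Body(1.989e30, 0.0, 0.0, 0.0, 0.0)
earth = Body(5.972e24, 1.496e11, 0.0, 0.0, 29_780.0)
bodies = [sun, earth]
for _ in range(365 * 24):        # one simulated year in one-hour steps
    step(bodies, dt=3600.0)

For many bodies or long runs, the usual next steps are a symplectic or higher-order integrator (leapfrog, RK4) and, for very large N, an approximation such as Barnes-Hut.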

Calculating the mean of every replication

Submitted by 房东的猫 on 2019-12-02 13:36:21
I have the following code:

set.seed(30)
nsim <- 50  ## NUMBER OF REPLICATIONS
demand <- c(12,13,24,12,13,12,14,10,11,10)
res <- replicate(nsim, {
  load <- runif(10,11,14)
  diff <- load - demand  ## DIFFERENCE BETWEEN DEMAND AND LOAD
  return(sum(diff < 0))
})
res
 [1] 6 5 7 4 4 5 4 3 6 4 5 5 5 4 2 5 3 3 3 5 3 2 4 6 5 4 4 3 5 6 4 4 3 6 5 3 5 5 4 3 3
[42] 6 4 4 4 6 6 5 4 5

I have a huge data set, and the question is: what is the fastest way of calculating the mean for every replication? For example, res in the first replication is 6, so the result should be 6/1 = 6; for the second, (6+5)/2 = 5.5; for the third, (6+5+7)/3 = 6
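What is being asked for is the running (cumulative) mean after each replication, i.e. cumsum(res)[k] / k. The question's code is R, but as a hedged illustration of the computation itself, using the first few values printed above:

# Hedged illustration only (the question's code is R): the running mean after
# k replications is cumsum(res)[k] / k.
import numpy as np

res = np.array([6, 5, 7, 4, 4])    # first few replication results from above
running_mean = np.cumsum(res) / np.arange(1, len(res) + 1)
print(running_mean)                # [6.   5.5  6.   5.5  5.2]

In R itself the equivalent one-liner would be cumsum(res) / seq_along(res).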

Feedback from threads to main program

Submitted by 巧了我就是萌 on 2019-12-02 10:25:07
My software will simulate a few hundred hardware devices, each of which will send several thousand reports to a database server. Trying it without threading did not give very good results, so now it's time to thread. Since I am load-testing the database server, some of those transactions will succeed and a few may fail. The GUI of the main program needs to reflect this. How should the threads communicate their results back to the main program? Update global variables? Send a message? Or something
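The question does not name a language or GUI toolkit, so the following is only a hedged illustration of the "send a message" option in Python: worker threads never touch the GUI, they just put result records on a thread-safe queue that the main (GUI) thread drains. The device counts, report counts, and failure rate are invented.

# Hedged sketch of the "send a message" option: workers never touch the GUI, they
# only put result records on a thread-safe queue that the main (GUI) thread drains.
# Device counts, report counts, and the failure rate are invented for illustration.
import queue
import random
import threading
import time

results = queue.Queue()                      # holds (device_id, succeeded) tuples

def simulate_device(device_id: int, n_reports: int) -> None:
    for _ in range(n_reports):
        time.sleep(0.001)                    # stand-in for the real DB transaction
        ok = random.random() > 0.05          # a few transactions may fail
        results.put((device_id, ok))

workers = [threading.Thread(target=simulate_device, args=(i, 100), daemon=True)
           for i in range(10)]
for w in workers:
    w.start()

# In a real GUI this drain would run from a timer or idle handler on the UI thread.
received = 0
while received < 10 * 100:
    device_id, ok = results.get()
    received += 1
    # update success/failure counters or progress bars here

The same shape exists in most GUI frameworks (posting messages or events to the UI thread), and it avoids both shared global variables and cross-thread GUI calls.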