First of all, I know that lock{} is syntactic sugar for the Monitor class.
I was playing with simple multithreading examples.
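Concretely, in C# 4.0 and later the compiler expands a lock block into roughly the following Monitor.Enter/Monitor.Exit pattern (the exact shape varies a little between compiler versions; ms_Lock and ms_Sum are the names used in the example below):

    // lock (ms_Lock) { ms_Sum += 1; } compiles to approximately:
    bool lockTaken = false;
    try
    {
        System.Threading.Monitor.Enter(ms_Lock, ref lockTaken);
        ms_Sum += 1;                 // the body of the lock block
    }
    finally
    {
        if (lockTaken)
            System.Threading.Monitor.Exit(ms_Lock);
    }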
If there is no locking around the shared variable ms_Sum, then both threads can access ms_Sum and increment it without restriction. Two threads running in parallel on a dual-core machine will both operate on the variable at the same time.
Memory: ms_Sum = 5
Thread1: ms_Sum += 1: ms_Sum = 5+1 = 6
Thread2: ms_Sum += 1: ms_Sum = 5+1 = 6 (running in parallel).
Here is a rough breakdown of the order in which things happen, as best I can explain:
1: ms_Sum = 5.
2: (Thread 1) ms_Sum += 1;
3: (Thread 2) ms_Sum += 1;
4: (Thread 1) "read value of ms_Sum" -> 5
5: (Thread 2) "read value of ms_Sum" -> 5
6: (Thread 1) ms_Sum = 5+1 = 6
7: (Thread 2) ms_Sum = 5+1 = 6
It makes sense that with no synchronization/locking you end up with roughly half the expected total: whenever both threads read the same starting value at the same time, one of the two increments is lost, so each pair of overlapping increments only counts once.
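Here is a minimal sketch of that unsynchronized experiment (class and constant names such as RaceDemo and LoopCount are made up for illustration). On a multi-core machine the printed total usually comes out well short of the expected 2 * LoopCount:

    using System;
    using System.Threading;

    class RaceDemo
    {
        static int ms_Sum = 0;
        const int LoopCount = 1000000;   // increments per thread

        static void Increment()
        {
            for (int i = 0; i < LoopCount; i++)
            {
                // Not atomic: read ms_Sum, add 1, write it back.
                // Two threads can read the same value, so one increment is lost.
                ms_Sum += 1;
            }
        }

        static void Main()
        {
            var t1 = new Thread(Increment);
            var t2 = new Thread(Increment);
            t1.Start();
            t2.Start();
            t1.Join();
            t2.Join();

            // Expected 2000000; without locking it is usually noticeably less.
            Console.WriteLine(ms_Sum);
        }
    }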
With proper synchronization, i.e. lock(ms_Lock) { ms_Sum += 1; }, the order changes to be more like this (a code sketch of the locked loop follows the breakdown):
1: ms_Sum = 5.
2: (Thread 1) OBTAIN LOCK. ms_Sum += 1;
3: (Thread 2) WAIT FOR LOCK.
4: (Thread 1) "read value of ms_Sum" -> 5
5: (Thread 1) ms_Sum = 5+1 = 6
6: (Thread 1) RELEASE LOCK.
7: (Thread 2) OBTAIN LOCK. ms_Sum += 1;
8: (Thread 2) "read value of ms_Sum" -> 6
9: (Thread 2) ms_Sum = 6+1 = 7
10: (Thread 2) RELEASE LOCK.
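In code, that ordering corresponds to locking the increment inside the loop; this is a sketch that reuses the hypothetical ms_Sum and LoopCount from the earlier example:

    static readonly object ms_Lock = new object();

    static void IncrementLocked()
    {
        for (int i = 0; i < LoopCount; i++)
        {
            lock (ms_Lock)       // only one thread at a time can enter this block
            {
                ms_Sum += 1;     // the read-add-write now completes as a unit
            }
        }
    }
    // With both threads running IncrementLocked, the final total is reliably
    // 2 * LoopCount.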
As for why lock(ms_Lock) {}; ms_Sum += 1; is "almost" correct, I think you're just getting lucky. The empty lock still forces each thread to slow down and "wait its turn" to obtain and release the lock, but the increment itself happens outside the critical section. Because ms_Sum += 1; is so trivial (it runs very fast), thread 1 has usually finished the arithmetic by the time thread 2 has paid the overhead of obtaining and releasing the lock, so the result comes out close to the desired total. If the unprotected work were more complex (taking more processing time), you'd find the result drifting much further from the desired total.
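To make that concrete, here is the "empty lock" variant in the same hypothetical sketch. The lock briefly serializes the threads, but the increment itself sits outside the critical section, so lost updates are still possible, just much rarer:

    static void IncrementAlmost()
    {
        for (int i = 0; i < LoopCount; i++)
        {
            lock (ms_Lock)
            {
                // Empty body: the threads take turns here for a moment...
            }
            ms_Sum += 1;         // ...but this increment is unprotected, so two
                                 // threads can still read the same value and
                                 // lose an update; the window is just small.
        }
    }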