When to use volatile and synchronized

囚心锁ツ 2020-12-07 16:16

I know there are many questions about this, but I still don't quite understand. I know what both of these keywords do, but I can't determine which to use in certain scenarios.

5 Answers
  •  孤街浪徒
    2020-12-07 16:34

    There is insufficient information in your post to determine what is going on, which is why all the advice you are getting is general information about volatile and synchronized.

    So, here's my general advice:

    During the write-compile-run cycle of a program, there are two points where optimizations happen:

    • at compile time, when the compiler might try to reorder instructions or optimize data caching.
    • at runtime, when the CPU has its own optimizations, like caching and out-of-order execution.

    All this means that instructions will most likely not execute in the order that you wrote them, regardless of whether that order must be maintained to ensure program correctness in a multithreaded environment. A classic example you will often find in the literature is this:

    class ThreadTask implements Runnable {
        private boolean stop = false; // deliberately not volatile
        private boolean work;

        public void run() {
            while (!stop) {
                work = !work; // simulate some work
            }
        }

        public void stopWork() {
            stop = true; // signal the worker thread to stop
        }

        public static void main(String[] args) throws InterruptedException {
            ThreadTask task = new ThreadTask();
            Thread t = new Thread(task);
            t.start();
            Thread.sleep(1000); // let the worker run for a second
            task.stopWork();
            t.join();           // may never return: the update to stop might not be seen
        }
    }
    

    Depending on compiler optimizations and CPU architecture, the above code may never terminate on a multi-processor system. This is because the value of stop may be cached in a register of the CPU running thread t, so that the thread never again reads the value from main memory, even though the main thread has updated it in the meantime.

    To combat this kind of situation, memory fences were introduced. These are special instructions that do not allow regular instructions before the fence to be reordered with instructions after the fence. One such mechanism is the volatile keyword. Variables marked volatile are not optimized by the compiler/CPU and will always be written/read directly to/from main memory. In short, volatile ensures visibility of a variable's value across CPU cores.
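
    Applied to the example above, declaring the flag volatile is enough to guarantee that the update made in stopWork() becomes visible to the worker thread. A minimal sketch of the changed class (the main method is unchanged, so it is omitted here):

    class ThreadTask implements Runnable {
        // every read and write of 'stop' now goes to main memory,
        // so the loop observes the update made by the main thread
        private volatile boolean stop = false;
        private boolean work;

        public void run() {
            while (!stop) {
                work = !work; // simulate some work
            }
        }

        public void stopWork() {
            stop = true; // visible to the worker thread; the loop terminates
        }
    }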

    Visibility is important, but it should not be confused with atomicity. Two threads incrementing the same shared variable may still lose updates even if the variable is declared volatile. This is because the increment is actually translated into a read-modify-write sequence of instructions, and another thread can interleave with that sequence at any point. For such cases you need a critical section, for example one created with the synchronized keyword, which guarantees that only a single thread at a time can execute the code enclosed in the synchronized block. Another common use of critical sections is atomic updates to a shared collection, since iterating over a collection while another thread is adding or removing items will typically cause an exception to be thrown.
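
    As a concrete sketch of that difference (the Counter class below is my own illustration, not from the question): making the field volatile alone would not prevent lost updates, but synchronizing the read-modify-write does.

    class Counter {
        private int count = 0;

        // synchronized makes the read-modify-write of count++ atomic and also
        // acts as a memory fence, so the field does not need to be volatile
        public synchronized void increment() {
            count++;
        }

        public synchronized int get() {
            return count;
        }

        public static void main(String[] args) throws InterruptedException {
            Counter c = new Counter();
            Runnable task = () -> {
                for (int i = 0; i < 100_000; i++) {
                    c.increment();
                }
            };
            Thread t1 = new Thread(task);
            Thread t2 = new Thread(task);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            // always prints 200000; without synchronized it could print less
            System.out.println(c.get());
        }
    }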

    Finally, two interesting points:

    • synchronized and a few other constructs such as Thread.join will introduce memory fences implicitly. Hence, incrementing a variable inside a synchronized block does not require the variable to also be volatile, assuming that's the only place it's being read/written.
    • For simple updates such as a value swap, increment, or decrement, you can use the non-blocking atomic classes such as AtomicInteger and AtomicLong. These are often faster than synchronized because they never block, so no context switch is triggered when another thread is updating the same value. They also introduce memory fences when used (see the sketch after this list).
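
    For example, the counter above can be written lock-free with AtomicInteger (a sketch; the class name is mine):

    import java.util.concurrent.atomic.AtomicInteger;

    class AtomicCounter {
        private final AtomicInteger count = new AtomicInteger();

        public void increment() {
            // a single atomic compare-and-swap based update: no lock, no blocking
            count.incrementAndGet();
        }

        public int get() {
            return count.get();
        }
    }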
