Double-checked locking without volatile

礼貌的吻别 2020-12-24 03:06

I read this question about how to do Double-checked locking:

// Double-check idiom for lazy initialization of instance fields
private volatile FieldType field;

5 Answers
  •  鱼传尺愫
    2020-12-24 03:43

    In short

    The version of the code without the volatile keyword or the wrapper class has no correctness guarantee from the Java Memory Model; whether it actually misbehaves depends on the JVM and the memory model of the underlying hardware it runs on.

    The version with the wrapper class is a well-known alternative, the Initialization-on-Demand Holder design pattern, and relies on the ClassLoader contract that any given class is loaded at most once, upon first access, and in a thread-safe way.

    The need for volatile

    The way developers think of code execution most of the time is that the program is loaded into main memory and executed directly from there. The reality, however, is that there are a number of hardware caches between main memory and the processor cores. The problem arises because each thread may run on a separate core, each with its own independent copy of the variables in scope; while we like to think of field as a single location, the reality is more complicated.

    To run through a simple (though perhaps verbose) example, consider a scenario with two threads and a single level of hardware caching, where each thread has its own copy of field in that cache. So there are already three versions of field: one in main memory, one in thread A's cache, and one in thread B's cache. I'll refer to these as fieldM, fieldA, and fieldB respectively. (A code sketch of the non-volatile variant these steps assume follows the list.)

    1. Initial state
      fieldM = null
      fieldA = null
      fieldB = null
    2. Thread A performs the first null-check, finds fieldA is null.
    3. Thread A acquires the lock on this.
    4. Thread B performs the first null-check, finds fieldB is null.
    5. Thread B tries to acquire the lock on this but finds that it's held by thread A. Thread B sleeps.
    6. Thread A performs the second null-check, finds fieldA is null.
    7. Thread A assigns fieldA the value fieldType1 and releases the lock. Since field is not volatile, this assignment is not propagated out to main memory.
      fieldM = null
      fieldA = fieldType1
      fieldB = null
    8. Thread B awakes and acquires the lock on this.
    9. Thread B performs the second null-check, finds fieldB is null.
    10. Thread B assigns fieldB the value fieldType2 and releases the lock.
      fieldM = null
      fieldA = fieldType1
      fieldB = fieldType2
    11. At some point, the writes to cache copy A are synched back to main memory.
      fieldM = fieldType1
      fieldA = fieldType1
      fieldB = fieldType2
    12. At some later point, the writes to cache copy B are synched back to main memory overwriting the assignment made by copy A.
      fieldM = fieldType2
      fieldA = fieldType1
      fieldB = fieldType2
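
    To make the walkthrough concrete, here is a minimal sketch of the non-volatile variant the steps above assume. The class name, FieldType, and computeFieldValue() are placeholders for illustration, not code from the question:

        // Placeholder type standing in for whatever FieldType actually is.
        class FieldType { }

        class BrokenLazyInit {
            private FieldType field;                     // note: NOT volatile

            FieldType getField() {
                if (field == null) {                     // steps 2/4: first null-check, no lock
                    synchronized (this) {                // steps 3/5/8: acquire the lock on this
                        if (field == null) {             // steps 6/9: second null-check
                            // steps 7/10: this write may sit in the local cache;
                            // other threads have no guarantee of ever seeing it.
                            field = computeFieldValue();
                        }
                    }
                }
                return field;
            }

            private FieldType computeFieldValue() {
                return new FieldType();
            }
        }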

    As one of the commenters on the question mentioned, using volatile ensures writes are visible. Under the Java Memory Model, a write to a volatile field happens-before every subsequent read of that field, so once thread A publishes the reference, thread B is guaranteed to see it, along with the writes that constructed the object. How a particular JVM achieves this (memory barriers, cache-coherence traffic, or simply not caching the field) is an implementation detail.
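
    Applied to the sketch above (reusing the placeholder FieldType), the fix is to declare the field volatile. The local result variable is one common formulation; it avoids re-reading the volatile field on the fast path:

        class FixedLazyInit {
            // volatile: the write inside the synchronized block happens-before
            // any later read of the field by another thread, so a non-null
            // reference always points at a fully constructed object.
            private volatile FieldType field;

            FieldType getField() {
                FieldType result = field;                // one volatile read in the common case
                if (result == null) {
                    synchronized (this) {
                        result = field;
                        if (result == null) {
                            field = result = computeFieldValue();
                        }
                    }
                }
                return result;
            }

            private FieldType computeFieldValue() {
                return new FieldType();
            }
        }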

    One last note on this: I mentioned earlier that the results are system dependent. This is because different underlying systems may take a less optimistic approach to their memory model and treat all memory shared across threads as volatile, or may apply a heuristic to decide whether a particular reference should be treated as volatile, at the cost of the extra synching to main memory. This can make testing for these problems a nightmare: not only do you have to run against a large enough sample to have a chance of triggering the race condition, you might also just happen to be testing on a system that is conservative enough never to trigger it.

    Initialization-on-Demand Holder

    The main thing I wanted to point out here is that this works because we're essentially sneaking a singleton into the mix. The ClassLoader contract means that while there can be many instances of Class, there is only a single Class object for any given type A, and it happens to be loaded lazily, upon first reference. In fact, you can think of any static field in a class's definition as really being a field of a singleton associated with that class, one where there happen to be increased member-access privileges between that singleton and instances of the class.
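
    A minimal sketch of the holder idiom, again with placeholder names and reusing the FieldType stand-in from the earlier sketches:

        class HolderLazyInit {
            // The nested class is not initialized until getField() first touches
            // Holder.INSTANCE; the JVM guarantees that class initialization runs
            // at most once and is safely published to all threads.
            private static class Holder {
                static final FieldType INSTANCE = computeFieldValue();
            }

            static FieldType getField() {
                return Holder.INSTANCE;                  // triggers Holder's initialization on first call
            }

            private static FieldType computeFieldValue() {
                return new FieldType();
            }
        }

    Because the lazily initialized value lives in a static field of the holder, this variant fits static or singleton state rather than per-instance fields, which is where the double-check idiom with volatile still applies.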
