Preferring synchronized to volatile


In fact, synchronization is also related to memory visibility, because the JVM inserts a memory barrier at the exit of a synchronized block. This ensures that the results of writes made by a thread inside the synchronized block are guaranteed to be visible to reads by other threads once the first thread has exited the block.

Note: following @PaŭloEbermann's comment, if the other threads do not go through a read memory barrier (for example, by entering a synchronized block themselves), their local caches will not be invalidated and they might therefore read a stale value.

The exit of a synchronized block establishes a happens-before relationship, as described in this doc: http://docs.oracle.com/javase/6/docs/api/java/util/concurrent/package-summary.html#MemoryVisibility

Look for these extracts:

The results of a write by one thread are guaranteed to be visible to a read by another thread only if the write operation happens-before the read operation.

and

An unlock (synchronized block or method exit) of a monitor happens-before every subsequent lock (synchronized block or method entry) of that same monitor. And because the happens-before relation is transitive, all actions of a thread prior to unlocking happen-before all actions subsequent to any thread locking that monitor.
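
To make the quoted rule concrete, here is a minimal sketch (class and member names are made up for illustration): a plain, non-volatile field written inside one synchronized block is reliably observed by another thread that later synchronizes on the same monitor, because the unlock happens-before the subsequent lock.

// Sketch of the unlock/lock happens-before rule quoted above.
public class HandoffExample {
    private final Object lock = new Object();
    private int data;            // deliberately NOT volatile
    private boolean ready;       // deliberately NOT volatile

    public void writer() {
        synchronized (lock) {    // lock of the monitor
            data = 42;
            ready = true;
        }                        // unlock: all writes above happen-before a later lock
    }

    public void reader() {
        synchronized (lock) {    // subsequent lock of the same monitor
            if (ready) {
                System.out.println(data);  // guaranteed to print 42, not a stale 0
            }
        }
    }
}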

Synchronized and volatile are different, but both of them are usually used to solve the same common problem.

Synchronized makes sure that only one thread accesses the shared resource at a given point in time.

Those shared resources, in turn, are often declared volatile because, when one thread changes the shared value, the change has to become visible to the other threads as well. Without volatile, the runtime may optimize the code by reading the value from a thread-local cache. What volatile does is ensure that whenever a thread accesses the variable, it does not read a possibly stale cached value; instead it reads it from main memory, and that value is used.
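
A minimal sketch of that visibility effect (names are made up for illustration): without volatile, the worker thread may keep reading a cached value of the flag and never notice the update; declaring the flag volatile forces every read to see the latest write.

// Visibility via a volatile stop flag.
public class StopFlagExample {
    private volatile boolean running = true;   // without 'volatile', the worker may never stop

    public void workLoop() {
        while (running) {
            // do some work; each iteration re-reads 'running' from main memory
        }
        System.out.println("worker stopped");
    }

    public void stop() {
        running = false;                       // this write becomes visible to the worker
    }

    public static void main(String[] args) throws InterruptedException {
        StopFlagExample example = new StopFlagExample();
        Thread worker = new Thread(example::workLoop);
        worker.start();
        Thread.sleep(100);
        example.stop();
        worker.join();
    }
}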


I was going through the log4j code and this is what I found.

/**
 * Config should be consistent across threads.
 */
protected volatile PrivateConfig config;
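
This is the safe-publication pattern: configuration is kept in an immutable object and swapped in with a single volatile write, so readers always see a fully constructed, consistent snapshot. Below is a minimal sketch of that pattern (the class and field names are simplified inventions, not log4j's actual implementation).

// Hypothetical sketch of the pattern behind the volatile config field above.
public class Component {

    // Immutable snapshot of configuration; never mutated after construction.
    static final class Config {
        final int level;
        Config(int level) { this.level = level; }
    }

    private volatile Config config = new Config(0);

    // Any thread: build a new snapshot, then publish it with one volatile write.
    public void reconfigure(int newLevel) {
        config = new Config(newLevel);
    }

    // Any thread: a single volatile read yields a consistent snapshot.
    public int currentLevel() {
        return config.level;
    }
}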

That's wrong. Synchronization has to do with memory visibility. Every thread has its own cache. If you acquire a lock, the cache is refreshed; if you release a lock, the cache is flushed to main memory.

If you read a volatile field there is also a refresh; if you write a volatile field there is a flush.


If multiple threads write to a shared volatile variable and they also need to use a previous value of it, this can create a race condition. At that point you need to use synchronization (see the counter sketch at the end of this section).

... if two threads are both reading and writing to a shared variable, then using the volatile keyword for that is not enough. You need to use a synchronized in that case to guarantee that the reading and writing of the variable is atomic. Reading or writing a volatile variable does not block threads reading or writing. For this to happen you must use the synchronized keyword around critical sections.

For a detailed tutorial about volatile, see 'volatile' is not always enough.
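
Here is a minimal sketch of the race described above (class and method names are made up): count++ on a volatile field is a read-modify-write, so two threads can each read the same old value and one increment gets lost; wrapping the update in a synchronized block makes it atomic (java.util.concurrent.atomic.AtomicInteger would be another fix).

// Lost updates on a volatile counter vs. a synchronized counter.
public class CounterExample {
    private volatile int unsafeCount = 0;   // visibility, but NOT atomicity
    private int safeCount = 0;
    private final Object lock = new Object();

    public void unsafeIncrement() {
        unsafeCount++;                      // two threads may read the same old value
    }

    public void safeIncrement() {
        synchronized (lock) {               // read-modify-write happens atomically
            safeCount++;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CounterExample c = new CounterExample();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                c.unsafeIncrement();
                c.safeIncrement();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // safeCount is always 200000; unsafeCount is usually less
        System.out.println("unsafe=" + c.unsafeCount + " safe=" + c.safeCount);
    }
}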
