I faced the following code in our project:
synchronized (Thread.currentThread()){
//some code
}
I don't understand the reason to use synchronized on the current thread here.
Consider this:

Thread t = new Thread() {
    @Override
    public void run() { // A
        // inside run(), Thread.currentThread() is t itself
        synchronized (Thread.currentThread()) {
            System.out.println("A");
            try {
                Thread.sleep(5000);
            } catch (InterruptedException e) {
            }
        }
    }
};
t.start();
synchronized (t) { // B
    System.out.println("B");
    Thread.sleep(5000); // the enclosing test method is assumed to declare throws InterruptedException
}
Blocks A and B cannot run simultaneously, so in the given test either the "A" or the "B" output will be delayed by five seconds; which one comes first is undefined.
Although this is almost definitely an antipattern and should be solved differently, your immediate question still calls for an answer. If your entire codebase never acquires a lock on any Thread instance other than Thread.currentThread(), then this lock will indeed never be contended. However, if anywhere else you have

synchronized (someSpecificThreadInstance) { ... }

then such a block will have to contend with your shown block for the same lock. It may indeed happen that the thread reaching synchronized (Thread.currentThread()) must wait for some other thread to relinquish the lock.
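If the block is only meant to protect the thread's own state, the usual way to avoid this kind of accidental contention is a private lock object that nothing else can synchronize on. A minimal sketch, assuming a hypothetical Worker class (the class and field names are illustrative, not from the original code):

public class Worker implements Runnable {
    // A dedicated private monitor: code outside this class cannot
    // accidentally synchronize on it, unlike a Thread instance.
    private final Object lock = new Object();

    @Override
    public void run() {
        synchronized (lock) {
            // critical section guarding this worker's own state
            System.out.println("working under a private lock");
        }
    }
}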
Basically, there is no difference between the presence and absence of the synchronized block here. However, I can think of a situation that could give this usage another meaning.

A synchronized block has the interesting side effect of causing the runtime to place a memory barrier before entering and after leaving the block. A memory barrier is a special CPU instruction that forces variables shared between multiple threads to expose their latest values. Usually a thread works with its own cached copy of a shared variable, and that value is visible to this thread only; a memory barrier instructs the thread to read or write the value in a way that makes the change visible to the other threads.

So the synchronized block in this case does not do any useful locking (there will be no real lock-and-wait situation, at least none I can think of, unless the use case mentioned in the other answer above applies), but it does force the shared fields to expose their latest values. This is true, however, only if the other places in the code that work with the variables in question also use memory barriers (for example, the same synchronized block around the update/reassignment operations). Still, this is not a solution for avoiding race conditions.
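To illustrate that both sides need the same barrier, here is a minimal sketch (the SharedFlag class and its members are made up for illustration): the write and the read of ready are guarded by the same monitor, so the reader is guaranteed to see the writer's update; if either side skipped its synchronized block, that visibility guarantee would be lost.

public class SharedFlag {
    private final Object lock = new Object();
    private boolean ready; // deliberately not volatile

    public void markReady() {
        synchronized (lock) { // the writer publishes inside the lock...
            ready = true;
        }
    }

    public boolean isReady() {
        synchronized (lock) { // ...and the reader reads inside the same lock,
            return ready;     // so it observes the latest write
        }
    }
}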
If you're interested, I recommend reading this article. It is about memory barriers and locking in C# and the .NET Framework, but the problem is similar for Java and the JVM (except for the behavior of volatile fields). It helped me a lot in understanding how threads, volatile fields, and locks work in general.
One must also take into account some serious considerations about this approach that were mentioned in the comments below this answer.
You are implementing a recursive mutex; that is, the same thread can re-enter the synchronized block, but other threads cannot.
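For completeness, a minimal sketch of that reentrancy (ReentrancyDemo is an illustrative name; intrinsic locks in Java are reentrant no matter which object serves as the monitor):

public class ReentrancyDemo {
    public static void main(String[] args) {
        Object monitor = Thread.currentThread();
        synchronized (monitor) {
            // Intrinsic locks are reentrant: the owning thread can acquire
            // the same monitor again without blocking itself.
            synchronized (monitor) {
                System.out.println("re-entered the lock without deadlock");
            }
        }
    }
}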