
CUDA: __syncthreads() inside if statements


In short, the behavior is undefined. So it may sometimes do what you want, or it may not, or (quite likely) will just hang or crash your kernel.

If you are really curious how things work internally, you need to remember that threads do not execute independently; instead, the hardware executes a warp (a group of 32 threads) at a time.

This of course creates a problem with conditional branches where the condition does not evaluate uniformly throughout the warp. The problem is solved by executing both paths, one after the other, each with the threads that are not supposed to execute that path disabled. IIRC on existing hardware the taken side of the branch is executed first, then the not-taken side, but this order is undefined and thus not guaranteed.
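
For example, consider this minimal sketch (a hypothetical kernel, not code from the question) in which the branch diverges within every warp:

```cuda
// Hypothetical kernel: even and odd lanes of each warp take different
// paths, so the hardware serializes the two sides of the branch.
__global__ void divergent(int *out)
{
    int tid = threadIdx.x;
    if (tid % 2 == 0) {
        out[tid] = 1;   // runs with the odd lanes of the warp disabled
    } else {
        out[tid] = 2;   // runs afterwards with the even lanes disabled
    }
    // reconvergence point: all 32 lanes of the warp are active again
}
```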

This separate execution of paths continues up to some point that the compiler can determine is guaranteed to be reached by all threads of the two separate execution paths (the "reconvergence point" or "synchronization point"). When execution of the first code path reaches this point, it is suspended and the second code path is executed instead. When the second path reaches the synchronization point, all threads are enabled again and execution continues uniformly from there.

The situation gets more complicated if another conditional branch is encountered before the synchronization point is reached. This problem is solved with a stack of paths that still need to be executed (luckily the growth of the stack is limited, as there can be at most 32 different code paths for one warp).
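
A hypothetical illustration of such nesting (again, not code from the question):

```cuda
// Each inner branch pushes another still-pending path onto the
// hardware's stack of paths to execute.
__global__ void nested(int *out)
{
    int tid = threadIdx.x;
    if (tid < 16) {
        if (tid < 8)  out[tid] = 0;   // path A
        else          out[tid] = 1;   // path B, pending while A runs
    } else {
        out[tid] = 2;                 // path C, pending while A and B run
    }
}
```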

Where the synchronization points are inserted is undefined and even varies slightly between architectures, so again there are no guarantees. The only (unofficial) comment you will get from Nvidia is that the compiler is pretty good at finding optimal synchronization points. However, there are often subtle issues that may move the optimal point further down than you might expect, particularly if threads exit early.

Now, to understand the behavior of the __syncthreads() directive (which translates into a bar.sync instruction in PTX), it is important to realize that this instruction is not executed per thread but for the whole warp at once (regardless of whether any threads are disabled), because only the warps of a block need to be synchronized. The threads of a warp already execute in sync, and further synchronization will either have no effect (if all threads are enabled) or lead to a deadlock when trying to sync threads from different conditional code paths.
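
A sketch of the dangerous pattern this implies (hypothetical code, assuming a one-dimensional block larger than one warp):

```cuda
// Both paths contain a barrier, but the warp reaches them with different
// sets of lanes disabled. Whether the two bar.sync instructions "match up"
// is undefined: the kernel may hang, or may appear to work.
__global__ void divergentBarrier(float *data)
{
    if (threadIdx.x < 16) {     // diverges within warp 0
        __syncthreads();        // issued for the whole warp, half disabled
    } else {
        __syncthreads();        // second barrier on the other path
    }
    data[threadIdx.x] += 1.0f;
}
```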

You can work your way from this description to how your particular piece of code behaves. But keep in mind that all this is undefined, there are no guarantees, and relying on a specific behavior may break your code at any time.

You may want to look at the PTX manual for some more details, particularly for the bar.sync instruction that __syncthreads() compiles to. Henry Wong's "Demystifying GPU Microarchitecture through Microbenchmarking" paper, referenced below by ahmad, is also well worth reading. Even though it covers a now-outdated architecture and CUDA version, the sections about conditional branching and __syncthreads() appear to still be generally valid.


The CUDA programming model is MIMD, but current NVIDIA GPUs implement __syncthreads() at warp granularity rather than per thread. This means it is the warps within a thread block that are synchronized, not necessarily the individual threads. __syncthreads() waits for all warps of the thread block to hit the barrier or exit the program. Refer to Henry Wong's "Demystifying GPU Microarchitecture through Microbenchmarking" paper for further details.
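
A hedged illustration of the "or exit the program" part (hypothetical code; this behavior is an observed hardware detail, not a guarantee of the programming model):

```cuda
// Entire warps return before the barrier. The exit condition is uniform
// within each warp, and on the hardware described in Wong's paper the
// barrier waits only for the warps still running.
__global__ void earlyExit(float *data, int liveWarps)
{
    if (threadIdx.x / 32 >= liveWarps)
        return;                 // whole warps exit here
    __syncthreads();            // counts the remaining warps
    data[threadIdx.x] *= 2.0f;
}
```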


You must not use __syncthreads() unless the statement is always reached by all threads within a thread block. From the programming guide (B.6):

__syncthreads() is allowed in conditional code but only if the conditional evaluates identically across the entire thread block, otherwise the code execution is likely to hang or produce unintended side effects.

Basically, your code is not a well-formed CUDA program.
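
For contrast, a minimal sketch of the well-formed pattern (hypothetical kernel, assuming a launch with 256 threads per block), where the conditional evaluates identically across the entire block:

```cuda
// Safe: `flag` is the same for every thread of the block, so either all
// threads enter the branch or none do, and every thread executes the
// same sequence of __syncthreads() calls.
__global__ void wellFormed(float *data, int flag)
{
    __shared__ float tile[256];
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    tile[threadIdx.x] = data[i];
    __syncthreads();            // reached by all threads of the block

    if (flag) {                 // uniform across the block
        tile[threadIdx.x] *= 2.0f;
        __syncthreads();        // still uniform: fine
    }
    data[i] = tile[threadIdx.x];
}
```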