What's the most efficient way to calculate the warp id / lane id in a 1-D grid?


Question


In CUDA, each thread knows its block index in the grid and thread index within the block. But two important values do not seem to be explicitly available to it:

  • Its index as a lane within its warp (its "lane id")
  • The index of the warp of which it is a lane within the block (its "warp id")

Assuming the grid and its blocks are 1-dimensional (a.k.a. linear, i.e. blockDim.y and blockDim.z are 1), one can obviously obtain these as follows:

enum : unsigned { warp_size = 32 };
auto lane_id = threadIdx.x % warp_size;
auto warp_id = threadIdx.x / warp_size;

and if you don't trust the compiler to optimize that, you could rewrite it as:

enum : unsigned { warp_size = 32, log_warp_size = 5 };
auto lane_id = threadIdx.x & (warp_size - 1);
auto warp_id = threadIdx.x >> log_warp_size;

Is that the most efficient thing to do? It still seems wasteful for every thread to have to compute this.

(inspired by this question.)


Answer 1:


The naive computation is currently the most efficient.

Note: This answer has been heavily edited.

It is very tempting to try to avoid the computation altogether - as these two values seem to already be available if you look under the hood.

You see, NVIDIA GPUs have special registers which your (compiled) code can read to access various kinds of useful information. One such register holds threadIdx.x; another holds blockDim.x; another the clock tick count; and so on. C++ as a language obviously does not expose these; and, in fact, neither does CUDA. However, the intermediate representation into which CUDA code is compiled, named PTX, does expose these special registers (since PTX 1.3, i.e. with CUDA versions >= 2.1).

Two of these special registers are %warpid and %laneid. Now, CUDA supports inlining PTX code within CUDA code with the asm keyword - just like it can be used for host-side code to emit CPU assembly instructions directly. With this mechanism one can use these special registers:

__forceinline__ __device__ unsigned lane_id()
{
    unsigned ret;
    // %laneid is a PTX special register; a literal '%' in inline PTX must be escaped as '%%'
    asm volatile ("mov.u32 %0, %%laneid;" : "=r"(ret));
    return ret;
}

__forceinline__ __device__ unsigned warp_id()
{
    // this is not equal to threadIdx.x / 32
    unsigned ret;
    asm volatile ("mov.u32 %0, %%warpid;" : "=r"(ret));
    return ret;
}

... but there are two problems here.

The first problem - as @Patwie suggests - is that %warpid does not give you what you actually want: it is not the index of the warp in the context of the grid or the block, but rather in the context of the physical SM (which can have many warps resident at a time), and those two are not the same. So don't use %warpid.

As for %laneid, it does give you the correct value, but it is misleadingly non-performant: even though it's a "register", it's not like the regular registers in your register file with 1-cycle access latency. It's a special register, which in the actual hardware is retrieved with an S2R instruction, which can exhibit long latency.


Bottom line: Just compute the warp ID and lane ID from the thread ID. We can't get around this - for now.
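
For the 1-D case in the question, that boils down to a pair of device helpers like the following - just a minimal sketch consolidating the question's own arithmetic; the helper names are illustrative, not from any library:

enum : unsigned { warp_size = 32 };

// Lane index within the warp, computed from the thread index.
// The compiler reduces the modulo to a single AND with 31.
__forceinline__ __device__ unsigned computed_lane_id()
{
    return threadIdx.x % warp_size;
}

// Warp index within the block, computed from the thread index.
// The compiler reduces the division to a single right shift by 5.
__forceinline__ __device__ unsigned computed_warp_id()
{
    return threadIdx.x / warp_size;
}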




Answer 2:


The other answer is very dangerous! Compute the lane-id and warp-id yourself.

#include <cuda.h>
#include <iostream>

inline __device__ unsigned get_lane_id() {
  unsigned ret;
  asm volatile("mov.u32 %0, %%laneid;" : "=r"(ret));
  return ret;
}

inline __device__ unsigned get_warp_id() {
  unsigned ret;
  asm volatile("mov.u32 %0, %%warpid;" : "=r"(ret));
  return ret;
}

__global__ void kernel() {
  const int actual_warpid = get_warp_id();
  const int actual_laneid = get_lane_id();
  const int expected_warpid = threadIdx.x / 32;
  const int expected_laneid = threadIdx.x % 32;
  if (expected_laneid == 0) {
    printf("[warp:] actual: %i  expected: %i\n", actual_warpid,
           expected_warpid);
    printf("[lane:] actual: %i  expected: %i\n", actual_laneid,
           expected_laneid);
  }
}

int main(int argc, char const *argv[]) {
  dim3 grid(8, 7, 1);
  dim3 block(4 * 32, 1);

  kernel<<<grid, block>>>();
  cudaDeviceSynchronize();
  return 0;
}

which gives something like

[warp:] actual: 4  expected: 3
[warp:] actual: 10  expected: 0
[warp:] actual: 1  expected: 1
[warp:] actual: 12  expected: 1
[warp:] actual: 4  expected: 3
[warp:] actual: 0  expected: 0
[warp:] actual: 13  expected: 2
[warp:] actual: 12  expected: 1
[warp:] actual: 6  expected: 1
[warp:] actual: 6  expected: 1
[warp:] actual: 13  expected: 2
[warp:] actual: 10  expected: 0
[warp:] actual: 1  expected: 1
...
[lane:] actual: 0  expected: 0
[lane:] actual: 0  expected: 0
[lane:] actual: 0  expected: 0
[lane:] actual: 0  expected: 0
[lane:] actual: 0  expected: 0
[lane:] actual: 0  expected: 0
[lane:] actual: 0  expected: 0
[lane:] actual: 0  expected: 0
[lane:] actual: 0  expected: 0
[lane:] actual: 0  expected: 0
[lane:] actual: 0  expected: 0

See also the PTX docs on %warpid:

A predefined, read-only special register that returns the thread's warp identifier. The warp identifier provides a unique warp number within a CTA but not across CTAs within a grid. The warp identifier will be the same for all threads within a single warp.

Note that %warpid is volatile and returns the location of a thread at the moment when read, but its value may change during execution, e.g., due to rescheduling of threads following preemption.

Hence, %warpid gives you the warp slot assigned by the hardware scheduler, with no guarantee that it matches the virtual warp id (counted from 0).

The docs make this clear:

For this reason, %ctaid and %tid should be used to compute a virtual warp index if such a value is needed in kernel code; %warpid is intended mainly to enable profiling and diagnostic code to sample and log information such as work place mapping and load distribution.
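In CUDA C++ terms, %tid is threadIdx and %ctaid is blockIdx, so a virtual warp index for an arbitrary block shape could be computed along these lines. This is only a sketch of what the docs describe, not an API from CUB or the PTX ISA, and the helper name is made up:

// Sketch: virtual (zero-based) warp index within the block, computed from
// threadIdx -- the CUDA C++ equivalent of %tid -- for any block shape.
__forceinline__ __device__ unsigned virtual_warp_id()
{
    // Linearize the 3-D thread index within the block...
    unsigned linear_tid = threadIdx.x
                        + threadIdx.y * blockDim.x
                        + threadIdx.z * blockDim.x * blockDim.y;
    // ...and count warps of 32 threads starting from 0.
    return linear_tid / 32;
}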

If you think "OK, then let's just use CUB for this instead": the same caveat applies to cub::WarpId()

Returns the warp ID of the calling thread. Warp ID is guaranteed to be unique among warps, but may not correspond to a zero-based ranking within the thread block.

EDIT: Using %laneid seems to be safe.



Source: https://stackoverflow.com/questions/44337309/whats-the-most-efficient-way-to-calculate-the-warp-id-lane-id-in-a-1-d-grid
