Based on the documentation, the default GPU is the one with the lowest ID:

> If you have more than one GPU in your system, the GPU with the lowest ID will be selected by default.
Suever's answer correctly shows how to pin your operations to a particular GPU. However, if you are running multiple TensorFlow programs on the same machine, it is recommended that you set the `CUDA_VISIBLE_DEVICES` environment variable to expose different GPUs before starting the processes. Otherwise, TensorFlow will attempt to allocate almost all of the memory on every visible GPU, which prevents other processes from using those GPUs (even if the current process isn't using them).
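For example, here is a minimal sketch of restricting one process to a single GPU (the GPU index is an illustrative assumption, and setting the variable from Python is just one option; you can equally set it in the shell when launching each process):

```python
import os

# Expose only physical GPU 1 to this process. Each concurrent process
# would be started with a different value so they don't collide.
# Set this before the first `import tensorflow`, so the restriction is
# in place before TensorFlow initializes the GPUs.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import tensorflow as tf  # this process now sees exactly one GPU
```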
Note that if you use `CUDA_VISIBLE_DEVICES`, the device names "/gpu:0", "/gpu:1", etc. refer to the 0th, 1st, and subsequent *visible* devices in the current process, not the physical GPU IDs.
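For instance, a minimal sketch assuming the TF 1.x graph/session API (consistent with the "/gpu:0"-style names above): with only physical GPU 1 visible, "/gpu:0" inside the process maps to that physical device.

```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # only physical GPU 1 is visible

import tensorflow as tf

# "/gpu:0" is the 0th *visible* device, i.e. physical GPU 1 here.
with tf.device("/gpu:0"):
    c = tf.constant([1.0, 2.0]) + tf.constant([3.0, 4.0])

# log_device_placement prints the device each op was placed on; the
# startup log also shows which physical GPU backs each "/gpu:N" name.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))
```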