If I use nested parallel for loops like this:
#pragma omp parallel for schedule(dynamic,1)
for (int x = 0; x < x_max; ++x) {
    #pragma omp parallel for
    for (int y = 0; y < y_max; ++y) { /* code here */ }
}
NO.
The first #pragma omp parallel
will create a team of parallel threads, and the second will then try to create, for each of the original threads, another team, i.e. a team of teams. However, on almost all existing implementations the second team has only one thread: nested parallelism is disabled by default (you would have to enable it explicitly, e.g. with omp_set_nested() or the OMP_NESTED environment variable), so the second parallel region is essentially unused. Thus your code is roughly equivalent to
#pragma omp parallel for schedule(dynamic,1)
for (int x = 0; x < x_max; ++x) {
    // only one x per thread
    for (int y = 0; y < y_max; ++y) {
        // code here: each thread loops all y
    }
}
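A quick way to see this default behaviour is to print the inner team size from each outer thread; with nested parallelism left disabled, every inner region reports a single thread. A minimal sketch (not part of the original question, the names and thread count are arbitrary):

#include <cstdio>
#include <omp.h>

int main() {
    #pragma omp parallel num_threads(4)
    {
        int outer_id = omp_get_thread_num();

        // Nested parallel region: with nested parallelism at its default
        // (disabled), each outer thread gets an inner "team" of just itself.
        #pragma omp parallel
        {
            if (omp_get_thread_num() == 0)
                std::printf("outer thread %d: inner team has %d thread(s)\n",
                            outer_id, omp_get_num_threads());
        }
    }
    return 0;
}

Compiled with e.g. g++ -fopenmp, each of the four outer threads should report an inner team of 1 unless nested parallelism has been enabled.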
If you don't want that, but want to parallelise only the inner loop, you can do this:
#pragma omp parallel
for (int x = 0; x < x_max; ++x) {
    // each thread loops over all x
    #pragma omp for schedule(dynamic,1)
    for (int y = 0; y < y_max; ++y) {
        // code here, only one y per thread
    }
}
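For completeness, a self-contained version of that second pattern might look like this (x_max, y_max and the loop body are placeholders chosen purely for illustration):

#include <cstdio>
#include <vector>
#include <omp.h>

int main() {
    const int x_max = 8;
    const int y_max = 1000;
    std::vector<double> result(x_max * y_max);

    #pragma omp parallel
    for (int x = 0; x < x_max; ++x) {
        // Every thread runs the full x loop...
        #pragma omp for schedule(dynamic,1)
        for (int y = 0; y < y_max; ++y) {
            // ...but the y iterations are shared out across the team,
            // one iteration at a time (dynamic schedule, chunk size 1).
            result[x * y_max + y] = 0.5 * x + y;
        }
        // The implicit barrier at the end of the "omp for" keeps the
        // threads together before they move on to the next x.
    }

    std::printf("result[0] = %f, result.back() = %f\n",
                result[0], result.back());
    return 0;
}

Note that only one team of threads is created, at the outer #pragma omp parallel; the inner #pragma omp for merely divides the y iterations among that existing team.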