I'm trying to use OpenGL with the Windows API on different threads


Question


So basically I am using the Windows API to create an empty window, and then I use OpenGL to draw to that window from different threads. I managed to do this with just one thread, but getting and dispatching system messages so that the window stays usable was slowing down the frame rate I was able to get, so I'm trying to have another thread do that in parallel while I draw in the main thread.

To do this I have a second thread which creates an empty window and enters an infinite loop to handle the Windows message loop. Before entering the loop it passes the HWND of the empty window to the main thread so OpenGL can be initialised there. To do that I use the PostThreadMessage function with the message code WM_USER, sending the window handle back in the wParam of the message. Here is the code for that secondary thread:

bool t2main(DWORD parentThreadId, int x = 0, int y = 0, int w = 256, int h = 256, int pixelw = 2, int pixelh = 2, const char* windowName = "Window") {

    // Basic drawing values
    int sw = w, sh = h, pw = pixelw, ph = pixelh;
    int ww = 0; int wh = 0;

    // Windows API window handle
    HWND windowHandler;

    // Calculate total window dimensions
    ww = sw * pw; wh = sh * ph;

    // Register the window class
    WNDCLASS wc;
    wc.hIcon = LoadIcon(NULL, IDI_APPLICATION);
    wc.hCursor = LoadCursor(NULL, IDC_ARROW);
    wc.style = CS_HREDRAW | CS_VREDRAW | CS_OWNDC;
    wc.hInstance = GetModuleHandle(nullptr);
    wc.lpfnWndProc = DefWindowProc;
    wc.cbClsExtra = 0;
    wc.cbWndExtra = 0;
    wc.lpszMenuName = nullptr;
    wc.hbrBackground = nullptr;
    wc.lpszClassName = "windowclass";

    RegisterClass(&wc);

    DWORD dwExStyle = WS_EX_APPWINDOW | WS_EX_WINDOWEDGE;
    DWORD dwStyle = WS_CAPTION | WS_SYSMENU | WS_VISIBLE | WS_THICKFRAME;

    RECT rWndRect = { 0, 0, ww, wh };
    AdjustWindowRectEx(&rWndRect, dwStyle, FALSE, dwExStyle);
    int width = rWndRect.right - rWndRect.left;
    int height = rWndRect.bottom - rWndRect.top;

    windowHandler = CreateWindowEx(dwExStyle, "windowclass", windowName, dwStyle, x, y, width, height, NULL, NULL, GetModuleHandle(nullptr), NULL);

    if(windowHandler == NULL) { return false; }

    // Send the window handle back to the main thread
    PostThreadMessageA(parentThreadId, WM_USER, (WPARAM) windowHandler, 0);

    // Windows message loop
    for(;;) {

        MSG msg;

        PeekMessageA(&msg, NULL, 0, 0, PM_REMOVE);
        DispatchMessageA(&msg);

    }
}

This function gets called from the main entry point, which correctly receives the window handle and then tries to set up OpenGL with it. Here is the code:

int main() {

    // Basic drawing values
    int sw = 256, sh = 256, pw = 2, ph = 2;
    int ww = 0; int wh = 0;
    const char* windowName = "Window";

    // Thread stuff
    DWORD t1Id, t2Id;
    HANDLE t1Handler, t2Handler;

    // Pixel array
    Pixel* pixelBuffer = nullptr;

    // OpenGL device context to draw
    HDC glDeviceContext;
    HWND threadWindowHandler;

    t1Id = GetCurrentThreadId();

    std::thread t = std::thread(&t2main, t1Id, 0, 0, sw, sh, pw, ph, windowName);
    t.detach();

    t2Handler = t.native_handle();
    t2Id = GetThreadId(t2Handler);

    // Wait until the second thread sends back the window handle
    while(true) {

        MSG msg;

        PeekMessageA(&msg, NULL, WM_USER, WM_USER + 100, PM_REMOVE);

        if(msg.message == WM_USER) {

            threadWindowHandler = (HWND) msg.wParam;
            break;

        }

    }

    // Initialise OpenGL with the window handle that we just received
    glDeviceContext = GetDC(threadWindowHandler);

    PIXELFORMATDESCRIPTOR pfd = {
        sizeof(PIXELFORMATDESCRIPTOR), 1,
        PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
        PFD_TYPE_RGBA, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        PFD_MAIN_PLANE, 0, 0, 0, 0
    };

    int pf = ChoosePixelFormat(glDeviceContext, &pfd);
    SetPixelFormat(glDeviceContext, pf, &pfd);

    HGLRC glRenderContext = wglCreateContext(glDeviceContext);
    wglMakeCurrent(glDeviceContext, glRenderContext);

    // Create an OpenGl buffer
    GLuint glBuffer;

    glEnable(GL_TEXTURE_2D);
    glGenTextures(1, &glBuffer);
    glBindTexture(GL_TEXTURE_2D, glBuffer);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);

    // Create a pixel buffer to hold the screen data and allocate space for it
    pixelBuffer = new Pixel[sw * sh];
    for(int32_t i = 0; i < sw * sh; i++) {
        pixelBuffer[i] = Pixel();
    }

    // Test a pixel
    pixelBuffer[10 * sw + 10] = Pixel(255, 255, 255);

    // Push the current buffer into view
    glViewport(0, 0, ww, wh);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, sw, sh, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelBuffer);

    glBegin(GL_QUADS);
    glTexCoord2f(0.0, 1.0); glVertex3f(-1.0f, -1.0f, 0.0f);
    glTexCoord2f(0.0, 0.0); glVertex3f(-1.0f, 1.0f, 0.0f);
    glTexCoord2f(1.0, 0.0); glVertex3f(1.0f, 1.0f, 0.0f);
    glTexCoord2f(1.0, 1.0); glVertex3f(1.0f, -1.0f, 0.0f);
    glEnd();

    SwapBuffers(glDeviceContext);

    for(;;) {}

}

To hold the pixel information I'm using this struct:

struct Pixel {

    union {
        uint32_t n = 0xFF000000; // Default 255 alpha
        struct {
            uint8_t r;  uint8_t g;  uint8_t b;  uint8_t a;
        };
    };

    Pixel() {
        r = 0;
        g = 0;
        b = 0;
        a = 255;
    }

    Pixel(uint8_t red, uint8_t green, uint8_t blue, uint8_t alpha = 255) {
        r = red;
        g = green;
        b = blue;
        a = alpha;
    }

};

When I try to run this code I don't get the desired pixel output; instead I just get the empty window, as if OpenGL hadn't initialised correctly. When I run the same code all in one thread, I get the empty window with the pixel in it. What am I doing wrong here? Is there something I need to do before I initialise OpenGL in another thread? I appreciate all kinds of feedback. Thanks in advance.


Answer 1:


There are several issues here. Let's address them in order.

First let's recall the rules of:

OpenGL and threads

The basic rules about OpenGL with regard to windows, device contexts and threads are:

  1. An OpenGL context is not associated with a particular window or device context.

  2. You can make an OpenGL context "current" on any device context (HDC, usually associated with a window) that is compatible with the device context with which the context was originally created.

  3. An OpenGL context can be "current" on only one thread at a time, or not be active at all.

  4. To move an OpenGL context's "current state" from one thread to another you do the following (see the sketch after this list):

    • first: unmake the context "current" on the thread it is currently used on
    • second: make it "current" on the thread you want it to be current on.
  5. More than one thread (up to and including all threads) in a process can have an OpenGL context "current" at the same time.

  6. Multiple OpenGL contexts (including all of them), which by rule 5 may be current in different threads, can be current with the same device context (HDC) at the same time.

  7. There are no defined rules for drawing commands happening concurrently on different threads that are current on the same HDC. Ordering must be imposed by the user, by placing appropriate locks that work together with OpenGL synchronization primitives. Until the introduction of explicit, fine-grained synchronization objects into OpenGL, the only synchronization available was glFinish and the implicitly synchronizing OpenGL calls (e.g. glReadPixels).
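
As a minimal sketch of rules 3 and 4 using WGL (assuming hdc and hglrc were created beforehand, and that the hand-off between the threads is synchronized externally, e.g. with a mutex or condition variable):

// Thread A: release the context so another thread may take it over.
wglMakeCurrent(NULL, NULL);
// ... signal thread B that the context is now free ...

// Thread B: take over the very same context on its own thread.
wglMakeCurrent(hdc, hglrc);
// OpenGL calls issued on thread B now affect this context.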

Misconceptions in your understanding of what OpenGL does

This comes from reading the comments in your code:

int main() {

Why is your thread function called main? main is a reserved name, to be used exclusively for the process entry function. Even if your entry point is WinMain, you must not use main as a function name.

// Pixel array
Pixel* pixelBuffer = nullptr;

It's unclear what pixelBuffer is meant for later on. You copy it into a texture, but apparently don't set up the drawing to use that texture.

t1Id = GetCurrentThreadId();

std::thread t = std::thread(&t2main, t1Id, 0, 0, sw, sh, pw, ph, windowName);
t.detach();

t2Handler = t.native_handle();
t2Id = GetThreadId(t2Handler);

What, I don't even. What is this supposed to do in the first place? First things first: don't mix the Win32 threads API and C++ std::thread. Decide on one, and stick with it. Besides, t.native_handle() is called here after t.detach(), and after detaching the std::thread object no longer represents a thread of execution, so the handle you get back is meaningless.

while(true) {

    MSG msg;

    PeekMessageA(&msg, NULL, WM_USER, WM_USER + 100, PM_REMOVE);

    if(msg.message == WM_USER) {

        threadWindowHandler = (HWND) msg.wParam;
        break;

    }

}

Why the hell are you passing the window handle through a thread message? This is so wrong on so many levels. Threads all live in the same address space, so you could use a queue, or a global variable, or pass it as a parameter to the thread entry function, etc., etc.

Furthermore you could just have created the OpenGL context in the main thread and then just passed it over.
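
For example, here is a minimal sketch of handing the HWND to the main thread with a std::promise/std::future pair instead of a thread message. (The "STATIC" window class is used only so the sketch compiles without registering a class; substitute the window creation from t2main.)

#include <future>
#include <thread>
#include <windows.h>

int main() {
    std::promise<HWND> hwndPromise;
    std::future<HWND> hwndFuture = hwndPromise.get_future();

    std::thread msgThread([&hwndPromise] {
        // Create the window on this thread, then publish the handle.
        HWND hwnd = CreateWindowExA(0, "STATIC", "demo", WS_VISIBLE | WS_POPUP,
                                    0, 0, 256, 256, NULL, NULL,
                                    GetModuleHandle(NULL), NULL);
        hwndPromise.set_value(hwnd);

        // GetMessage blocks instead of busy-spinning like PeekMessage;
        // the loop runs until a WM_QUIT arrives.
        MSG msg;
        while(GetMessage(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    });

    HWND hwnd = hwndFuture.get(); // blocks until the window exists
    // ... set up the pixel format and the OpenGL context against hwnd here ...

    msgThread.join();
}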

wglMakeCurrent(glDeviceContext, glRenderContext);

// Create an OpenGl buffer
GLuint glBuffer;

glEnable(GL_TEXTURE_2D);
glGenTextures(1, &glBuffer);

That doesn't create an OpenGL buffer object, it creates a texture name.

glBindTexture(GL_TEXTURE_2D, glBuffer);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);

// Create a pixel buffer to hold the screen data and allocate space for it
pixelBuffer = new Pixel[sw * sh];

// Test a pixel
pixelBuffer[10 * sw + 10] = Pixel(255, 255, 255);

Uhh, no, you don't supply drawable buffers to OpenGL in that way. Heck, you don't even supply draw buffers to OpenGL explicitly at all (this is not D3D12, Metal or Vulkan, where you do).

// Push the current buffer into view
glViewport(0, 0, ww, wh);

Noooo. That's not what glViewport does!

glViewport is part of the transformation pipeline state; ultimately it sets the destination rectangle for where inside a drawable the clip space volume will be mapped. It does absolutely nothing with respect to the drawable's buffers.
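
For illustration, a call like the one below (with width and height standing in for the client area size) only remaps where clip space lands inside the drawable; it never selects a buffer:

// Geometry is now mapped to the lower-left quadrant of the drawable;
// which buffer gets drawn to is completely unaffected.
glViewport(0, 0, width / 2, height / 2);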

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, sw, sh, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelBuffer);

I think you don't understand what a texture is for. What this call does is copy the contents of pixelBuffer into the currently bound texture. After that, OpenGL is no longer concerned with pixelBuffer at all.
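
That also means later CPU-side writes to pixelBuffer stay invisible to OpenGL unless the data is uploaded again; a sketch, reusing the names from the question:

// Change a pixel on the CPU side ...
pixelBuffer[10 * sw + 10] = Pixel(255, 255, 255);

// ... then re-upload the affected region (here: the whole image)
// into the currently bound texture.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, sw, sh, GL_RGBA, GL_UNSIGNED_BYTE, pixelBuffer);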

glBegin(GL_QUADS);
glTexCoord2f(0.0, 1.0); glVertex3f(-1.0f, -1.0f, 0.0f);
glTexCoord2f(0.0, 0.0); glVertex3f(-1.0f, 1.0f, 0.0f);
glTexCoord2f(1.0, 0.0); glVertex3f(1.0f, 1.0f, 0.0f);
glTexCoord2f(1.0, 1.0); glVertex3f(1.0f, -1.0f, 0.0f);
glEnd();

Here you draw something, but never enabled the use of the texture in the first place. So all that ado about setting up the texture is for nothing.
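
If the intent was to show the texture on the quad, the texture has to be enabled and bound at draw time, in the context that is current on the drawing thread; a minimal sketch, reusing the question's names:

// Fixed-function texturing: enable and bind before issuing the draw calls.
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, glBuffer);

glBegin(GL_QUADS);
// ... the textured vertices from above ...
glEnd();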

SwapBuffers(glDeviceContext);

for(;;) {}

}

So after swapping the window buffers you make the thread spin forever. There are two problems with that. First, the main message loop over in the other thread still handles other messages for the window, possibly including WM_PAINT, and depending on whether you've set a background brush and/or how you handle WM_ERASEBKGND, whatever you just drew might instantly vanish thereafter.
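
As an illustration (an assumption on my part, since the question's window class uses DefWindowProc directly), a window procedure along these lines would stop GDI from erasing what OpenGL has drawn; GlWndProc is a hypothetical name:

// Hypothetical window procedure that keeps GDI from clearing the client area.
LRESULT CALLBACK GlWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) {
    switch(msg) {
    case WM_ERASEBKGND:
        return 1; // claim the background was erased; nothing is cleared
    case WM_PAINT:
        ValidateRect(hwnd, NULL); // OpenGL owns the client area, nothing to paint
        return 0;
    default:
        return DefWindowProcA(hwnd, msg, wParam, lParam);
    }
}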

And by spinning the thread you're consuming CPU time for no reason whatsoever. You could just as well end the thread.




Answer 2:


I solved the problem primarily with the help of @datenwolf's answer. First, I used a pointer variable to pass data between threads, which removed the need for PostThreadMessageA; that was the main reason I was using the WinAPI thread functions in the first place. I also changed the OpenGL code a bit and finally got what I wanted.
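
The fixed code wasn't posted, but as a hypothetical illustration of what sharing the handle through memory instead of a thread message might look like (the names and the atomic are my own, not the poster's):

#include <atomic>
#include <windows.h>

// Hypothetical shared slot for the window handle, visible to both threads.
std::atomic<HWND> sharedHwnd{ nullptr };

// Message thread, once the window exists:
//     sharedHwnd.store(windowHandler);

// Main thread, before initialising OpenGL:
//     HWND hwnd;
//     while((hwnd = sharedHwnd.load()) == nullptr) { Sleep(1); }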



Source: https://stackoverflow.com/questions/59694432/im-trying-to-use-opengl-with-the-windows-api-on-different-threads
