How does GUI output work from application to hardware level?

Question


I am getting into GUI programming and did some research, but not everything is clear to me yet. If I use GTK+ as my toolkit, how does it communicate with the graphics card?

On a Linux system I suppose it would be GTK --> X Server --(OpenGL)--> graphics card. Is this right?

I read that some GUIs draw directly with OpenGL (e.g. Blender3D), so how do other apps draw their GUIs?

If the only APIs (that I know of) for graphics cards are Direct3D and OpenGL, what is the distinction between software rendering and hardware acceleration?

Can software that does "software rendering" directly write to the framebuffer of the graphics card, so that OpenGL is untouched?

PS: sorry for the many questions, but I don't really get how all of that works. Thanks for every answer :)


Answer 1:


On a Linux system I suppose it would be GTK --> X Server --(OpenGL)--> graphics card. Is this right?

No. GTK+ on Linux goes

                                              /-[ if direct context ]---\
                     /--> OpenGL >-+---------/                           \
                    /              \-> GLX -+--------------\              \
                   /                         \              \              \
GTK+ -+-> cairo >-+---> XRender >--------+----+-> Xlib/xcb >-+-> X server >-+-> Kernel Module >-> GPU
       \           \--> pixmap buffer >-/
        \                              /
         \----------------------------/

I read that some GUIs draw directly with OpenGL (e.g. Blender3D), so how do other apps draw their GUIs?

It's just "Blender" (no trailing "3D"). Blender's GUI toolkit uses OpenGL as its only backend, yes. But the GUI is not drawn directly with OpenGL calls scattered through the application; that would be far too cumbersome to work with (drawing each and every button with raw OpenGL calls). Blender has its very own toolkit. GTK+ is another toolkit, not tied to Blender (in fact, one of my pet projects is extracting Blender's GUI toolkit so that it can be used in independent projects). A sketch of what the toolkit hides from the application follows below.
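
To give a feel for what "drawing each button with raw OpenGL calls" means, here is a minimal sketch of the kind of helper a GL-based toolkit ends up providing for a single button. It assumes an OpenGL context is already current with a pixel-space projection, uses legacy immediate mode purely for brevity, and draw_button is a hypothetical name, not actual Blender code:

    /* Sketch: what a GL-based widget toolkit has to do for a single button.
     * Assumes an OpenGL context is already current and that the projection
     * maps coordinates to window pixels; legacy immediate mode for brevity.
     * draw_button() is a hypothetical helper, not actual Blender code. */
    #include <GL/gl.h>

    void draw_button(float x, float y, float w, float h)
    {
        /* button body */
        glColor3f(0.75f, 0.75f, 0.75f);
        glBegin(GL_QUADS);
        glVertex2f(x,     y);
        glVertex2f(x + w, y);
        glVertex2f(x + w, y + h);
        glVertex2f(x,     y + h);
        glEnd();

        /* outline */
        glColor3f(0.2f, 0.2f, 0.2f);
        glBegin(GL_LINE_LOOP);
        glVertex2f(x,     y);
        glVertex2f(x + w, y);
        glVertex2f(x + w, y + h);
        glVertex2f(x,     y + h);
        glEnd();

        /* Text, icons, hover states etc. all need the same treatment, which
         * is why such a toolkit wraps this in its own widget layer rather
         * than sprinkling GL calls through the application. */
    }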

Toolkits like GTK+ and Qt are designed for maximum portability. Blender has the luxury of knowing that OpenGL will be available. Apps developed with GTK+ or Qt may have to run on systems without 3D capability, so GTK+ and Qt are designed to run on a number of backends. GTK+, now in version 3, uses the Cairo graphics library as its drawing backend. Cairo in turn has its own backends: a software rasterizer drawing into pixmaps (pixel images), or proxies that forward drawing commands to an underlying graphics architecture. In the case of Cairo on Linux that may be either OpenGL or X11 (the core protocol and the XRender extension).
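
To illustrate the software-rasterizer backend, here is a minimal sketch that has Cairo draw into an in-memory image surface, with no X server or GPU involved (the build command is an assumption based on the usual pkg-config setup):

    /* Sketch of Cairo's software-rasterizer path: drawing into an in-memory
     * pixmap (image surface) with no X server and no GPU involved.
     * Assumed build command: cc demo.c $(pkg-config --cflags --libs cairo) */
    #include <cairo.h>

    int main(void)
    {
        /* An ARGB pixel buffer owned by Cairo itself -- pure software rendering. */
        cairo_surface_t *surface =
            cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 200, 100);
        cairo_t *cr = cairo_create(surface);

        /* Fill a grey "button" rectangle, rasterized entirely on the CPU. */
        cairo_set_source_rgb(cr, 0.8, 0.8, 0.8);
        cairo_rectangle(cr, 10, 10, 180, 80);
        cairo_fill(cr);

        /* The result is just pixels; a toolkit would now hand such a buffer
         * to its window-system backend (X11, Wayland, ...) for display. */
        cairo_surface_write_to_png(surface, "button.png");

        cairo_destroy(cr);
        cairo_surface_destroy(surface);
        return 0;
    }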

If the only APIs (that I know of) for graphics cards are Direct3D and OpenGL, what is the distinction between software rendering and hardware acceleration?

Neither OpenGL nor Direct3D talks to the graphics card directly. They talk to the graphics card's driver. So the option you'd have would be talking to the driver yourself, bypassing OpenGL and Direct3D. But why would you do that? It's tedious.

On Windows you additionally have GDI and/or WPF (Windows Presentation Foundation) for drawing, and Direct2D.

On Linux you get the X11 core protocol and the XRender extension for drawing nice pictures.
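
As a sketch of that path, the following minimal Xlib program sends core-protocol drawing requests to a running X server, which does the rasterization itself; the window size and the brief sleep are arbitrary choices for the example:

    /* Sketch of drawing through the X11 core protocol with Xlib: the client
     * only sends drawing requests; the X server (and its driver) rasterizes.
     * Assumed build command: cc xdemo.c -lX11 */
    #include <X11/Xlib.h>
    #include <unistd.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);      /* connect to the X server */
        if (!dpy)
            return 1;

        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                         0, 0, 200, 100, 1,
                                         BlackPixel(dpy, scr),
                                         WhitePixel(dpy, scr));
        XSelectInput(dpy, win, ExposureMask);
        XMapWindow(dpy, win);

        /* Wait until the window is exposed, then ask the server to fill
         * a rectangle -- a core-protocol request, no OpenGL anywhere. */
        XEvent ev;
        do {
            XNextEvent(dpy, &ev);
        } while (ev.type != Expose);
        XFillRectangle(dpy, win, DefaultGC(dpy, scr), 20, 20, 160, 60);
        XFlush(dpy);

        sleep(3);                               /* keep the window up briefly */
        XCloseDisplay(dpy);
        return 0;
    }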

Another API on the rise is OpenVG, which aims to standardize all those 2D drawing APIs. At least on Linux, OpenGL and OpenVG have been selected to become the only abstract drawing APIs available in the long term, with some windowing system on top managing the framebuffer and user input. There's Wayland in development (whose design I completely dislike) and X11, which I think has the better design (it's a network-oriented system that allows for distributed execution, something I consider very important for the future), but it is in need of a complete overhaul into some "X12": cleaning out the legacy cruft, making it operate in a contact color space, and making connections transitionable (so that you can migrate clients between X servers, which would allow for a much more elegant way of locking X sessions: moving all the connections into some shadow X server instead of trying to block access with a locking screen saver).

Can software that does "software rendering" directly write to the framebuffer of the graphics card, so that OpenGL is untouched?

Not on a modern operating system. However, the OS may give you abstracted access to the graphics card through some framebuffer API (/dev/fb0 on Linux). The framebuffer is unmanaged, though: if there's an X server or Wayland running, one of them is in charge of managing the FB, and it's none of your business then.
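
For completeness, here is a minimal sketch of that framebuffer path on Linux, writing pixels straight into /dev/fb0 with no OpenGL and no display server in between; it assumes a 32 bits-per-pixel mode:

    /* Sketch of writing pixels straight into the kernel framebuffer device
     * (/dev/fb0), bypassing OpenGL and any display server. Only sensible on
     * a bare console; assumes a 32 bits-per-pixel mode for simplicity. */
    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0)
            return 1;

        struct fb_var_screeninfo vinfo;
        struct fb_fix_screeninfo finfo;
        ioctl(fd, FBIOGET_VSCREENINFO, &vinfo); /* resolution, bits per pixel */
        ioctl(fd, FBIOGET_FSCREENINFO, &finfo); /* bytes per scanline */

        size_t size = (size_t)vinfo.yres_virtual * finfo.line_length;
        uint8_t *fb = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED)
            return 1;

        /* Paint a 100x100 white square in the top-left corner. */
        for (unsigned y = 0; y < 100 && y < vinfo.yres; y++)
            for (unsigned x = 0; x < 100 && x < vinfo.xres; x++) {
                uint32_t *px = (uint32_t *)(fb + y * finfo.line_length + x * 4);
                *px = 0x00FFFFFFu;
            }

        munmap(fb, size);
        close(fd);
        return 0;
    }

Running it typically requires sitting on a text console (VT) with permission to open /dev/fb0, i.e. root or membership in the video group.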




Answer 2:


Can software that does "software rendering" directly write to the framebuffer of the graphics card, so that OpenGL is untouched?

Not quite, or more accurately: it depends on the driver. Display systems like the X server (or the Windows and macOS equivalents) don't use a fixed API but rather describe one, specifically called a driver model (or, in the X server's case, one of multiple models like XAA, EXA, UXA and DRI) that the display driver has to satisfy.

Everything that is not defined in these models has to be done in "software", i.e. calculated on the CPU. Additionally, some of the operations defined by the models may be calculated on the CPU by the driver as well.

OpenGL used to be a completely separate standard from these models. Graphics drivers didn't have to implement OpenGL at all to be used by the OS. But with the continuing integration of 3D graphics and more complex compositing in modern OSs, the driver models have started to rely on the same paradigms as OpenGL and Direct3D. For example, WDDM (the Windows Display Driver Model) outright requires Direct3D 9 support.



Source: https://stackoverflow.com/questions/8777531/how-does-gui-output-work-from-application-to-hardware-level
