I'm not a 3D or HPC guy, but I've been tasked with doing some research into those fields for a possible HPC application. From reading benchmarks, comparisons, and specs for nVidia Quadro and GeForce cards, it seems that for cards of a similar generation:
- Quadro is 2x-3x the price of GeForce
- hardware-wise, the differences are not that great
- in benchmarks (3ds Max, Maya and some others), Quadro cards perform much better than GeForce ones
Does anyone know the exact and precise technical differences that can cause such better performance? My speculation (and what can generally be read on the net), since the hardware is of similar specs, is that it's all in the drivers. If that's the case, what kind of functionality do the Quadro drivers offer that the programmers of 3ds Max and other packages have taken advantage of?
Of course, I'm not interested in marketing speak: higher business value, professionally oriented, better support, better QA, etc.
The difference is in viewport wireframe rendering and double-sided polygon rendering, which are very common in professional CAD/3D software but not in games.
The difference is almost 10x-13x when rendering double-sided polygons and wireframes through the fixed-function rendering pipeline (now largely obsolete, but still used by some CAD software).

That's how an entry-level Quadro beats a high-end GeForce, at least in the fixed-function pipeline using legacy calls like glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE). The trick is done with driver optimization (it doesn't matter whether the fixed-function pipeline is Direct3D or OpenGL). And it's true that on some GeForce cards, firmware/hardware hacking can unlock these features.
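For reference, a minimal sketch of the legacy fixed-function state being described, assuming a compatibility-profile OpenGL context already exists (these are standard OpenGL 1.x calls; the function name is just for illustration):

```cpp
// Legacy fixed-function state for the two-sided, wireframe-style viewport
// rendering discussed above. Assumes a compatibility-profile OpenGL context.
#include <GL/gl.h>

void setupLegacyTwoSidedWireframe()
{
    // Ask the fixed-function pipeline to light both faces of every polygon.
    glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);

    // Rasterize polygons as outlines (classic viewport wireframe mode).
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);

    // Both sides must be visible, so back-face culling stays off.
    glDisable(GL_CULL_FACE);
}
```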
If double-sided rendering is implemented in shader code, the GeForce has to render each polygon twice, leaving the Quadro only about a 2x speed advantage (it's less in the real world). Wireframe rendering remains much, much slower on GeForce even when implemented in a modern way.
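Assuming the two-pass approach described above, a hedged sketch might look like this; the shader program handles and the drawMesh callback are hypothetical placeholders, and an extension loader such as GLEW is assumed for the GL 2.0+ calls:

```cpp
// One way a shader-based renderer can emulate two-sided shading, as the
// answer describes: submit the geometry twice, once per face orientation.
// GLEW (or a similar loader) is assumed for glUseProgram, and glewInit()
// must already have been called; the program handles and the drawMesh
// callback are hypothetical placeholders.
#include <GL/glew.h>
#include <functional>

void drawTwoSided(GLuint frontProgram, GLuint backProgram,
                  const std::function<void()>& drawMesh)
{
    glEnable(GL_CULL_FACE);

    // Pass 1: front faces, shaded normally.
    glCullFace(GL_BACK);
    glUseProgram(frontProgram);
    drawMesh();

    // Pass 2: back faces, typically with the normal flipped in the shader.
    glCullFace(GL_FRONT);
    glUseProgram(backProgram);
    drawMesh();
}
```

(A single-pass alternative is to read gl_FrontFacing in the fragment shader and flip the normal there, which avoids the second draw entirely.)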
Today's GeForce cards can render millions of polygons per second; drawing the wireframe as lines over faded polygons (rather than through the legacy wireframe mode) can yield a 100x speed difference, eliminating the Quadro's advantage.
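A sketch of that "lines over faded polygons" idea, using only OpenGL 1.1 calls; drawMesh is again a hypothetical callback, and polygon offset keeps the overlaid lines from z-fighting with the fill:

```cpp
// "Lines over faded polygons": draw the shaded/faded fill first, then overlay
// the same geometry as lines, separated in depth with polygon offset.
#include <GL/gl.h>

template <typename DrawFn>
void drawWireframeOverShaded(DrawFn drawMesh)
{
    // Pass 1: filled polygons, pushed slightly back in depth.
    glEnable(GL_POLYGON_OFFSET_FILL);
    glPolygonOffset(1.0f, 1.0f);
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
    drawMesh();
    glDisable(GL_POLYGON_OFFSET_FILL);

    // Pass 2: the same geometry as lines, drawn on top.
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    drawMesh();

    // Restore the default fill mode.
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
}
```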
The GeForce GTX cards equivalent to a given Quadro usually have higher clock speeds, giving 2%-10% better performance in games.
So to sum up:
The Quadro rules the legacy, now-obsolete fixed-function rendering pipeline (which CAD software uses), but this advantage can be significantly reduced by implementing modern rendering methods (there is virtually no speed gain in Maya's Viewport 2.0, which uses GLSL effects, very similar to a game engine).
Other reasons to get a Quadro are double-precision floating-point computation for scientific work, a better warranty, and display support aimed at professionals.
That's about it; price-wise, the Quadros and FirePros are artificially overpriced.
It's called market segmentation, or something like that. nVidia produces one very configurable chip and then sells it to different market segments in different configurations that have essentially the same elements, and hence the same bill of materials, although one would usually expect to find higher-quality components on the more expensive Quadro boards.
What differentiates Quadro from GeForce is that GeForce usually has its double-precision floating-point performance severely limited, e.g. to 1/4 or 1/8 of that of the Quadro/Tesla GPUs. This limitation is purely artificial and is imposed solely to differentiate the gamer/enthusiast segment from the professional segment. Lower DP performance makes GeForce boards bad candidates for things like scientific or engineering computing, and those are the markets the money streams from. Also, Quadros (arguably) have more display channels and faster RAMDACs, which allows them to drive more and higher-resolution screens, the sort of setup perceived as professional for CAD/CAM work.
As Jason Morgan has pointed out, there are tricks that can unlock some of the disabled features in GeForce to bring them on par with Quadro. Those usually involve soldering and void the warranty on the card. Since HPC puts lots of stress on the hardware and malfunctions occur more frequently than one would like, I would advise against using cheap tricks.
Hardware-wise, the Quadro and GeForce cards are often identical. Indeed, it is sometimes possible to convert some GeForce models into a Quadro simply by uploading new firmware and changing a couple of resistor jumpers.
The difference is in the intended market and hence cost.
Quadro cards are intended for CAD. High-end CAD software still uses OpenGL, whereas games and lower-end CAD software use Direct3D (aka DirectX).
Quadro cards simply have firmware that is optimised for OpenGL. In the early days OpenGL was better and faster than Direct3D, but now there is little difference. Gaming cards only support a very limited set of OpenGL features, hence they don't run it very well.
Some CAD companies, e.g. Dassault with SolidWorks, actively push high-end cards by offering no DirectX support at any level of performance.
Other CAD companies, such as Altium with Altium Designer, made the decision that forcing their customers to buy more expensive cards is not worthwhile when Direct3D is as good as (if not better than) OpenGL these days.
Because of the cost, there are often other differences in the hardware, such as less overclocking, more memory, etc., but these have relatively minor effects compared with the firmware support.
I have read that while the underlying chips are essentially the same, the design of the board is different.
Gamers want performance, and tend to favor overclocking and other tweaks that deliver high frame rates but may occasionally burn out the hardware.
Businesses want reliability, and tend to favor underclocking so they can be sure that their people can keep working.
Also, I have read that the Quadro boards use ECC memory.
If you don't know what ECC memory is about: it's a [relatively] well-known fact that memory sometimes "flips bits" (experiences errors). This does not happen too often, but it is an unavoidable consequence of the underlying physics of the memory chips and the world we live in. ECC memory adds a small percentage to the cost and a small penalty to the performance, and has enough redundancy to correct occasional errors and to detect (but not correct) somewhat rarer ones. Gamers don't care about that kind of accuracy, because for them those are just very rare visual glitches. Companies do care, because those glitches would wind up as glitches in their products or else would require more double- or triple-checking (which winds up being a 2x or 3x performance penalty for some part of their business).
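To make the ECC idea concrete, here is a toy single-error-correct / double-error-detect (SEC-DED) scheme in the spirit of what ECC memory does. Real ECC DIMMs use a wider code (typically 72 bits protecting 64 bits of data); this extended Hamming(8,4) sketch is only an illustration, and all function names are made up for the example:

```cpp
// Toy SEC-DED code: an extended Hamming(8,4) codeword that corrects any
// single flipped bit and detects (but cannot correct) any two flipped bits.
#include <cstdint>
#include <cstdio>

// Encode 4 data bits into an 8-bit codeword.
// Bit layout: 0=p0 (overall parity), 1=p1, 2=p2, 3=d0, 4=p3, 5=d1, 6=d2, 7=d3.
uint8_t encode4(uint8_t d)
{
    int d0 = (d >> 0) & 1, d1 = (d >> 1) & 1, d2 = (d >> 2) & 1, d3 = (d >> 3) & 1;
    int p1 = d0 ^ d1 ^ d3;  // covers codeword bits 1, 3, 5, 7
    int p2 = d0 ^ d2 ^ d3;  // covers codeword bits 2, 3, 6, 7
    int p3 = d1 ^ d2 ^ d3;  // covers codeword bits 4, 5, 6, 7
    uint8_t cw = (p1 << 1) | (p2 << 2) | (d0 << 3) | (p3 << 4) | (d1 << 5) | (d2 << 6) | (d3 << 7);
    int p0 = 0;
    for (int i = 1; i <= 7; ++i) p0 ^= (cw >> i) & 1;  // overall parity enables double-error detection
    return cw | p0;
}

// Decode a (possibly corrupted) codeword. Returns the 4 data bits after
// correcting a single-bit error; sets *doubleError if two flips were detected.
uint8_t decode8(uint8_t cw, bool* doubleError)
{
    int s1 = 0, s2 = 0, s4 = 0, overall = 0;
    for (int i = 1; i <= 7; ++i) {
        int b = (cw >> i) & 1;
        if (i & 1) s1 ^= b;
        if (i & 2) s2 ^= b;
        if (i & 4) s4 ^= b;
    }
    for (int i = 0; i <= 7; ++i) overall ^= (cw >> i) & 1;
    int syndrome = s1 | (s2 << 1) | (s4 << 2);  // bit position of a single error (0 = none)
    *doubleError = false;
    if (syndrome != 0 && overall == 1) {
        cw ^= (uint8_t)(1u << syndrome);        // single error: flip it back
    } else if (syndrome != 0 && overall == 0) {
        *doubleError = true;                    // two errors: detected, not correctable
    }
    return ((cw >> 3) & 1) | (((cw >> 5) & 1) << 1) | (((cw >> 6) & 1) << 2) | (((cw >> 7) & 1) << 3);
}

int main()
{
    uint8_t cw = encode4(0xB);
    cw ^= (1u << 5);                            // simulate a cosmic-ray bit flip
    bool dbl = false;
    uint8_t data = decode8(cw, &dbl);
    std::printf("data=0x%X doubleError=%d\n", (unsigned)data, (int)dbl);  // prints: data=0xB doubleError=0
}
```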
Another issue I have read about has to do with hooking up the graphics card to third-party hardware, in other words sending the images to another card or to another machine instead of to the screen. Most gamers just use canned software that has no use for such capabilities. Companies that use that kind of thing get orders-of-magnitude performance gains from the more direct connections.
Surfing the web, you will find many technical justifications for the Quadro's price. The real answer lies in the demand for reliable, task-specific graphics cards.
Imagine you run an architectural firm with many big projects on deadline. Your computers are used only to work with one specific CAD package. If the foundation of your business is supposed to rely on these computers, you want to make sure that foundation is strong.
For such clients, Nvidia engineered cards like the Quadro, providing what they call a "professional solution". And if you are among the targeted clients, you will really appreciate the reliability of these graphics cards.
Many believe GeForce cards have become powerful and reliable enough to take the Quadro's place. But in the end, it depends on the software you are mostly going to use and on how important reliability is to what you do.
Source: https://stackoverflow.com/questions/10532978/difference-between-nvidia-quadro-and-geforce-cards