We generate graphs for huge datasets. We are talking 4096 samples per second, and 10 minutes per graph. A simple calculation gives 4096 * 60 * 10 = 2,457,600 samples per line.
You don't need to eliminate points from your actual dataset, but you can surely incrementally refine it when the user zooms in. It does you no good to render 25 million points to a single screen when the user can't possibly process all that data. I would recommend that you take a look at both the VTK library and the VTK user guide, as there's some invaluable information in there on ways to visualize large datasets.
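To make the "incrementally refine when the user zooms" idea concrete, here is a minimal sketch (not VTK's API, just plain NumPy, and the function name `minmax_decimate` is my own) of min/max decimation: each screen-width bin keeps only its minimum and maximum sample, so the plotted envelope looks the same as the full-resolution trace while the point count drops from millions to a few thousand.

```python
import numpy as np

def minmax_decimate(samples, target_bins):
    """Reduce a long 1-D signal to per-bin (min, max) pairs that preserve
    the visual envelope of the full-resolution trace."""
    n = len(samples)
    bin_size = max(1, n // target_bins)
    usable = (n // bin_size) * bin_size        # drop the ragged tail
    chunks = samples[:usable].reshape(-1, bin_size)
    return chunks.min(axis=1), chunks.max(axis=1)

# 10 minutes at 4096 Hz -> ~2.5 million samples, reduced to ~2000 bins
signal = np.random.randn(4096 * 60 * 10)
lo, hi = minmax_decimate(signal, target_bins=2000)
```

On zoom, you would re-run the decimation on just the visible window, so detail appears exactly where the user is looking.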
Thank you very much. This is exactly what I was looking for. It seems VTK can also use hardware acceleration to offload this kind of rendering. Btw, I guess you mean valuable ;). Second, the user does get information out of the example I gave, just not very precisely; the zoomed-out overview of the data can be pure gold for the scientist. It is not about the user processing all the data, it is about extracting valuable information from the rendering. Users seem to do this even in a very 'zoomed out' representation of the dataset.
Any more suggestions?