How to create a Three.js 3D line series with width and thickness?


I cooked up a possible solution which I believe meets most of your requirements:

http://codepen.io/garciahurtado/pen/AGEsf?editors=001

The concept is fairly simple: render any arbitrary geometry in "wireframe mode", then apply a full screen GLSL shader to it to add thickness to the wireframe lines.

The shader is inspired by the blur shaders in the Three.js distro, which essentially copy the image a bunch of times along the horizontal and vertical axes. I automated that process and made the number of copies a user-defined parameter, while ensuring that each copy is offset by 1 pixel.

I used a 3D cube mesh in my demo (with an ortho camera), but it should be trivial to convert it to a polyline.
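
For reference, the wireframe part is just a standard mesh with a wireframe material, roughly like the sketch below (the geometry and sizes are illustrative, and a scene and renderer are assumed to already exist):

    // Any geometry works; what matters is that it is drawn as thin wireframe
    // lines, which the full screen pass below then thickens.
    var geometry = new THREE.BoxGeometry(100, 100, 100);
    var material = new THREE.MeshBasicMaterial({ color: 0xffffff, wireframe: true });
    var cube = new THREE.Mesh(geometry, material);
    scene.add(cube);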

The real meat and potatoes of this thing is in the custom shader (fragment shader portion):

    uniform sampler2D tDiffuse;
    uniform int edgeWidth;
    uniform int diagOffset;
    uniform float totalWidth;
    uniform float totalHeight;
    const int MAX_LINE_WIDTH = 30; // Needed due to weird limitations in GLSL around for loops
    varying vec2 vUv;

    void main() {
        int offset = int( floor(float(edgeWidth) / float(2) + 0.5) );
        vec4 color = vec4( 0.0, 0.0, 0.0, 0.0);

        // Horizontal copies of the wireframe first
        for (int i = 0; i < MAX_LINE_WIDTH; i++) {
            float uvFactor = (float(1) / totalWidth);
            float newUvX = vUv.x + float(i - offset) * uvFactor;
            float newUvY = vUv.y + (float(i - offset) * float(diagOffset) ) * uvFactor;  // only modifies vUv.y if diagOffset > 0
            color = max(color, texture2D( tDiffuse, vec2( newUvX,  newUvY  ) ));    
            // GLSL does not allow loop comparisons against dynamic variables. Workaround below
            if(i == edgeWidth) break; 
        }

        // Now we create the vertical copies
        for (int i = 0; i < MAX_LINE_WIDTH; i++) {
            float uvFactor = (float(1) / totalHeight);
            float newUvX = vUv.x + (float(i - offset) * float(-diagOffset) ) * uvFactor; // only modifies vUv.x if diagOffset > 0
            float newUvY = vUv.y + float(i - offset) * uvFactor;
            color = max(color, texture2D( tDiffuse, vec2( newUvX, newUvY ) ));  
            if(i == edgeWidth) break;
        }

        gl_FragColor = color;
    }
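
To hook this up, the fragment shader runs as a full screen pass after the normal render. Below is a minimal sketch of that wiring, assuming the EffectComposer / RenderPass / ShaderPass classes from the Three.js examples are loaded; the uniform values and the way the shader source is loaded are illustrative:

    // Shader definition: tDiffuse is filled in automatically by ShaderPass,
    // the rest matches the uniforms declared in the fragment shader above.
    var thickLineShader = {
        uniforms: {
            tDiffuse:    { type: 't', value: null },
            edgeWidth:   { type: 'i', value: 3 },
            diagOffset:  { type: 'i', value: 0 },
            totalWidth:  { type: 'f', value: window.innerWidth },
            totalHeight: { type: 'f', value: window.innerHeight }
        },
        vertexShader: [
            'varying vec2 vUv;',
            'void main() {',
            '    vUv = uv;',
            '    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);',
            '}'
        ].join('\n'),
        // Illustrative: assumes the fragment shader lives in a script tag
        fragmentShader: document.getElementById('fragmentShader').textContent
    };

    var composer = new THREE.EffectComposer(renderer);
    composer.addPass(new THREE.RenderPass(scene, camera));

    var thickLinePass = new THREE.ShaderPass(thickLineShader);
    thickLinePass.renderToScreen = true;
    composer.addPass(thickLinePass);

    // In the render loop, call composer.render() instead of renderer.render().
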

Pros:

  • No need for additional geometry beyond the line vertices
  • Line thickness is user definable
  • A full screen shader should be relatively gentle on the GPU
  • Can be implemented fully within the WebGL canvas

Cons:

  • Line thickness is close to pixel perfect on horizontal and vertical edges, but slightly off on diagonal edges. This is due to the algorithm used and is a limitation of the solution. Having said that, for low line thickness and complex geometries, this is barely noticeable with the naked eye.
  • The joints between lines will show gaps for large enough line thickness. You can play with the Codepen demo to see what I mean. I started to implement a solution to this by adding a second "diagonal pass", but it got a little hairy and I think this would only be an issue for higher line thicknesses (8+ pixels) or extreme line angles. If you are interested in this solution, you can look at the original source to see where I was going with it.
  • Since this uses a full screen filter, you can only use the WebGL context for displaying objects of this thickness. Showing various line widths would require additional rendering passes.

As a potential alternative, you could take your 3D points and use the THREE.Vector3.project method to figure out their screen-space coordinates, then simply use a 2D canvas and its lineTo and moveTo operations. The canvas 2D context does support variable line thickness.

    var w = renderer.domElement.clientWidth;   // size of the WebGL canvas in CSS pixels
    var h = renderer.domElement.clientHeight;
    vector.project(camera);                    // vector is a THREE.Vector3 in world space
    context2d.lineWidth = 3;
    var x = (vector.x + 1) * (w / 2);          // NDC range [-1, 1] -> screen space
    var y = h - (vector.y + 1) * (h / 2);      // flip Y: canvas origin is top-left
    context2d.lineTo(x, y);

Also, I don't think you can use the same canvas for that, so it would have to be a separate layer (another canvas) positioned above your WebGL rendering context canvas.
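
Here is a minimal sketch of that layering plus a projection loop over the line's points; the `overlay` element id and the `points` array of THREE.Vector3 are assumptions made for the example:

    // A 2D canvas stacked on top of the WebGL canvas.
    var glCanvas = renderer.domElement;
    var overlay = document.getElementById('overlay');
    overlay.width = glCanvas.clientWidth;
    overlay.height = glCanvas.clientHeight;
    overlay.style.position = 'absolute';
    overlay.style.left = glCanvas.offsetLeft + 'px';
    overlay.style.top = glCanvas.offsetTop + 'px';
    overlay.style.pointerEvents = 'none';       // let mouse events reach the WebGL canvas

    var context2d = overlay.getContext('2d');

    function drawPolyline(points, camera) {     // points: array of THREE.Vector3 (world space)
        var w = overlay.width, h = overlay.height;
        context2d.clearRect(0, 0, w, h);
        context2d.lineWidth = 3;
        context2d.beginPath();
        for (var i = 0; i < points.length; i++) {
            var v = points[i].clone().project(camera);   // to normalized device coordinates
            var x = (v.x + 1) * (w / 2);
            var y = h - (v.y + 1) * (h / 2);
            if (i === 0) context2d.moveTo(x, y); else context2d.lineTo(x, y);
        }
        context2d.stroke();
    }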

If your camera changes infrequently, it is also possible to construct the line out of polygons and update its vertex positions based on the camera transform. An orthographic camera works best for this, since only rotations would require vertex position manipulation.
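
A rough sketch of that idea, building one quad (two triangles) per line segment; it assumes an orthographic camera looking down the Z axis, so the perpendicular offset can be computed in the XY plane, and it uses the legacy THREE.Geometry API:

    // width is in world units; a and b are THREE.Vector3 segment endpoints.
    function segmentToQuad(a, b, width) {
        var dir = new THREE.Vector3().subVectors(b, a).normalize();
        // Perpendicular to the segment in the XY (screen) plane
        var side = new THREE.Vector3(-dir.y, dir.x, 0).multiplyScalar(width / 2);

        var geometry = new THREE.Geometry();
        geometry.vertices.push(
            new THREE.Vector3().addVectors(a, side),
            new THREE.Vector3().subVectors(a, side),
            new THREE.Vector3().addVectors(b, side),
            new THREE.Vector3().subVectors(b, side)
        );
        // Use a material with side: THREE.DoubleSide, or mind the winding order.
        geometry.faces.push(new THREE.Face3(0, 1, 2), new THREE.Face3(2, 1, 3));
        return geometry;
    }

If the camera rotates, these vertices have to be recomputed so the quads keep facing it, which is why this approach works best when camera changes are infrequent.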

Lastly, you could disable canvas clearing and draw your lines several times with small offsets arranged inside a circle or a box, then re-enable clearing. This requires several extra draw operations, but it's probably the most scalable approach.
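
A sketch of that approach, assuming the scene contains only the line object (or that you keep a separate line-only scene); the number of passes and the offset radius are illustrative:

    renderer.autoClear = false;        // clear manually, once per frame

    function renderJitteredLine(scene, camera, line) {
        renderer.clear();
        var passes = 8;
        var radius = 0.01;             // offset in world units, tune to taste
        var original = line.position.clone();
        for (var i = 0; i < passes; i++) {
            var angle = (i / passes) * Math.PI * 2;
            // Draw the same line several times, offset along a small circle
            line.position.set(
                original.x + Math.cos(angle) * radius,
                original.y + Math.sin(angle) * radius,
                original.z
            );
            renderer.render(scene, camera);
        }
        line.position.copy(original);
    }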

The reason lines don't work as you'd expect out of the box is ANGLE, which (to my knowledge) is used by Chrome and Firefox on Windows to emulate OpenGL via DirectX. The ANGLE developers state that the WebGL spec only requires support for line thicknesses up to 1, so they do not see it as a bug and don't intend to "fix" it. Line thickness should work on non-Windows OSes, though, where ANGLE is not used.
