Performantly render tens of thousands of spheres of variable size/color/position in Three.js?

Submitted by 徘徊边缘 on 2020-03-26 04:04:36

Question


This question picks up from my last question, where I found that using Points leads to problems: https://stackoverflow.com/a/60306638/4749956

To solve this you'll need to draw your points using quads instead of points. There are many ways to do that. Draw each quad as a separate mesh or sprite, or merge all the quads into another mesh, or use InstancedMesh where you'll need a matrix per point, or write custom shaders to do points (see the last example in this article).

I've been trying to figure this answer out. My questions are:

What is 'instancing'? What is the difference between merging geometries and instancing? And, if I were to do either one of these, what geometry would I use and how would I vary color? I've been looking at this example:

https://github.com/mrdoob/three.js/blob/master/examples/webgl_instancing_performance.html

And I see that for each sphere you would have a geometry that applies the position and the size (scale?). Would the underlying geometry then be a SphereBufferGeometry of unit radius? But how do you apply color?

Also, I read about the custom shader method, and it makes some vague sense, but it seems more complex. Would its performance be any better than the above?


Answer 1:


Based on your previous question...

First off, instancing is a way to tell three.js to draw the same geometry multiple times while changing one or more things for each "instance". IIRC the only thing three.js supports out-of-the-box is setting a different matrix (position, orientation, scale) for each instance. Past that, for example to have different colors, you have to write custom shaders.
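As a minimal sketch of that out-of-the-box route (the sphere detail, instance count, and ranges below are made-up values, and it assumes an existing scene):

// draw the same low-poly unit sphere many times, varying only the matrix
const sphereGeo = new THREE.SphereBufferGeometry(1, 8, 6);
const sphereMat = new THREE.MeshBasicMaterial({ color: 0x44aa88 });
const count = 10000;
const instanced = new THREE.InstancedMesh(sphereGeo, sphereMat, count);

const matrix = new THREE.Matrix4();
for (let i = 0; i < count; ++i) {
  const size = 0.5 + Math.random();   // per-instance scale
  matrix.makeScale(size, size, size);
  matrix.setPosition(                 // per-instance position
    (Math.random() - 0.5) * 100,
    (Math.random() - 0.5) * 100,
    (Math.random() - 0.5) * 100
  );
  instanced.setMatrixAt(i, matrix);   // the per-instance matrix is all you get for free
}
scene.add(instanced); // one draw call for all 10,000 spheres

Per-instance color is exactly what this route doesn't give you out-of-the-box, which is where the custom shader below comes in.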

Instancing allows you to ask the system to draw many things with one "ask" instead of an "ask" per thing, which ends up being much faster. If you want 3 hamburgers, you could ask someone to make you 1; when they finished you could ask them to make another, and when they finished that, ask for the third. That would be much slower than just asking them to make 3 hamburgers at the start. It's not a perfect analogy, but it does show how asking for multiple things one at a time is less efficient than asking for multiple things all at once.

Merging meshes is yet another solution. Following the (admittedly bad) analogy above, merging meshes is like making one big 1-pound hamburger instead of three 1/3-pound hamburgers. Flipping one larger burger and putting toppings and buns on it is marginally faster than doing the same for 3 small burgers.

As for which is the best solution for you, that depends. In your original code you were just drawing textured quads using Points. Points always draw their quad in screen space. Meshes, on the other hand, rotate in world space by default, so if you made instances of quads, or a merged set of quads, and tried to rotate them, they would turn and not face the camera the way Points do. If you used sphere geometry you'd have the issue that, instead of computing only 6 vertices per quad with a circle drawn on it, you'd be computing 100s or 1000s of vertices per sphere, which would be slower than 6 vertices per quad.

So again, it requires a custom shader to keep the quads facing the camera.

To do it with instancing, the short version is: you decide which vertex data are repeated for each instance. For a textured quad, that's 6 vertex positions and 6 UVs. For these you make normal BufferAttributes.

Then you decide which vertex data are unique to each instance. In your case that's the size, the color, and the center of the point. For each of these you make an InstancedBufferAttribute.

We add all of those attributes to an InstancedBufferGeometry and, as the last argument to InstancedMesh, tell it how many instances to draw.

At draw time you can think of it like this:

  • for each instance
    • set size to the next value in the size attribute
    • set color to the next value in the color attribute
    • set center to the next value in the center attribute
    • call the vertex shader 6 times, with position and uv set to the nth value in their attributes.

In this way you get the same geometry (the positions and UVs) used multiple times, while a few values (size, color, center) change once per instance. Below is a working example:

body {
  margin: 0;
}
#c {
  width: 100vw;
  height: 100vh;
  display: block;
}
#info {
  position: absolute;
  right: 0;
  bottom: 0;
  color: red;
  background: black;
}
<canvas id="c"></canvas>
<div id="info"></div>
<script type="module">
// Three.js - GPU Picking
// based on https://threejsfundamentals.org/threejs/threejs-picking-gpu.html

import * as THREE from "https://threejsfundamentals.org/threejs/resources/threejs/r113/build/three.module.js";

function main() {
  const infoElem = document.querySelector("#info");
  const canvas = document.querySelector("#c");
  const renderer = new THREE.WebGLRenderer({ canvas });

  const fov = 60;
  const aspect = 2; // the canvas default
  const near = 0.1;
  const far = 200;
  const camera = new THREE.PerspectiveCamera(fov, aspect, near, far);
  camera.position.z = 30;

  const scene = new THREE.Scene();
  scene.background = new THREE.Color(0);
  const pickingScene = new THREE.Scene();
  pickingScene.background = new THREE.Color(0);

  // put the camera on a pole (parent it to an object)
  // so we can spin the pole to move the camera around the scene
  const cameraPole = new THREE.Object3D();
  scene.add(cameraPole);
  cameraPole.add(camera);

  function randomNormalizedColor() {
    return Math.random();
  }

  function getRandomInt(n) {
    return Math.floor(Math.random() * n);
  }

  function getCanvasRelativePosition(e) {
    const rect = canvas.getBoundingClientRect();
    return {
      x: e.clientX - rect.left,
      y: e.clientY - rect.top
    };
  }

  const textureLoader = new THREE.TextureLoader();
  const particleTexture =
    "https://raw.githubusercontent.com/mrdoob/three.js/master/examples/textures/sprites/ball.png";

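  // The vertex shader below "billboards" each quad: the instance's center is
  // transformed into view space, then the quad corner (position, scaled by
  // size) is added as a view-space offset so the quad always faces the camera.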
  const vertexShader = `
    attribute float size;
    attribute vec3 customColor;
    attribute vec3 center;

    varying vec3 vColor;
    varying vec2 vUv;

    void main() {
        vColor = customColor;
        vUv = uv;
        vec3 viewOffset = position * size ;
        vec4 mvPosition = modelViewMatrix * vec4(center, 1) + vec4(viewOffset, 0);
        gl_Position = projectionMatrix * mvPosition;
    }
`;

  const fragmentShader = `
    uniform sampler2D texture;
    varying vec3 vColor;
    varying vec2 vUv;

    void main() {
        vec4 tColor = texture2D(texture, vUv);
        if (tColor.a < 0.5) discard;
        gl_FragColor = mix(vec4(vColor.rgb, 1.0), tColor, 0.1);
    }
`;

  const pickFragmentShader = `
    uniform sampler2D texture;
    varying vec3 vColor;
    varying vec2 vUv;

    void main() {
      vec4 tColor = texture2D(texture, vUv);
      if (tColor.a < 0.25) discard;
      gl_FragColor = vec4(vColor.rgb, 1.0);
    }
`;

  const materialSettings = {
    uniforms: {
      texture: {
        type: "t",
        value: textureLoader.load(particleTexture)
      }
    },
    vertexShader: vertexShader,
    fragmentShader: fragmentShader,
    blending: THREE.NormalBlending,
    depthTest: true,
    transparent: false
  };

  const createParticleMaterial = () => {
    const material = new THREE.ShaderMaterial(materialSettings);
    return material;
  };

  const createPickingMaterial = () => {
    const material = new THREE.ShaderMaterial({
      ...materialSettings,
      fragmentShader: pickFragmentShader,
      blending: THREE.NormalBlending
    });
    return material;
  };

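  // Two instanced geometries: one for normal rendering, one for GPU picking.
  // They share the same quad, sizes, and centers; only the per-instance
  // colors differ (display colors vs. id-encoded picking colors).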
  const geometry = new THREE.InstancedBufferGeometry();
  const pickingGeometry = new THREE.InstancedBufferGeometry();
  const colors = [];
  const sizes = [];
  const pickingColors = [];
  const pickingColor = new THREE.Color();
  const centers = [];
  const numSpheres = 30;

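  // A unit quad built from two triangles, stored as (x, y) pairs (itemSize 2);
  // the vertex shader expands it per instance using center and size.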
  const positions = [
    -0.5, -0.5,
     0.5, -0.5,
    -0.5,  0.5,
    -0.5,  0.5,
     0.5, -0.5,
     0.5,  0.5,
  ];

  const uvs = [
     0, 0,
     1, 0,
     0, 1,
     0, 1,
     1, 0,
     1, 1,
  ];

  for (let i = 0; i < numSpheres; i++) {
    colors[3 * i] = randomNormalizedColor();
    colors[3 * i + 1] = randomNormalizedColor();
    colors[3 * i + 2] = randomNormalizedColor();

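    // picking ids start at 1 so that 0 (the black background) means "nothing there"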
    const rgbPickingColor = pickingColor.setHex(i + 1);
    pickingColors[3 * i] = rgbPickingColor.r;
    pickingColors[3 * i + 1] = rgbPickingColor.g;
    pickingColors[3 * i + 2] = rgbPickingColor.b;

    sizes[i] = getRandomInt(5);

    centers[3 * i] = getRandomInt(20);
    centers[3 * i + 1] = getRandomInt(20);
    centers[3 * i + 2] = getRandomInt(20);
  }

  geometry.setAttribute(
    "position",
    new THREE.Float32BufferAttribute(positions, 2)
  );
  geometry.setAttribute(
    "uv",
    new THREE.Float32BufferAttribute(uvs, 2)
  );
  geometry.setAttribute(
    "customColor",
    new THREE.InstancedBufferAttribute(new Float32Array(colors), 3)
  );
  geometry.setAttribute(
    "center",
    new THREE.InstancedBufferAttribute(new Float32Array(centers), 3)
  );
  geometry.setAttribute(
    "size",
    new THREE.InstancedBufferAttribute(new Float32Array(sizes), 1));

  const material = createParticleMaterial();
  const points = new THREE.InstancedMesh(geometry, material, numSpheres);

  // setup geometry and material for GPU picking
  pickingGeometry.setAttribute(
    "position",
    new THREE.Float32BufferAttribute(positions, 2)
  );
  pickingGeometry.setAttribute(
    "uv",
    new THREE.Float32BufferAttribute(uvs, 2)
  );
  pickingGeometry.setAttribute(
    "customColor",
    new THREE.InstancedBufferAttribute(new Float32Array(pickingColors), 3)
  );
  pickingGeometry.setAttribute(
    "center",
    new THREE.InstancedBufferAttribute(new Float32Array(centers), 3)
  );
  pickingGeometry.setAttribute(
    "size",
    new THREE.InstancedBufferAttribute(new Float32Array(sizes), 1)
  );

  const pickingMaterial = createPickingMaterial();
  const pickingPoints = new THREE.InstancedMesh(pickingGeometry, pickingMaterial, numSpheres);

  scene.add(points);
  pickingScene.add(pickingPoints);

  function resizeRendererToDisplaySize(renderer) {
    const canvas = renderer.domElement;
    const width = canvas.clientWidth;
    const height = canvas.clientHeight;
    const needResize = canvas.width !== width || canvas.height !== height;
    if (needResize) {
      renderer.setSize(width, height, false);
    }
    return needResize;
  }

  class GPUPickHelper {
    constructor() {
      // create a 1x1 pixel render target
      this.pickingTexture = new THREE.WebGLRenderTarget(1, 1);
      this.pixelBuffer = new Uint8Array(4);
    }
    pick(cssPosition, pickingScene, camera) {
      const { pickingTexture, pixelBuffer } = this;

      // set the view offset to represent just a single pixel under the mouse
      const pixelRatio = renderer.getPixelRatio();
      camera.setViewOffset(
        renderer.getContext().drawingBufferWidth, // full width
        renderer.getContext().drawingBufferHeight, // full height
        (cssPosition.x * pixelRatio) | 0, // rect x
        (cssPosition.y * pixelRatio) | 0, // rect y
        1, // rect width
        1 // rect height
      );
      // render the scene
      renderer.setRenderTarget(pickingTexture);
      renderer.render(pickingScene, camera);
      renderer.setRenderTarget(null);
      // clear the view offset so rendering returns to normal
      camera.clearViewOffset();
      //read the pixel
      renderer.readRenderTargetPixels(
        pickingTexture,
        0, // x
        0, // y
        1, // width
        1, // height
        pixelBuffer
      );

      const id =
        (pixelBuffer[0] << 16) | (pixelBuffer[1] << 8) | pixelBuffer[2];

      infoElem.textContent = `You clicked sphere number ${id}`;

      return id;
    }
  }

  const pickHelper = new GPUPickHelper();

  function render(time) {
    time *= 0.001; // convert to seconds;

    if (resizeRendererToDisplaySize(renderer)) {
      const canvas = renderer.domElement;
      camera.aspect = canvas.clientWidth / canvas.clientHeight;
      camera.updateProjectionMatrix();
    }

    cameraPole.rotation.y = time * 0.1;

    renderer.render(scene, camera);

    requestAnimationFrame(render);
  }
  requestAnimationFrame(render);

  function onClick(e) {
    const pickPosition = getCanvasRelativePosition(e);
    const pickedID = pickHelper.pick(pickPosition, pickingScene, camera);
  }

  function onTouch(e) {
    const touch = e.touches[0];
    const pickPosition = getCanvasRelativePosition(touch);
    const pickedID = pickHelper.pick(pickPosition, pickingScene, camera);
  }

  window.addEventListener("mousedown", onClick);
  window.addEventListener("touchstart", onTouch);
}

main();
</script>



Answer 2:


This is quite a broad topic. In short, both merging and instancing are about reducing the number of draw calls when rendering something.

If you bind your sphere geometry once but then issue a separate draw call for every sphere, it costs more to tell the computer to draw them than it costs the computer to actually draw them. You end up with the GPU, a powerful parallel processing device, sitting idle.

Obviously, if you create a unique sphere at each point in space and merge them all into one geometry, you pay the price of telling the GPU to render just once, and it stays busy rendering thousands of your spheres.
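A minimal sketch of that merging approach (this assumes the BufferGeometryUtils helper that ships with the three.js examples; around r113 the function was called mergeBufferGeometries, and it has since been renamed, so check your version):

// from three.js examples/jsm/utils/BufferGeometryUtils.js; adjust the path for your setup
import { BufferGeometryUtils } from "./BufferGeometryUtils.js";

const geometries = [];
for (let i = 0; i < 1000; ++i) {
  const geo = new THREE.SphereBufferGeometry(1, 8, 6);
  const size = 0.5 + Math.random();
  geo.scale(size, size, size);  // bake each sphere's size...
  geo.translate(                // ...and position into its vertex data
    (Math.random() - 0.5) * 100,
    (Math.random() - 0.5) * 100,
    (Math.random() - 0.5) * 100
  );
  geometries.push(geo);
}

// one big geometry, one draw call; per-sphere color could be baked in the
// same way, as a per-vertex "color" attribute added before merging
const merged = BufferGeometryUtils.mergeBufferGeometries(geometries);
scene.add(new THREE.Mesh(merged, new THREE.MeshBasicMaterial()));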

However, merging increases your memory footprint, and there is some overhead when you actually create the unique data. Instancing is a built-in, clever way of achieving the same effect at a fraction of the memory cost.

I've written an article on this topic.



Source: https://stackoverflow.com/questions/60313368/performantly-render-tens-of-thousands-of-spheres-of-variable-size-color-position
