I have the following problem to solve with WebGL:
Imagine a mesh in front of the camera. The mesh is not actually shaded, but its "silhouette" as a whole is used to
Both approaches can work.
The 'projection' one is probably more efficient and straightforward. It works in a single pass: you just need to replace the classic UV coordinates with the vertices' screen-space coordinates in your vertex shader.
attribute vec3 aPosition;
uniform mat4 uMVP;
varying vec2 vTexCoord;

void main( void ){
    // however gl_Position is computed
    gl_Position = uMVP * vec4( aPosition, 1.0 );
    // vTexCoord = aTexCoord;
    // replace the standard UVs with the vertex screen position
    vTexCoord = .5 * ( gl_Position.xy / gl_Position.w ) + .5;
}
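The fragment shader side is then just a plain texture lookup with those coordinates. Here is a minimal sketch, assuming the texture to project is bound to a sampler I'm calling uTexture:

precision mediump float;

uniform sampler2D uTexture;   // assumed name: the texture projected onto the mesh
varying vec2 vTexCoord;       // screen-space UVs from the vertex shader

void main( void ){
    gl_FragColor = texture2D( uTexture, vTexCoord );
}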
You still need to tweak those texture coordinates to respect the screen/texture aspect ratio, scale, etc.
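One possible way to do that remapping, as a sketch: the uUVScale and uUVOffset uniforms below are hypothetical names, which you would compute on the JavaScript side from your canvas and texture dimensions, then apply around the center of the screen-space UVs.

attribute vec3 aPosition;
uniform mat4 uMVP;
uniform vec2 uUVScale;    // hypothetical: e.g. (screenAspect / textureAspect, 1.0)
uniform vec2 uUVOffset;   // hypothetical: extra offset after scaling
varying vec2 vTexCoord;

void main( void ){
    gl_Position = uMVP * vec4( aPosition, 1.0 );
    // same screen-space UVs as before, rescaled around the center
    vec2 screenUV = .5 * ( gl_Position.xy / gl_Position.w ) + .5;
    vTexCoord = ( screenUV - .5 ) * uUVScale + .5 + uUVOffset;
}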