3d

SceneKit: advice on reproducing glowing light trail like with Tron light cycles

China☆狼群 submitted on 2020-01-13 22:42:09

Question: The goal is to reproduce a light trail similar to the image below in SceneKit. The trail doesn't need to be as detailed, but the idea is to achieve a similar visual effect. We tried using thin cubes with an opacity of around 0.5; we strung about 200 together and attached them to a node to act as a light trail, but that was not performant at all. Another post suggests using particle emitters, but we also need to detect collisions when another object hits the trail. The class documentation says collision…

处理 3d 视频的简单理论基础

不问归期 submitted on 2020-01-13 18:57:10

Background: Our product needs to support some application scenarios with 3D features, so we need to understand how to process 3D signals. While debugging a previous product, I noticed that a 3D signal is handled as two pictures overlaid on each other.

Introduction: Why does a 3D video (or 3D signal) consist of two pictures? The answer: people have two eyes because the world seen through a single eye is not stereoscopic. Close one eye, hold a pen in each hand with the tips about a foot from your eyes, and try to touch the tips together; it is very difficult, because a single eye cannot judge depth. Open the other eye and the tips meet easily, because the two eyes view the same object from slightly different angles and produce two slightly different pictures, which the brain fuses into one three-dimensional image. A 3D camera has two lenses precisely to imitate the two eyes, capturing two pictures from slightly different angles. During playback, a matching 3D display technique delivers one picture to each of the viewer's eyes (the left lens's picture goes to the viewer's left eye, the right lens's picture to the right eye), so a stereoscopic image forms in the viewer's visual system, just as if the viewer were looking at the scene directly. That is why 3D video always consists of two pictures. Common 3D video formats include side-by-side, top-and-bottom, frame-sequential, and frame-packed. When playing 3D video on a 3D TV, set the TV's 3D mode to match the format (for side-by-side video, set the 3D mode to side-by-side, which some TVs call "parallel"; for top-and-bottom video, likewise set it to top-and-bottom; for the frame-packed format of 3D Blu-ray…
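The side-by-side format described above can be demonstrated with a short sketch: splitting one SBS frame into the two per-eye pictures. The frame size and pixel values below are made up purely for illustration.

```python
import numpy as np

# Hypothetical side-by-side (SBS) 3D frame: height x width x 3 (RGB).
# In SBS format the left-eye picture occupies the left half of the
# frame and the right-eye picture occupies the right half.
frame = np.zeros((1080, 3840, 3), dtype=np.uint8)
frame[:, :1920] = 50   # pretend left-eye content
frame[:, 1920:] = 200  # pretend right-eye content

half = frame.shape[1] // 2
left_eye = frame[:, :half]    # picture routed to the left eye
right_eye = frame[:, half:]   # picture routed to the right eye

print(left_eye.shape, right_eye.shape)  # each eye gets a 1080x1920 picture
```

A top-and-bottom frame would be split the same way along axis 0 instead of axis 1.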

How to do a space-partitioning of the Utah Teapot?

蹲街弑〆低调 submitted on 2020-01-13 16:55:37

Question: Having converted the Bézier patches into triangles, I need to build a binary space partition in order to draw the projected triangles using the painter's algorithm. I implemented the algorithm from Wikipedia, with much help on the math, but it's producing a Charlie Brown tree: most of the nodes have one branch completely empty. The whole strategy is wrong, since the teapot is essentially spherical: the entire shape is only on one "side" of any particular component…
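A small numeric sketch of why this degenerates: on a roughly spherical point set, a splitting plane taken from the surface (as happens when a surface triangle's own supporting plane is used) leaves essentially everything on one side, while a plane through the centroid splits the set roughly in half. The point cloud and helper below are illustrative, not the asker's code.

```python
import numpy as np

def classify(points, plane_point, plane_normal, eps=1e-9):
    """Return +1 / -1 / 0 for each point: in front of, behind, or on the plane."""
    d = (points - plane_point) @ plane_normal
    return np.where(d > eps, 1, np.where(d < -eps, -1, 0))

# Stand-in for the teapot: 1000 points on a unit sphere.
rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

# Splitting by the tangent plane at one surface point (the outward normal
# at that point) puts the whole shape on one side: a Charlie Brown tree.
p0 = pts[0]
sides = classify(pts, p0, p0)
print((sides > 0).sum(), (sides < 0).sum())   # heavily unbalanced

# Splitting by a plane through the centroid balances the two branches.
centroid = pts.mean(axis=0)
sides2 = classify(pts, centroid, np.array([0.0, 0.0, 1.0]))
print((sides2 > 0).sum(), (sides2 < 0).sum())  # roughly even
```

A common remedy is to score several candidate planes and pick the one that best balances the front/back triangle counts (while also minimizing the number of triangles the plane splits).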

Calculate 3D point coordinates using horizontal and vertical angles and slope distance

杀马特。学长 韩版系。学妹 submitted on 2020-01-13 13:05:15

Question: I am trying to learn how to calculate the XYZ coordinates of a point from the XYZ coordinates of an origin point, a horizontal angle, a vertical angle, and a 3D distance. I can do the calculation by projecting the points onto 2D planes, but is there a more straightforward way to do it in 3D? I am trying to understand how a surveying total station calculates new point locations based on its measured location, the 3D (slope) distance that it measures to a new point, and the measured…
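The single 3D step being asked about is a spherical-to-Cartesian conversion. The sketch below assumes one common total-station convention (azimuth measured clockwise from north, zenith angle measured down from vertical); real instruments vary, so the axis conventions here are an assumption.

```python
import math

def polar_to_xyz(origin, azimuth_deg, zenith_deg, slope_dist):
    """Convert a total-station style observation to XYZ coordinates.

    Assumed conventions (these vary by instrument):
      - azimuth measured clockwise from the +Y (north) axis,
      - zenith angle measured down from straight up (+Z),
      - slope_dist is the straight-line 3D distance.
    """
    az = math.radians(azimuth_deg)
    ze = math.radians(zenith_deg)
    horiz = slope_dist * math.sin(ze)   # horizontal component of the distance
    dx = horiz * math.sin(az)           # east offset
    dy = horiz * math.cos(az)           # north offset
    dz = slope_dist * math.cos(ze)      # up offset
    x0, y0, z0 = origin
    return (x0 + dx, y0 + dy, z0 + dz)

# A point 100 m away, due east, level with the instrument
# (azimuth 90 degrees, zenith 90 degrees):
print(polar_to_xyz((0.0, 0.0, 0.0), 90.0, 90.0, 100.0))
```

No 2D projection is needed: the two sines and two cosines carry the angles directly into the three offsets.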

How do you load Blender files using Assimp?

淺唱寂寞╮ submitted on 2020-01-13 10:37:09

Question: I tried to load a Blender file with the Assimp library from C++ using the following code, but it fails because the scene doesn't have any meshes at all. The Blender file I am using is the default cube, saved from Blender itself.

```cpp
Assimp::Importer importer;
const aiScene *scene = importer.ReadFile(path, aiProcessPreset_TargetRealtime_Fast);
if (!scene) {
    fprintf(stderr, "%s", importer.GetErrorString());
    return false;
}
const aiMesh *mesh = scene->mMeshes[0];  // Fails here since mMeshes is NULL
```

What am I doing…

Plot a 3D bar histogram with python

元气小坏坏 submitted on 2020-01-13 07:02:45

Question: I have some x and y data from which I would like to generate a 3D histogram with a color gradient (bwr or whatever). I have written a script which plots the interesting values, between -2 and 2 on both the x and y axes:

```python
import numpy as np
import numpy.random
import matplotlib.pyplot as plt

# To generate some test data
x = np.random.randn(500)
y = np.random.randn(500)
XY = np.stack((x, y), axis=-1)

def selection(XY, limitXY=[[-2, +2], [-2, +2]]):
    XY_select = []
    for elt in XY:
        if elt[0] >
```
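A hedged sketch of one way to finish this: let numpy.histogram2d do the [-2, 2] selection via its range argument (replacing the manual filtering loop), then draw the bins with Axes3D.bar3d, coloring each bar by its height with the bwr colormap. The bin count and RNG seed are my own choices.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

rng = np.random.default_rng(42)
x = rng.standard_normal(500)
y = rng.standard_normal(500)

# Bin the data inside the region of interest; `range` clips to [-2, 2].
hist, xedges, yedges = np.histogram2d(x, y, bins=20, range=[[-2, 2], [-2, 2]])

# One bar per bin for Axes3D.bar3d: bar origins, footprints, and heights.
xpos, ypos = np.meshgrid(xedges[:-1], yedges[:-1], indexing="ij")
xpos, ypos = xpos.ravel(), ypos.ravel()
zpos = np.zeros_like(xpos)
dx = dy = (xedges[1] - xedges[0]) * np.ones_like(zpos)
dz = hist.ravel()

# Color each bar by its normalized height using the bwr colormap.
colors = cm.bwr(dz / dz.max())

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.bar3d(xpos, ypos, zpos, dx, dy, dz, color=colors, shade=True)
plt.show()
```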

Matlab's slice() function not working as desired

梦想与她 submitted on 2020-01-13 05:39:06

Question: I want to plot discrete 2D images at 13 z locations, [4:4:52], using the following lines of code:

```matlab
a = 100;
[mesh.x, mesh.y, mesh.z] = meshgrid(1:1:100, 1:1:100, 4:4:52);
a_unifdist = 0;
b_unifdist = 10;
noise = a_unifdist + (b_unifdist - a_unifdist) .* rand(100, 100, 13);
c = (a ./ mesh.x) + noise;
slice(c, 1:100, 1:100, 4:4:52);
```

However, I get 13 continuous plots from 1 to 13 instead of 13 discrete locations as shown below. Could somebody tell me what my mistake is? I want the images to stack at the [4:4:52] locations…

Trouble Getting Depth Testing To Work With Apple's Metal Graphics API

流过昼夜 submitted on 2020-01-13 02:13:40

Question: I'm spending some time in the evenings trying to learn Apple's Metal graphics API. I've run into a frustrating problem, so I must be missing something fairly fundamental: I can only get rendered objects to appear on screen when depth testing is disabled, or when the depth compare function is changed to "Greater". What could possibly be going wrong, and what kinds of things can I check in order to debug this problem? Here's what I'm doing: 1) I'm using SDL to create my window. When setting up Metal…