depth

Rendering glitch with GL_DEPTH_TEST and transparent textures

Submitted by 左心房为你撑大大i on 2019-12-01 08:18:09
From one angle my shrubs look like this: From the other, they look like this: My theory is that when looking at the shrubs from the first angle, all the blocks behind the shrub have already been drawn, so when it comes time to draw the shrub, it is simply drawn over the top of them. From the other angle, however, the shrub is drawn first, and when it then goes to draw the block behind the shrub, it checks the depth buffer, sees that something is already blocking the view of the block, and doesn't render it, causing the navy blue squares (my clear color). I really have no idea how…
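The usual fix, regardless of viewing angle, is to draw all opaque geometry first and then draw transparent quads sorted back-to-front (or, for cut-out textures like shrubs, discard fully transparent fragments in the shader). A minimal Python sketch of the sorting step, with hypothetical quad records and a hypothetical camera position standing in for real scene data:

```python
# Sketch: order transparent quads back-to-front before drawing them, so
# nearer quads are written after farther ones and never block them via
# the depth buffer. The quad dicts and camera_pos are hypothetical.

def squared_distance(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def draw_order(transparent_quads, camera_pos):
    # Farthest first: the painter's algorithm for the transparent pass.
    return sorted(transparent_quads,
                  key=lambda q: squared_distance(q["center"], camera_pos),
                  reverse=True)

quads = [{"name": "near", "center": (0, 0, 1)},
         {"name": "far", "center": (0, 0, 9)}]
order = draw_order(quads, camera_pos=(0, 0, 0))
print([q["name"] for q in order])  # ['far', 'near']
```

Sorting per frame keeps the result correct from every camera angle, which is exactly what a fixed draw order cannot do.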

Aligning captured depth and rgb images

Submitted by 让人想犯罪 __ on 2019-12-01 01:34:08
There have been previous questions (here, here and here) related to mine; however, my question has an aspect I have not seen in any of the previously asked ones. I have acquired a dataset for my research using a Kinect depth sensor. This dataset is in the format of .png images for both the depth and RGB streams at a specific instant. To give you more of an idea, below are the frames. EDIT: I am adding the edge detection output here: Sobel edge detection output for the RGB image and the depth image. Now what I am trying to do is align these two frames to give me a combined RGBZ image.
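One common way to align the two frames is to back-project each depth pixel to 3D using the depth camera's intrinsics, transform it with the depth-to-RGB extrinsics, and project it with the RGB camera's intrinsics. A sketch under assumed calibration values (the intrinsics fx, fy, cx, cy and the extrinsics R, t below are hypothetical, not from a real Kinect calibration):

```python
import numpy as np

# Sketch: register one depth pixel (u, v) with depth z (meters) into the
# RGB frame. depth_K and rgb_K are (fx, fy, cx, cy) tuples; R and t are
# the depth->RGB rotation and translation. All values here are assumed.

def register_pixel(u, v, z, depth_K, rgb_K, R, t):
    fx_d, fy_d, cx_d, cy_d = depth_K
    fx_r, fy_r, cx_r, cy_r = rgb_K
    # Back-project the depth pixel to a 3D point in the depth camera frame.
    p = np.array([(u - cx_d) * z / fx_d, (v - cy_d) * z / fy_d, z])
    # Move into the RGB camera frame, then project with its intrinsics.
    q = R @ p + t
    return (fx_r * q[0] / q[2] + cx_r, fy_r * q[1] / q[2] + cy_r)

# Sanity check: identical intrinsics and an identity extrinsic transform
# map a pixel onto itself.
K = (580.0, 580.0, 320.0, 240.0)
uv = register_pixel(100, 200, 1.5, K, K, np.eye(3), np.zeros(3))
print(uv)  # (100.0, 200.0)
```

Looping this over every valid depth pixel (and splatting into the RGB image grid) yields the registered depth map from which the combined RGBZ image can be assembled.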

[LC] Problem 111: Minimum Depth of Binary Tree (recursion)

Submitted by 随声附和 on 2019-11-30 19:43:58
① Problem: Given a binary tree, find its minimum depth. The minimum depth is the number of nodes along the shortest path from the root node down to the nearest leaf node. Note: a leaf is a node with no children. Example: given the binary tree [3,9,20,null,null,15,7], return its minimum depth, 2.

② Approach: depth-first search.

③ Code (line numbers kept because the comments refer to them):

 1 class Solution {
 2     public int minDepth(TreeNode root) {
 3         if (root == null) {
 4             return 0;
 5         }
 6         if ((root.left == null) && (root.right == null)) {
 7             return 1; // when the current node is a leaf, return 1 and exit this recursive call, skipping the +1 on line 17
 8         }
 9
10         int min_depth = Integer.MAX_VALUE; // since we want the minimum depth, start from the largest int; this echoes line 3 of the solution to problem 530 and line 4 of problem 783
11         if (root.left != null) {
12             min_depth = Math.min(minDepth(root.left), min_depth);
13         }
14         if (root.right != null) {
15             min_depth = Math.min(minDepth(root.right), min_depth);
16         }
17         return min_depth + 1;
18     }
19 }
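A sketch of the same recursion in Python, with a minimal stand-in for LeetCode's TreeNode class:

```python
# Sketch: minimum depth of a binary tree, mirroring the Java recursion.
# TreeNode is a minimal stand-in for LeetCode's class.

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def min_depth(root):
    if root is None:
        return 0
    if root.left is None and root.right is None:
        return 1  # leaf: path ends here, no +1 below
    best = float("inf")  # start from infinity, as the Java code starts from Integer.MAX_VALUE
    if root.left is not None:
        best = min(min_depth(root.left), best)
    if root.right is not None:
        best = min(min_depth(root.right), best)
    return best + 1

# The tree [3,9,20,null,null,15,7] from the problem statement:
root = TreeNode(3, TreeNode(9), TreeNode(20, TreeNode(15), TreeNode(7)))
print(min_depth(root))  # 2
```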

Know the depth of a dictionary

Submitted by 纵饮孤独 on 2019-11-30 17:04:58
Supposing we have this dict: d = {'a':1, 'b': {'c':{}}} What would be the most straightforward way of knowing its nesting depth? You'll have to traverse the dictionary. You could do so with a queue; the following should be safe from circular references:

from collections import deque

def depth(d):
    queue = deque([(id(d), d, 1)])
    memo = set()
    while queue:
        id_, o, level = queue.popleft()
        if id_ in memo:
            continue
        memo.add(id_)
        if isinstance(o, dict):
            queue += ((id(v), v, level + 1) for v in o.values())
    return level

Note that because we visit all dictionary values in breadth-first order, the…

Precision of the kinect depth camera

Submitted by 一个人想着一个人 on 2019-11-30 10:19:37
Question: How precise is the depth camera in the Kinect? Range? Resolution? Noise? In particular I'd like to know: Are there any official specs about it from Microsoft? Are there any scientific papers on the subject? Investigations from tech blogs? Personal experiments that are easy to reproduce? I've been collecting data for about a day now, but most of the writers don't name their sources and the values seem to differ quite a bit…

Answer 1: Range: ~50 cm to 5 m. Can get closer (~40 cm) in parts, but can't have the…
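Part of why reported values differ is that depth quantization on a triangulation sensor grows roughly quadratically with distance: differentiating z = f·b/d gives Δz ≈ z²·Δd/(f·b). A sketch with hypothetical round numbers for focal length, baseline, and disparity noise (illustrative, not official Kinect specs):

```python
# Sketch: quadratic growth of triangulation depth error with range.
# f (focal length in px), b (baseline in m) and the disparity error
# are hypothetical round numbers, not official Kinect specifications.

def depth_error(z, f=580.0, b=0.075, disparity_error_px=0.1):
    # From z = f*b/d: |dz| ~= z**2 * |dd| / (f * b)
    return z * z * disparity_error_px / (f * b)

for z in (1.0, 3.0, 5.0):
    print(f"{z} m -> ~{depth_error(z) * 1000:.1f} mm")
```

The quadratic term explains why sources agree near 1 m but diverge badly for measurements taken at the far end of the range.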

Using OpenCV to generate 3d points (assuming frontal parallel configuration)

Submitted by 五迷三道 on 2019-11-30 09:51:07
I am currently trying to generate 3D points given a stereo image pair in OpenCV. This has been done quite a bit, as far as I can tell from searching. I know the extrinsic parameters of the stereo setup, which I'm going to assume is in a frontal-parallel configuration (really, it isn't that bad!). I know the focal length and baseline, and I'm going to assume the principal point is at the center of the image (I know, I know...). I calculate a pseudo-decent disparity map using StereoSGBM and hand-coded the Q matrix following O'Reilly's Learning OpenCV book, which specifies:

Q = [ 1  0   0      -c_x
      0  1   0      -c_y
      0  0   0       f
      0  0  -1/T_x   …  ]
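The reprojection itself is a matrix multiply followed by a perspective divide, the same thing cv2.reprojectImageTo3D does: [X Y Z W]ᵀ = Q·[x y d 1]ᵀ, and the 3D point is (X/W, Y/W, Z/W). A sketch with hypothetical c_x, c_y, f, and T_x values, taking the bottom-right entry of Q as zero (i.e. matching principal points in both cameras):

```python
import numpy as np

# Sketch: reproject a pixel + disparity to a 3D point with a hand-built
# Q matrix in the book's layout. c_x, c_y, f (px) and T_x (m, baseline)
# are hypothetical values for a frontal-parallel rig.

c_x, c_y, f, T_x = 320.0, 240.0, 580.0, 0.075

Q = np.array([[1.0, 0.0,  0.0,      -c_x],
              [0.0, 1.0,  0.0,      -c_y],
              [0.0, 0.0,  0.0,       f  ],
              [0.0, 0.0, -1.0 / T_x, 0.0]])

def to_3d(x, y, d):
    X, Y, Z, W = Q @ np.array([x, y, d, 1.0])
    return X / W, Y / W, Z / W

# On the optical axis the depth magnitude is f * T_x / d; the sign of z
# depends on the disparity convention, so only the magnitude is checked.
print(to_3d(320, 240, 58.0))
```

For a disparity of 58 px this gives a depth magnitude of 580 · 0.075 / 58 = 0.75 m, which is a quick way to sanity-check a hand-coded Q against the known baseline and focal length.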

Finding the product of a variable number of Ruby arrays

Submitted by 自古美人都是妖i on 2019-11-30 04:58:04
Question: I'm looking to find all combinations of single items from a variable number of arrays. How do I do this in Ruby? Given two arrays, I can use Array.product like this:

groups = []
groups[0] = ["hello", "goodbye"]
groups[1] = ["world", "everyone"]
combinations = groups[0].product(groups[1])
puts combinations.inspect
# [["hello", "world"], ["hello", "everyone"], ["goodbye", "world"], ["goodbye", "everyone"]]

How could this code work when groups contains a variable number of arrays?

Answer 1: groups =…
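The question's Array.product builds a Cartesian product; with a variable number of arrays the goal is the n-ary product of all of them. The same shape of result, sketched in Python with itertools.product:

```python
from itertools import product

# Sketch: n-ary Cartesian product over a variable number of lists,
# producing the same shape of result as the Ruby Array#product call.

groups = [["hello", "goodbye"], ["world", "everyone"]]
combinations = [list(c) for c in product(*groups)]
print(combinations)
# [['hello', 'world'], ['hello', 'everyone'],
#  ['goodbye', 'world'], ['goodbye', 'everyone']]
```

Because product takes any number of iterables, appending a third list to groups immediately yields triples with no code change, which is exactly the variable-arity behavior the question asks for.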

How to train an Inception network

Submitted by 痞子三分冷 on 2019-11-29 23:45:29
Honestly I'm a little nervous writing this up, because I don't know whether what I did is correct. My computer's GPU isn't up to it; I only got results for two epochs before it couldn't run any further, so I can't tell whether the program really has a problem. First, bring in the original inception_v3 model to obtain the logits.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf
from VGG16 import inception_utils

slim = tf.contrib.slim
trunc_normal = lambda stddev: tf.truncated_normal_initializer(0.0, stddev)

def inception_v3_base(inputs, final_endpoint='Mixed_7c', min_depth=16,
                      depth_multiplier=1.0, scope=None):
    end_points = {}
    if depth_multiplier <= 0:
        raise ValueError('depth_multiplier is not greater than zero.')
    depth = lambda d: max(int(d * depth_multiplier), min_depth)
    …
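The depth = ... helper that ends the excerpt scales each layer's nominal channel count by depth_multiplier while flooring it at min_depth (this is the pattern used throughout the TF-Slim inception_v3 source). Isolated, it behaves like this:

```python
# Sketch: the channel-scaling helper from inception_v3_base in isolation.
# It multiplies a nominal channel count by depth_multiplier and never
# lets the result drop below min_depth.

def make_depth_fn(depth_multiplier=1.0, min_depth=16):
    return lambda d: max(int(d * depth_multiplier), min_depth)

depth = make_depth_fn(depth_multiplier=0.5)
print(depth(64), depth(16))  # 32 16
```

With depth_multiplier below 1.0 the whole network shrinks uniformly, which is a cheap way to fit training onto a weak GPU like the one described above.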