matrix-multiplication

Help me solve this bug with my ray tracer

不羁的心 submitted on 2019-12-23 21:50:25
问题 Question: I'm not going to post any code for this question because it would require far too much context, but I shall explain conceptually what I'm doing. I'm building a simple ray tracer that uses affine transformations: I intersect all rays, expressed in camera coordinates, with generic shapes. Each shape has an associated affine transformation, and rays are first multiplied by the inverses of these transformations before being intersected with the scene objects. So for example, say I
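The pattern being described is common enough to sketch. The numpy snippet below (hypothetical helper names, not the asker's code) shows the usual way a ray is mapped into an object's local space with the inverse of its affine transform, which is where such bugs often hide:

    import numpy as np

    def to_local(ray_origin, ray_dir, M_inv):
        # Points use w = 1 so translation applies; directions use w = 0 so it doesn't.
        o = (M_inv @ np.append(ray_origin, 1.0))[:3]
        d = (M_inv @ np.append(ray_dir, 0.0))[:3]
        # d is deliberately left unnormalized so the hit parameter t found in
        # local space remains valid when mapped back to world space.
        return o, d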

3-D batch matrix multiplication without knowing batch size

痴心易碎 submitted on 2019-12-23 21:11:44
问题 Question: I'm currently writing a TensorFlow program that requires multiplying a batch of 2-D tensors (a 3-D tensor of shape [None, ...]) with a 2-D matrix W. This requires turning W into a 3-D tensor, which in turn requires knowing the batch size. I have not been able to do this: tf.batch_matmul is no longer usable, and x.get_shape().as_list()[0] returns None, which is invalid for a reshape/tile operation. Any suggestions? I've seen some people use config.cfg.batch_size, but I don't know what that is. Answer 1:
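Since the excerpt cuts off before the answer, here is a minimal sketch (an assumption about the intended fix, not the accepted answer) of two ways to multiply a [None, m, k] batch by a [k, n] matrix without knowing the batch size, in the TF 1.x style the question implies:

    import tensorflow as tf

    x = tf.placeholder(tf.float32, [None, 5, 7])  # batch size unknown at graph time
    W = tf.get_variable("W", [7, 3])

    y1 = tf.einsum('bik,kn->bin', x, W)        # einsum carries the batch dim through
    y2 = tf.tensordot(x, W, axes=[[2], [0]])   # contract last axis of x with first of W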

PyTorch - shape of nn.Linear weights

我怕爱的太早我们不能终老 submitted on 2019-12-23 18:52:49
问题 Question: Yesterday I came across this question and for the first time noticed that the weights of the linear layer nn.Linear need to be transposed before applying matmul. The code that applies the weights:

    output = input.matmul(weight.t())

What is the reason for this? Why aren't the weights stored in the transposed shape from the beginning, so they don't need to be transposed every time the layer is applied? Answer 1: I found an answer here: Efficient forward pass in nn.Linear #2159 It seems like there is
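For context, a minimal sketch of what the quoted line amounts to: nn.Linear stores its weight with shape [out_features, in_features], so the forward pass transposes it at matmul time.

    import torch

    inp = torch.randn(4, 10)        # [batch, in_features]
    weight = torch.randn(3, 10)     # [out_features, in_features], as nn.Linear stores it
    bias = torch.randn(3)

    out = inp.matmul(weight.t()) + bias   # shape [4, 3]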

matlab/octave - Generalized matrix multiplication

夙愿已清 submitted on 2019-12-23 07:27:25
问题 Question: I would like to write a function that generalizes matrix multiplication. It should be able to do standard matrix multiplication, but it should allow the two binary operators (product and sum) to be replaced by any other functions. The goal is to be as efficient as possible, in terms of both CPU and memory. Of course, it will always be less efficient than A*B, but operator flexibility is the point here. Here are a few commands I could come up with after reading various interesting threads: A =
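The question is about MATLAB/Octave, but the idea is compact enough to sketch in numpy (a sketch of the stated goal, not the accepted answer): broadcast the two operands into a common 3-D shape, apply the swappable "product", then reduce with the swappable "sum".

    import numpy as np

    def gen_matmul(A, B, prod=np.multiply, red=np.sum):
        # A: (m, k), B: (k, n) -> (m, n). prod is applied elementwise on a
        # broadcast (m, k, n) array; red reduces over the shared dimension k.
        return red(prod(A[:, :, None], B[None, :, :]), axis=1)

With the defaults this reproduces A @ B; passing, say, prod=np.add and red=np.min gives the min-plus ("tropical") product instead.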

Broadcasting np.dot vs tf.matmul for tensor-matrix multiplication (Shape must be rank 2 but is rank 3 error)

丶灬走出姿态 submitted on 2019-12-23 03:48:11
问题 Question: Let's say I have the following tensors:

    X = np.zeros((3, 201, 340))
    Y = np.zeros((340, 28))

Taking the dot product of X and Y succeeds with numpy and yields a tensor of shape (3, 201, 28). With TensorFlow, however, I get the error "Shape must be rank 2 but is rank 3". A minimal code example:

    X = np.zeros((3, 201, 340))
    Y = np.zeros((340, 28))
    print(np.dot(X, Y).shape)  # successful: (3, 201, 28)
    tf.matmul(X, Y)            # erroneous

Any idea how to achieve the same result with TensorFlow? Answer 1
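Since the answer is cut off, here is one standard workaround (a sketch, not necessarily the accepted answer): flatten the leading dimensions around an ordinary rank-2 matmul, then restore them.

    import numpy as np
    import tensorflow as tf

    X = tf.constant(np.zeros((3, 201, 340)), dtype=tf.float32)
    Y = tf.constant(np.zeros((340, 28)), dtype=tf.float32)

    flat = tf.reshape(X, [-1, 340])                       # (3*201, 340)
    out = tf.reshape(tf.matmul(flat, Y), [-1, 201, 28])   # back to (3, 201, 28)
    # tf.einsum('abc,cd->abd', X, Y) is an equivalent one-liner.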

Why did I need to use this statement twice - Matrix Multiplication

耗尽温柔 submitted on 2019-12-23 02:34:07
问题 Question: I was developing a program to perform matrix multiplication.

    #include <iostream>
    using namespace std;

    int main() {
        int a=0, b=0, c=0, d=0, e=0;
        cout << "Enter the order of the first matrix A \n\nNumber of Rows : ";
        cin >> a;
        cout << "\nNumber of Columns : ";
        cin >> b;
        cout << endl;
        // variable-length array: accepted by g++ as an extension, not standard C++
        int matrixA[a][b];
        cout << "Enter the matrix Elements " << endl;
        // read matrix A element by element
        for (int m = 0; m < a; m++) {
            for (int n = 0; n < b; n++) {
                cout << "A (" << m+1 << " , " << n+1 << " ) =";
                cin >> matrixA[m][n];
                // cout << ",";
            }
            cout << endl;
        }
        //////////////////////////
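The excerpt cuts off before the multiplication itself; for reference, here is a minimal Python sketch (not the asker's code) of the standard triple loop such a program builds toward:

    # Schoolbook matrix multiplication: C[i][j] = sum over k of A[i][k] * B[k][j].
    def matmul(A, B):
        rows, inner, cols = len(A), len(B), len(B[0])
        C = [[0] * cols for _ in range(rows)]
        for i in range(rows):
            for j in range(cols):
                for k in range(inner):
                    C[i][j] += A[i][k] * B[k][j]
        return C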

1-element Array to scalar in Julia

你。 submitted on 2019-12-23 01:05:24
问题 Question: Multiplying a row vector and a column vector, I expected the result to be a scalar, but it is a 1-dimensional, 1-element Array:

    julia> [1 2 3] * [4; 5; 6]
    1-element Array{Int64,1}:
     32

Question 1: What is the rationale behind this? Question 2: Accepting this as a quirk of Julia, I want to convert the 1-element Array into a scalar. Taking the first element with [1] is an option, but not very readable. What is the idiomatic way to do this? Answer 1: Every expression can be acted on, so you can use
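The Julia answer is cut off; for comparison, the same quirk and two common escapes can be shown in numpy (an analogy, not Julia):

    import numpy as np

    row = np.array([[1, 2, 3]])      # shape (1, 3)
    col = np.array([[4], [5], [6]])  # shape (3, 1)
    print(row @ col)                 # [[32]] -- still an array, like Julia's 1-element Array
    print((row @ col).item())        # 32 -- extract the scalar explicitly
    print(np.dot([1, 2, 3], [4, 5, 6]))  # 32 -- 1-D inputs give a true scalar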

bsxfun implementation in solving a min. optimization task

╄→гoц情女王★ submitted on 2019-12-23 00:50:48
问题 Question: I really need help with this one. I have two matrices, L1 and L2, both of size (500x3). First, I compute the difference of every element of each column of L1 from L2 as follows:

    lib1 = bsxfun(@minus, L1(:,1)', L2(:,1)); lib1 = lib1(:);
    lib2 = bsxfun(@minus, L1(:,2)', L2(:,2)); lib2 = lib2(:);
    lib3 = bsxfun(@minus, L1(:,3)', L2(:,3)); lib3 = lib3(:);
    LBR = [lib1 lib2 lib3];

The result is the matrix LBR. Then I have a min-problem to solve:

    [d,p] = min((LBR(:,1) - var1).^2 + (LBR(:,2) - var2
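The same bsxfun pattern translates directly to numpy broadcasting; here is a sketch (illustration only, not the asker's MATLAB code) of the pairwise column differences:

    import numpy as np

    L1 = np.random.rand(500, 3)
    L2 = np.random.rand(500, 3)

    # bsxfun(@minus, L1(:,c)', L2(:,c)) broadcasts a row against a column, so
    # entry (i, j) is L1[j, c] - L2[i, c]; (:) then stacks columns (column-major).
    LBR = np.stack(
        [(L1[None, :, c] - L2[:, None, c]).ravel(order='F') for c in range(3)],
        axis=1)  # shape (500*500, 3)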

OpenGL glMatrixMode(GL_PROJECTION) vs glMatrixMode(GL_MODELVIEW)

▼魔方 西西 submitted on 2019-12-22 18:02:21
问题 Question: What is the difference between placing glRotatef() after glMatrixMode(GL_PROJECTION);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glRotatef(red, green, blue);

and placing glRotatef() after glMatrixMode(GL_MODELVIEW);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(red, green, blue);

Answer 1: From the documentation: glMatrixMode() specifies which matrix is the current matrix. GL_MODELVIEW applies subsequent matrix operations to the modelview matrix stack. GL_PROJECTION applies
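One way to see the difference concretely: vertices are transformed as clip = P · M · v, and glRotatef post-multiplies whichever stack is current (note its actual signature is glRotatef(angle, x, y, z), four arguments, not the three shown above). A numpy sketch of the two orderings, assuming already-built P and M:

    import numpy as np

    P, M, R = np.eye(4), np.eye(4), np.eye(4)   # projection, modelview, rotation (placeholders)
    v = np.array([1.0, 0.0, 0.0, 1.0])

    clip_proj_rot = (P @ R) @ M @ v    # glRotatef issued while GL_PROJECTION is current
    clip_model_rot = P @ (M @ R) @ v   # glRotatef issued while GL_MODELVIEW is current
    # Beyond vertex positions, only the modelview matrix feeds lighting and fog,
    # so the two choices are not interchangeable in general.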

Why is the time complexity of square matrix multiplication defined as O(n^3)?

跟風遠走 submitted on 2019-12-22 06:58:14
问题 Question: I have come across this in multiple sources (online and in books): the running time of square matrix multiplication is O(n^3) for matrices of size n×n (example: matrix multiplication algorithm time complexity). This statement indicates that the upper bound on the running time of this multiplication process is C·n^3, where C is some constant and n > n0, where n0 is some input size beyond which the upper bound holds true (http://en.wikipedia.org/wiki/Big_O_notation and What is the difference between Θ(n)
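The bound is easy to make concrete: the schoolbook algorithm computes n^2 output entries, each as an inner product of n terms, so it performs exactly n·n·n multiply-add steps. A small sketch that counts them:

    def count_ops(n):
        ops = 0
        for i in range(n):           # n output rows
            for j in range(n):       # n output columns
                for k in range(n):   # n terms per inner product
                    ops += 1         # one multiply-add
        return ops  # == n ** 3

Asymptotically faster algorithms exist, e.g. Strassen's at roughly O(n^2.81), which is why O(n^3) describes the standard algorithm rather than the problem itself.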