Defining a gradient with respect to a subtensor in Theano
I have what is conceptually a simple question about Theano, but I haven't been able to find the answer (I'll confess upfront to not really understanding how shared variables work in Theano, despite many hours with the tutorials).

I'm trying to implement a "deconvolutional network"; specifically, I have a 3-tensor of inputs (each input is a 2D image) and a 4-tensor of codes; for the ith input, codes[i] represents a set of codewords which together code for input i. I've been having a lot of trouble figuring out how to do gradient descent on the codewords. Here are the relevant parts of my code:

    idx
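To make the shapes concrete, here is a minimal, self-contained sketch of the kind of setup described above (the shared-variable names, the sizes, and the toy sum-of-codewords reconstruction are assumptions for illustration, not the actual model):

    # A minimal sketch, assuming shapes and names not given in the question:
    # inputs is a 3-tensor of images, codes is a 4-tensor of per-example
    # codewords, and we take one gradient step on the codewords of example i.
    import numpy as np
    import theano
    import theano.tensor as T

    n_examples, n_codewords, h, w = 10, 4, 8, 8
    rng = np.random.RandomState(0)

    # Shared variables hold the data we iterate over and descend on.
    inputs = theano.shared(
        rng.randn(n_examples, h, w).astype(theano.config.floatX))
    codes = theano.shared(
        np.zeros((n_examples, n_codewords, h, w), dtype=theano.config.floatX))

    i = T.lscalar('i')    # index of the current example
    lr = T.scalar('lr')   # learning rate

    # Toy reconstruction of input i from its codewords (a stand-in for the
    # dictionary convolution a real deconvolutional network would use).
    recon = codes[i].sum(axis=0)
    loss = T.sum((inputs[i] - recon) ** 2)

    # Differentiate with respect to the whole shared codes tensor, then
    # write back only row i of the update with T.set_subtensor.
    g_codes = T.grad(loss, codes)
    step = theano.function(
        [i, lr], loss,
        updates=[(codes, T.set_subtensor(codes[i], codes[i] - lr * g_codes[i]))])

In this sketch, calling step(0, 0.1) returns the loss for example 0 and updates only that example's codewords in place; the gradient is taken with respect to the full codes tensor rather than the subtensor codes[i], which sidesteps differentiating with respect to a subtensor directly.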