How to reuse computation graph for different inputs?


Question


I have my main flow of computation set up that I can train using

train = theano.function(inputs=[x], outputs=[cost], updates=updates)

Similarly, I have a function for predictions

predict = theano.function(inputs=[x], outputs=[output])

Both of these functions accept the input x and send it through the same computation graph.
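For context, here is a minimal, self-contained sketch of that setup. The single linear layer and its parameters w and b are placeholders invented just so the two functions compile; they are not part of the actual model in question:

import numpy
import theano
import theano.tensor as tt

x = tt.matrix('x')
w = theano.shared(numpy.zeros((5, 5), dtype=theano.config.floatX))
b = theano.shared(numpy.zeros(5, dtype=theano.config.floatX))

output = tt.dot(x, w) + b           # the shared computation graph
cost = tt.mean(tt.sqr(output - x))  # e.g. a reconstruction cost
updates = [(p, p - 0.01 * tt.grad(cost, p)) for p in (w, b)]

train = theano.function(inputs=[x], outputs=[cost], updates=updates)
predict = theano.function(inputs=[x], outputs=[output])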

I would now like to modify things so that when training, I can train using a noisy input, so I have something like

input = get_corrupted_input(self.theano_rng, x, 0.5)

at the beginning of the computations.

But this will also affect my predict function, since its input will get corrupted as well. How can I reuse the same code for train and predict, but only provide the former with the noisy input?


Answer 1:


You can organise your code like this:

import numpy
import theano
import theano.tensor as tt
import theano.tensor.shared_randomstreams


def get_cost(x, y):
    # Squared reconstruction error, summed over features and averaged over samples.
    return tt.mean(tt.sum(tt.sqr(x - y), axis=1))


def get_output(x, w, b_h, b_y):
    # The shared computation graph: a tiny tied-weights autoencoder.
    h = tt.tanh(tt.dot(x, w) + b_h)
    y = tt.dot(h, w.T) + b_y
    return y


def corrupt_input(x, corruption_level):
    # Randomly zero out elements of x with probability `corruption_level`.
    rng = tt.shared_randomstreams.RandomStreams()
    return rng.binomial(size=x.shape, n=1, p=1 - corruption_level,
                        dtype=theano.config.floatX) * x


def compile(input_size, hidden_size, corruption_level, learning_rate):
    x = tt.matrix()
    w = theano.shared(numpy.random.randn(input_size,
                      hidden_size).astype(theano.config.floatX))
    b_h = theano.shared(numpy.zeros(hidden_size, dtype=theano.config.floatX))
    b_y = theano.shared(numpy.zeros(input_size, dtype=theano.config.floatX))
    # The training cost is built on the corrupted input...
    cost = get_cost(x, get_output(corrupt_input(x, corruption_level), w, b_h, b_y))
    updates = [(p, p - learning_rate * tt.grad(cost, p)) for p in (w, b_h, b_y)]
    train = theano.function(inputs=[x], outputs=cost, updates=updates)
    # ...while prediction runs the same graph on the clean input.
    predict = theano.function(inputs=[x], outputs=get_output(x, w, b_h, b_y))
    return train, predict


def main():
    train, predict = compile(input_size=3, hidden_size=2,
                             corruption_level=0.2, learning_rate=0.01)


main()

Note that get_output is called twice. For the train function it is given the corrupted input, but for the predict function it is given the clean input. get_output needs to contain "the same computation graph" you mention. I've just put a tiny autoencoder in there, but you can replace it with whatever model you want.

Assuming the corrupted input has the same shape as the input, the get_output function won't care whether its input is x or the corrupted version of x. So get_output can be shared but need not contain the corruption code.
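For completeness, a small usage sketch, assuming the compile function from the code above is in scope; the data here is random and purely illustrative:

import numpy
import theano

train, predict = compile(input_size=3, hidden_size=2,
                         corruption_level=0.2, learning_rate=0.01)

data = numpy.random.randn(100, 3).astype(theano.config.floatX)
for epoch in range(20):
    cost = train(data)          # each call corrupts the input internally
print(cost)
reconstruction = predict(data)  # prediction sees the clean input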



Source: https://stackoverflow.com/questions/33858990/how-to-reuse-computation-graph-for-different-inputs
