Update values of a matrix variable in tensorflow, advanced indexing


Question


I would like to create a function that, for every row of given data X, applies the softmax function only over a few sampled classes, say 2 out of K total classes. In plain Python the code looks like this:

import numpy as np
from random import randint
from scipy.special import softmax  # any softmax implementation works here

def softy(X, W, num_samples):
    N = X.shape[0]                 # number of data rows
    K = W.shape[0]                 # total number of classes
    S = np.zeros((N, K))
    ar_to_sof = np.zeros(num_samples)
    sampled_ind = np.zeros(num_samples, dtype=int)
    for line in range(N):
        for samp in range(num_samples):
            # pick a random class and compute its logit for this row
            sampled_ind[samp] = randint(0, K - 1)
            ar_to_sof[samp] = np.dot(X[line], np.transpose(W[sampled_ind[samp]]))
        # softmax only over the sampled logits
        ar_to_sof = softmax(ar_to_sof)
        S[line][sampled_ind] = ar_to_sof

    return S

S finally contains zeros everywhere except at the indices chosen for each row by the array "sampled_ind", where it holds the softmax values. I would like to implement this using TensorFlow. The problem is that it requires "advanced" indexing, and I cannot find a way to express that with this library.
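For a quick sanity check, here is a minimal usage sketch of the NumPy version (the small X and W below are made up for illustration):

import numpy as np

np.random.seed(0)
X = np.random.randn(3, 4)      # N=3 rows, D=4 features
W = np.random.randn(5, 4)      # K=5 classes
S = softy(X, W, num_samples=2)

print(S.shape)                 # (3, 5)
# each row has (at most) two non-zero entries, at the sampled class indices,
# and those entries sum to 1 (unless the same class happens to be drawn twice)
print((S != 0).sum(axis=1))
print(S.sum(axis=1))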

I am trying it with this code:

S = tf.Variable(tf.zeros((N, K)))
tfx = tf.placeholder(tf.float32, shape=(None, D))
wsampled = tf.placeholder(tf.float32, shape=(None, D))
ar_to_sof = tf.matmul(tfx, wsampled, transpose_b=True)
softy = tf.nn.softmax(ar_to_sof)
r = tf.random_uniform(shape=(), minval=0, maxval=K, dtype=tf.int32)
...
for line in range(N):
    # draw the sampled class indices and gather the corresponding rows of W
    sampled_ind = tf.constant(value=[sess.run(r), sess.run(r)], dtype=tf.int32)
    Wsampled = sess.run(tf.gather(W, sampled_ind))
    sess.run(softy, feed_dict={tfx: X[line:line+1], wsampled: Wsampled})

Everything works up to this point, but I cannot find a way to perform the update I want on the matrix S, i.e. the Python line "S[line][sampled_ind] = ar_to_sof".

How could I make this work?
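For reference, newer TensorFlow releases (1.x) expose tf.scatter_nd_update, which can express this kind of per-row assignment directly. Below is a minimal sketch under that assumption, reusing line, sampled_ind, softy and the placeholders from the loop above; num_samples is the number of sampled classes (2 here):

num_samples = 2
probs = tf.reshape(softy, [num_samples])              # softmax values for the sampled classes
rows = tf.fill([num_samples, 1], line)                # the current row index, repeated
cols = tf.reshape(sampled_ind, [num_samples, 1])
indices = tf.concat([rows, cols], axis=1)             # (row, col) pairs, shape (num_samples, 2)
update_op = tf.scatter_nd_update(S, indices, probs)   # S[line, sampled_ind] = probs
sess.run(update_op, feed_dict={tfx: X[line:line+1], wsampled: Wsampled})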


Answer 1:


An answer to my problem was found in a comment on a solution to this problem: it suggests reshaping my matrix S into a 1-D vector. Done that way, the code works and looks like this:

# S is stored flattened, as a vector of length N*K, so tf.scatter_update can address single entries
S = tf.Variable(tf.zeros(shape=(N*K,)))
W = tf.Variable(tf.random_uniform((K, D)))
tfx = tf.placeholder(tf.float32, shape=(None, D))
# maxval is exclusive, so use K to allow every class index 0..K-1
sampled_ind = tf.random_uniform(dtype=tf.int32, minval=0, maxval=K, shape=[num_samps])
ar_to_sof = tf.matmul(tfx, tf.gather(W, sampled_ind), transpose_b=True)
updates = tf.reshape(tf.nn.softmax(ar_to_sof), shape=(num_samps,))
init = tf.initialize_all_variables()  # old-style initializer (tf.global_variables_initializer in newer versions)
sess = tf.Session()
sess.run(init)
for line in range(N):
    # offset the sampled column indices into row `line` of the flattened S
    inds_new = sampled_ind + line*K
    sess.run(tf.scatter_update(S, inds_new, updates), feed_dict={tfx: X[line:line+1]})

S = tf.reshape(S, shape=(N, K))

That returns the result I was expecting. The problem now is that this implementation is far too slow, much slower than the NumPy version, probably because of the for loop. Any suggestions?
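One likely culprit is that tf.scatter_update(...) is created inside the loop, so a new op is added to the graph and a separate session run is issued for every row. A possible direction, shown as a rough sketch only (not benchmarked against the original setup, assuming the same N, K, D, num_samps, the flattened S variable and the tfx placeholder from the code above, and a TensorFlow 1.x release), is to sample indices for all rows at once and perform a single scatter:

# sample num_samps class indices per row, all at once: shape (N, num_samps)
sampled_all = tf.random_uniform(dtype=tf.int32, minval=0, maxval=K,
                                shape=[N, num_samps])
w_samp = tf.gather(W, sampled_all)                                 # (N, num_samps, D)
# per-row dot products between X[line] and its sampled weight vectors
logits = tf.reduce_sum(tf.expand_dims(tfx, 1) * w_samp, axis=2)    # (N, num_samps)
probs = tf.nn.softmax(logits)                                      # softmax over the sampled classes
# offset each row's column indices into the flattened (N*K,) vector
flat_ind = tf.reshape(sampled_all + tf.expand_dims(tf.range(N) * K, 1), [-1])
update_op = tf.scatter_update(S, flat_ind, tf.reshape(probs, [-1]))

sess.run(update_op, feed_dict={tfx: X})                            # one run for the whole matrix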



Source: https://stackoverflow.com/questions/40568572/update-values-of-a-matrix-variable-in-tensorflow-advanced-indexing
