Keras Custom Layer 2D input -> 2D output


Question


I have a 2D input (or 3D if one counts the number of samples) and I want to apply a Keras layer that takes this input and outputs another 2D matrix. So, for example, if I have an input of size (E x V), the learned weight matrix would be (S x E) and the output (S x V). Can I do this with a Dense layer?

EDIT (Nassim's request):

The first layer does nothing. It is just there to feed an input to the Lambda layer:

import numpy as np
from keras.models import Sequential, Model
from keras.layers.core import Reshape, Lambda
from keras import backend as K

# Three samples, each a 4x5 matrix
input_sample = np.array([
    [[1,2,3,4,5],[6,7,8,9,10],[11,12,13,14,15],[16,17,18,19,20]],
    [[21,22,23,24,25],[26,27,28,29,30],[31,32,33,34,35],[36,37,38,39,40]],
    [[41,42,43,44,45],[46,47,48,49,50],[51,52,53,54,55],[56,57,58,59,60]]
])

model = Sequential()
model.add(Reshape((4, 5), input_shape=(4, 5)))
model.add(Lambda(lambda x: K.transpose(x)))

intermediate_layer_model = Model(inputs=model.input, outputs=model.layers[0].output)
print("First layer:")
print(intermediate_layer_model.predict(input_sample))
print("")
print("Second layer:")
intermediate_layer_model = Model(inputs=model.input, outputs=model.layers[1].output)
print(intermediate_layer_model.predict(input_sample))

Answer 1:


It depends on what you want to do. Is it 2D because it's a sequence? Then LSTMs are made for that and will return a sequence of the desired size if you set return_sequences=True.

CNNs can also work on 2D inputs and will output something of variable size depending on the number of kernels you use.

Otherwise you can flatten it to a 1D tensor of length E*V, use a Dense layer with S*V units, and reshape the output to an (S, V) 2D tensor...
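A minimal sketch of that flatten/Dense/reshape idea (the sizes E=4, V=5, S=3 are just illustrative, not from the question):

from keras.models import Sequential
from keras.layers.core import Flatten, Dense, Reshape

E, V, S = 4, 5, 3  # example sizes, chosen for illustration

model = Sequential()
model.add(Flatten(input_shape=(E, V)))  # (E, V) -> (E*V,)
model.add(Dense(S * V))                 # fully connected layer with S*V outputs
model.add(Reshape((S, V)))              # (S*V,) -> (S, V)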

I can't help you more without knowing your use case :-) there are too many possibilities with neural networks.

EDIT:

You can use TimeDistributed(Dense(S)). If your input has shape (E, V), you reshape it to (V, E) so that V becomes the "time dimension". Then you apply TimeDistributed(Dense(S)), which applies a dense layer with weights (E x S) to each row; the output has shape (V, S), which you can then reshape back to (S, V).

Does that do what you want? The TimeDistributed() layer applies the same Dense(S) layer, with shared weights, to each of the V rows of your input (see the sketch below).
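A minimal sketch of that recipe, using Permute to swap the two axes (again, E=4, V=5, S=3 are illustrative sizes):

from keras.models import Sequential
from keras.layers import Permute, TimeDistributed, Dense

E, V, S = 4, 5, 3  # example sizes

model = Sequential()
model.add(Permute((2, 1), input_shape=(E, V)))  # (E, V) -> (V, E); V is now the "time" axis
model.add(TimeDistributed(Dense(S)))            # shared (E x S) weights applied to each of the V rows -> (V, S)
model.add(Permute((2, 1)))                      # (V, S) -> (S, V)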

EDIT 2:

After looking at the Keras backend code, it turns out that to use TensorFlow's transpose with its 'permutation pattern' option, you need to call K.permute_dimensions(x, pattern). The pattern must include the batch dimension. In your case:

Lambda(lambda x: K.permute_dimensions(x,[0,2,1]))

K.transpose(x) uses the same function internally (for the TensorFlow backend), but the permutation is left at its default value, which is [n, n-1, ..., 0], so it also moves the batch dimension.
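Applied to the model from the question, only the Lambda layer changes; a short sketch:

from keras.models import Sequential
from keras.layers.core import Reshape, Lambda
from keras import backend as K

model = Sequential()
model.add(Reshape((4, 5), input_shape=(4, 5)))
# keep batch axis 0, swap axes 1 and 2: (batch, 4, 5) -> (batch, 5, 4)
model.add(Lambda(lambda x: K.permute_dimensions(x, [0, 2, 1])))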




Answer 2:


What you want is probably an autoencoder.

https://blog.keras.io/building-autoencoders-in-keras.html



Source: https://stackoverflow.com/questions/42154792/keras-custom-layer-2d-input-2d-output
