Keras Custom Layer 2D input -> 2D output

Submitted by 心不动则不痛 on 2019-12-05 13:31:22

It depends on what you want to do. Is it 2D because it's a sequence? Then LSTMs are made for that and will return a sequence of the desired size if you set return_sequences=True.

CNNs can also work on 2D inputs and will output something whose size depends on the number of kernels you use.

Otherwise you can reshape it to a (E x V,) 1D tensor, use a Dense layer with S x V units, and reshape the output to a (S, V) 2D tensor.
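The flatten -> Dense -> reshape idea above can be sketched like this, using the tf.keras API and illustrative sizes E, V, S that are not from the original question:

```python
from tensorflow.keras import layers, models

# Hypothetical dimensions for illustration only.
E, V, S = 4, 3, 2

model = models.Sequential([
    layers.Reshape((E * V,), input_shape=(E, V)),  # flatten (E, V) -> (E*V,)
    layers.Dense(S * V),                           # dense layer with S*V units
    layers.Reshape((S, V)),                        # back to a 2D (S, V) tensor
])

print(model.output_shape)  # (None, 2, 3)
```

Note that this fully connects every input element to every output element, which may be far more parameters than you need.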

I can't help you more than that; we need to know your use case :-) there are too many possibilities with neural networks.

EDIT:

You can use TimeDistributed(Dense(S)). If your input has shape (E, V), reshape it to (V, E) so that V becomes the "time" dimension. Then apply TimeDistributed(Dense(S)), which is a dense layer with an (E x S) weight matrix; the output will have shape (V, S), which you can reshape back to (S, V).

Is that what you want? The TimeDistributed() layer applies the same Dense(S) layer, with shared weights, to each of the V rows of your input.
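A minimal sketch of this approach, using tf.keras and the same illustrative E, V, S sizes (assumptions, not from the question); the Permute layer handles the axis swap:

```python
from tensorflow.keras import layers, models

# Hypothetical dimensions for illustration only.
E, V, S = 4, 3, 2

model = models.Sequential([
    layers.Permute((2, 1), input_shape=(E, V)),  # (E, V) -> (V, E): V is now the "time" axis
    layers.TimeDistributed(layers.Dense(S)),     # same Dense(S), shared weights, on each of the V rows
    layers.Permute((2, 1)),                      # (V, S) -> (S, V)
])

print(model.output_shape)  # (None, 2, 3)
```

Compared with the flatten-and-Dense approach, this uses only E x S weights (plus bias) instead of (E x V) x (S x V), since the weights are shared across the V rows.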

EDIT 2:

After looking at the Keras backend code, it turns out that to use TensorFlow's transpose with the 'permutation pattern' option, you need to use K.permute_dimensions(x, pattern). The batch dimension must be included in the pattern. In your case:

Lambda(lambda x: K.permute_dimensions(x,[0,2,1]))

K.transpose(x) uses the same function internally (for the TF backend), but the permutation is left at its default value, which is [n-1, n-2, ..., 0].

What you want is probably an autoencoder.

https://blog.keras.io/building-autoencoders-in-keras.html
