encoder-decoder

Model inputs must come from `tf.keras.Input` …, they cannot be the output of a previous non-Input layer

Submitted by ∥☆過路亽.° on 2020-12-29 18:21:01
Question: I'm using Python 3.7.7 and Tensorflow 2.1.0. I have a pre-trained U-Net network, and I want to get its encoder and its decoder. In the following picture you can see a convolutional encoder-decoder architecture. I want to get the encoder part, that is, the layers that appear on the left of the image, and the decoder part. I get the U-Net model from this function:

    def get_unet_uncompiled(img_shape=(200, 200, 1)):
        inputs = Input(shape=img_shape)
        conv1 = Conv2D(64, (5, 5), activation='relu',
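The error in the title appears when you try to build the decoder `Model` directly from an intermediate tensor of the trained network. A Keras `Model`'s inputs must be `tf.keras.Input` tensors, so the usual workaround is: cut the encoder at the bottleneck, then create a *new* `Input` with the bottleneck's shape and re-apply the remaining layers. A minimal sketch, using a toy stand-in for the U-Net (the layer name `'bottleneck'` and the layer sizes are hypothetical, not from the question's actual model):

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from tensorflow.keras.models import Model

# Toy stand-in for the pre-trained U-Net (layer names/sizes are hypothetical).
inputs = Input(shape=(200, 200, 1))
x = Conv2D(8, (3, 3), activation='relu', padding='same')(inputs)
x = MaxPooling2D((2, 2))(x)
bottleneck = Conv2D(16, (3, 3), activation='relu', padding='same',
                    name='bottleneck')(x)
x = UpSampling2D((2, 2))(bottleneck)
outputs = Conv2D(1, (3, 3), activation='sigmoid', padding='same')(x)
model = Model(inputs, outputs)

# Encoder: reuse the original Input and cut at the bottleneck layer.
encoder = Model(model.input, model.get_layer('bottleneck').output)

# Decoder: a *new* Input matching the bottleneck's output shape, then
# re-apply the remaining (already-trained) layers. Passing the bottleneck
# tensor itself as a Model input is what raises the
# "inputs must come from tf.keras.Input" error.
dec_in = Input(shape=encoder.output_shape[1:])
y = dec_in
cut = model.layers.index(model.get_layer('bottleneck')) + 1
for layer in model.layers[cut:]:
    y = layer(y)
decoder = Model(dec_in, y)
```

Because the decoder re-applies the original layer objects, it shares their trained weights; `decoder(encoder(x))` reproduces `model(x)`.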

How to model the data for sequence to sequence prediction with only one feature

Submitted by 左心房为你撑大大i on 2020-02-07 05:22:16
Question: I have 9000 sequences, each of length 200, with only one feature (data.shape = (9000, 200, 1)). I want to predict the sequence of length 200 based on an input sequence of length 190. X is the input sequence of length 190, and Y is the output sequence of length 200:

    X = np.delete(data, slice(50, 60), 1)  # shape of X = (9000, 190, 1)
    Y = data.copy()                        # shape of Y = (9000, 200, 1)

My question is based on the tutorial Encoder-Decoder Model for Sequence-to-Sequence Prediction and on an existing stackoverflow question seq2seq
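The data preparation described above can be sketched with plain numpy (a shrunken stand-in for the 9000-sequence dataset). One detail a teacher-forced encoder-decoder additionally needs, which is an assumption on my part about the intended training setup, is a decoder input: the target sequence shifted right by one timestep.

```python
import numpy as np

# Toy stand-in for the (9000, 200, 1) dataset, shrunk for illustration.
data = np.random.rand(10, 200, 1)

# Input: timesteps 50..59 removed, leaving length 190; target: full sequence.
X = np.delete(data, slice(50, 60), axis=1)   # shape (10, 190, 1)
Y = data.copy()                              # shape (10, 200, 1)

# For teacher forcing, the decoder is fed the target shifted right by one
# step (a zero "start" frame at t=0) while learning to emit Y.
decoder_input = np.zeros_like(Y)
decoder_input[:, 1:, :] = Y[:, :-1, :]
```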

PyTorch: DecoderRNN: RuntimeError: input must have 3 dimensions, got 2

Submitted by 末鹿安然 on 2019-12-11 03:23:01
Question: I am building a DecoderRNN using PyTorch (this is an image-caption decoder):

    class DecoderRNN(nn.Module):
        def __init__(self, embed_size, hidden_size, vocab_size):
            super(DecoderRNN, self).__init__()
            self.hidden_size = hidden_size
            self.gru = nn.GRU(embed_size, hidden_size, hidden_size)
            self.softmax = nn.LogSoftmax(dim=1)

        def forward(self, features, captions):
            print(features.shape)
            print(captions.shape)
            output, hidden = self.gru(features, captions)
            output = self.softmax(self.out(output[0]))
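The "input must have 3 dimensions, got 2" error means a 2-D (batch, embed) tensor was fed to `nn.GRU`, which expects a 3-D (seq_len, batch, input_size) tensor, or (batch, seq_len, input_size) with `batch_first=True`. Note also that `nn.GRU`'s third positional argument is `num_layers`, so `nn.GRU(embed_size, hidden_size, hidden_size)` builds `hidden_size` stacked layers, which is probably unintended. A minimal sketch of the fix (sizes are arbitrary illustration values, not from the question):

```python
import torch
import torch.nn as nn

embed_size, hidden_size, batch = 8, 16, 4

# Single-layer GRU; batch_first=True means input is (batch, seq, embed).
gru = nn.GRU(embed_size, hidden_size, batch_first=True)

# A 2-D (batch, embed) feature tensor, as a CNN image encoder would emit.
features = torch.randn(batch, embed_size)

# Add a length-1 time dimension to make the input 3-D before the GRU call.
out, hidden = gru(features.unsqueeze(1))
# out:    (batch, 1, hidden_size)
# hidden: (num_layers, batch, hidden_size)
```

The second argument to `gru(...)` is the initial hidden state, not the captions; captions would normally be embedded and concatenated along the time dimension instead.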

YUV420 to BGR image from pixel pointers

Submitted by 拜拜、爱过 on 2019-12-02 00:05:38
I am capturing raw output from a decoder which is YUV420. I have got three pointers: Y (1920*1080), U (960*540) and V (960*540) separately. I want to save the image as JPEG using OpenCV. I tried using cvtColor of OpenCV:

    cv::Mat i_image(cv::Size(columns, rows), CV_8UC3, dataBuffer);
    cv::Mat i_image_BGR(cv::Size(columns, rows), CV_8UC3);
    cvtColor(i_image, i_image_BGR, cv::COLOR_YCrCb2BGR);
    cv::imwrite("/data/data/org.myproject.debug/files/pic1.jpg", i_image_BGR);

But here is the output image which is saved: Can someone please suggest what is the proper way of saving the image? YUV Binary files for
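The posted code treats the planar YUV420 buffer as an interleaved 3-channel image, which is why the result is scrambled: YUV420 stores a full-resolution Y plane followed by quarter-resolution U and V planes. In OpenCV the usual route is a single-channel Mat of height H*3/2 plus `COLOR_YUV2BGR_I420`. As a dependency-free illustration of the same conversion, here is the BT.601 integer math in numpy (the function name is mine, not from any library):

```python
import numpy as np

def i420_to_bgr(y, u, v):
    """Convert planar YUV420 (separate Y, U, V planes) to BGR via BT.601.

    y: (H, W) uint8; u, v: (H//2, W//2) uint8. With OpenCV one would instead
    stack the planes into an (H*3//2, W) single-channel image and call
    cv2.cvtColor(..., cv2.COLOR_YUV2BGR_I420).
    """
    # Upsample chroma planes to full resolution (nearest neighbour).
    u_full = u.repeat(2, axis=0).repeat(2, axis=1).astype(np.int32) - 128
    v_full = v.repeat(2, axis=0).repeat(2, axis=1).astype(np.int32) - 128
    c = y.astype(np.int32) - 16
    # Fixed-point BT.601 limited-range coefficients.
    r = np.clip((298 * c + 409 * v_full + 128) >> 8, 0, 255)
    g = np.clip((298 * c - 100 * u_full - 208 * v_full + 128) >> 8, 0, 255)
    b = np.clip((298 * c + 516 * u_full + 128) >> 8, 0, 255)
    return np.stack([b, g, r], axis=-1).astype(np.uint8)
```

For the 1920x1080 case in the question, `y` would be the 1920x1080 plane and `u`, `v` the two 960x540 planes; the result can then be saved with any image writer.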

Multilayer Seq2Seq model with LSTM in Keras

Submitted by ▼魔方 西西 on 2019-11-30 07:17:44
I was making a seq2seq model in Keras. I had built a single-layer encoder and decoder and they were working fine. But now I want to extend it to a multi-layer encoder and decoder. I am building it using the Keras Functional API.

Training: code for the encoder:

    encoder_input = Input(shape=(None, vec_dimension))
    encoder_lstm = LSTM(vec_dimension, return_state=True, return_sequences=True)(encoder_input)
    encoder_lstm = LSTM(vec_dimension, return_state=True)(encoder_lstm)
    encoder_output, encoder_h, encoder_c = encoder_lstm

Code for the decoder:

    encoder_state = [encoder_h, encoder_c]
    decoder_input = Input(shape=(None, vec_dimension
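The key point when stacking layers is that each encoder LSTM layer contributes its own (h, c) pair, and the decoder layer at the same depth should be seeded with that pair via `initial_state`; every layer except the last also needs `return_sequences=True` so the next layer receives a 3-D input. A minimal two-layer sketch, assuming a small `vec_dimension` purely for illustration:

```python
from tensorflow.keras.layers import Input, LSTM
from tensorflow.keras.models import Model

vec_dimension = 8

# Two-layer encoder: every layer but the last returns sequences (so the
# next layer gets 3-D input) and every layer returns its own (h, c).
encoder_input = Input(shape=(None, vec_dimension))
enc1, h1, c1 = LSTM(vec_dimension, return_sequences=True,
                    return_state=True)(encoder_input)
enc2, h2, c2 = LSTM(vec_dimension, return_state=True)(enc1)

# Two-layer decoder: layer i is seeded with the states of encoder layer i.
decoder_input = Input(shape=(None, vec_dimension))
dec1 = LSTM(vec_dimension, return_sequences=True, return_state=True)
dec2 = LSTM(vec_dimension, return_sequences=True, return_state=True)
d1, _, _ = dec1(decoder_input, initial_state=[h1, c1])
d2, _, _ = dec2(d1, initial_state=[h2, c2])

model = Model([encoder_input, decoder_input], d2)
```

A dense projection to the output vocabulary/feature size would normally follow `d2`; it is omitted here to keep the layer-wiring pattern in focus. Note that in the question's code, applying the second LSTM directly to the list returned by the first (`return_state=True`) is itself a source of errors, since the list contains the states as well as the sequence output.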
