autoencoder

Keras - Autoencoder accuracy stuck on zero

限于喜欢 · Submitted on 2021-02-18 03:21:41
Question: I'm trying to detect fraud using an autoencoder and Keras. I've written the following code as a notebook:

    import numpy as np   # linear algebra
    import pandas as pd  # data processing, CSV file I/O (e.g. pd.read_csv)
    from sklearn.preprocessing import StandardScaler
    from keras.layers import Input, Dense
    from keras.models import Model
    import matplotlib.pyplot as plt

    data = pd.read_csv('../input/creditcard.csv')
    data['normAmount'] = StandardScaler().fit_transform(data['Amount'].values.reshape(-1, 1))
    …
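For context, a minimal sketch of how such an autoencoder is often completed and trained follows. It assumes the excerpt above has already loaded `data`; the feature selection, bottleneck width, and optimizer are illustrative guesses, not the asker's actual values. One common cause of the symptom in the title is monitoring `accuracy`, a classification metric that stays near zero on real-valued reconstruction targets, so the sketch uses a plain MSE loss instead:

    from keras.layers import Input, Dense
    from keras.models import Model

    # Illustrative feature matrix: the anonymised V1..V28 columns plus the
    # normalised amount; Time, raw Amount and the Class label are dropped.
    X = data.drop(['Time', 'Amount', 'Class'], axis=1).values

    input_dim = X.shape[1]
    inp = Input(shape=(input_dim,))
    encoded = Dense(14, activation='relu')(inp)               # illustrative bottleneck
    decoded = Dense(input_dim, activation='linear')(encoded)

    autoencoder = Model(inp, decoded)
    # A regression loss fits a reconstruction task; adding
    # metrics=['accuracy'] here would report ~0 on real-valued targets.
    autoencoder.compile(optimizer='adam', loss='mse')
    autoencoder.fit(X, X, epochs=10, batch_size=256, shuffle=True)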

Split autoencoder into encoder and decoder in Keras

只愿长相守 · Submitted on 2021-02-16 14:48:10
Question: I am trying to create an autoencoder in order to:

1. train the model;
2. split it into an encoder and a decoder;
3. visualise the compressed data (encoder);
4. feed arbitrary compressed data in to get the output (decoder).

    from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
    from keras.models import Model
    from keras import backend as K
    from keras.datasets import mnist
    import numpy as np

    (x_train, _), (x_test, _) = mnist.load_data()
    x_train = x_train.astype('float32') / 255.
    x_train = x_train[:100,:,:,]
    x_test = x…
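A common way to get both halves as separate models is to build the full autoencoder with the functional API and then wrap sub-graphs in their own Model objects. The sketch below is a dense MNIST version standing in for the asker's convolutional one; all names and sizes are illustrative:

    import numpy as np
    from keras.layers import Input, Dense
    from keras.models import Model
    from keras.datasets import mnist

    (x_train, _), (x_test, _) = mnist.load_data()
    x_train = x_train.astype('float32').reshape(-1, 784) / 255.
    x_test = x_test.astype('float32').reshape(-1, 784) / 255.

    inp = Input(shape=(784,))
    code = Dense(32, activation='relu')(inp)        # compressed representation
    out = Dense(784, activation='sigmoid')(code)

    autoencoder = Model(inp, out)
    autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
    autoencoder.fit(x_train, x_train, epochs=5, batch_size=128,
                    validation_data=(x_test, x_test))

    # 1) Encoder: reuse the trained layers by slicing the graph.
    encoder = Model(inp, code)

    # 2) Decoder: a fresh input fed through the trained output layer.
    code_inp = Input(shape=(32,))
    decoder = Model(code_inp, autoencoder.layers[-1](code_inp))

    compressed = encoder.predict(x_test)          # visualise / inspect the codes
    reconstructed = decoder.predict(compressed)   # decode arbitrary codes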

How to handle errors in arrays and sequence?

左心房为你撑大大i · Submitted on 2021-02-11 18:21:32
Question: I am trying to find the similarity of documents, referring to this GitHub repo: https://github.com/s4sarath/Deep-Learning-Projects/blob/master/variational_text_inference/model_evaluation.ipynb

When I run this code:

    batch_size = 100
    H_20_grp_nws = []
    batch_data = A.get_batch(batch_size)
    batch_id = 0
    for batch_ in batch_data:
        batch_id += 1
        collected_data = [chunks for chunks in batch_]
        batch_xs, mask_xs, mask_negative = A._bag_of_words(collected_data)
        feed_dict = {vae.X: batch_xs, vae…
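The loop follows the standard TF1 pattern of building a feed_dict per batch and running the session; a frequent source of "errors in arrays and sequence" here is a ragged final batch whose shape no longer matches the placeholder. Below is a minimal sketch of that pattern. `A.get_batch`, `A._bag_of_words`, and `vae` are specific to the linked repo, so stand-ins are used for them, and everything in the sketch is illustrative:

    import numpy as np
    import tensorflow as tf   # assumes TensorFlow 1.x, as in the linked notebook

    batch_size, vocab = 100, 500

    def get_batches(docs, size):
        # Stand-in for A.get_batch: yield fixed-size chunks and drop the
        # ragged tail, which would otherwise break a fixed-shape placeholder.
        for i in range(0, len(docs) - size + 1, size):
            yield docs[i:i + size]

    docs = np.random.randint(0, 2, size=(1050, vocab)).astype(np.float32)  # toy bag-of-words

    X = tf.placeholder(tf.float32, [batch_size, vocab])   # stand-in for vae.X
    loss = tf.reduce_mean(tf.square(X))                   # stand-in for the VAE loss op

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for batch_id, batch_xs in enumerate(get_batches(docs, batch_size), start=1):
            try:
                print(batch_id, sess.run(loss, feed_dict={X: batch_xs}))
            except (ValueError, tf.errors.InvalidArgumentError) as err:
                print('skipping batch', batch_id, err)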

How to design a shared weight, multi input/output Auto-Encoder network?

自古美人都是妖i · Submitted on 2021-02-08 07:47:09
Question: I have two different types of images (a camera image and its corresponding sketch). The goal of the network is to find the similarity between both images. The network consists of a single encoder and a single decoder; the motivation behind the single encoder-decoder is to share the weights between them.

    input_img = Input(shape=(img_width, img_height, channels))

    def encoder(input_img):
        # Photo-Encoder Code
        pe = Conv2D(96, kernel_size=11, strides=(4,4), padding='SAME')(left_input)  # (?, 64, 64, …
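One way to get genuine weight sharing, rather than calling a Python function that builds fresh layers for each input, is to wrap the encoder and decoder once as Model objects and apply them to both inputs. A minimal sketch under assumed image sizes, with all layer choices illustrative:

    from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
    from keras.models import Model

    img_width, img_height, channels = 256, 256, 3   # illustrative sizes

    # Define the encoder and decoder ONCE as Models so the same weights
    # are applied to both the photo input and the sketch input.
    enc_in = Input(shape=(img_width, img_height, channels))
    e = Conv2D(96, kernel_size=11, strides=4, padding='same', activation='relu')(enc_in)
    e = MaxPooling2D()(e)
    encoder = Model(enc_in, e, name='shared_encoder')

    dec_in = Input(shape=encoder.output_shape[1:])
    d = UpSampling2D(8)(dec_in)
    d = Conv2D(channels, kernel_size=3, padding='same', activation='sigmoid')(d)
    decoder = Model(dec_in, d, name='shared_decoder')

    photo_in = Input(shape=(img_width, img_height, channels))
    sketch_in = Input(shape=(img_width, img_height, channels))

    photo_out = decoder(encoder(photo_in))     # same weights ...
    sketch_out = decoder(encoder(sketch_in))   # ... reused here

    model = Model([photo_in, sketch_in], [photo_out, sketch_out])
    model.compile(optimizer='adam', loss='mse')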

LSTM Autoencoder problems

守給你的承諾、 · Submitted on 2021-02-06 16:14:30
Question: TL;DR: the autoencoder underfits the time-series reconstruction and just predicts the average value.

Question set-up: here is a summary of my attempt at a sequence-to-sequence autoencoder. The image was taken from this paper: https://arxiv.org/pdf/1607.00148.pdf

Encoder: a standard LSTM layer; the input sequence is encoded in the final hidden state.

Decoder: an LSTM cell (I think!), reconstructing the sequence one element at a time, starting with the last element x[N]. The decoder algorithm is as follows for a sequence …
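For reference, a compact Keras rendering of the architecture the question describes (encoder LSTM → repeated code → decoder LSTM). This approximates the paper's element-by-element decoder with RepeatVector/TimeDistributed rather than a manual LSTM-cell loop, and all sizes and the toy data are illustrative; reversing the target sequence mirrors the "start with the last element x[N]" decoding order:

    import numpy as np
    from keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense
    from keras.models import Model

    timesteps, n_features, latent_dim = 50, 1, 32   # illustrative sizes

    inp = Input(shape=(timesteps, n_features))
    # Encoder: the whole sequence is summarised in the final hidden state.
    code = LSTM(latent_dim)(inp)
    # Decoder: feed the code at every step, reconstruct one element per step.
    x = RepeatVector(timesteps)(code)
    x = LSTM(latent_dim, return_sequences=True)(x)
    out = TimeDistributed(Dense(n_features))(x)

    model = Model(inp, out)
    model.compile(optimizer='adam', loss='mse')

    # Toy data: noisy sine waves. The target is the REVERSED sequence,
    # matching the paper's back-to-front reconstruction order.
    t = np.linspace(0, 8 * np.pi, timesteps)
    X = np.sin(t)[None, :, None] + 0.1 * np.random.randn(200, timesteps, 1)
    model.fit(X, X[:, ::-1, :], epochs=10, batch_size=32)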

Creating a VAE model throws the exception “you should implement a `call` method”

扶醉桌前 · Submitted on 2021-01-29 17:42:57
Question: I want to create a VAE (variational autoencoder). During model creation it throws the exception "When subclassing the `Model` class, you should implement a `call` method." I am using TensorFlow 2.0.

    def vae():
        models = {}

        def apply_bn_and_dropout(x):
            return l.Dropout(dropout_rate)(l.BatchNormalization()(x))

        input_image = l.Input(batch_shape=(batch_size, 28, 28, 1))
        x = l.Flatten()(input_image)
        x = l.Dense(256, activation="relu")(x)
        x = apply_bn_and_dropout(x)
        x = l.Dense(128, activation="relu")(x)
        x = apply_bn…
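In TF2/Keras that exception usually means a Model was subclassed (or instantiated bare) without a forward pass being defined. One way around it for this kind of architecture is to stay with the functional API, where Model(inputs, outputs) needs no `call` method. The sketch below mirrors the encoder half of the snippet with illustrative sizes; `latent_dim`, `dropout_rate`, and the sampling layer are assumptions, not the asker's actual values:

    import tensorflow as tf
    from tensorflow.keras import layers as l
    from tensorflow.keras.models import Model

    dropout_rate, latent_dim = 0.3, 2   # illustrative hyperparameters

    def apply_bn_and_dropout(x):
        return l.Dropout(dropout_rate)(l.BatchNormalization()(x))

    input_image = l.Input(shape=(28, 28, 1))
    x = l.Flatten()(input_image)
    x = l.Dense(256, activation="relu")(x)
    x = apply_bn_and_dropout(x)
    x = l.Dense(128, activation="relu")(x)
    x = apply_bn_and_dropout(x)
    z_mean = l.Dense(latent_dim)(x)
    z_log_var = l.Dense(latent_dim)(x)

    def sampling(args):
        # Reparameterisation trick: z = mu + sigma * eps.
        z_mean, z_log_var = args
        eps = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * eps

    z = l.Lambda(sampling)([z_mean, z_log_var])

    # Functional Model: the inputs/outputs pair defines the graph, so no
    # `call` method is required. The exception comes from subclassing
    # Model (or instantiating Model() bare) without overriding call.
    encoder = Model(input_image, [z_mean, z_log_var, z], name="encoder")
    encoder.summary()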